• 1 Post
  • 160 Comments
Joined 2 years ago
Cake day: July 5th, 2023



  • I think in terms of cultural exchange of ideas and the enjoyment of being on the internet, 2005-2015 or so was probably the best. The barrier to entry was lowered to the point where almost anyone could make a meme, post a picture, upload a video, or write a blog post (or even a single-sentence microblog post or forum comment), and it might go viral purely through the power of word of mouth.

    Then, once there was enough value in going viral, people started gaming it as a measure of success, and it stopped being a reliable signal of quality.

    But plenty of things are now better. I think maps and directions are better with a smartphone. Access to music and movies is better than ever. It’s nice to be able to seamlessly video chat with friends and family. There’s real utility there, even if you sometimes have to work around things that aren’t ideal.



  • They’ve basically taken the broken ladder of the management track and brought it over to the technical track, the one that’s supposed to reward increasing technical expertise (without necessarily adding management/administrative responsibilities).

    Currently, each generation of executives doesn’t come from within the company. There’s no simple path from mail room to executive anymore. Now, you have to leave the company to go get an MBA, then get hired by a consulting firm, then consult with that company as a client, before you’re on track to make senior management at the company.

    If the technical track is going this way, too, then these companies are going to become more brittle, and the current generation of entry level workers are going to hit a lot more career dead ends. It’s bad for everyone.


  • No, I don’t think you owe an apology. It’s super common terminology, almost to the point where I wouldn’t really consider it outright wrong to describe it as a SoC. The distinction between a single chip and multiple chiplets packaged together has blurred, and it’s almost impossible for an outsider to tell the difference without really getting into the published spec sheets for a product (and sometimes it may not even be known then).

    It’s just more technically precise to describe them as SiP, even if SoC functionally means something quite similar (and the language may evolve to the point where the terms are interchangeable in practice).


  • When I plug my phone into the wall, there are chips in the wall charger and on both sides of the cable, because the simple act of charging requires a handshake and an exchange of information notifying the charger, the cable, and the phone what charging modes are supported, and how to ask for more or less power.

    Seriously? Am I the only one thinking this could be done with less than 10 chips at most?

    How many chips are in a fully configured desktop computer? There are dozens on any given motherboard, controlling all the little I/O requirements. Each module of RAM is several chips. If you use expansion cards, each card will have a few chips, too. Meanwhile, the keyboard and the mouse each have a few chips, and the display/monitor has a bunch more.

    I’d be surprised if the typical computer had less than 100 chips.

    Now let’s look at the car functions. A turn signal that blinks, oscillating between on and off? That’s probably a chip. A windshield wiper that can do intermittent wiping at different speeds? Another chip or more. Variable valve timing that’s electronically controlled? Another few chips. Each sensor that detects something, from fuel tank level to engine knock to air/fuel mixture? Probably another chip. Controllers that combine all this information to determine how to mix the fuel and air, whether to trigger a warning light on the dash, etc.? Probably more chips. What about deployment of airbags, or triggering of the anti-lock braking system? Cruise control requires a few more chips, now that speedometers and odometers are electronic rather than the old analog systems. Smart cruise control and lane detection need even more chips. Hybrid drivetrains that charge or discharge batteries need dozens of chips controlling the flow of power (and the logic of when power should flow in which direction).

    By the time Toyota was in the news in 2011 for potential throttle sticking problems that killed people, it was typical for even economy cars to have something like 30 ECUs controlling different things, with each ECU and its associated sensors requiring multiple chips.

    Some modern perks require even more chips. Automatic lights? High beam dimming? Automatic wipers? Remote start, or automatic engine shutoff at idle?

    And that’s just for driving. FM tuner? Chips. AM tuner? More chips. Bluetooth and CarPlay/Android Auto? More chips. Rear view camera, now mandated on all new cars in the US? More chips. A built-in GPS or infotainment system? A full-blown computer.

    All the little analog control functions that were present in cars in the 80’s are now performed more efficiently on integrated circuits, including the analog circuits. Each function will require its own chip. If you were trying to recreate the exact functionality of a typical car from the 1990’s, you’d probably still need a minimum of a few hundred chips to pull it off. And it’s probably smart to segment things so that each module does one thing in a specialized way, isolated from the others, lest an unexpected input on the radio mess up the spark plug timing.

    The world is run by chips, and splitting up the functions into multiple computers/controllers, with multiple chips each, is just the easier and more efficient way to do things.
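
    To make the charging handshake at the top of this comment concrete, here’s a toy sketch of the idea (not the actual USB Power Delivery protocol, and the numbers are made up): the charger advertises the profiles it can supply, the phone picks one it can accept, and everything falls back to a safe default if negotiation fails. Even this toy version needs logic and state on both ends, which is why there’s a chip in the charger, in the cable, and in the phone.

```python
# Toy power negotiation sketch -- illustrative only, not the real USB-PD protocol.
# A profile is (volts, max_amps) the charger can supply.

CHARGER_PROFILES = [(5.0, 3.0), (9.0, 2.0), (15.0, 3.0), (20.0, 2.25)]
SAFE_DEFAULT = (5.0, 0.5)  # what you'd get with no negotiation at all

def phone_pick_profile(advertised, max_volts=9.0, wanted_watts=18.0):
    """Phone-side logic: pick an acceptable profile that best covers the wattage it wants."""
    acceptable = [(v, a) for (v, a) in advertised if v <= max_volts]
    if not acceptable:
        return SAFE_DEFAULT
    return max(acceptable, key=lambda p: min(p[0] * p[1], wanted_watts))

def negotiate():
    # 1. Charger advertises its profiles (in reality, over the cable's CC line).
    advertised = CHARGER_PROFILES
    # 2. Phone requests one of them.
    volts, amps = phone_pick_profile(advertised)
    # 3. Charger accepts and switches its output; both sides keep watching for faults.
    print(f"Negotiated {volts:.1f} V at up to {amps:.2f} A ({volts * amps:.1f} W)")

negotiate()
```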


  • Tags interfere with human readability. Open any markdown file with a text editor in plain text and you can basically read the whole thing as it was intended to be read, with possibly the exception of tables.

    There’s a time and a place for different things, but I like markdown for human readable source text. HTML might be standardized enough that you can do a lot more with it, but the source file itself generally isn’t as readable.


  • the only option for top performance will be a SoC

    System in Package (SiP) at least. It might not be efficient to etch the logic and that much memory onto the same silicon die, as the latest and greatest TSMC node will likely be much more expensive per square mm than the cutting-edge memory node at Samsung or whatever foundry is making the memory.

    But with the way advanced packaging has been going over the last decade or so, it’s going to be hard to compete with the latency/throughput of an in-package interposer. You can only do so much with the vias/pathways on a printed circuit board.


  • Can humans actually do it, though? Are humans actually capable of driving a car reasonably well using only visual data, or are we actually using an entire suite of sensors in our heads and bodies to understand our speed and orientation, road conditions, and our surroundings? Driving a car by video link is considerably harder than just driving a car normally, from within a car.

    And even so, computers have a long way to go before they catch up with our visual processing. Our visual cortex does a lot of error correction of visual data, using proprioceptive sensors in our heads that silently and seamlessly delete the visual smudges and smears of motion as our heads move. The error correction adjusts quickly to recalibrate things when looking at stuff under water or anything with a different refractive index, or when looking at reflections in a mirror.

    And we maintain that flow of visual data by correcting for motion and stabilizing the movement of our eyes to compensate for external motion. Maybe not as well as chickens, but we’re pretty good at it. We recognize faulty sensor data and correct for it by moving our heads around obstructions, by silently ignoring something that is blocking just one eye, by blinking or rubbing our eyes when tears or water make it hard to focus. We also know when not to trust our eyes (in the dark, in fog, when temporarily blinded by lights), and fall back to other methods of understanding the world around us.

    Throw in our sense of balance in our inner ears, our ability to direction find on sounds, and the ability to process vibrations in our seat and tactile feedback on a steering wheel, the proprioception of feeling forces on our body or specific limbs, and we have an entire system that uses much more than visual data to make decisions and model the world around us.

    There’s no reason why an artificial system needs to use exactly the same type of sensors as humans or other mammals do. And we have preexisting models and memories of what is or was around us, like when we walk around our own homes in the dark. But my point is that we rely on much more than our eyes, processed through an image processing system far more complex than the current state of AI vision. Why hold back on using as much sensor data as possible, to build a system that has good, reliable sensor data of what is on the road?


  • But the big one here is the characteristic word. By adding Fenyx Rising, it could be argued that, in addition to the material differences between the products, there is enough separation to ensure there is no risk of confusion from audiences. There are also multiple Immortals trademarks, which could make that word in and of itself less defensible depending on the potential conflict.

    That’s basically it right there. The word “immortal” has multiple dictionary definitions tracing back long before any trademark (including the name of a prominent ancient military unit), so any trademark around that word isn’t strong enough to prevent use of the word as a normal word, or even as part of another trademark when used descriptively.

    The strongest trademark protection comes for words that are totally made up for the purpose of the product or company. Something like Hulu or Kodak.

    Next up are probably mashed up words that might relate to existing words but are distinct mashups or modifications, like GeForce or Craisins.

    Next up, words that have meaning but are completely unrelated to the product itself, like Apple (computers) and Snickers (the candy bar) or Tide (the laundry detergent).

    Next up are suggestive marks where the trademark relies on the meaning to convey something about the product itself, but still retains some distinctiveness: InSinkErator is a brand of in-sink disposal, Coffee Mate is a non-dairy creamer designed for mixing into coffee, Joy-Con is a controller designed to evoke joy, etc.

    Some descriptive words don’t get trademark protection until they enter the public consciousness as a distinct indicator of origin or manufacturer. Name-based businesses often fall into this category, like a restaurant named after the owner, and don’t get protection until they’re popular enough (McDonald’s is the main example).

    It can get complicated, but the basic principle underlying all of it is that if you choose a less unique word as the name of your trademark, you’ll get less protection against others using it.



  • Networking standards started picking winners during the PC revolution of the 80’s and 90’s. Ethernet, with the first standards announced in 1983, ended up beating out pretty much every other LAN standard at the physical layer (physical plugs, voltages, and other ways of signaling) and the data link layer (the structure of a MAC address or an Ethernet frame). And this series of standards has been improved many times over, with meta-standards about how to deal with so many generations of standards through autonegotiation and backwards compatibility.

    We generally expect Ethernet to just work, at the highest speeds the hardware is capable of supporting.
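
    As a small illustration of how invisible that autonegotiation is: on a Linux machine you can read back whatever speed and duplex the link actually settled on straight from sysfs (this assumes a physical NIC that exposes these attributes, and “eth0” is just an example interface name):

```python
# Read the negotiated Ethernet link speed/duplex from Linux sysfs.
# Physical NICs expose these files; virtual interfaces often don't.
from pathlib import Path

def link_status(iface="eth0"):  # "eth0" is just an example name
    base = Path("/sys/class/net") / iface
    try:
        speed = (base / "speed").read_text().strip()    # megabits per second
        duplex = (base / "duplex").read_text().strip()  # "full" or "half"
        return f"{iface}: {speed} Mb/s, {duplex} duplex"
    except OSError:
        return f"{iface}: link info not available (down, virtual, or missing)"

print(link_status())
```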


  • all the quadratic communication and caching growth it requires.

    I have trouble visualizing and understanding how the Internet works at scale, but can generally grasp how page-by-page or resource-by-resource requests work. I struggle to understand how one could efficiently parse the firehose of activity coming from every user on every instance that your own users follow, at least in user-focused services like Mastodon (or Twitter or Bluesky). With Lemmy, there will be many more people following the biggest communities with the most activity, so caching naturally scales. But with Twitter-like follows of individual accounts, there are going to be a lot of accounts on the long tail, with lots of different accounts being followed only by a few people. The most efficient method is to just ignore the small accounts, but obviously that ends up affecting a large number of accounts. But on the other hand, keeping up with the many small accounts will end up occupying all the resources on stuff very few people want to see.

    A centralized service has to struggle with this as well, but might have better control over caching and other on-demand retrieval of content in lower demand, without inadvertently DDoSing someone else’s server.
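
    A back-of-the-envelope way to see why the long tail hurts: with fan-out-on-write, a post gets delivered once to each remote server that has a follower, and that one delivery can then be shared by every local reader, so “deliveries per read” is the ratio that matters. Here’s a rough sketch of that arithmetic; all the numbers are made up purely for illustration.

```python
# Rough fan-out-on-write arithmetic: one delivery per remote server per post,
# shared by however many people on that server actually read it.
# All numbers are made up for illustration.

def deliveries_per_read(posts, remote_servers, readers):
    total_deliveries = posts * remote_servers  # one copy pushed to each server
    total_reads = posts * readers
    return total_deliveries / total_reads

# A big community: followed from ~2,000 servers, read by ~50,000 people.
big = deliveries_per_read(posts=500, remote_servers=2_000, readers=50_000)

# A typical long-tail account: followed from 3 servers by 3 people total.
small = deliveries_per_read(posts=5, remote_servers=3, readers=3)

print(f"big community:     {big:.2f} deliveries per read")   # ~0.04
print(f"long-tail account: {small:.2f} deliveries per read")  # 1.00
```

    The big community’s deliveries amortize across thousands of readers, while every read of a long-tail post costs a full delivery (plus its own connection handling and caching overhead), which is the resource sink described above.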



  • I wonder if someone could set up some form of tunneling through much more mundane traffic, perhaps even entirely over a legitimate encrypted service through a regular browser interface (like the browser interfaces for Discord, Slack, MS Teams, FB Messenger, Zoom, or Google Chat/Meet), where you could literally just chat with a bot you’ve set up, instruct the bot to do things on its end, and have it forward the results back through that service’s file sending. From the outside it would just look like encrypted chat with a popular service over an HTTPS connection.


  • If you’re 25 now, you were 15 during the early wild west days of smartphone adoption, while we as a society were just figuring that stuff out.

    Since that time, the major tech companies that control a big chunk of our digital identities have made pretty big moves at recording family relationships between accounts. I’m a parent in a mixed Android/iOS family, and it’s pretty clear that Apple and Google have it figured out pretty well: child accounts linked to dates of birth that automatically change permissions and parental controls over time, based on age (including severing the parental controls when they turn 18). Some of it is obvious, like billing controls (nobody wants their teen running up hundreds of dollars in microtransactions), app controls, screen time/app time monitoring, location sharing, password resets, etc. Some of it is convenience factor, like shared media accounts/subscriptions by household (different Apple TV+ profiles but all on the same paid subscription), etc.

    I haven’t made child accounts for my kids on Meta. But I probably will whenever they’re old enough to use chat (and they’ll want WhatsApp accounts). Still, looking over the parent/child settings on Facebook accounts, it’ll probably be pretty straightforward to create accounts for them, link a parent/child relationship, and then have another dashboard to manage as a parent. Especially if something like Oculus takes off, and that becomes yet another account with paid apps or subscriptions to deal with.

    There might even be network effects, where people who have child accounts are limited in the adult accounts they can interact with, and the social circle’s equilibrium naturally tends towards all child accounts (or the opposite, where everyone gets themselves an adult account).

    The fact is, many of the digital natives of Gen Alpha aren’t actually going to be as tech savvy as their parents as they dip their toes into the world of the internet. Because they won’t need to figure stuff out on their own to the same degree.



  • Yeah, this advanced packaging stuff is pretty new, where they figured out how to make little chiplets but still put them onto the same package, connected by new tech that finally allows for high speed, low latency connections between chiplets (without causing dealbreaker temperature issues). That’s opened up a lot of progress even as improving the circuits on the silicon itself has run into engineering challenges.

    So while TSMC is seemingly ahead of its competition at actually printing smaller, denser features onto silicon, advanced packaging tech is going a long way toward letting companies mix and match different pieces of silicon with different strengths and functionality (for a more cost-effective end product, and to make better use of the nodes that aren’t at the absolute bleeding edge).

    Engineers are doing all sorts of cool stuff right now.


  • You’re right, it’s not the same die, but the advanced packaging techniques that they keep improving (like the vertical stacking you mention) make for a much tighter set of specs for the raw flash storage silicon compared to what they might be putting in USB drives or NVMe sticks, in power consumption/temperature management, bus speeds/latency, form factor, etc.

    So it’d be more accurate to describe it as a system in package (SiP) rather than a system on a chip (SoC). Either way, that carries certain requirements that aren’t present for a standalone storage package separately soldered onto the PCB, or even storage behind some kind of non-soldered, swappable interface.