
  • I wonder if someone could set up some form of tunneling through much more mundane traffic, perhaps even entirely over a legitimate encrypted service through a regular browser interface (like the browser interfaces for Discord, Slack, MS Teams, FB Messenger, Zoom, or Google Chat/Meet). You could just literally chat with a bot you've set up, instruct it to do things on its end, and have it forward the results back through that service's file sending. From the outside it would all look like encrypted chat with a popular service over an ordinary HTTPS connection.
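
    A rough sketch of what the bot side could look like, assuming a generic chat-service REST API (every endpoint, token, and path here is a hypothetical stand-in for whatever service's real API you'd use, and the command set is deliberately a fixed allowlist):

    ```python
    import subprocess
    import time

    import requests

    # Hypothetical chat-service endpoints; a real bot would speak the actual
    # API of Discord/Slack/Teams/etc. over the same ordinary HTTPS connection.
    API = "https://chat.example.com/api"
    HEADERS = {"Authorization": "Bearer <bot-token>"}

    # Deliberately a fixed allowlist: chat text maps to pre-approved commands.
    ALLOWED = {"uptime": ["uptime"], "disk": ["df", "-h"]}

    def poll_messages() -> list[dict]:
        """Fetch unread messages addressed to the bot (hypothetical endpoint)."""
        resp = requests.get(f"{API}/messages", headers=HEADERS, timeout=10)
        resp.raise_for_status()
        return resp.json()

    def send_file(name: str, data: bytes) -> None:
        """Return results through the service's normal file-sending feature."""
        requests.post(f"{API}/files", headers=HEADERS,
                      files={"file": (name, data)}, timeout=10)

    while True:
        for msg in poll_messages():
            cmd = ALLOWED.get(msg.get("text", "").strip())
            if cmd:
                result = subprocess.run(cmd, capture_output=True)
                send_file("result.txt", result.stdout)
        time.sleep(5)  # to a network observer, just periodic HTTPS chatter
    ```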


  • If you’re 25 now, you were 15 during the early wild west days of smartphone adoption, while we as a society were just figuring that stuff out.

    Since that time, the major tech companies that control a big chunk of our digital identities have made pretty big moves toward recording family relationships between accounts. I’m a parent in a mixed Android/iOS family, and it’s pretty clear that Apple and Google have it figured out pretty well: child accounts linked to dates of birth that automatically change permissions and parental controls over time, based on age (including severing the parental controls when they turn 18). Some of it is obvious, like billing controls (nobody wants their teen running up hundreds of dollars in microtransactions), app controls, screen time/app time monitoring, location sharing, password resets, etc. Some of it is a convenience factor, like shared media accounts/subscriptions by household (different Apple TV+ profiles but all on the same paid subscription), etc.

    I haven’t made child accounts for my kids on Meta, but I probably will once they’re old enough to use chat (and they’ll want WhatsApp accounts). Looking over the parent/child settings on Facebook accounts, it’ll probably be pretty straightforward to create accounts for them, link a parent/child relationship, and then have another dashboard to manage as a parent. Especially if something like Oculus takes off and that becomes yet another account with paid apps or subscriptions to manage.

    There might even be network effects, where people who have child accounts are limited in the adult accounts they can interact with, and the social circle’s equilibrium naturally tends towards all child accounts (or the opposite, where everyone gets themselves an adult account).

    The fact is, many of the digital natives of Gen Alpha aren’t actually going to be as tech savvy as their parents as they dip their toes into the world of the internet, because they won’t need to figure stuff out on their own to the same degree.



  • Yeah, this advanced packaging stuff is pretty new: they figured out how to make little chiplets and still put them onto the same package, connected by new interconnect tech that finally allows high-speed, low-latency links between chiplets (without causing dealbreaker temperature issues). That’s opened up a lot of progress even as improving the circuits on the silicon itself has run into engineering challenges.

    So while TSMC is seemingly ahead of its competition at actually printing circuits on silicon with smaller and denser features, advanced packaging tech goes a long way toward letting companies mix and match different pieces of silicon with different strengths and functionality (for a more cost-effective end solution, and to make better use of the nodes that aren’t at the absolute bleeding edge).

    Engineers are doing all sorts of cool stuff right now.


  • You’re right, it’s not the same die, but the advanced packaging techniques that they keep improving (like the vertical stacking you mention) make for a much tighter set of specs for the raw flash storage silicon compared to what they might be putting in USB drives or NVMe sticks, in power consumption/temperature management, bus speeds/latency, form factor, etc.

    So it’d be more accurate to describe it as a system in package (SiP) rather than a system on a chip (SoC). Either way, that carries certain requirements that aren’t present for a standalone storage package separately soldered onto the PCB, or even storage behind some kind of non-soldered, swappable interface.


  • Packaging flash storage onto the SoC/SiP itself costs more than manufacturing the same amount of storage in an M.2 or external USB form factor, so those prices can’t be directly compared. They’re making a big chunk of profit on storage upgrades, and on cloud subscriptions, but it’s not exactly cheap to give everyone 1TB of storage at that base price.


  • OK, so most monitors sold today support DDC/CI controls for at least brightness, and some support controlling color profiles over the DDC/CI interface as well.

    If you get some kind of external ambient light sensor and plug it into a USB port, you might be able to configure a script that controls the brightness of the monitor based on ambient light, without buying a new monitor.
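
    A minimal sketch of that script, assuming a Linux machine with ddcutil installed and an ambient light sensor exposed through the IIO subsystem (the sysfs path and the lux-to-brightness mapping are assumptions you’d adjust for your hardware):

    ```python
    import subprocess
    import time

    # Assumed IIO sysfs path for the ambient light sensor; check
    # /sys/bus/iio/devices/ on your machine for the real device node.
    SENSOR = "/sys/bus/iio/devices/iio:device0/in_illuminance_raw"

    def read_lux() -> int:
        with open(SENSOR) as f:
            return int(f.read().strip())

    def set_brightness(percent: int) -> None:
        # VCP feature code 0x10 is brightness in the DDC/CI (MCCS) spec.
        subprocess.run(["ddcutil", "setvcp", "10", str(percent)], check=True)

    while True:
        lux = read_lux()
        # Crude assumed mapping: 0 lux -> 10%, 500+ lux -> 100%.
        percent = min(100, 10 + lux * 90 // 500)
        set_brightness(percent)
        time.sleep(30)  # DDC/CI writes are slow; don't hammer the bus
    ```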



  • Apple does two things that are very expensive:

    1. They use a huge physical area of silicon for their high performance chips. The “Pro” line of M chips has a die size of around 280 square mm, the “Max” line is about 500 square mm, and the “Ultra” line is possibly more than 1000 square mm. This is incredibly expensive to manufacture and package (the rough yield math sketched just after this list shows why).
    2. They pay top dollar to get the exclusive rights to TSMC’s new nodes. They lock up the first year or so of TSMC’s manufacturing capacity at any given node, at which point there is enough capacity to accommodate other designs from other TSMC clients (AMD, NVIDIA, Qualcomm, etc.). That means you can just go out and buy an Apple device made from TSMC’s latest node before AMD or Qualcomm have even announced the lines that will be using those nodes.
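
    To make the first point concrete, here’s a back-of-the-envelope sketch using the standard dies-per-wafer approximation and a simple Poisson yield model (the defect density is an assumed illustrative number, not a real TSMC figure):

    ```python
    import math

    WAFER_D = 300.0   # mm, standard wafer diameter
    DEFECTS = 0.001   # assumed defect density per mm^2 (0.1 per cm^2)

    def dies_per_wafer(area_mm2: float) -> int:
        """Standard approximation: usable wafer area minus edge losses."""
        r = WAFER_D / 2
        return int(math.pi * r**2 / area_mm2
                   - math.pi * WAFER_D / math.sqrt(2 * area_mm2))

    def yield_fraction(area_mm2: float) -> float:
        """Poisson model: probability a die has zero killer defects."""
        return math.exp(-DEFECTS * area_mm2)

    for name, area in [("Pro", 280), ("Max", 500), ("Ultra", 1000)]:
        n = dies_per_wafer(area)
        good = n * yield_fraction(area)
        print(f"{name}: {n} dies/wafer, ~{good:.0f} good "
              f"({yield_fraction(area):.0%} yield)")
    ```

    Under those assumptions, going from ~280 to ~1000 square mm doesn’t just cut dies per wafer by roughly 4x; the yield roughly halves twice over too, so good dies per wafer drop nearly tenfold.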

    Those are business decisions that others simply can’t afford to follow.




  • You say that it is sorted in the order of most significance, so for a date it is more significant whether it happened in 1024, 2024, or 9024?

    Most significant to least significant digit has a strict mathematical definition, which you don’t seem to be following, and it applies to all numbers, not just numerical representations of dates.

    And most importantly, the YYYY-MM-DD format is extensible into hh:mm:ss too, within the same schema, out to the level of precision appropriate for the context. I can identify a specific year when the month doesn’t matter, a specific month when the day doesn’t matter, a specific day when the hour doesn’t matter, and on down to minutes, seconds, and decimal portions of seconds to whatever precision I’d like.
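
    A quick illustration of that extensibility in plain Python, just rendering the same instant at different precisions within the same schema:

    ```python
    from datetime import datetime

    t = datetime(2024, 7, 5, 14, 30, 21, 500000)

    # Same schema, truncated to whatever precision the context calls for:
    print(t.strftime("%Y"))                      # 2024
    print(t.strftime("%Y-%m"))                   # 2024-07
    print(t.strftime("%Y-%m-%d"))                # 2024-07-05
    print(t.strftime("%Y-%m-%dT%H:%M"))          # 2024-07-05T14:30
    print(t.isoformat(timespec="milliseconds"))  # 2024-07-05T14:30:21.500
    ```

    Each coarser form is a prefix of the finer one, which is exactly what makes the format compose so cleanly.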


  • This isn’t exactly what you asked, but our URI/URL schema is basically a bunch of missed opportunities, and I wish it was better designed.

    OK, so it starts off with the scheme name, which makes sense: http: or ftp: or even tel:

    But then it goes into the domain name system, which suffers from the problem that the root, then the top-level domain, then the domain, then progressively smaller subdomains run right to left. www.example.com requires the system to look up the root zone to see who manages the .com TLD, then who owns example.com, then the www subdomain. Then, if a port number needs to be specified, it goes after the domain name, right next to the implied root. The rest of the URL, by convention, then goes left to right in decreasing order of significance. It’s just a weird mismatch, and it would make a ton more sense if the whole thing read left to right, including the domain name.
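
    Here’s a small sketch that makes the mismatch concrete by splitting a URL and showing the hierarchy flipping direction partway through (the “consistent” rendering at the end is purely hypothetical, not any real standard):

    ```python
    from urllib.parse import urlsplit

    parts = urlsplit("https://www.example.com:8443/docs/api/v2")

    labels = parts.hostname.split(".")
    print(labels)                 # ['www', 'example', 'com']  narrow -> broad
    print(parts.path.split("/"))  # ['', 'docs', 'api', 'v2']  broad -> narrow

    # Hypothetical consistent ordering, broad to narrow the whole way through:
    print(".".join(reversed(labels)) + f":{parts.port}{parts.path}")
    # com.example.www:8443/docs/api/v2
    ```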

    And don’t get me started on how the www subdomain itself no longer makes sense. I get that the system was designed long before HTTP and the WWW took over the internet as basically the default, but if we had known that in advance, it would’ve made sense not to push www in front of all website domains throughout the ’90s and early 2000s.


  • Your day-to-day use isn’t everyone else’s. We use dates and times for a lot more than “I wonder what day it is today.” When it comes to recording events, or planning future ones, pretty much everyone needs to include the year. And YYYY-MM-DD presents the digits exactly in order of significance, so the impact of getting a single digit wrong tracks its position in the string.

    And no matter what, the first digit of a two-digit day or two-digit month is still more significant in the mathematical sense, even if you think you’re more likely to need the day or the month. The 15th of May is only one digit off from the 5th of May, but that first digit in a DD/MM format is mathematically more significant and less likely to change from one day to the next.
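
    That significance ordering is also why YYYY-MM-DD is the one common format where a plain string sort is a correct chronological sort. A minimal demonstration:

    ```python
    # Day-first strings: lexicographic order scrambles the timeline,
    # because the least significant digits come first.
    dmy = ["10/01/2024", "02/03/2024", "28/12/2023"]
    print(sorted(dmy))  # ['02/03/2024', '10/01/2024', '28/12/2023']  (wrong)

    # The same dates in YYYY-MM-DD: string order is chronological order.
    iso = ["2024-01-10", "2024-03-02", "2023-12-28"]
    print(sorted(iso))  # ['2023-12-28', '2024-01-10', '2024-03-02']  (right)
    ```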


  • Functionally speaking, I don’t see this as a significant issue.

    JPEG quality settings can run a pretty wide gamut, and obviously wouldn’t be immediately apparent without viewing the file and analyzing the metadata. But if we’re looking at metadata, JPEG XL reports that stuff, too.

    Of course, the metadata might only report the most recent conversion, but that’s still a problem with all image formats, where conversion between GIF/PNG/JPG, or even edits to JPGs, would likely create lots of artifacts even if the last step happens to be lossless.

    You’re right that we should ensure that the metadata does accurately describe whether an image has ever been encoded in a lossy manner, though. It’s especially important for things like medical scans where every pixel matters, and needs to be trusted as coming from the sensor rather than an artifact of the encoding process, to eliminate some types of error. That’s why I’m hopeful that a full JXL based workflow for those images will preserve the details when necessary, and give fewer opportunities for that type of silent/unknown loss of data to occur.


    • Existing JPEG files (which are the vast, vast majority of images currently on the web and in people’s own libraries/catalogs) can be losslessly compressed even further with zero loss of quality. This alone means that there are benefits to adoption, if nothing else for archival and serving old stuff (see the sketch after this list).
    • JPEG XL encoding and decoding is much, much faster than pretty much any other format.
    • The format works for both lossy and lossless compression, depending on the use case and need. Photographs can be encoded in a lossy way much more efficiently than JPEG and things like screenshots can be losslessly encoded more efficiently than PNG.
    • The format anticipates being useful for both screen and print. Webp, HEIF, and AVIF are all optimized for screen resolutions, and fail at the truly high-resolution uses appropriate for print. The JPEG XL format isn’t ready to replace camera RAW files, but there’s room in the spec to accommodate that use case, too.
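
    On the first point, the reference libjxl tooling can do that recompression today. A minimal sketch wrapping the cjxl/djxl command-line tools (flag spelling per recent libjxl releases; double-check against your installed version):

    ```python
    import subprocess

    # Losslessly transcode an existing JPEG into JPEG XL. With JPEG input,
    # cjxl defaults to a reversible transcode, so no new generation loss.
    subprocess.run(["cjxl", "photo.jpg", "photo.jxl", "--lossless_jpeg=1"],
                   check=True)

    # The transcode is reversible: djxl can reconstruct the original JPEG
    # bit for bit, so converting an archive burns no bridges.
    subprocess.run(["djxl", "photo.jxl", "restored.jpg"], check=True)
    ```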

    It’s great and should be adopted everywhere, to replace every raster format from JPEG photographs to animated GIFs (or the more modern live photos format with full color depth in moving pictures) to PNGs to scanned TIFFs with zero compression/loss.



  • Semiconductor manufacturing has gotten better over time, with exponential improvement in transistor density, which translates pretty directly to performance. This observation traces back to the 1960s and is commonly known as Moore’s Law.

    Fitting more transistors into the same size space required quite a few technical advancements and paradigm shifts. But for the first few decades of Moore’s Law, every time they started to approach some kind of physical limit, they’d develop a completely new technique to get things smaller: photolithography moved from off-the-shelf chemicals purchased from photography companies like Eastman Kodak to specialized manufacturing processes, while the light used moved to shorter and shorter wavelengths, with new technology like lasers producing even more precisely etched masks.

    Most recently, the main areas of physical improvement have been in using extreme ultraviolet (aka EUV) wavelengths to get really small features, and certain three-dimensional structures that break out of the old paradigm of stacking a bunch of planar materials on each other. Each of these breakthroughs was 20 years in the making, so the R&D and the implementation details had to be hammered out with partners in a tightly orchestrated process, to see if it would even work at scale.

    Some manufacturers recognized the huge cost and the uncertainty of taking stuff from academic papers in the 2000s to actual mass production in 2025, so they abandoned the leading edge. GlobalFoundries, Micron, and a bunch of others basically decided it wasn’t worth the investment to compete on the newest nodes, and now manufacture on older ones, leaving the leading edge to Intel, Samsung, and TSMC.

    TSMC managed to get EUV working at scale before Intel did. And even though Intel beat TSMC to market with the three-dimensional structures known as “FinFETs,” over the next two generations TSMC managed to pack them in at higher density, by combining those FinFETs with lithography techniques that Intel couldn’t figure out fast enough. And every time Intel seemed to get close, a new engineering challenge would stifle them. After a few years of stagnation, they went from being consistently 3 years ahead of TSMC to seeming like they’re about 2 years behind.

    On the design side of things, AMD pioneered chiplet-based design, where different pieces of silicon could be packaged together, which allowed them to have higher yields (an error in a big slab of silicon might make the whole thing worthless) and to mix and match things in a more modular way. Intel was slow to adopt that, so AMD started taking the lead in CPU performance per watt.

    It comes down to difficult engineering challenges, traceable to decisions made over the past couple of decades. Not all of those decisions were obviously wrong at the time, but nobody could’ve predicted that TSMC and AMD would be able to leapfrog Intel on these specific engineering challenges.

    Intel has a few things on the roadmap that might allow it to leapfrog the competition again (especially if the competition runs into setbacks of its own). It is ramping up the use of EUV in its current processes, is introducing a competing three-dimensional structure it calls RibbonFET to compete with TSMC’s Gate-All-Around (both of which are supposed to replace FinFETs), and is hoping to beat TSMC to backside power delivery, which would represent a significant paradigm shift in how chips are designed.

    It’s true that in business, success begets success, but it’s also true that each new generation presents its own novel challenges, and it’s not easy to see where any given competitor might get stuck. Semiconductor manufacturing is basically wizardry, and the history of the industry shows that today’s leaders can get left behind really quickly.