• AlecSadler@sh.itjust.works
    7 months ago

    I’ll admit I don’t use Macs, so maybe they are more efficient than the Linux and Windows machines I work on…

    …but I typically use machines with 64GB and recently upgraded my personal machine to 128GB. I still swap about 50GB to my SSD from time to time.
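    If you want to check a swap figure like that on Linux, /proc/meminfo has it. A small sketch that computes swap-in-use from a sample excerpt (the numbers below are illustrative, not from a real machine):

```shell
# Compute swap in use from /proc/meminfo-style output.
# On a live Linux box you would read /proc/meminfo directly;
# the here-doc below is an illustrative sample.
swap_used_gb=$(awk '
  /SwapTotal/ { total = $2 }
  /SwapFree/  { free  = $2 }
  END { printf "%.1f", (total - free) / 1048576 }   # kB -> GiB
' <<'EOF'
SwapTotal:      134217728 kB
SwapFree:        81788928 kB
EOF
)
echo "swap in use: ${swap_used_gb} GB"
```

On a real system, feed `/proc/meminfo` to the awk script instead of the here-doc.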

    And I’m not doing heavy graphic design or movie editing stuff.

    I cannot fathom for the life of me how 8GB would ever be feasible.

    • MacN'Cheezus@lemmy.today
      7 months ago

      I have a five year old MBP here with 16 gigs of RAM and it runs the latest version of macOS. I can run multiple web browsers with dozens of open tabs, VS Code, an LLM, and a video editing app on it, all simultaneously, without breaking a sweat.

      IDK what Apple’s secret sauce is but their shit just works better than everyone else’s, that’s a fact.

    • Ghostalmedia@lemmy.world
      7 months ago

      I get the sense that a lot of people here don’t use macOS.

      I have a few ARM and Intel Macs in 8 and 16gig configs, and I do a lot of heavy multimedia work. My 8 gig M1 only really gets into trouble when my partner and I both have an account with files open in bloated creative software. One pro user, and it’s usually fine. 2 active accounts with shitty creative software running, and you get a few beach balls.

      • AlecSadler@sh.itjust.works
        7 months ago

        Interesting to know for sure! I guess I can’t speak to what they’re doing for optimizations first hand, but at the same time…my 128GB cost me like $300 on sale so, I dunno, a wash? Haha.

        I’ve tried to become a Mac convert a few times, mostly peer pressure, but I just haven’t been able to do it successfully yet.

        • Ghostalmedia@lemmy.world
          7 months ago

          Yeah, if I’m building a PC, I’ll throw in as much RAM as I can get.

          That said, with 16gigs I’m usually not thinking about RAM at all. I’d probably only want to go higher than that if I was living in Adobe Lightroom 24/7.

      • summerof69@lemm.ee
        7 months ago

        I get the sense that a lot of people here don’t use macOS.

        I wish that was true.

    • lolcatnip@reddthat.com
      7 months ago

      Dude, that’s how much RAM I used to have on a super high-end dev box at work with 56 cores. It was very helpful for compiling Chrome. WTF are you doing with a personal machine that needs that much RAM?

      • AlecSadler@sh.itjust.works
        7 months ago

        I mean it’s my personal machine but I am a software engineer consultant/contractor so I use it for work, too.

        • lolcatnip@reddthat.com
          7 months ago

          Ok fair enough. It’s just surprising to see someone say that. The standard-issue dev machine where I work is a laptop with 32 GB.

    • emptiestplace@lemmy.ml
      7 months ago

      Do you understand kernel memory management fundamentals? I’m asking because what you wrote here strongly suggests otherwise - so, unless you’re able to show me I’m wrong, I’m going to stick with my conclusion that this is all incorrect and likely complete bullshit.

  • horse@lemmy.world
    7 months ago

    There is exactly one reason why they do this: So they can charge you $200 to upgrade it to 16GB and in doing so make the listed price of the device look $200 cheaper than it actually is. Or sometimes $400 if it’s a model where the base model comes with a 256GB SSD (the upgrade to 512GB, the minimum I’d ever recommend, is also $200).

    The prices Apple charges for storage and RAM are plain offensive. And I say that as someone who enjoys using their stuff.

  • Veraxus@lemmy.world
    7 months ago

    My basic web dev Docker suite uses about 13GB just on its own, which - assuming you were on 16GB (double Apple’s minimum) - wouldn’t leave much for things like browser tabs, which also eat memory for breakfast.

    A fast swap is not an argument to short-change on RAM, especially since SSDs have a shorter lifespan than RAM modules. 16GB remains the absolute bare minimum for modern computing, and Apple is making weak, ridiculous excuses to pocket just a few extra bucks per MacBook.
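    For anyone curious where a figure like 13GB comes from, summing the memory column of `docker stats` is the usual check. A sketch with invented per-container numbers (the container names and sizes are illustrative, not real measurements):

```shell
# Sum per-container memory, as you would from:
#   docker stats --no-stream --format '{{.Name}} {{.MemUsage}}'
# The values below (GiB) are invented for illustration.
total=$(awk '{ sum += $2 } END { printf "%.1f", sum }' <<'EOF'
postgres       2.1
mysql          1.8
elasticsearch  4.0
redis          0.4
nginx          0.2
php-fpm        2.5
node           2.0
EOF
)
echo "total container memory: ${total} GiB"
```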

    • hector@sh.itjust.works
      7 months ago

      Wow! 13GB! I did some heavy stuff on my computer with like a shit ton of Docker servers running together + deployment and I never reached 13GB!

      Without disclosing private company information lol what are you doing ;)

      • ben_dover@lemmy.ml
        7 months ago

        not OP, but I have to run the frontend and backend of a project in docker simultaneously (multiple postgres and redis dbs, queues, search index, etc., plus two webservers), plus a few browser tabs and two VSCode instances open. That regularly pushes my machine over 15GB of RAM usage

        pretty much like this

      • Veraxus@lemmy.world
        7 months ago

        Running a suite of services in containers (DBs, DNS, reverse proxy, memcached, redis, elasticsearch, shared services, etc) plus a number of discrete applications that use all those things. My day-to-day usage hovers around 20GB with spikes to 32 (my max allocation) when I run parallelized test suites.

        Docker’s memory usage really adds up fast.

    • accideath@lemmy.world
      7 months ago

      Playing devil’s advocate here: As someone who deals with stuff like that, you also wouldn’t buy the base model Mac. The average computer user can get by with 8GB just fine, and it’s not like you can’t configure Macs with more than that.

      That of course doesn’t justify the abhorrent price of the upgrades…

        • accideath@lemmy.world
          7 months ago

          Maybe you’re not an average user then. Most people just browse the web and maybe manage some photos or fill out a document once in a while. You could do that on 4GB if you wanted to, let alone 8.

      • PraiseTheSoup@lemm.ee
        7 months ago

        The average computer user can get by with 8GB just fine

        Hard disagree. The average computer user is idling at 5GB already because the average computer user is stupid.

        • accideath@lemmy.world
          7 months ago

          Still leaves 3GB for the web browser, and the average user isn’t using anything else anyway. And even on Chrome that’s quite a few pages.

  • BilboBargains@lemmy.world
    7 months ago

    As engineers, we should never insert proprietary interfaces into our designs. We shouldn’t obfuscate the design.

    The motivation for these toxic practices comes from the business side because it’s profitable. These people won’t share the profits with you because they are psychopaths. Ultimately we are making more waste when electronics cannot be upgraded, maintained and repaired. It’s bad for people and it’s bad for the environment.

  • NostraDavid@programming.dev
    7 months ago

    I haven’t used 8GB since… 2008 or so? TBF, I’m a power user (as are most people on any Lemmy instance, I presume), but still…

    And sure, macOS presumably uses less RAM than Windows, but all the applications don’t.
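    The per-application figures are easy to check. On Linux, `ps -eo comm,rss --sort=-rss | head` lists the heaviest processes; a sketch that converts sample RSS values (kB, illustrative numbers, not real measurements) to GiB:

```shell
# Convert RSS (kB) to GiB for a few sample heavyweight processes.
# Live version: ps -eo comm,rss --sort=-rss | head
report=$(awk '{ printf "%-8s %4.1f GiB\n", $1, $2 / 1048576 }' <<'EOF'
chrome  6291456
code    3145728
slack   1572864
EOF
)
echo "$report"
```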

    • Ghostalmedia@lemmy.world
      7 months ago

      Have you used an 8 gig ARM Mac?

      I’m pretty brutal on my machines, and my 8 gig M1 really only starts to beach ball when multiple accounts are open and those accounts all have bloated multimedia software running.

      My 16 gig machines can handle that use case fine, but the 8 gig machine will occasionally beach ball.

      Personally, I won’t buy an 8 gig config again. But I’m a fucking monster that leaves a million bloated things open across multiple active user sessions.

      • iopq@lemmy.world
        7 months ago

        There are people who never touch anything but the browser and email. For them the SSD keeping some page files is good enough
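        macOS reports exactly how much it is leaning on the SSD via `sysctl vm.swapusage`. Parsing a sample line (the values are illustrative):

```shell
# Pull the "used" field out of macOS swap reporting.
# Live version (macOS only): sysctl vm.swapusage
line='vm.swapusage: total = 4096.00M  used = 1536.50M  free = 2559.50M'
used=$(echo "$line" | awk '{ print $7 }')
echo "swap used: $used"
```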

  • KillingTimeItself@lemmy.dbzer0.com
    7 months ago

    what a weird title bro, of course they argue in favor of it, they sell the fucking hardware that they created. Be a little weird if they just argued against it after spending billions designing and manufacturing it.

    Regardless, i still can’t believe apple thought 8GB minimum was ok, genuinely baffling to me.

  • Blackmist@feddit.uk
    7 months ago

    8GB RAM is what my phone has.

    Having that in a laptop shows what they think of people buying their kit. They think you’re only buying it so you can type easier on Facebook.

      • Blackmist@feddit.uk
        7 months ago

        Yeah, but if you have plenty of RAM on Android, there’s a chance those apps you left in the background will still be running when you go back to them, rather than doing the usual Android thing of just restarting them.

        • KillingTimeItself@lemmy.dbzer0.com
          7 months ago

          nothing that requires 8GB of ram lol.

          I’ve played the entirety of Java Minecraft on an old ThinkPad with 4GB of RAM. It didn’t crash (i don’t use swap)

          There literally shouldn’t be anything capable of using that much memory.
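          For what it’s worth, the Minecraft case fits in a small machine because the JVM heap can be capped (e.g. java -Xmx3G -jar minecraft.jar). A rough budget for a 4GB laptop, with illustrative round figures:

```shell
# Rough memory budget for Java Minecraft on a 4GB laptop.
# All numbers are illustrative round figures, in MiB.
total=4096          # physical RAM
os=700              # OS + desktop environment
jvm_overhead=300    # JVM outside the heap (metaspace, stacks, etc.)
heap=$((total - os - jvm_overhead))
echo "max heap to pass as -Xmx: ${heap} MiB"
```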

          • greedytacothief@lemmy.world
            7 months ago

            Is this bait? Because like, you could be rendering, simulating, running virtual machines. Lots of stuff that isn’t a web browser also eats RAM

              • greedytacothief@lemmy.world
                7 months ago

                I was trying to mention things that weren’t just web browsers. Since it seemed the comment was about programs that use more ram than they seemingly need to.

                Edit: There’s like photogrammetry and stuff that happens on phones now!

                • KillingTimeItself@lemmy.dbzer0.com
                  7 months ago

                  i suppose photo editing would be one? Maybe? I’m not sure how advanced photo editing would be on mobile, it’s not like you’re going to load up the entirety of GIMP or something.

                  As for photogrammetry, i’m not sure that would consume very much ram. It could, but i honestly don’t think it would be that significant.

                • AdrianTheFrog@lemmy.world
                  7 months ago

                  There’s like photogrammetry and stuff that happens on phones now!

                  No, the photogrammetry apps all use cloud processing. The LIDAR ones don’t, but that’s only for Apple phones and the actual mesh quality is pretty bad.

                • woelkchen@lemmy.world
                  7 months ago

                  it’s not like most people are chronically browsing the web on their phones.

                  Yes, they do.

              • dustyData@lemmy.world
                7 months ago

                My man, have you been to selfhosted? People are using smartphones for all kinds of crazy stuff. They are basically mini ARM computers. Particularly the flagships: they can do many things like editing video and rendering digital drawings, and after they end their use life they can host AdGuard, torrent to a NAS, host Nextcloud. You name it.

                • pythonoob@programming.dev
                  7 months ago

                  Something like the Samsung DeX app that basically turns your phone into a mini computer with kbm and a monitor wouldn’t be too bad tbh for most people. Take all your shit with you in your pocket and dock it at home or at work or whatever.

                • KillingTimeItself@lemmy.dbzer0.com
                  7 months ago

                  yeah, i literally selfhost a server, running like 8 different services. I’m quite acclimated to it by now. Using a phone for this kind of thing is the wrong device. A chromebook is going to be a better alternative. You can probably get those cheaper anyway.

                  A big problem with phones is that they just aren’t really designed for that kind of thing, you leave a phone plugged in constantly and it’s going to spicy pillow itself. Let alone even trying to do that on something that isn’t an android. I cannot imagine the hell that self hosting on an android would be, let alone on an iphone.

                  I could see a usecase for it as a network relay in the event that you need a hyper portable node or something. GLHF with the dongling if you need those.

                  Unfortunately, if you already have a server, it’s going to be better to just spin up a new task on that server, as the cost of running a new device is going to outweigh the cost of just using an existing one that’s already running. Also, you can get stuff like a Raspberry Pi or Le Potato pretty cheap; not very powerful, but probably more utility, especially given the IO.

                • AdrianTheFrog@lemmy.world
                  7 months ago

                  It sounds a lot more cost effective to get a used mini-pc than a flagship phone for any sort of server stuff.

            • AdrianTheFrog@lemmy.world
              7 months ago

              you could be rendering, simulating, running virtual machines

              On a phone? I guess you could, although 4gb is probably enough for any video game that any amount of people use.

              • woelkchen@lemmy.world
                7 months ago

                People use phone apps for photo and video editing these days. The common TikTok kid out there doesn’t use Adobe Premiere on a desktop workstation.

                Phone apps often are desktop applications with a specialized GUI these days.

                • KillingTimeItself@lemmy.dbzer0.com
                  7 months ago

                  i mean yeah, but even then those aren’t significant filters, and what makes you think tiktok isn’t running a render farm somewhere in China to collect shit tons of data? They’re already collecting the data, might as well provide a rendering service to make the UI nicer, but i don’t use tiktok so don’t quote me on it.

                  Those are also all built into tiktok, and i’m pretty sure tiktok doesn’t require 8GB of ram to open.

          • IthronMorn@sh.itjust.works
            7 months ago

            What about running a chrooted nix install and using a vnc to connect to it? While web browsing and playing a background video? Just because you don’t use your ram doesn’t mean others don’t. And no, I don’t use all my ram, but a little overhead is nice.

            • KillingTimeItself@lemmy.dbzer0.com
              7 months ago

              on a phone? I mean i suppose you could do that, but VNC is not a very slick remote access tool for anything other than, well, remote access. The latency and speed over WIFI would be a significant problem, i suppose you could stream from your phone to your TV, but again, most TVs that exist today are smart TVs so literally a non issue.

              my example here was using a computer rather than a phone, to show that even desktop computing tasks, don’t really use all that much ram.

              • IthronMorn@sh.itjust.works
                7 months ago

                Well, then by that logic, since desktop computing tasks don’t really use all that much RAM, we shouldn’t need more than 8GB in a desktop ever. Yes, my example was a tad extreme, vnc-ing into your own VM on your phone, but my point was rather that phones are becoming capable of replacing traditional computers more and more. A more realistic example: when I was using Samsung DeX the other day, I had 80ish Chrome tabs open, a video chat, and a terminal ssh’d into my computer fixing it. I liked the overhead of RAM I had above me. Was I even close to 12GB? No. But it gave me room if I wanted another background program or had to spin something up quickly without disrupting my flow or lagging out/crashing.

  • GlobalMind@lemm.ee
    7 months ago

    I also cannot figure out why so many companies are selling them with only a 500GB drive, SSD or HDD.

  • Dr. Moose@lemmy.world
    7 months ago

    Apple has been really stretching their takes lately. Nice to see some fire under their ass though it’s not going to matter. Too many ignorant people falling for likeable propaganda.

  • sugar_in_your_tea@sh.itjust.works
    7 months ago

    Well yeah, they’re enough to meet the minimum use cases so they can upsell most people on expensive RAM upgrades.

    That’s why I don’t buy laptops with soldered RAM. That’s getting harder and harder these days, but my needs for a laptop have also gone down. If they solder RAM, there’s nothing you can (realistically) do if you need more, so you’ll pay extra when buying so they can upcharge a lot. If it’s not soldered, you have a decent option to buy RAM afterward, so there’s less value in upselling too much.

    So screw you Apple, I’m not buying your products until they’re more repair friendly.

    • akilou@sh.itjust.works
      7 months ago

      I had an extra stick of RAM available the other day so I went to open my wife’s Lenovo to see if it’d take it and the damn thing is screwed shut with the smallest torx screws I’ve ever seen, smaller than what I have. I was so annoyed

      • tal@lemmy.today
        7 months ago

        smallest torx screws I’ve ever seen

        Torx is legitimately useful for small screws, because it’s more resistant to stripping than Phillips.

        Now, if they start using Torx security bits or some oddball shapes, then they’re just being obnoxious. But there are not-trying-to-obstruct-the-customer reasons not to use Phillips.

          • seth@lemmy.world
            7 months ago

            Does it have triangle bits? Nintendo uses some really unusual driver shapes.

            • generichate1546@lemmynsfw.com
              7 months ago

              I’ve taken apart so so so many things… sometimes for the right reasons and sometimes for the wrong reasons…my ZuneHD still works. I’ll never ever try to open a Surface product.

      • sugar_in_your_tea@sh.itjust.works
        7 months ago

        I bought the E495 because the T495 had soldered RAM and one RAM slot, while the E495 had both RAM slots replaceable. Adding more RAM didn’t need any special tools. Newer E-series and T-series both have one RAM slot and some soldered RAM. I’m guessing you’re talking about one of the consumer lines, like the Yoga series or something?

        That said, Lenovo (well, Motorola in this case, but Lenovo owns Motorola) puts all kinds of restrictions to your rights if you unlock the bootloader of their phones (PDF version of the agreement). That, plus going down the path of soldering RAM gives me serious concerns about the direction they’re heading, so I can’t really recommend their products anymore.

        If I ever need a new laptop, I’ll probably get a Framework.
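        Side note for anyone shopping: on Linux you can check whether a machine’s RAM is socketed or soldered before committing to an upgrade. `dmidecode` reports soldered modules with the form factor “Row Of Chips”; the sample output below is illustrative:

```shell
# Count socketed (SODIMM) modules from dmidecode-style output.
# Live version: sudo dmidecode -t memory | grep -E 'Size|Form Factor'
sample='Size: 8 GB
Form Factor: Row Of Chips
Size: 16 GB
Form Factor: SODIMM'
sockets=$(echo "$sample" | grep -c 'SODIMM')
echo "socketed modules: $sockets"
```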

        • Capricorn_Geriatric@lemmy.world
          7 months ago

          puts all kinds of restrictions to your rights

          The document mentions a lot of US laws. I wonder if they try the same over in the EU.

          • sugar_in_your_tea@sh.itjust.works
            7 months ago

            I’m guessing it wouldn’t hold. But I’m in the US, so I’ll just avoid their phones going forward, and will probably avoid their laptops and whatnot as well just due to a lack of trust.

        • tal@lemmy.today
          7 months ago

          I keep looking at the Frameworks, because I’m happy with the philosophy, but the problem is that the parts that they went to a lot of trouble to make user-replaceable are the parts that I don’t really care about.

          They let you stick a fancy video card on the thing. I’d rather have battery life – I play games on a desktop. If they’d stick a battery there, that might be interesting.

          They let you choose the keyboard. I’m pretty happy with current laptop keyboards, don’t really need a numpad, and even if you want one, it’s available. I’ve got no use for the LED inserts that you can stick on the thing if you don’t want keyboard there.

          They let you choose among sound ports, Ethernet, HDMI, DisplayPort, and various types of USB. Maybe I could see putting in more USB-C than some other vendors have. But the stuff I really want is:

          • A 100Wh battery. Either built-in, or give me a bay where I can put more internal battery.

          • A touchpad with three mechanical buttons, like the Synaptics ones that the Thinkpads have.

          The fact that they aren’t soldering in the RAM and NVMe is nice in that they’re committing to not charging much more than market rate, so I guess they should get credit for that, but they are certainly not the only vendor to avoid soldering those.

          • sugar_in_your_tea@sh.itjust.works
            7 months ago

            Yeah, ThinkPad used to allow either a CD drive or an extra battery in their T-series. They stopped offering the extra battery and started soldering RAM, so I got the cheaper E-series (might as well save cash if I can get what I want).

            I think there’s a market there. Have an option for a hot-swap battery to bring on trips and use the GPU at home. Serious travelers could even bring a spare battery to keep working for longer.

            touchpad with three mechanical buttons

            Yes please! And give me the ThinkPad nipple as well. :) If they had those, I’d not bother with even looking at Lenovo. The middle button is so essential to my normal workflow that any other laptop (including my fancy MacBook for work) feels crappy.

            I’m guessing the things they made modular are just the low hanging fruit. It’s pretty easy to make a USB-C to whatever port, it’s a bit harder to make a pluggable battery in a slot that can also support a GPU.

            • tal@lemmy.today
              7 months ago

              I don’t know if I’d recommend it, but if you are absolutely set on having the ThinkPad nipple – I don’t use it, even if I really want the ThinkPad trackpad – the factory that made the original IBM Model M keyboards is still in business somewhere in Kentucky. IIRC the employees bought it or something when IBM stopped making the things. They offer a nipple keyboard that goes by the name of “Endura Pro”. checks Unicomp. That’s the US remnant of the IBM keyboard business; the Chinese Lenovo purchased the laptop side and also does the TrackPoint.

              I got one like twenty years back, and while the actual buckling-spring keyswitches on the keyboard are pretty much immune to time, I wore out the switches on the mouse buttons, so I don’t know if I can give a buy recommendation for the mouse-enabled version (though maybe they improved the switches there). But if you really, really like it, that might be worthwhile for you. Last I looked they were still making them.

              checks

              They’ve got a message up saying that a supplier of a component used in that keyboard went under due to COVID so they suspended production. I don’t know what the status is on that.

              https://www.pckeyboard.com/mm5/merchant.mvc?Screen=CTGY&Category_Code=EnduraPro

              NOTICE CONCERNING AVAILABILITY – Unfortunately, we have had to temporarily suspend the sale of the Endura Pro keyboards due to another supply chain shortage. The supplier of one of the flex harnesses had to close their doors during the pandemic. We’ve begun the task of sourcing a new supplier but do not have a definite time frame for when these keyboards will be available again. For our customers with orders already placed, we have enough stock to complete all on order.

              Keep in mind that this is a very large, heavy keyboard that you could brain someone with; if you’re going to haul it around with a laptop, it’s going to be larger and heavier than the laptop. Mentioning it mostly since I figure that you might use it at some location where you could leave the keyboard.

              • sugar_in_your_tea@sh.itjust.works
                7 months ago

                The thing is, I only like the Trackpoint in a laptop. It’s really nice to scroll while holding the middle mouse button and just shifting my finger. That way, my hand is ready to type, unlike using the trackpad, where I have to move my hands to type, and it works well in my largely keyboard-driven workflow (ViM for text editing, Trackpoint for web browsing).

                On a desktop, I have multiple screens and way more real estate, so the Trackpoint isn’t nearly as effective and it’s worth using the mouse instead.

                But I honestly don’t use my laptop all that often, so it’s something I’m fine doing without. But all other things being similar, I’ll prefer the Trackpoint since it’s a nice value add.

                It’s cool that they’re making those keyboards though. I have nice mechanical keyboards, so I’m not looking for one, but I would be very interested in a Framework-compatible keyboard with a Trackpoint.

    • BorgDrone@lemmy.one
      7 months ago

      That’s why I don’t buy laptops with soldered RAM.

      In my opinion the disadvantages of user-replaceable RAM far outweigh the advantages. The same goes for discrete GPUs. Apple moved away from this and I expect PC manufacturers to follow Apple’s move in the next decade or so, as they always do.

        • BorgDrone@lemmy.one
          7 months ago

          User-replaceable RAM is slow, which means you can’t integrate the CPU and GPU in one package. This means a GPU with its own RAM, which has huge disadvantages.

          Even a 4090 only has 24GB and slow transfers to/from VRAM. The GPU can only operate on data in VRAM, so anything you need it to work on you need to copy over the relatively slow PCIe bus to the GPU. Then once it’s done you need to copy the results back over the PCIe bus to system RAM for the CPU to be able to access it. This considerably slows down GPGPU tasks.
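          The copy cost is easy to estimate on the back of an envelope. Taking round numbers (PCIe 4.0 x16 tops out around 32 GB/s), moving a 24GB working set to VRAM and back:

```shell
# Round-trip time to shuttle a 24 GB working set over PCIe.
# 32 GB/s is the rough peak for PCIe 4.0 x16; real transfers are slower.
secs=$(awk 'BEGIN { printf "%.2f", 2 * 24 / 32 }')
echo "round trip: ${secs} s"
```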

      • sugar_in_your_tea@sh.itjust.works
        7 months ago

        Here’s how I see the advantages of soldered RAM:

        • better performance
        • less risk of physical damage
        • more energy efficient
        • smaller

        The risk of physical damage is so incredibly low already, and the energy use of RAM is also incredibly low, so neither of those seems important.

        So that leaves performance, which I honestly haven’t found good numbers for. If you have this, I’m very interested, but since RAM speed is rarely the bottleneck in a computer (unless you have specific workloads), I’m going to assume it to be a marginal improvement.

        So really, I guess “smaller” is the best argument, and I honestly don’t care about another half centimeter of space, it’s really not an issue.

        • BorgDrone@lemmy.one
          link
          fedilink
          English
          arrow-up
          0
          ·
          7 months ago

          So that leaves performance, which I honestly haven’t found good numbers for. If you have this, I’m very interested, but since RAM speed is rarely the bottleneck in a computer (unless you have specific workloads), I’m going to assume it to be a marginal improvement.

          This is where you’re mistaken. There is one thing integrated RAM enables that makes a huge difference for performance: unified memory. GPU code is almost always bandwidth-limited, which is why on a graphics card the RAM is soldered on and physically close to the GPU itself; that’s needed to meet the GPU’s bandwidth requirements.

          By having everything in one package, the CPU and GPU can share the same memory, which eliminates the overhead of copying data to/from VRAM for GPGPU tasks. And there’s more: unified memory doesn’t just apply to the CPU and GPU, but also to the other accelerators that are part of the SoC. AI acceleration is becoming increasingly important, and UMA means the neural engine can access the same memory as the CPU and GPU, again with zero copy overhead.

          This is why user-replaceable RAM and discrete GPUs are going to die out. The overhead and latency of copying all that data back and forth over the relatively slow PCIe bus is just not worth it.

          • sugar_in_your_tea@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            0
            ·
            edit-2
            7 months ago

            Do you have actual numbers to back that up?

            The best I’ve found is benchmarks of Apple silicon vs Intel+dGPU, but that’s an apples to oranges comparison. And if I’m not mistaken, Apple made other changes like a larger bus to the memory chips, which again makes comparisons difficult.

            I’ve heard about potential benefits, but without something tangible, I’m going to have to assume it’s not the main driver here. If the difference is significant, we’d see more servers and workstations running soldered RAM, but AFAIK that’s just not a thing.

            • Turun@feddit.de
              link
              fedilink
              English
              arrow-up
              0
              ·
              7 months ago

              I understand the scepticism, but without links to what you’ve found, or which claims in particular you consider dubious (that RAM can run faster when soldered, that higher speeds lead to better performance, etc.), it comes across as “I don’t believe you, because I choose not to believe you.”

              LTT has made a comparison video on ram speeds: https://www.youtube.com/watch?v=b-WFetQjifc

              Do you need proof that soldered ram can be made to run faster?

              • sugar_in_your_tea@sh.itjust.works
                link
                fedilink
                English
                arrow-up
                0
                ·
                7 months ago

                Yes, and the result from that video (I assume; I skimmed it, but have watched similar videos) is that the difference is negligible (like 1-10 FPS) and you’re usually better off spending that money on something else.

                I look at the benchmarks between the Intel MacBook Pro and the M1 MacBook Pro, and both use soldered RAM, yet the M1 gets so much better performance, even on non-GPU tasks (e.g. memory-heavy unit tests at work went from 3-5min to 45-50sec from latest Intel to M1). Docker build times saw a similar drop. But it’s hard for me to know what the difference is between memory vs CPU changes. I’d have to check, but I’m guessing there’s also the DDR4 to DDR5 switch, which increases memory channels.

                The claim is that proximity to the CPU explains it, but I have trouble quantifying that. For me, a 1-10 FPS gain isn’t enough to give up repairability and expandability. Maybe it is for others, but if that’s the difference, it’s a lot less than the claims they seem to make.

                • Turun@feddit.de
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  edit-2
                  7 months ago

                  The video has a short section on productivity (i.e. rendering or compiling). That part is probably the most relevant for most people. Check the chapter view in YouTube to jump directly to it.

                  I think a 2x performance improvement is plausible when comparing non-soldered RAM to Apple silicon, which goes even further and puts the memory in the same package as the chip. If, of course, RAM is the limiting factor.

                  The advantages of upgradable, expandable RAM are obvious. But let’s face it: most people don’t need that capability, and even fewer ever use it.

            • BorgDrone@lemmy.one
              link
              fedilink
              English
              arrow-up
              0
              ·
              7 months ago

              The best I’ve found is benchmarks of Apple silicon vs Intel+dGPU, but that’s an apples to oranges comparison.

              The thing with benchmarks is that they only show you the performance of the type of workload the benchmark is trying to emulate. That’s not very useful in this case. Current PC software was not built with this kind of architecture in mind, so it was never designed to take advantage of it. In fact, it’s the exact opposite: since transferring data to/from VRAM is a huge bottleneck, software is designed to avoid it as much as possible.

              For example: a GPU is extremely good at performing an identical operation on lots of data in parallel, much faster than the CPU can. However, copying the data to VRAM and back may add so much time that the whole job still finishes sooner on the CPU, so a developer may choose to run it on the CPU even though the GPU was designed for exactly that kind of work. On a system with UMA you would absolutely run it on the GPU.
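              That trade-off is easy to sketch. All three throughput numbers below are made up for illustration (CPU 5 GB/s, GPU 100 GB/s, PCIe 25 GB/s each way):

              ```python
              # Is the GPU worth it? Only if its speedup survives the copy overhead.
              # Throughput numbers are illustrative assumptions, not measurements.

              def cpu_seconds(data_gb: float) -> float:
                  return data_gb / 5.0          # CPU crunches 5 GB/s (assumed)

              def gpu_seconds(data_gb: float, uma: bool = False) -> float:
                  compute = data_gb / 100.0     # GPU crunches 100 GB/s (assumed)
                  if uma:
                      return compute            # unified memory: no copies at all
                  copy = data_gb / 25.0         # PCIe copy at 25 GB/s (assumed)
                  return copy + compute + copy  # discrete: copy in, compute, copy out

              # For 10 GB: CPU 2.0 s, discrete GPU 0.9 s (0.8 s of that is copying),
              # UMA GPU 0.1 s. A 20x faster GPU only looks ~2x faster behind PCIe.
              ```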

              The same thing goes for something like AI accelerators. What PC software exists that takes advantage of such a thing?

              A good example of what happens if you design software around this kind of architecture can be found here. This is a post by a developer who worked on Affinity Photo. When they designed this software they anticipated that hardware would move towards a unified memory architecture and designed their software based on that assumption.

              When they finally got their hands on UMA hardware in the form of an M1 Max that laptop chip beat the crap out of a $6000 W6900X.

              We’re starting to see software taking advantage of these things on macOS, but the PC world still has some catching up to do. The hardware isn’t there yet, and the software always lags behind the hardware.

              I’ve heard about potential benefits, but without something tangible, I’m going to have to assume it’s not the main driver here. If the difference is significant, we’d see more servers and workstations running soldered RAM, but AFAIK that’s just not a thing.

              It’s coming, but Apple is ahead of the game by several years. The problem is that in the PC world no one has a good answer to this yet.

              Nvidia makes big, hot, power hungry discrete GPUs. They don’t have an x86 core and Windows on ARM is a joke at this point. I expect them to focus on the server-side with custom high-end AI processors and slowly move out of the desktop space.

              AMD is in the best position on the desktop. They have a decent x86 core and GPU, and they already make APUs. Intel is trying to get into the GPU game but has some catching up to do.

              Apple has been quietly working towards this for years. They have their UMA architecture in place, they are starting to put some serious effort into GPU performance and rumor has it that with M4 they will make some big steps in AI acceleration as well. The PC world is held back by a lot of legacy hard and software, but there will be a point where they will have to catch up or be left in the dust.

          • __dev@lemmy.world
            link
            fedilink
            English
            arrow-up
            0
            ·
            7 months ago

            “unified memory” is an Apple marketing term for what everyone’s been doing for well over a decade. Every single integrated GPU in existence shares memory between the CPU and GPU; that’s how they work. It has nothing to do with soldering the RAM.

            You’re right about the bandwidth though, current socketed RAM standards have severe bandwidth limitations which directly limit the performance of integrated GPUs. This again has little to do with being socketed though: LPCAMM supports up to 9.6GT/s, considerably faster than what ships with the latest macs.

            This is why user-replaceable RAM and discrete GPUs are going to die out. The overhead and latency of copying all that data back and forth over the relatively slow PCIe bus is just not worth it.

            The only way discrete GPUs can possibly be outcompeted is if DDR starts competing with GDDR and/or HBM in terms of bandwidth, and there’s zero indication of that ever happening. Apple needs to put a whole 128GB of LPDDR in their system to be comparable (in bandwidth) to literally 10-year-old dedicated GPUs - the 780 Ti had over 300GB/s of memory bandwidth with a measly 3GB of capacity. DDR is simply not a good choice for GPUs.
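            Those bandwidth figures fall out of simple arithmetic: bus width in bytes times transfer rate. A quick check (the 780 Ti’s 384-bit / 7 GT/s numbers are its published specs; the LPCAMM and Apple bus widths are the commonly cited ones):

            ```python
            # Memory bandwidth in GB/s = (bus width in bits / 8) * rate in GT/s.
            def bandwidth_gbs(bus_bits: int, gt_per_s: float) -> float:
                return bus_bits / 8 * gt_per_s

            gtx_780ti = bandwidth_gbs(384, 7.0)   # 336.0 GB/s from 2013-era GDDR5
            lpcamm    = bandwidth_gbs(128, 9.6)   # 153.6 GB/s for one LPCAMM module
            m2_ultra  = bandwidth_gbs(1024, 6.4)  # 819.2 GB/s: LPDDR, but a huge bus
            ```

            Which is really the point on both sides: per-pin, LPDDR can’t touch GDDR, so Apple gets there by making the bus enormously wide.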

            • BorgDrone@lemmy.one
              link
              fedilink
              English
              arrow-up
              0
              ·
              7 months ago

              “unified memory” is an Apple marketing term for what everyone’s been doing for well over a decade.

              Wrong. Unified memory (UMA) is not an Apple marketing term, it’s a description of a computer architecture that has been in use since at least the 1970s. For example, game consoles have always used UMA.

              Every single integrated GPU in existence shares memory between the CPU and GPU; that’s how they work.

              Again, wrong.

              While iGPUs have existed in PCs for a long time, they did not use a unified memory architecture. What they did was reserve a portion of system RAM for the GPU. For example, on a PC with 512MB RAM and an iGPU, 64MB might be reserved for the GPU, leaving the CPU with 512-64 = 448MB. While CPU and GPU shared the same physical memory chips, they had separate address spaces. If you wanted to make a texture available to the GPU, it still had to be copied into the GPU’s reserved region, and the CPU could not access that region directly.

              With unified memory, both CPU and GPU share the same address space. Both can access the entire memory. No RAM is reserved purely for the GPU. If you want to make something available to the GPU, nothing needs to be copied, you just need to point to where it is in RAM. Likewise, anything done by the GPU is immediately accessible by the CPU.

              Since there is one memory pool for both, you also use RAM more efficiently. If you have a discrete GPU with 16GB VRAM and your app only needs 8GB of it, the other 8GB just sits there unused. Conversely, if your app needs 24GB of VRAM, you can’t run it, because your GPU only has 16GB, even if you have lots of system RAM available.

              With UMA you can use all the RAM you have for whatever you need it for. On an M2 Ultra with 192GB RAM you can use almost all of that for the GPU (minus the little bit used by the OS and any running apps). Even on a tricked-out PC with a 4090 you can’t run anything that needs more than 24GB of VRAM. Want to run something where the GPU needs 180GB of memory? No problem on an M2 Ultra.

              It has nothing to do with soldering the RAM.

              It has everything to do with soldering the RAM. One of the reasons iGPUs sucked, apart from not using UMA, is that GPU performance is almost always limited by memory bandwidth. Compared to VRAM, standard system RAM has much, much less bandwidth, which made iGPUs slow.

              A high-bandwidth memory bus, like a GPU needs, has a lot of connections and runs at high speeds. The only way to do this reliably is to physically place the RAM very close to the actual GPU. Why do you think GPUs do not have user-upgradable RAM?

              Soldering the RAM is what makes it possible to integrate a CPU and a non-sucking GPU. Go look at the inside of a PS5 or XSX and you’ll see the same thing: an APU with the RAM chips soldered to the board very close to it.

              This again has little to do with being socketed though: LPCAMM supports up to 9.6GT/s, considerably faster than what ships with the latest macs.

              LPCAMM is a very recent innovation. Engineering samples weren’t available until late last year and the first products will only hit the market later this year. Maybe this will allow for Macs with user-upgradable RAM in the future.

              The only way discrete GPUs can possibly be outcompeted is if DDR starts competing with GDDR and/or HBM in terms of bandwidth

              What use is high bandwidth memory if it’s a discrete memory pool with only a super slow PCIe bus to access it?

              Discrete VRAM is only really useful for gaming, where you can upload all the assets to VRAM in advance and data practically only flows from CPU to GPU and very little in the opposite direction. Games don’t matter to the majority of users. GPGPU is much more interesting to the general public.