“To enable the massive 256GB/s memory bandwidth that Ryzen AI Max delivers, the LPDDR5x is soldered,” writes Framework CEO Nirav Patel in a post about today’s announcements. “We spent months working with AMD to explore ways around this but ultimately determined that it wasn’t technically feasible to land modular memory at high throughput with the 256-bit memory bus. Because the memory is non-upgradeable, we’re being deliberate in making memory pricing more reasonable than you might find with other brands.”
😒🍎
Well, more specifically: why didn’t they try to go for LPCAMM?
From what I understand, they did try, but AMD couldn’t get it to work because of signal integrity issues.
Because you’d get roughly half the memory bandwidth in a product whose performance is most likely bandwidth-limited. Signal integrity is a bitch.
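For what it’s worth, the quoted 256GB/s figure falls straight out of bus width times transfer rate. A quick sketch, assuming LPDDR5x-8000 (8000 MT/s, which is what AMD lists for this chip; treat the numbers as illustrative):

```python
# Theoretical peak DRAM bandwidth = bus width (in bytes) x transfer rate.
def peak_bandwidth_gb_s(bus_bits: int, mt_per_s: int) -> float:
    """Peak bandwidth in GB/s (decimal units): bytes per transfer x MT/s."""
    return bus_bits / 8 * mt_per_s / 1000

print(peak_bandwidth_gb_s(256, 8000))  # 256.0 -> the advertised figure
print(peak_bandwidth_gb_s(128, 8000))  # 128.0 -> a 128-bit LPCAMM-style bus halves it
```

Which is why halving the bus width to make modular memory work would gut exactly the spec this product is built around.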
Would 256GB/s be too slow for large LLMs?
It runs on the GPU.
Many LLM operations rely on fast memory, and GPUs happen to have that, even though their memory is soldered and the VBIOS is practically a black box that is tightly controlled. Nothing on a GPU is modular or repairable without soldering skills (and tools).
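To put rough numbers on the “too slow?” question above: memory-bound LLM decoding has to stream all of the model weights once per generated token, so tokens/sec is roughly bounded by bandwidth divided by model size. A back-of-envelope sketch (the model size and quantization below are illustrative assumptions, not benchmarks):

```python
# Memory-bound decoding upper bound: tokens/sec ~= bandwidth / model size,
# since every generated token reads all weights once.
def tokens_per_second(bandwidth_gb_s: float, params_b: float, bytes_per_param: float) -> float:
    model_bytes = params_b * 1e9 * bytes_per_param  # total weight bytes
    return bandwidth_gb_s * 1e9 / model_bytes

# Hypothetical 70B-parameter model at 4-bit quantization (0.5 bytes/param),
# fed by 256 GB/s:
print(round(tokens_per_second(256, 70, 0.5), 1))  # ~7.3 tokens/sec
```

So 256GB/s is usable for big models, just not fast; smaller or more aggressively quantized models scale the number up proportionally.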
To be fair, it starts with 32GB of RAM, which should be enough for most people. I know it’s a bit ironic that Framework have a non-upgradeable part, but I can’t see myself buying a 128GB machine and needing to upgrade it any time in the future.
If you really need an upgradeable machine, you wouldn’t be buying a mini-PC anyway; it seems like they’re trying to capture a different market entirely.
Yes that’s the problem.
That they want to sell cheap AI research machines for use as workstations?
That’s a poor attempt to knowingly misrepresent my statement.
No, it is a question
The answer is that they’re abandoning their principles to pursue some other market segment.
Although I guess it could be said to be like Porsche and Lamborghini selling SUVs to support the development of their sports cars…
I don’t understand how that answers my question
To be fair, you didn’t ask a question. You made a statement and ended it with a question mark, so I don’t really understand exactly what it is that you were asking.
According to the CEO in the LTT video about this thing, it was a design choice made by AMD, because otherwise they couldn’t get the RAM speed they advertise.
Which is fine, but there was no obligation for Framework to use that chip either.
In the same video it’s pointed out that this product wouldn’t exist at all without the AMD chip. It’s literally built around it.
Yeah, hugely disappointed by this tbh. They should have made a gaming-capable Steam Machine in cooperation with Valve instead :)
This is an AI chip designed primarily for running AI workloads. The fact that it can game is secondary.
Yeah exactly, it’s worthless… Even the big players already admit the AI hype is over. This is the worst possible thing for them to launch; it’s like they have no idea who their customers are.
I mean, it’s not. You can do AI workflows with this wonderful chip.
If you wanna game, go buy Nvidia.
The AI hype being over doesn’t mean no one is working on AI anymore. LLMs and other trained models are here to stay whether you like it or not.
They still could; this seems aimed at the AI/ML research space TBH