• Todd Bonzalez@lemm.ee · 1 month ago

    Yeah, but if you’re interested in running an LLM faster than 1 token per minute, RAM won’t matter. You’ll need as much VRAM as you can get.

    • oktoberpaard@feddit.nl · 1 month ago

      Sure, but I’m just playing around with small quantized models on my laptop with integrated graphics, and the RAM was insanely cheap. I’m mostly curious what LLMs that can run on such hardware are capable of. For example, Llama 3.2 3B only needs about 3.5 GB of RAM and runs at about 10 tokens per second; while it’s in no way comparable to the LLMs I use for my day-to-day tasks, it doesn’t seem that bad. Llama 3.1 8B runs at about half that speed, which is a bit slow but still bearable. Anything bigger than that is too slow to be useful, but still interesting to try for comparison.
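
      As a rough sanity check on that 3.5 GB figure, here’s a back-of-the-envelope sketch in Python. The bits-per-weight value is my own assumption (roughly what a common Q4-style GGUF quantization works out to), not something from the model card:

      ```python
      # Back-of-the-envelope RAM estimate for a locally run quantized LLM.
      # The bits-per-weight figure is an assumption, roughly in line with
      # common Q4-style GGUF quantizations (~4.5 bits per weight).

      def weight_ram_gb(params_billion: float, bits_per_weight: float) -> float:
          """RAM taken by the model weights alone, in GB."""
          return params_billion * 1e9 * bits_per_weight / 8 / 1e9

      # Llama 3.2 3B at ~4.5 bits/weight -> roughly 1.8 GB for the weights;
      # the KV cache, context and runtime buffers come on top of that,
      # which is broadly consistent with the ~3.5 GB total mentioned above.
      print(f"3B @ ~4.5 bpw: ~{weight_ram_gb(3.2, 4.5):.1f} GB weights")
      print(f"8B @ ~4.5 bpw: ~{weight_ram_gb(8.0, 4.5):.1f} GB weights")
      ```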

      I’ve got an old desktop with a pretty decent 24 GB GPU in it, but it’s collecting dust. It’s noisy and power-hungry (an older-generation dual-socket Intel Xeon) and still incapable of running large LLMs without additional GPUs. Even if it were capable, I wouldn’t want it turned on all the time because of the noise and heat in my home office, so I haven’t even tried running anything on it yet.