Thanks in advance.

  • ffhein@lemmy.world · 2 months ago

    For LLMs it entirely depends on what size models you want to use and how fast you want them to run. Since there are diminishing returns to increasing model size, e.g. a 14B model isn’t twice as good as a 7B model, the best bang for the buck is the smallest model you think has acceptable quality. And if you think generation speeds of around 1 token/second are acceptable, you’ll probably get more value for money using partial offloading (rough sketch at the end of this comment).

    If your answer is “I don’t know what models I want to run”, then a second-hand RTX 3090 is probably your best bet. If you want to run larger models, building a rig with multiple (used) RTX 3090s is probably still the cheapest way to do it.
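    A rough sketch of what partial offloading looks like in practice, using llama-cpp-python (the model path and layer count below are placeholders, not recommendations):

        # Partial offloading: keep some layers on the GPU, the rest in system RAM.
        # Assumes llama-cpp-python was installed with GPU support.
        from llama_cpp import Llama

        llm = Llama(
            model_path="models/example-14b-q4_k_m.gguf",  # placeholder path
            n_gpu_layers=20,   # offload only as many layers as fit in VRAM; -1 = all
            n_ctx=4096,
        )

        out = llm("Q: Why buy a used 3090?\nA:", max_tokens=128)
        print(out["choices"][0]["text"])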

  • hendrik@palaver.p3x.de · 2 months ago

    Buy the cheapest graphics card with 16 or 24 GB of VRAM. In the past people bought used Nvidia 3090 cards. You can also buy a GPU from AMD; they’re cheaper, but ROCm is a bit more difficult to work with. Or if you own a MacBook or any Apple device with an M2 or M3, use that, and hopefully you paid for enough RAM in it.
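    If you’re not sure how much VRAM a card actually gives you, a quick PyTorch check works on both CUDA and ROCm builds:

        # Prints the GPU name and total VRAM as seen by PyTorch.
        # ROCm builds of PyTorch also report through the torch.cuda API.
        import torch

        if torch.cuda.is_available():
            props = torch.cuda.get_device_properties(0)
            print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB VRAM")
        else:
            print("No GPU visible to PyTorch")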

    • Fisch@discuss.tchncs.de · 2 months ago

      I use an AMD card for running image generation and LLMs on my Linux PC. It’s actually not hard to set up.
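      For reference, the setup can be as small as installing the ROCm build of PyTorch and checking that the card shows up (the ROCm version in the index URL is an assumption, so check PyTorch’s install page):

          # Install (shell): pip install torch --index-url https://download.pytorch.org/whl/rocm6.0
          # The exact ROCm version in that URL is an assumption; verify on pytorch.org.
          import torch

          print(torch.cuda.is_available())      # True on a working ROCm install
          print(torch.cuda.get_device_name(0))  # e.g. an AMD Radeon card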

  • kata1yst@sh.itjust.works · 2 months ago

    KoboldCPP or LocalAI will probably be the easiest out-of-the-box way to get both image generation and LLMs.

    I personally use vLLM and HuggingChat, mostly because of vLLM’s efficiency and speed.
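    For anyone curious, vLLM’s offline API is only a few lines (the model name here is just an example; use whatever fits your VRAM):

        # Minimal vLLM offline generation sketch.
        from vllm import LLM, SamplingParams

        llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # example model
        params = SamplingParams(temperature=0.7, max_tokens=128)

        outputs = llm.generate(["Explain GPU offloading in one paragraph."], params)
        print(outputs[0].outputs[0].text)

    For a chat frontend like HuggingChat you’d typically run vLLM’s OpenAI-compatible server instead and point the frontend at it.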

    • DarkThoughts@fedia.io · 2 months ago

      The project is probably dead, but Easy Diffusion is imo the easiest for image generation.

      KoboldCPP can be a bit weird here and there, but it was the first thing that worked for me for local text gen + GPU support.
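      Once it’s running you can also hit it from scripts; a minimal sketch, assuming the default port and the KoboldAI-style generate endpoint (check your instance’s API page if this doesn’t match):

          # POST a prompt to a locally running KoboldCPP instance.
          import requests

          resp = requests.post(
              "http://localhost:5001/api/v1/generate",
              json={"prompt": "Once upon a time", "max_length": 80},
          )
          print(resp.json()["results"][0]["text"])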