I placed a low bid on a government auction for 25 EliteDesk 800 G1s and unexpectedly won (ultimately paying less than $20 per computer).

In the long run I plan on selling 15 or so of them to friends and family for cheap. I'll probably run 4 with Proxmox (3 as a lab cluster and 1 as the always-on home server) and keep a few as spares and random desktops around the house where I could use one.

But while I have all 25 of them what crazy clustering software/configurations should I run? Any fun benchmarks I should know about that I could run for the lolz?

Edit to add:

Specs, based on the auction listing and looking up the computer models:

  • 4th gen i5s (probably i5-4570s or similar)
  • 8GB of DDR3 RAM
  • 256GB SSDs
  • Windows 10 Pro (no mention of licenses, so that remains to be seen)
  • Looks like 4 PCIe slots (2 x1 and 2 x16 physically, presumably half-height)

Possible projects I plan on doing:

  • Proxmox cluster
  • Baremetal Kubernetes cluster
  • Harvester HCI cluster (which has the benefit of also being a Rancher cluster)
  • Automated Windows Image creation, deployment and testing
  • Pentesting lab
  • Multi-site enterprise network setup and maintenance
  • Linpack benchmark then compare to previous TOP500 lists
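On the Linpack idea, a rough back-of-the-envelope sketch of the cluster's paper peak (assuming i5-4570-class chips: 4 cores at ~3.2 GHz with AVX, so 8 double-precision FLOPs per cycle; the clock and FLOPs-per-cycle figures are assumptions, not from the listing):

```python
# Theoretical peak (Rpeak) for the whole pallet; a real HPL run over
# gigabit Ethernet would land well below this.
NODES = 25
CORES_PER_NODE = 4
CLOCK_GHZ = 3.2            # assumed i5-4570-class base clock
FLOPS_PER_CYCLE = 8        # 256-bit AVX: 4 doubles x (1 add + 1 mul)

def peak_gflops(nodes: int = NODES) -> float:
    """Theoretical peak in GFLOPS for `nodes` machines."""
    return nodes * CORES_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE

CLUSTER_PEAK = peak_gflops()   # ~2560 GFLOPS on paper
```

If I remember right, the #500 entry on the June 2000 TOP500 measured on the order of 44 GFLOPS (Rmax), so even a small fraction of that paper peak would have charted back then.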
  • PhlubbaDubba@lemm.ee

    According to Bush Jr. and Cheney, you are now capable of building a supercomputer dangerous enough to warrant a 20+ year invasion.

    Depending on the actual condition of all those computers and your own skill in building, I'd say you could rig a pretty decent home server rack out of those for most purposes you could imagine: a personal VPN, a personal RDP box to conduct work on, a test server for experimental code, and/or a sandbox for checking potentially unsafe downloads/links for viruses.

    Shit, you could probably build your own OS that optimizes for all that computing power just for the funzies, or even use it to make money by contributing computing power to a crowdsourced computing project, where you dedicate compute time for some grad student or research institute to do all their crazy math with. Easiest way to rack up academic citations if you ever want to be a researcher!

  • seaQueue@lemmy.world

    distcc, maybe Gluster. Run a Docker Swarm setup on PVE or something.

    Machines like these are a little hard to exploit well because of the limited network bandwidth between them. Other mini PC models that have a PCIe slot are fun because you can jam high-speed networking into them along with NVMe, then do rapid failover between machines with very little impact when one goes offline.
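To put a hedged number on that bandwidth point, here's a quick sketch of how long a full resync of one node's 256GB SSD would take at line rate (ignoring protocol overhead, so real times run longer):

```python
def resync_hours(data_gb: float, link_gbps: float) -> float:
    """Hours to move data_gb gigabytes over a link_gbps link at line rate."""
    gigabits = data_gb * 8          # GB -> gigabits
    seconds = gigabits / link_gbps  # Gb / (Gb/s) = seconds
    return seconds / 3600

ONE_GBE = resync_hours(256, 1.0)    # ~0.57 h to refill a dead node's SSD
TWO_5_GBE = resync_hours(256, 2.5)  # ~0.23 h with a 2.5GbE mod
```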

    If you do want to bump your bandwidth per machine, you might be able to repurpose the WLAN M.2 slot for a 2.5GbE port, but you'll likely have to hang the module out the back through a serial-port knockout or something. Aquantia USB modules work well too; those can provide 5GbE fairly stably.

    Edit: Oh, you're talking about the larger desktop EliteDesk G1, not the USFF tiny machines. Yeah, you can jam whatever half-height cards into these you want - go wild.

    • Trainguyrom@reddthat.comOP

      From the listing photos, these actually have half-height expansion slots! So GPU options are practically nonexistent, but networking and storage are blown wide open compared to the mini PCs that are more prevalent now.

      • seaQueue@lemmy.world

        Yeah, you'll be fairly limited as far as GPU solutions go. I have a handful of half-height AMD cards kicking around that originally shipped in t740s and similar, but they're really only good for hardware transcoding or hanging extra monitors off the machine; it's difficult to find a half-height board with a useful amount of VRAM for ML/AI tasks.

  • Diabolo96@lemmy.dbzer0.com

    Run 70b Llama 3 on one and have a 100% local, GPT-4-level home assistant. Hook it up with Coqui AI's XTTSv2 for mind-bogglingly natural speech (100% local too) that can imitate anyone's voice. Now you've got yourself Jarvis from Iron Man.

    Edit: thought these were some kind of beast machines with 192GB of RAM and such. They're just regular mid-to-low-tier PCs.

    • SaintWacko@midwest.social

      I tried doing that on my home server, but running it on the CPU is super slow, and the model won't fit on the GPU. Not sure what I'm doing wrong.

      • Diabolo96@lemmy.dbzer0.com

        Sadly, I can't really help you much. I have a potato PC, and the biggest model I've run on it was Microsoft's Phi-2 using the Candle framework. I used to tinker with llama.cpp on Colab, but it seems it doesn't handle Llama 3 yet. Ollama says it does, but I've never tried it. As for the speed, it's expected for a 70b model to be really slow on the CPU. How slow is too slow? I don't really know…

        You can always try the 8b model. People say it's really great and has even replaced the 70b models they'd been using.
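For a rough sense of why 70b crawls on CPU: generation speed is roughly bounded by memory bandwidth over model size, since each token streams the whole model through RAM. A sketch, assuming dual-channel DDR3-1600 (~25 GB/s theoretical) and 4-bit quantized weights (both are assumptions):

```python
BANDWIDTH_GB_S = 25.0   # assumed dual-channel DDR3-1600 theoretical peak

def tokens_per_second(params_billion: float, bytes_per_param: float = 0.5) -> float:
    """Upper-bound tokens/s: memory bandwidth / bytes read per token."""
    model_gb = params_billion * bytes_per_param   # Q4 ~= half a byte per weight
    return BANDWIDTH_GB_S / model_gb

T_70B = tokens_per_second(70)   # well under 1 token/s - painful
T_8B = tokens_per_second(8)     # a few tokens/s - usable for chat
```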

        • SaintWacko@midwest.social

          Slow as in I waited a few minutes and finally killed it when it didn't seem like it was going anywhere. And this was with the 7b model…

          • Diabolo96@lemmy.dbzer0.com

            It shouldn't happen with an 8b model. Even on the CPU it's supposed to be decently fast. There's definitely something wrong here.

            • SaintWacko@midwest.social

              Hm… alright, I'll have to take another look at it. I kinda gave up, figuring my old server just didn't have the specs for it.

                • SaintWacko@midwest.social

                  It has an Intel Xeon E3-1225 V2, 20GB of RAM, and a Strix GTX 970 with 4GB of VRAM. I've actually tried Mistral 7b and Decapoda's LLaMA 7b, running them in Python with Hugging Face's Transformers library (from local models).

    • Trainguyrom@reddthat.comOP

      The thought did cross my mind to run Linpack and see where I'd fall on the TOP500 (or the TOP500 of 2000, for example, for a fairer comparison haha).

  • cmnybo@discuss.tchncs.de

    I certainly wouldn't want to pay the power bill from leaving a bunch of these running 24/7, but they would work fine if you wanted to learn cluster computing.

    You could always load them up with a bunch of classic games and get all your friends over for a LAN party.

  • ares35@kbin.social

    a pallet of 4th gens? i have a dozen left here from around that era that i can't get rid of without literally giving them away. they're 'tolerable' for a gui linux or win10 with an ssd, but the performance per watt just isn't there with hardware this old. i used a few of them (none in an always-on role, though), but the rest just sit in the corner, with neither home nor purpose.

    these 800 g1s are, iirc, 12vo, so upgrade or reuse potential is a bit limited. most users would want windows, and win10 does run 'ok enough' on 4th gen; just make sure they're booting from an ssd (120gb minimum). but they'll run into that arbitrarily-erected wall of obsolescence when trying to upgrade to or install win11 once win10 retires in ~18 months (you can 'rufus' a win11 installer, but there's no guarantee you'll be able to in the future). that limits demand and resale value of pretty much all pre-8th-gen hardware.

    • Trainguyrom@reddthat.comOP

      I think you're not giving 4th gen enough credit. My wife's soon-to-be-upgraded desktop is built on a 4th gen i5 platform, and it generally does the job to a decent level. I was rocking a 4790K and a GTX 970 until 2022, and my work computer in 2022 ran an even older i5-2500 (held back more by the spinning hard drive than anything; obviously not a great job, but I found something much better in 2022). My last ewaste desktop-turned-server was powered by an i5-6500 (a few percentage points better than the 4th gen equivalent), and I have a laptop for web browsing and media consumption with a 6700HQ in it.

      I've already got a few people tentatively interested, and I've honestly accepted the possibility of having to pay to recycle them later on. Should be a fun series of projects with this pallet of not-quite-ewaste.

    • Trainguyrom@reddthat.comOP

      State government, and it says they come with SSDs. They came from a school, so presumably they're from a lab or are upgraded staff PCs; both would be pretty low-sensitivity. Maybe I'll learn the final test answers for Algebra 1 at worst!

      Might be fun to do some forensic data recovery and see if anything was missed, though.
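The recovery idea boils down to file carving: scanning the raw disk for known magic bytes. A toy sketch of the technique (real tools like photorec do this properly; the signatures here are just a few common ones for illustration):

```python
# Map of magic-byte signatures to file types (a small illustrative subset).
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"%PDF-": "pdf",
}

def carve_offsets(image: bytes) -> list:
    """Return sorted (offset, filetype) pairs for every signature hit."""
    hits = []
    for magic, name in SIGNATURES.items():
        start = 0
        while (i := image.find(magic, start)) != -1:
            hits.append((i, name))
            start = i + 1
    return sorted(hits)
```

In practice you'd read the block device in chunks and carve from each hit forward, but the scan itself is this simple.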

  • foggy@lemmy.world

    Set up a CS 1.6 LAN party arena.

    Ooh, the pentesting lab sounds fun. 8 PCs for a segmented network, a few red team PCs.

  • HumanPerson@sh.itjust.works

    If I were you, I might try deploying a mini enterprise network with permissions and such. It would be fun to do it with Active Directory to practice pentesting, or with Linux to learn more about deploying Linux in enterprise environments.

    • Trainguyrom@reddthat.comOP

      This is pretty high on the to-do list. I plan on virtualizing a bunch of it, but it would be pretty easy to have one desktop hosting each subnet of client PCs and one hosting the datacenter subnet. Having several hosts to physically network means less time spent verifying that the virtual networks work as intended.

      Also, playing with different deployment tools is a goal. Having 2-3 nearly-identical systems should be really useful for creating unified Windows images for deployment testing.
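For planning the per-site subnets, Python's ipaddress module makes carving up a supernet trivial; the site names and the 10.10.0.0/16 block here are made up for illustration:

```python
import ipaddress

def plan_sites(supernet: str, sites: list, prefix: int = 24) -> dict:
    """Hand out one /prefix subnet per site, in order, from the supernet."""
    nets = ipaddress.ip_network(supernet).subnets(new_prefix=prefix)
    return {site: str(next(nets)) for site in sites}

PLAN = plan_sites("10.10.0.0/16", ["datacenter", "site-a", "site-b"])
# {'datacenter': '10.10.0.0/24', 'site-a': '10.10.1.0/24', 'site-b': '10.10.2.0/24'}
```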

      • HumanPerson@sh.itjust.works

        I don't like Windows, so I don't deploy any of this for real, but over the last two days I set up a Windows server, a few clients, and a Kali VM, and managed to get in. I found out that if you type "\\anything" into the Windows search bar, it sends that user's name and password hash out very easily via LLMNR poisoning, on every keystroke. What's worse, that's the default behavior. It is super fun to learn about all this, though.

  • notfromhere@lemmy.ml

    You could possibly run AI Horde if they have enough RAM or VRAM. You could also run Kubernetes bare-metal or inside Proxmox.