Hello friends,
I’m pretty deep into self-hosting - especially on the home automation side. I’ve got a couple of options for self-hosted AI, but I don’t think they’ll meet my long-term goals:
- Coral TPUs: I have 2x processing my Frigate data. These seem fine for that purpose, but not useful for generative AI?
- Jetson Nano: As near as I can tell, nothing supports these except DeepStack, which appears to be abandoned. Bummed these haven’t gotten broader support in the community.
I’ve got plenty of rack space, and my day job is managing thousands of machines, so I’m not afraid of a more technical setup.
The used rack-mounted NVIDIA Tesla GPU servers look interesting. What are y’all using?
Requirements:
- Rack-mounted
- Supports local LLM and GenAI
- Linux-based
- Works with Docker
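For the "local LLM + Docker" requirement, something like Ollama in a container with NVIDIA GPU passthrough is a common starting point. A minimal docker-compose sketch (assumes the NVIDIA Container Toolkit is installed on the host; image name and port are Ollama's defaults):

```yaml
# docker-compose.yaml - run Ollama with all host NVIDIA GPUs exposed
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"        # Ollama's default API port
    volumes:
      - ollama:/root/.ollama # persist downloaded model weights
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all     # or a specific count / device_ids
              capabilities: [gpu]
volumes:
  ollama:
```

With that up, `docker exec -it ollama ollama run <model>` pulls and serves a model over the API, so anything rack-mounted with a supported GPU should tick all four boxes.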
I totally agree on the Coral TPUs. Great for Frigate, but not much else. I’ve got 2x of the USB ones cranking on half a dozen 4K streams - works wonderfully.
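For anyone curious what the dual-Coral setup looks like, Frigate takes multiple USB TPUs as separate detectors - a sketch along the lines of Frigate's config format (detector names are arbitrary; the `usb:0`/`usb:1` indices assume two USB Corals enumerated in order):

```yaml
# frigate config excerpt - two USB Coral TPUs as independent detectors
detectors:
  coral1:
    type: edgetpu
    device: usb:0
  coral2:
    type: edgetpu
    device: usb:1
```

Frigate then load-balances detection across both TPUs, which is how a handful of 4K streams stays comfortable.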
And I agree that in theory these Nanos should be great for all sorts of stuff, but nothing supports them. Everything I’ve seen is a custom one-off outside of DeepStack (though CodeProject.AI purports there’s someone now working on a Nano port).
Sounding like a decent gaming GPU and a 2-3U box is the ticket here.