I see people with a small 8 GB, 4-core system trying to split it into multiple VMs with something like Proxmox. I don’t think that’s the best way to utilise the resources.

Most of these services sit idle most of the time, so when something actually needs to run, it should be able to use the full power of the machine.

My opinion: for a smaller homelab, run Docker and Compose directly on the hardware and manage everything at that level.
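
To be concrete, something like this is all I mean (a single compose file on the host; the service name and image are just an example):

```yaml
# docker-compose.yml: minimal sketch; add more services to the same file
services:
  uptime-kuma:                    # example service, swap in whatever you run
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "3001:3001"               # web UI
    volumes:
      - ./uptime-kuma:/app/data   # data lives next to the compose file
```

One `docker compose up -d` and every service can use the whole machine when it needs to, instead of being pinned to a fixed slice of RAM and cores.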

Only split off a VM for something critical, and even then decide whether it’s really required.

Do you agree?

  • ervwalter@alien.top · 1 year ago

    It depends on your goals, of course.

    Personally, I use Proxmox on a couple machines for a couple reasons:

    1. It’s way, way easier to back up an entire VM than a bare-metal physical machine. And because a VM is “virtual hardware”, you can restore that backup to the same machine or to brand-new hardware and it will “just work” (I’ve done both). This is especially useful when hardware dies.
    2. I want high availability. A few things I run in my homelab I personally consider “critical” to my home happiness. They aren’t really critical, but I don’t want to be without them if I can avoid it. By having multiple Proxmox hosts, I get automatic failover: if one machine dies or crashes, the VMs automatically start up on the other machine (rough CLI sketch below).
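
    Roughly, those two points look like this on the Proxmox CLI (the VM ID 100, the storage name, and the archive path are placeholders, not my actual setup):

    ```sh
    # 1. back up VM 100 as a compressed snapshot to a storage called "backups"
    vzdump 100 --mode snapshot --compress zstd --storage backups

    # ...and restore it later, on the same host or on brand new hardware
    qmrestore /mnt/pve/backups/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 100

    # 2. tell the HA manager to keep VM 100 started somewhere in the cluster
    #    so it fails over automatically if this host dies
    ha-manager add vm:100 --state started
    ```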

    Is that overkill? Yes. But I wouldn’t say it “doesn’t make sense”. It makes sense but just isn’t necessary.

    Fudge topping on ice cream isn’t necessary either, but it sure is nice.