I see people with a small 8 GB, 4-core system trying to split it into multiple VMs with something like Proxmox. I don't think that's the best way to utilise the resources.

Most services sit idle most of the time, so when something does need to run, it should be able to use the full power of the machine.

For a smaller homelab, my preference is to run Docker and Compose directly on the hardware and manage everything at that level.
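
As a minimal sketch of what I mean (the services and images here are just illustrative examples, not a recommendation), one Compose file on the bare-metal host can run several services side by side, each only taking RAM and CPU when it actually needs them:

```yaml
# docker-compose.yml -- illustrative sketch; swap in whatever services you actually run
services:
  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - "53:53/udp"   # DNS
      - "8053:80"     # web UI on an alternate host port

  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"
    volumes:
      - ./media:/media
```

A single `docker compose up -d` brings everything up, and since nothing is pre-allocated the way VM RAM and vCPUs are, an idle service costs almost nothing while a busy one can use the whole machine.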

Only split off a VM for something critical, and even then decide whether it's really required.

Do you agree?

  • Spuxilet@alien.topB · 8 months ago

    I run about 30 stacks (about 60 containers) on my 1L mini PC with an i5-8500T and 12 GB RAM. If I were to split them into their own VMs it would be impossible; I would probably have run out of resources by the fourth VM :D. 5.8 GB of RAM is free at idle, and I also have ZRAM enabled. I work on it too: I have code-server and CloudBeaver running on it, and I never run out of memory. Although I am thinking of upgrading it to 16 GB. I know RAM is cheap, but I do not need more than 16 GB in this PC.

    This setup also does not need to be complex. Each stack is isolated in its own network, and I access them solely over a WireGuard VPN, whether I am on the LAN or connecting from the WAN. WireGuard is always on on my laptop and phone.
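
    A rough sketch of how one such stack can look (the `10.8.0.1` address is just a placeholder for the host's WireGuard interface IP, and CloudBeaver is used here only because it was mentioned above):

    ```yaml
    # One stack's compose file -- sketch only
    services:
      cloudbeaver:
        image: dbeaver/cloudbeaver:latest
        restart: unless-stopped
        ports:
          - "10.8.0.1:8978:8978"   # publish only on the WireGuard address, not 0.0.0.0
        networks:
          - cloudbeaver_net

    networks:
      cloudbeaver_net:             # each stack gets its own isolated bridge network
        driver: bridge
    ```

    Binding the published port to the WireGuard address means the service is unreachable from the LAN or WAN directly and only answers over the VPN.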