I see people with a small 8 GB, 4-core system trying to split it into multiple VMs with something like Proxmox. I don't think that's the best way to utilise the resources.
Most services sit idle most of the time, so when something is actually doing work it should be able to use the full power of the machine.
For a smaller homelab, my opinion is to run Docker and Compose directly on the hardware and manage everything there, something like the sketch below.
Only split off a VM for something critical, and even then decide whether it's really required.
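To make the idea concrete, here's a minimal Compose sketch, assuming Docker and the Compose plugin are already installed on the host. The service names, ports, and paths are just placeholders, not a recommendation; the point is that every container shares the host's full CPU and RAM when the others are idle, and you can still cap an individual noisy service instead of pre-splitting the box into VMs.

```yaml
# docker-compose.yml -- hypothetical example services, bare-metal Docker host.
# Nothing is carved up in advance: any container can burst to the full
# machine while its neighbours are idle.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    volumes:
      - ./jellyfin/config:/config   # example path, adjust to your layout
      - ./media:/media:ro
    ports:
      - "8096:8096"
    restart: unless-stopped

  pihole:
    image: pihole/pihole:latest
    environment:
      TZ: "Europe/London"           # example timezone
    ports:
      - "53:53/udp"
      - "8080:80"
    restart: unless-stopped
    mem_limit: 512m                 # optional: cap just this one service
```

Bring it all up with `docker compose up -d`; no hypervisor layer, no fixed per-VM memory allocations.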
Do you agree?
No, I don't necessarily agree. VMs are "heavier" in that they use more disk and memory, but if they're mostly idling in a small lab you probably won't notice the difference. Now, if you're running 10 services and want to put each one in its own VM on a tiny server, then yeah, maybe don't do that.
In terms of CPU it's a non-issue: VM or Docker, they'll still "share" the CPU. I can think of cases where I'd rather run Proxmox and others where I'd just go bare metal and run Docker. It depends on what I'm running and the goal.