As you all might be aware, VMware is hiking prices again. (Surprise to no one.)
Right now Hyper-V seems to be the most popular choice and Proxmox appears to be the runner-up. Hyper-V is probably the best for Windows shops, but my concern is that it will just become Azure-tied at some point. I could be wrong, but somehow I don't trust Microsoft not to screw everyone over. They already deprecated WSUS, which is a pretty popular tool for Windows environments.
Proxmox seems to be a great alternative that many people are jumping to. It is still missing some bigger features, but things like the Datacenter Manager are in the pipeline. However, I think many people (especially VMware admins) fundamentally misunderstand it.
Proxmox is not that unique and is built on FOSS. You could probably put together a Proxmox-like system without being completely over your head. It is just KVM, QEMU/libvirt, and Corosync, along with some other pieces like ZFS.
What Proxmox does provide is convenience and reliability. It takes time to build such a system, and you are responsible when things go wrong. The DIY route is a good exercise, but not something you want to run in prod unless you have the proper staff and skill set.
And that is where the problem lies. There are companies coming from a Windows, point-and-click background whose staff don't understand Linux. Proxmox is just Debian under the hood, so it is vulnerable to all the same issues. You can install updates with the GUI, but if you don't understand how Linux packaging works you may end up blowing off your own foot. The same goes for networking and filesystems. To effectively maintain a Proxmox environment you need expertise. Proxmox makes it very easy to slip into cowboy mode and break the system. It is very flexible, but you must be very wary of making changes to the hypervisor, as that's the foundation for everything else.
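To make the packaging point concrete: a classic Proxmox foot-gun is running a plain `apt upgrade`, which is not allowed to add or remove packages and can leave the Proxmox meta-packages half-applied. A rough sketch of a sane update pass on a node, using standard apt and Proxmox tooling (check the release notes for your version before upgrading anything real):

```shell
# Refresh package lists from the configured Proxmox/Debian repos
apt update

# Use full-upgrade (or the pveupgrade wrapper) rather than plain
# "apt upgrade": full-upgrade may install or remove packages, so the
# kernel and pve-manager dependencies stay consistent.
apt full-upgrade

# Sanity-check afterwards: list the installed Proxmox component versions
pveversion -v
```

None of this is exotic, but you have to know *why* `full-upgrade` matters here, which is exactly the kind of Debian knowledge the GUI doesn't teach you.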
I personally wish Proxmox would seriously consider an immutable architecture. TrueNAS already does this, and it would be nice to have a solid update system. They could ship a standalone OS image, or they could use something based on OSTree. Maybe even build in an update manager that can update each node and check its health.
Just my thoughts
If you’re running Proxmox in prod you need to treat it like prod. That means plan and test your changes, have contingency plans, schedule your changes, and be very precise. Try to keep your system as close to stock as possible; just leave it alone.
I’ve run a lot of infrastructure: VMware, Hyper-V, KVM+QEMU/libvirt, oVirt, and PVE, not to mention cloud infra and container orchestration. I did not want to like Proxmox when it showed up on my radar because they don’t use libvirt, but I tried it anyway and it has earned my respect. Their tooling and design choices are not bad, and I expect them to continue to improve.
I have two HCI stacks in prod (with PBS), with a DR stack on the way; it’s been rock solid for years.
I suppose that’s the difference between Windows admins and Linux admins. Windows admins are used to clicking their way through things with fancy GUIs and wizards :)
I’ve been using Proxmox professionally for years now, and not once did I have a problem I could not fix myself. But I’m used to solving problems myself by “digging in deep”. Of course it helps that I’ve been using Linux (mostly Debian) since 1996. There are plenty of guides around that teach how Debian works. Windows admins who make the switch just need to take the time to read them, watch the videos, do the research.
The only skills we were born with are the basic survival skills. Everything else is learned along the way. So was using Windows, and so is using Linux.
But that’s just my opinion :)
The problem isn’t necessarily the GUI. The problem is that there are a lot of admins who don’t understand what is happening under the hood. I’ve talked to people of all ages who have no understanding of how basic things like networking work. I’ve also talked to people of all ages who have a deep understanding of the system.
The biggest takeaway is that to be a good admin you need to understand the details. Don’t be afraid to dig in. Either dig in, move to management, or both.
Exactly! :)
I manage macOS, Linux AND Windows. In the end, they’re all the same. It’s software that needs management.
If you want something more appliance-like, XCP-ng is a very good option. The GUI of Xen Orchestra is also closer to vCenter in my opinion and should be easier to navigate than Proxmox.
It has a much higher barrier to entry. If they can make it easy to get it going on a makeshift network of old hardware, then I might look into it more. I want to use it for personal use before I even think of prod. Proxmox has been in homelabs for a while, which helps quite a bit to mature the product.
I’m not sure I’m parsing your fifth paragraph correctly. Are you suggesting Proxmox is DIY and unsuitable for Production? That Proxmox is suitable for Production and those who think they can roll their own hypervisor are in for a bad time? Something else?
What I am saying is that it depends. Proxmox is rock solid if you know what you are doing. It is a massive train wreck if you take the set-it-and-forget-it approach. Deploying Proxmox requires planning and an understanding of the system.
So yes it is good for production as long as you understand how it works.
OK. So we have a disagreement then. What part of Proxmox requires expertise?
You need to understand Debian and virtualization, and it would be great if you understood Linux storage.
There is a lot to learn about Proxmox specifically. It has its own features and tools that are important to understand (such as how to fix a broken cluster).
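As a concrete example of that Proxmox-specific knowledge: when a cluster loses quorum (say, nodes die and the survivors refuse to change VM state), the fix goes through Proxmox's own Corosync tooling, which no general Debian guide covers. A hedged sketch of the usual first steps; read the Proxmox cluster documentation before running these on a real cluster, since the `pvecm expected` override can cause split-brain if misused:

```shell
# Check cluster membership and whether this node is quorate
pvecm status

# On a node that is permanently alone (the other members are dead and
# not coming back), temporarily lower the expected vote count so it
# becomes quorate again. Dangerous if the cluster might heal on its
# own: two quorate halves means split-brain.
pvecm expected 1
```

Knowing that this command exists, and more importantly when it is safe, is exactly the expertise gap being discussed.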
Proxmox sort of mangles the kernel, and I find it frustrating to use from the command line. (I have also blown off my foot once or twice.) I would use Incus instead. Incus doesn’t require its own distro, so you could install it on an immutable distro.
If I were purely running VMs that didn’t need access to USB hardware I might go with XCP-ng.
Look at OpenShift if you’re looking for immutable, production-ready Linux infrastructure. Containers are quickly replacing VMs.
Containers run on top of VMs.
I have run plenty of clusters on bare metal, both OpenShift and vanilla. No VMs are needed.
That’s a pain, though. When you add new hardware there is more setup. With virtualization you can just migrate VMs to the new hardware. You can also set up templates and automation for VM creation.
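For what it’s worth, the template workflow in Proxmox boils down to a couple of `qm` commands. A hedged sketch, where the VM IDs and the name are made up for illustration:

```shell
# Convert an existing, prepared VM (ID 9000) into a template
qm template 9000

# Full-clone the template into a new VM (ID 120); linked clones are
# also possible (--full 0) if the underlying storage supports them
qm clone 9000 120 --name web01 --full 1
```

From there, tools like Terraform or Ansible can drive the same clone step for fully automated VM creation.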
Been using KVM for years. Works fine for me.
What are your thoughts on Unraid as an alternative?
Not for enterprise, but if you just need some general VMs and Docker services with good redundancy, it’s a really good product.