To clarify, I am not asking about a dedicated machine running something like Proxmox or ESXi. My question is about VMs running on your daily-use machine under something like VirtualBox, VMware Fusion, Parallels, etc.
Multipass
Yes. My main “prod” server was a Hyper-V VM on my gaming machine. I’m in the process of migrating off of it, or maybe finding a nice way to migrate this VM.
My work-from-home workstation always has a VM or two running the test/dev environment for the tasks I’m working on at work. They are VBox instances provisioned/managed by Vagrant.
They are CentOS7 instances, each running a test database, usually a text editor, “tail -F” monitoring log output, and various daemons/services specific to my workplace’s internal infrastructure. The host system is running Slackware 15.0.
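For anyone unfamiliar with the Vagrant side of that setup, the day-to-day flow looks roughly like this (the `centos/7` box is the standard public one; the log path is just an example, not the poster's actual config):

```shell
# Bring up a CentOS 7 dev VM with Vagrant on the VirtualBox provider.
vagrant init centos/7            # writes a minimal Vagrantfile in the current dir
vagrant up --provider=virtualbox # downloads the box (first run) and boots the VM
vagrant ssh                      # shell into the guest
# ...inside the guest: start the test database, tail -F /var/log/myapp.log, etc.
vagrant halt                     # stop the VM; `vagrant destroy` removes it entirely
```

The appeal is that the whole environment is described in the Vagrantfile, so a broken instance can be destroyed and re-provisioned identically.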
Yes. Stuff.
WSL2 Ubuntu, and today I installed Linux Mint in VMware to play around a bit, and I like it.
So on my workstation / daily driver box:
- I have Docker using the WSL2 backend. I use this instance of docker to test deployments of software before I push it to my remote servers, to perform local development tasks, and to host some services that I only ever use when my PC is on (so services that require trust and don’t require 24x7 uptime).
- I have about 8 distros of linux in WSL2.
- The main distro is Ubuntu 22.04 for legacy reasons. I use this to host an nginx server on my machine (use it as a reverse proxy to my docker services running on my machine) and to run a bunch of linux apps, including GUI ones, without rebooting into my Arch install.
- I have two instances of Archlinux. One is ‘clean’ and is only used to mount my physical arch disk if I want to do something quick without rebooting into Arch, and the other one I actively tinker with.
- Other distros are just there for me to play with
- I use Hyper-V (since it is required for WSL) to orchestrate Windows virtual machines. Yes, I do use Windows VMs on a Windows host. Why? Software testing, running dodgy software in an isolated environment, running spyware... I mean Facebook, and similar.
- Prior to Hyper-V, I used VirtualBox. I switched to Hyper-V when I started using WSL; for a time, Hyper-V was incompatible with any other hypervisor on the same host, so I dropped VirtualBox. That seems to have been fixed now, and I reinstalled VirtualBox to orchestrate Oracle Cloud VMs as well.
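The "test deployments locally before pushing" part of that workflow is a pretty standard Docker loop. A rough sketch, where the image and registry names are placeholders rather than anything from the post:

```shell
# Build and smoke-test an image locally on the WSL2 Docker backend
# before it goes anywhere near a remote server.
docker build -t myapp:test .
docker run --rm -p 8080:80 myapp:test   # hit http://localhost:8080 to verify

# Once it looks good, tag and push to the remote registry.
docker tag myapp:test registry.example.com/myapp:latest
docker push registry.example.com/myapp:latest
```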
I am really curious to know what services you are running.
I don’t use Windows, so I’m unfamiliar with Hyper-V. Do you pass your physical Arch disk through to the VM? Are you able to boot Arch from the disk, or do you just use it to access the files on the disk?
Thanks for your reply.
So, to answer your last question first: I dual-boot Arch and Windows, and I can mount the physical Arch disk inside a WSL VM and then chroot into it to run or fix some things when I CBA to reboot properly. I haven’t tried booting a WSL instance off of the physical Arch disk, but I don’t imagine it would work. Firstly, WSL uses a modified Linux kernel (which won’t be accessible without tinkering with the physical install). Secondly, the physical install is obviously configured for physical ACPI and network use, which would break if I booted into it from WSL. After all, WSL is not a proper VM.
To answer the first question as to services: notes, kanban boards, network monitoring tools (connected to a VPN / management LAN), databases, more databases, even MOAR databases, database managers, web scrapers, etc.
The very first thing I used WSL for (a long time ago) was to run ffmpeg. I just could not be bothered building it for Windows myself.
Yeah, that was my thought too: booting from the physical disk usually doesn’t work. Just had to ask in case WSL had something up its sleeve to magically do this. I guess not.
Seems like you are a database guy. Are they all always running?
Thank you for your reply.
I have Ubuntu 16 VMs running old versions of our app so I can test against the new stuff.
I also run two Postgres and two Redis servers in Docker to test migrations of those apps.
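A minimal way to get an old/new pair of each for migration testing looks something like the following; the version tags, ports, and passwords are illustrative, not the poster's actual setup:

```shell
# Old/new Postgres pair on separate host ports.
docker run -d --name pg-old -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres:12
docker run -d --name pg-new -e POSTGRES_PASSWORD=dev -p 5433:5432 postgres:16

# Old/new Redis pair, same idea.
docker run -d --name redis-old -p 6379:6379 redis:6
docker run -d --name redis-new -p 6380:6379 redis:7
```

Running both versions side by side makes it easy to dump from the old instance and restore into the new one to exercise the migration path.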
I run WSL with Ubuntu for some development.
I have 1 PC and a NAS at home.
To simplify work vs. home fun, I have Debian as my main OS and a KVM guest of Debian too. The guest is headless and runs all of my media tools for sailing the seas over a VPN.
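A headless Debian guest like that can be created with `virt-install`; the name, sizes, and Debian release below are examples rather than the poster's actual config:

```shell
# Create a headless Debian guest under KVM/libvirt, installed over
# the network with a serial console instead of a graphical one.
virt-install \
  --name media-vm --memory 4096 --vcpus 2 \
  --disk size=40 \
  --location https://deb.debian.org/debian/dists/bookworm/main/installer-amd64/ \
  --os-variant debian12 \
  --graphics none --console pty,target_type=serial \
  --extra-args 'console=ttyS0,115200n8'
```

With `--graphics none` and the serial console kernel arguments, the whole install runs in the terminal, which suits a guest that will only ever be reached over SSH.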
Nope. No VMs. Don’t know why I would if I have a dedicated XCP-ng pool for that.
Not always; I only have a laptop. But I do have a Terraform setup that quickly deploys a GitLab runner on my laptop for when I need it, then I destroy it when I’m done with it. It uses the libvirt provider, a CoreOS image, and Ignition to configure and start the runner service immediately.
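That workflow reduces to a couple of CLI steps; what each one does here is inferred from the description (libvirt provider, CoreOS image, Ignition-started runner), not taken from the poster's actual config:

```shell
# One-shot GitLab runner VM on the local machine via Terraform + libvirt.
terraform init                   # fetches the libvirt provider plugin
terraform apply -auto-approve    # boots a CoreOS guest; its Ignition config
                                 # registers and starts the gitlab-runner service
# ...CI jobs run on the laptop...
terraform destroy -auto-approve  # tear the VM down again when finished
```

Because the whole runner is declared in the Terraform config, create/destroy is cheap enough to do on demand rather than keeping the VM around.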
Work machine is now a VM on my desktop, for better performance than my low-spec ultrabook. This runs great on VMware Workstation.
- Plex
- CouchPotato
- Sonarr
- SabNZBd
- PFSense
- PiHole
Then on Raspberry Pis:
- HomeAssistant
- OctoPrint
That’s cool. What OS are you running on the VM? How do you access these services, only from your workstation or across the home network? Is the machine always on so that you can access your media/PiHole?
I’m running VMware ESXi on a dedicated Fujitsu Primergy box (old hardware); it’s on 24/7 in the cupboard under the stairs.
Plex has its own app on various devices: smartphone, console, etc.
PFSense comes with the ability to set up a VPN connection, so I use that to connect to home when I want to watch stuff on the Plex server. The Sonarr/SabNZBd/CouchPotato stack is mostly a set-and-forget thing.
ESXi is handy for whenever I want to try something out without busting up any existing VMs I have setup.
I wanted to run Proxmox on my gaming desktop, but I always have some issue with network passthrough. I thought about doing this to run a lab, for example an AD domain, to prepare myself for certification exams.
I have my main/gaming rig that I use for everything unrelated to my career, and I run a Hyper-V VM on it that I strictly use for my daily job. (Work-from-home sysadmin for an MSP.)
The company I work for provided me with hardware that I can connect at home and use, but it’s much more convenient for me to get everything done utilizing all of my monitors and gaming hardware, without having two physical computers set up. The company doesn’t care, and I keep the work VM pretty isolated from everything else (well, as much as you can with a Hyper-V VM and still get all of the functionality I crave).
Have you considered a physical KVM switch? If you have, why did you decide against it?
Are you doing GPU partitioning?
There’s really no need for a KVM switch, I think having two physical computers to deal with in my case would just complicate things. Running the VM for my work life during work hours and then shutting it down once I’m off the clock is super simple already.
As for GPU partitioning: all of the clients I work with spend their day in things like Excel and Outlook, so nothing graphically demanding.
Fair enough.
I am almost 90 per cent certain that my work won’t let me get away with a VM, but heh, who knows…