Hello! 😀
I want to share my thoughts on Docker and maybe discuss it with you!
A few months ago I started my homelab, and like any good “homelabbing guy” I absolutely loved using Docker: simple to deploy and everything. Sadly, these days my mind is changing… I recently switched to LXC containers to make backups easier, and the experience is pretty great. The only downside is that not every piece of software is available natively outside of Docker 🙃
But I also switched to have more control, as with Docker it can be difficult to set up things the devs didn’t really plan for.
So here are my thoughts: slowly, I’m going to leave Docker for a more old-school way of hosting services. Don’t get me wrong, Docker is awesome in some use cases; the main ones are that it’s really portable and simple to deploy, with no hundreds of dependencies, etc. And through this I think I’ve really figured out where Docker is useful: not for every single homelab setup, and mine isn’t one of them.
Maybe I’m doing something wrong, but I’ll let you discuss it in the comments, thx.
It’s hard for me to tell if I’m just set in my ways, but I feel exactly the same.
I think Docker started as “we’re doing things at massive scale, and we need to have a way to spin up new installations automatically and reliably.” That was good.
It’s now become “if I automate the installation of my software, it doesn’t matter that the whole thing is a teetering mess of dependencies and scripted hacks, because it’ll all be hidden inside the container, and also people with no real understanding can just push the button and deploy it.”
I forced myself to learn how to use Docker for installing a few things, found it incredibly hard to do anything of consequence to the software inside the container, and for my use case it added extra complexity for no reason, and I mostly abandoned it.
I hate how docker made it so that a lot of projects only have docker as the official way to install the software.
This is my tinfoil opinion, but to me Docker seems to enable the “phone-ification” (for lack of a better term) of software. The upside is that it is more accessible to spin up services on a home server. The downside is that we are losing the knowledge of how the different parts of the software work together.
I really like the Turnkey Linux projects. It’s like the best of both worlds. You deploy a container and a script sets it up for you, but after that you have full control over the software, just like when you install the binaries yourself.
I hate how docker made it so that a lot of projects only have docker as the official way to install the software.
Just so we are clear on this: this is not Docker’s fault. The projects chose Docker as a distribution method, most likely because it’s as widespread and well-known as it is. It’s simply a way to reach more users without spreading themselves too thin.
You are right and I should have been more precise.
I understand why Docker was created and became popular: it abstracts away a lot of the setup and makes deployment a lot easier.
I agree: Docker can be simple, but it can be a real pain too. Good old scripts are the way to go in my opinion, but I kinda like LXC containers; this principle of containerization is surely great, just maybe not the way Docker does it… (maybe distrobox could be good too 🤷)
Docker is absolutely good when you have to scale your environment, but I think you should build your own images and not use prebuilt ones.
Docker is a convoluted mess of overlays and truly weird network settings. I found that I have no interest in application containers and would much prefer to set up multiple services in a system container (or VM) as if it was a bare-metal server. I deploy a small Proxmox cluster with Proxmox Backup Server in a CT on each node and often use scripts from https://community-scripts.github.io/ProxmoxVE/. Everything is automatically backed up (and remote sync’d twice) with a deduplication factor of 10. A Dockerless Homelab FTW!
Yeah, I share your point of view, and I think I’m going this way. These scripts are awesome, but I prefer writing mine as I get more control over them.
Are you using docker compose scripts? Backup should be easy: you have your compose scripts to configure the containers, and the scripts can easily be committed somewhere or backed up.
Data should be volume mounted into the container, and then the host disk can be backed up.
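As a sketch of that setup (service name, image, and host paths here are made up for illustration):

```yaml
# docker-compose.yml (hypothetical service; adjust names and paths)
services:
  myapp:
    image: example/myapp:latest      # made-up image
    restart: unless-stopped
    volumes:
      # bind-mount the app's data onto the host so a normal host
      # backup (rsync, snapshots, etc.) picks it up automatically
      - /srv/myapp/data:/data
```

The compose file itself can then live in git, and backing up /srv on the host covers the data.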
The only app that I’ve had to fight docker on is Seafile, and even that works quite well now.
Using docker compose, yeah. I find it hard to tweak the network and the apps’ settings; it’s like putting obstacles in my road.
Its networking is a bit hard to tweak, but I also don’t find I need to most of the time. And when I do, it’s usually just setting the network to host and calling it done.
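For reference, that is a one-line change in a compose file (service and image names are made up):

```yaml
# hypothetical service sharing the host's network stack
services:
  myapp:
    image: example/myapp:latest  # made-up image
    network_mode: host           # no bridge, no NAT, no port mapping needed
```

Host networking sidesteps Docker’s bridge and port publishing entirely, at the cost of network isolation.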
I can recommend NixOS. It’s quite simple if the application you want is already part of NixOS. Otherwise it requires quite some knowledge to get it to work anyway.
Yeah, it’s either 4 lines and you’ve got some service running… or you need to learn a functional language, fight the software project to make it behave on an immutable filesystem, and google 2 pages of boilerplate code to package it… I rarely had anything in between. 😆
One day I will try it; this project seems interesting!
NixOS is a piece of shit if you want to do anything that isn’t in NixOS already. Even trying to do normal things like running scripts is horrible. I like the idea, but the execution needs work.
I don’t like docker. It’s hard to update containers, hard to modify specific settings, hard to configure network settings, just overall for me I’ve had a bad experience. It’s fantastic for quickly spinning things up but for long term usecase and customizing it to work well with all my services, I find it lacking.
I just create Debian containers or VMs for my different services using Proxmox. I have full control over all settings that I didn’t have in docker.
the good old way is not that bad
Use portainer + watchtower
And I’ve done the exact opposite: moved everything off of LXC to Docker containers. So much easier and nicer, fewer machines to maintain.
Fewer “machines”, but you still need to maintain the Docker containers in the end.
Docker Compose plus external volume mounts, or the docker volume + tar backup method, is superior.
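A sketch of that volume + tar idiom, with a made-up volume name (this needs a running Docker daemon, so treat it as illustrative):

```shell
# Back up the named volume "myapp_data" by mounting it read-only in a
# throwaway container and writing a tarball to the current host directory.
docker run --rm \
  -v myapp_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/myapp_data.tar.gz -C /data .

# Restore works the same way in reverse, into a fresh volume.
docker run --rm \
  -v myapp_data:/data \
  -v "$(pwd)":/backup \
  alpine tar xzf /backup/myapp_data.tar.gz -C /data
```

The tarball is then an ordinary file that any host backup tool can pick up.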
It can be, but I’m not free enough that way, and this way I run LXC containers directly on Proxmox.
You’re basically adding a ton of overhead to your services for no reason though
Realistically you should be doing docker inside LXC for a best of both worlds approach
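For anyone trying that route: on Proxmox, Docker inside an LXC usually needs nesting enabled in the container config. A sketch (exact options vary by Proxmox and kernel version):

```
# /etc/pve/lxc/<vmid>.conf (fragment)
features: nesting=1,keyctl=1   # keyctl is typically needed for unprivileged CTs
unprivileged: 1
```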
I accept either way of doing it, Docker or LXC, but Docker inside an LXC is not suitable for me; I already tried it and got terrible performance.
Yeah, when I got started I initially put everything in Docker because that’s what I was recommended to do. But after a couple of years I moved everything out again because of the increased complexity, especially in terms of networking, and because you have to deal with the way Docker does things; I wasn’t getting anything out of it that would make up for that.
When I moved it out back then I was running Gentoo on my servers, by now it’s NixOS because of the declarative service configuration, which shines especially in a server environment. If you want easy service setup, like people usually say they like about Docker, I think it’s definitely worth a try. It can be as simple as “services.foo.enable = true”.
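To illustrate, a minimal sketch of that declarative style (nginx and OpenSSH as examples; check the NixOS options search for exact option names):

```nix
# configuration.nix fragment (illustrative)
{
  services.nginx.enable = true;   # one line: NixOS generates the systemd unit
  services.openssh.enable = true; # same pattern for most packaged services
}
```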
(To be fair NixOS has complexity too, but most of it is in learning how the configuration language which builds your operating system works, and not in the actual system itself, which is mostly standard except for the store. A NixOS service module generates a normal systemd service + potentially other files in the file system.)
I ditched Nix and install software only through Portage. If needed, I make my own ebuilds.
This has three advantages:
- it removes all the messy software: I am not going to install something if I can’t make the ebuild because the development was a mess, like everything TS/Node
- I can install, roll back, reinstall, upgrade and provision (configuration) everything using Portage
- I am getting to know Gentoo and Portage in great detail, making the use of my desktop and laptop much, much easier
NixOS is definitely worth a try, though.
I love Docker, and backups are a breeze if you’re using ZFS or BTRFS with volume sending. That is the one bummer about Docker: it relies on you to back things up instead of having its own native backup system.
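As a sketch of that snapshot-and-send workflow (pool, dataset, and host names are made up; it needs a real ZFS pool):

```shell
# Snapshot the dataset that holds the Docker volumes.
zfs snapshot tank/docker-volumes@nightly

# Incremental send: only the delta since the previous snapshot travels.
zfs send -i tank/docker-volumes@nightly-prev tank/docker-volumes@nightly \
  | ssh backup-host zfs receive backup/docker-volumes
```

Snapshots are atomic, so the container data is captured consistently even while services are running.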
What are you hosting on Docker? Are you configuring your apps afterwards? Did you use prebuilt images or build your own?
I use the *arr suite, a Project Zomboid server, a Foundry VTT server, Invoice Ninja, Immich, Nextcloud, qBittorrent, and Caddy.
I pretty much only use prebuilt images, I run them like appliances. Anything custom I’d run in a vm with snapshots as my docker skills do not run that deep.
This is why I don’t get anything from using Docker: I want to tweak my configuration, and Docker adds an extra level of complexity.
Tweak what, exactly? Compiling with the right build flags? Been there, done that, not worth the time.
If I really want to dive into the config files and how the thing works, I can do that really easily with a normal install; with Docker it’s something else.
What application are you trying to tweak?
I should also say I use Portainer for some graphical hand-holding. And I run Watchtower for updates (although Portainer can watch GitHub repos and run updates based on monitored merges).
For simplicity I create all my volumes in the portainer gui, then specify the mount points in the docker compose (portainer calls this a stack for some reason).
The volumes are looped into the base OS’s (TrueNAS SCALE) ZFS snapshots. Any restoration is dead simple. It keeps 1 yearly, 3 monthly, 4 weekly, and 1 daily snapshot.
All media etc… is mounted via NFS shares (for applications like immich or plex).
Restoration to a new machine should be as simple as pasting the compose and restoring the Portainer volumes.
I don’t really like Portainer: first, their business model is not that good, and second, they do strange things with the compose files.
I’m learning to hate it right now too. For some reason it’s refusing to upload a local image from my laptop, and the alert that comes up tells me exactly nothing useful.
Docker is good when combined with gVisor runtime for better isolation.
What is gVisor?
gVisor is an application kernel, written in memory-safe Go, that emulates most system calls and massively reduces the attack surface of the host kernel. This matters because the host and guest share the same kernel, and Docker runs rootful by default: root inside a Docker container is effectively root on the host if a sandbox escape is found, which can happen when a container image requires unsafe permissions like access to the Docker socket. gVisor protects against privilege escalation by only using root at startup and never handing root over to the guest.
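For context, hooking gVisor into Docker follows its documented setup: register the runsc runtime in the daemon config, then select it per container. A sketch (the runsc path may differ on your system):

```shell
# /etc/docker/daemon.json should contain:
#   { "runtimes": { "runsc": { "path": "/usr/local/bin/runsc" } } }
# followed by: systemctl restart docker

# Run a container under gVisor instead of the default runc:
docker run --rm --runtime=runsc alpine uname -a
# uname now reports gVisor's emulated kernel rather than the host's.
```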
Sydbox OCI runtime is also cool and faster than gVisor (both are quick)
I like reminding people that with every new technology, the old one is still around. The new gets most of the attention, but the old is still kicking. (We still have wire wrapped programs kicking around.)
You are all good. Spend your limited attention on other things.