(This is a repost of this reddit post https://www.reddit.com/r/selfhosted/comments/1fbv41n/what_are_the_things_that_makes_a_selfhostable/. I wanted to ask it here as well, in case folks in this community have some thoughts about it.)
What are the things that make a self-hostable app/project good? Maybe another way to phrase the question: what makes a project easier to self-host?
I have been developing an application that focuses on being easy to self-host. I have been looking at existing, well-regarded projects such as paperless-ngx, Immich, etc.
From what I gather, the most important things are:
- Good docs; this is probably the most important. The developer must document how to self-host.
- Fewer runtime dependencies. I’m not sure about this one, but the less it depends on other services, the better.
- Optional OIDC. I’m even less sure about this one, and I’m also not sure about implementing it in my own app, as it’s difficult to develop. After reading this subreddit/community, I concluded that lots of people here prefer to separate the identity/user pool from the app service. This means running a separate service for authentication and authorization.
What do you think? Another question: are there any more good projects that can be used as good examples of self-hostable apps?
Thank you
Some redditors responded on the post:
- easy to install, try, and configure with sane defaults
- availability of an image on Docker Hub
- screenshots
- good GUI
I also came across this comment from Hacker News lately, and I think about it a lot
https://news.ycombinator.com/item?id=40523806
This is what self-hosted software should be. An app, self-contained, (essentially) a single file with minimal dependencies.
Not something so complex that it requires docker. Not something that requires you to install a separate database. Not something that depends on redis and other external services.
I’ve turned down many self-hosted options due to the complexity of the setup and maintenance.
Do you agree with this?
Has docker compose file for deployment
Can be hosted on sub-path and not only subdomain
Can easily be integrated into SSO like Authelia
A docker compose example with as few images as possible.
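One hypothetical shape for that kind of compose file, with a single image and sane defaults. The image name, port, and environment variable below are placeholders, not from any real project:

```yaml
# Hypothetical single-service compose file: one image, one named volume,
# the default network, and configuration baked into sane defaults.
services:
  app:
    image: example/app:latest        # placeholder image name
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - app-data:/data
    environment:
      APP_BASE_URL: "https://example.com/app"   # sub-path friendly base URL

volumes:
  app-data:
```

One service, one volume, no extra networks: `docker compose up -d` and you can try the app immediately.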
Not something so complex that it requires docker.
I disagree. Docker makes things a lot easier and I’m going to use it regardless.
My rule is pretty simple: not PHP. PHP requires configuring a web server, so either that’s embedded in the docker image, (violates the “do one thing” rule of docker) or it’s pushed onto the user. This falls under the dependencies part, but I uniquely hate dealing with standalone web servers and I don’t mind configuring databases, so I called it out.
I actually tried switching to OCIS from Nextcloud specifically to avoid PHP, but OCIS is even more complex so I bailed.
Give me an example configuration that works out of the box and detailed documentation about options and I’ll be happy. Don’t make me configure a web server any particular way, and do let me handle TLS myself. If you do that, I’ll probably check it out.
@hono4kami To me, good documentation is the number one thing that makes a selfhostable application good.
Second would be “is it dockerized?”

Yep, documentation and a good base-level default installation configuration/guide with minimal friction.
I’m perfectly willing to play around once I know at the basic level that the core flow is going to work for me. If it takes me digging through a stack of documentation (especially if it’s bad) to even get something to experiment with on my own system? I won’t bother.
To me, good documentation is the number one thing that makes a selfhostable application good.
I agree. If you don’t mind: what are your qualifications for good documentation? Do you have some good examples of good docs?
What helps a lot for apps with multiple config files:
- If you tell the user to “add code xy to the config file”: tell me which file. Is it the main config file? The one of the reverse proxy, etc.?
- Provide a sensible example library of the config structure. For example: during the implementation of an importer for beancount I was struggling with what goes where. The example structure was really, really helpful.
- Also, if you have configuration keys which allow different options: TELL ME THE OPTIONS! If I get an error during startup that for config.foo the value “bar” is not allowed, I need a list of options somewhere. So many hours lost finding out what I can write to config.foo.
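The “tell me the options” complaint is also cheap to address in code. A Python sketch, with a made-up config key and option list, where the startup error itself names every valid value:

```python
# Sketch: validate a config value and list the allowed options in the error
# message itself. "log_level" and its option list are made-up examples.
ALLOWED_LOG_LEVELS = ("debug", "info", "warning", "error")

def validate_log_level(value: str) -> str:
    if value not in ALLOWED_LOG_LEVELS:
        raise ValueError(
            f"config.log_level: {value!r} is not allowed; "
            f"expected one of: {', '.join(ALLOWED_LOG_LEVELS)}"
        )
    return value

# validate_log_level("bar") raises:
# ValueError: config.log_level: 'bar' is not allowed; expected one of: debug, info, warning, error
```

The user never has to go hunting through docs or source to learn what config.foo accepts.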
including examples for everything in the docs is the best way to explain imo
@hono4kami
One of the best documentation I’ve encountered so far:
https://borgbackup.readthedocs.io/en/stable/
Not something so complex that it requires docker
Docker is the thing that sandboxes your services from the host OS. I’d rather use Podman because of the true non-root mode, but Docker is still based. Plus, you can use Docker Swarm if you don’t want to switch to Kubernetes (though you don’t have easy storage integration for persistence).
The problem is when Docker is used to gift wrap a mess. Then there are rotting dependencies in the containers. The nice thing about Debian packaged things is the maintainer is forced to do things properly. Even more so if they get it into the repos.
My preference is Debian Stable in LXC or even KVM for services. I only go for Docker if that is the recommended option. There is stuff out there where the recommended way is their VM image, which is full of their soup of Docker containers.
Docker is in my pile of technologies I don’t really like or approve of, but don’t have the energy to really fight.
IMO a lot of what makes nice self-hostable software is clean and sane software in general. A lot of stuff ends up trying to be too easy, so you can’t scale up, or so unbelievably complicated that you can’t scale it down. Don’t make me install an email server and API keys for services needed by features I won’t even use.
I don’t particularly mind needing a database and Redis and the likes, but if you need MySQL and PostgreSQL and Redis and memcached and an ElasticSearch cluster and some of it is Go, some of it is Ruby and some of it is Java with a sprinkle of someone’s erlang phase, … no, just no, screw that.
What really sucks is when Docker is used as a bandaid to hide all that insanity under the guise of easy self-hosting. It works, but it’s still a pain to maintain and debug, and it often uses way more resources than it really needs. Well-written software is flexible and sane.
My stuff at work runs equally fine locally in under a gig of RAM and barely any CPU at idle, and yet spans dozens of servers and microservices in production. That’s sane software.
A lot of stuff ends up trying to be too easy, so you can’t scale up, or so unbelievably complicated that you can’t scale it down.
I see, it’s probably good to have some balance between those. Noted
Good documentation.
I’ve turned down many self-hosted options due to the complexity of the setup and maintenance.
Do you agree with this?
No.
I like my services and stack.
From the *arrs and Jellyfin, HortusFox, and the Unifi Network Application (though it should run with just an SQLite DB) to many other things.
Yes, databases are annoying, but if the service that wants one is sane I have no problem setting it up. What does grind my gears is when services have many breaking changes (e.g. Immich). If it wasn’t for that I would be more open to finally starting with it and setting up a good, working Immich instance.
The things redditors mentioned are very good already. Primarily screenshots.
Please, please always add screenshots to let me have a general idea of the UI.
At the very least a demo instance if you can’t be bothered to add screenshots (yes, I have seen many services that would rather share a demo instance than screenshots…)
I’ve read this mentioned many times. Is it really that bad XD
It’s annoying to figure out if the program is any good if it’s reliant on a UI.
To me the number one thing is that it is easy to set up via Docker. One container, one network (ideally no extra network, just the default one), one storage volume, and no additional manual configuration when composing the container.
No, I don’t want a second container for a database. No I don’t want to set up multiple networks. Yes, I already have a reverse proxy doing the routing and certificates. No, I don’t need 3 volumes for just one application.
Please just don’t clutter my environment.
No, I don’t want a second container for a database.
Unless you’re talking about using SQLite:
Isn’t the point of a Docker container to have only one piece of software/one process running? I’m sure you can use something like s6 or another lightweight supervisor, but that seems counterintuitive.
To me, the point of Docker is having one container for one specific application. And I see the database as part of the application. As well as all other things needed to run that application.
Since we’re here, let’s take Lemmy as an example. It wants 6 different containers with a total of 7 different volumes (and I need to manually download and edit multiple files before even touching anything Docker-related).
In the end I have lemmy, lemmy-ui, pictrs, postgres, postfix-relay, and an additional reverse proxy for one single application (Lemmy). I do not want or need or use any of the containers for anything else except Lemmy.
There are a lot of other applications that want me to install a database container, a reverse proxy, and the actual application container, where I will never ever need, or want, or use any of the additional containers for anything else except this one application.
So in the end I have a dozen containers and the same number of volumes just to run 2–3 applications, causing a metric shit-ton of maintenance effort and update time.
I agree with this. If you are going to be using multiple containers for a single app anyways, what is the point of it being in multiple containers? Stick all of it in one container and save everyone the hassle.
It’s because of updates and who owns the support.
The postgres project makes the postgres container, the pict-rs project makes the pict-rs container, and so on.
When you make a monolithic container you’re now responsible for keeping your shit and everyone else’s updated, patched, and secured.
I don’t blame any dev for not wanting to own all that mess, and thus you end up with separate containers for each service.
I can see why editing config files is annoying, but why exactly are two services and volumes in a docker-compose file any more difficult to manage than one?
See it in a broader scope. If I only hosted Lemmy with its multiple mandatory things, I couldn’t care less, but I already run some other applications via Docker. Fortunately I was able to keep the footprint small, no multiple containers or volumes per application, but as said: those exist. And they would clutter the setup and make it harder to maintain and manage.
I also stand by my point that it is counter-intuitive to have multiple containers and volumes for just one single application.
Ok but is there room for the idea that your intuitions are incorrect? Plenty of things in the world are counter-intuitive. ‘docker-compose up -d’ works the same whether it’s one container or fifty.
Computer resources are measured in bits and clock cycles, not the number of containers and volumes. It’s entirely possible (even likely) that an all-in-one container will be more resource-heavy than the same services split across multiple containers. Logging from an all-in-one will be a jumbled mess, troubleshooting issues or making changes will be annoying, it’s worse in every way except the length of output from ‘docker ps’
docker ps

or Portainer as a nice web-UI wrapper around the Docker commands are the two main ways I use Docker on a regular basis.

No, thank you. I am not going to maintain fifty containers and fifty + X volumes for just a handful of applications, and will always prefer self-contained applications over ones that spread across multiple containers for no real reason.
But there is a reason, I just explained it to you.
I disagree with pretty much all of this; you are trading maintainability and security for an easy setup. Providing a docker-compose file accomplishes the same thing without the sacrifice:
- separate volumes for configuration, data, and cache because I might want to put them in different places and use different backup strategies. Config and db on SSD, large data on spinning rust, for example.
- separate container for the database because the official database images are guaranteed to be better maintained than whatever every random project includes in their image
- separate networks because putting your reverse proxy on a different network from your database is just prudent
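Sketched as a hypothetical compose file (every name below is invented), that separation might look like:

```yaml
# Hypothetical layout: one volume per data class, and the database kept on
# an internal network that the reverse proxy cannot reach.
services:
  app:
    image: example/app:latest        # placeholder
    volumes:
      - ./config:/config             # small, lives on SSD, backed up often
      - /mnt/hdd/media:/data         # large, on spinning rust
      - app-cache:/cache             # disposable, no backup needed
    networks:
      - proxy
      - backend
  db:
    image: postgres:16               # the well-maintained official image
    volumes:
      - ./db:/var/lib/postgresql/data
    networks:
      - backend                      # not exposed to the proxy network

networks:
  proxy:
    external: true                   # shared with the reverse proxy
  backend:
    internal: true

volumes:
  app-cache:
```

Each volume can live on whatever disk and backup schedule suits its contents, and the database never shares a network with the proxy.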
I prefer this, but even just having the options available shows me that someone actually thought about it while creating the software/container.
I came here to basically say this. It’s especially bad when you aren’t even sure if you want to keep the service and are just testing it out. If I already have to go through a huge setup/troubleshooting process just to test the app, then I’m not feeling very good about it.
Not something so complex that it requires docker. Not something that requires you to install a separate database. Not something that depends on redis and other external services.
This comment is a bit silly. Databases just make sense for many services, although many could just use sqlite which would be fine (and many do). Redis etc is usually optional and might increase performance in some cases.
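For illustration, an SQLite default really can reduce “install a database” to nothing. A Python sketch (the schema and names are invented):

```python
# Sketch: an app that defaults to SQLite needs no database container at
# all; the "database" is a single file next to the app.
import sqlite3

def open_db(path: str = "app.db") -> sqlite3.Connection:
    """Open (and if needed initialize) the app's single-file database."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
    )
    return conn
```

Pass `":memory:"` to try it without touching disk; supporting a full RDBMS can remain an optional, documented upgrade path for larger deployments.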
I wouldn’t be a fan of something requiring docker, but it’s often just the easiest way to deploy these types of services (and the easiest way to install it as a user).
Anyway, I’ll echo that clear, up-to-date documentation is nice. I shouldn’t have to search through actual code or the bug/issues section to find current information (but I get this is very challenging). And I’d rather projects didn’t make Discord a source of documentation (especially not the primary one).
I’ll add that having a NixOS module is a big plus for me, but I don’t expect the developers themselves to maintain this.
Unless you are a business with millions of users, I would be really skeptical of Redis improving performance. At the end of the day, any simple KV store can replace it. It’s not like postgres, where having sqlite in its place means not only a different performance profile but also different semantics, SQL dialect, RPC protocol, etc.
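As a sketch of that claim: when Redis is only used as a cache, a tiny in-process KV store with TTL support can stand in for it (illustrative only, not a full Redis replacement):

```python
# Sketch: a minimal in-process key-value cache with optional per-key TTL,
# the kind of thing that can replace Redis for small self-hosted apps.
import time

class TTLCache:
    def __init__(self):
        self._store = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None):
        # ttl is in seconds; None means the entry never expires
        expires = time.monotonic() + ttl if ttl is not None else None
        self._store[key] = (value, expires)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expires = item
        if expires is not None and time.monotonic() > expires:
            del self._store[key]  # lazily evict expired entries
            return default
        return value
```

That’s one less container, one less network, and one less service to patch.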
Documentation, screenshots, a forum, one click installer or simple line to paste into the terminal.
I totally disagree with the quote from Hacker News. Having the option to use SQLite is nice for testing, but going with PostgreSQL or MariaDB gives you better performance if you use an RDBMS. Also, packaging with containers gives you one standardized image for support: if some third-party packaging (from a distro repo) is buggy, you can test further against the standard image.
To me, whether a good GUI matters really depends on what service is provided. For kanidm (IAM), I don’t care much about a web admin panel; the CLI is really intuitive, and if you need some graph views of your users, you can generate diagram files.
Considering OIDC/LDAP, I’d rather have OIDC implemented, for two reasons: I can point my users to the (really minimalist) kanidm UI where they have a button for each app they’re allowed to use, and the login information is only stored in kanidm, with no spreading of login passwords.
I saw a comment about not needing to rely on many third-party services, but I partly disagree with it. Using Nextcloud as a mixed example: using Elasticsearch for full-text search is better than reimplementing it, but notify_push should not be as separate as it is (as I understood it, that’s because apache-php and websockets do not mix well).
All in all, the main criteria for me are:
- SSO with OIDC, but LDAP is good enough
- Good documentation
- Easy deployment for testing; prod deployment can be more advanced
- Don’t reinvent the wheel: e.g. if you need full-text search, Meilisearch or Elastic can do it better than you will, so don’t try too much (a simple grep for a test instance is enough)
- If you need to store files, having remote stores is nice to have (WebDAV or S3)
- Has a simple backup and migration workflow. I recently had to back up and migrate a MediaWiki database. It was pretty smooth but not as simple as it could be. If your data model is spread across an RDBMS and files, you need to provide a CLI tool that does the export/import.
- Easy to run as a systemd service. This is the main criterion for whether it will be easy to create a NixOS module.
- Has health endpoints for monitoring.
- Has an admin web UI that surfaces important configuration info.
- If there are external service dependencies like postgres or redis, then there needs to be a wealth of documentation on how those integrations work. Provide infrastructure-as-code examples! IME systemd and NixOS modules are very capable of deploying these kinds of distributed systems.
Please be mindful of HDD spindown.
If your app frequently looks up stuff in a database and also has a bunch of files that are accessed on-demand, then please have an option to separate the data-directory from the appdata-directory.
A lot of stuff is self-hosted in homes and not everyone has the luxury of a dedicated server room.
separate the data-directory from the appdata-directory
Would you mind explaining more about this?
Take my setup for jellyfin as an example: There’s a database located on the SSD and there’s my media library located on an HDD array. The HDD is only spun up when jellyfin wants to access a media file.
In my previous setup, the nextcloud database was located on a HDD, which resulted in the HDD never spinning down, even if the actual files are never really accessed.
In Immich, I wasn’t able to find out whether they have this separation, which is very annoying.
All this is moot, if you simply offer a tiny service which doesn’t access big files that aren’t stored on SSDs.
Exactly. Separate configuration and metadata from data. If the metadata DB is relatively small, I’ll stick it on my SSD and backup to my HDD on a schedule.
I’ve turned down many self-hosted options due to the complexity of the setup and maintenance.
Do you agree with this?
Yes. If I have to spend an hour reading your documentation just to find out how to run the damn thing, I’m not going to run it.
I hate docker with a passion because it seems like the people who develop on it forego writing documentation because docker “just works” except when it doesn’t.
I archived one of my github repos the other day because someone requested I add docker support. It’s a project I made specifically to not use docker…