You didn’t say how you currently keep your data…
The NAS can do almost everything you need except the offsite part (e.g. C2 or another cloud target).
For example, Synology (so far I’ve only had these) has a built-in DDNS service that gives you a subdomain you can reach the NAS through without extra steps. I’d bet the other NAS brands have this built in as well. Whichever you pick, definitely enable 2FA. Also, if you can set up your storage pool as btrfs, that’s great too.
As others pointed out, you need an offsite copy on some C2-style provider, a friend’s NAS, or similar. If you’ve really got no budget, you could grab a bunch of free subscriptions (Dropbox etc.) and split the backups between them.
The NAS will have an app that already supports a whole lot of providers, plus targets like an external USB drive, and you can set up automatic backups there.
I think you might have better luck with your question in r/artificial.
It’s funny how, as a self-hoster with no open ports, supply-chain attacks are almost my biggest worry… Here are the tidbits I’ve collected so far, but I’m just getting into this, so take it with a grain of salt…
(One example of de-rootifying a container: I got tempo running as non-root the other night. It’s based on an nginx Alpine image, and after a while I found an nginx.conf online where all the writable directories are redirected to /tmp, so nginx can still run when a non-root user launches it. I mapped that config file over the one in the container, set it to run as my user, and it works. I didn’t even have to rebuild the image.)
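A minimal docker-compose sketch of that trick, in case it helps anyone — the image name, uid, and file paths here are placeholders I made up, not the actual tempo setup. The mapped config is the part doing the work: it typically sets `pid /tmp/nginx.pid;` and points the `client_body_temp_path` / `proxy_temp_path` directives at /tmp, since those are the spots nginx needs to write to.

```yaml
services:
  app:
    image: some-nginx-alpine-image   # placeholder: whatever nginx-based image you run
    user: "1000:1000"                # run as an unprivileged uid:gid instead of root
    volumes:
      # overlay the stock config with one whose pid file and temp
      # directories all live under /tmp, writable by any uid
      - ./nginx-nonroot.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "8080:8080"                  # non-root can't bind ports below 1024 inside the container
```

No rebuild needed, exactly as described — the bind mount shadows the file baked into the image.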
Hey, this is where I’m stuck right now: I want to keep the Docker volumes, as bind mounts, on my NAS share too. If the containers run as a separate non-root user (say uid 1001), then I can mount that share as 1001… sounds good, right?
But somebody suggested running each container as its own user, and then I’d need lots of differently owned directories. I wonder if I could keep mounting subdirectories of the same NAS share as different users, so each container gets its own file access? Perhaps that’s overkill.
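For what it’s worth, this should work with CIFS/SMB, because `uid=`/`gid=` are per-mount options: two mounts of different subdirs of the same share can present different owners. A hedged /etc/fstab sketch — the hostname, share path, credentials file, and uids are all made up for illustration:

```
# /etc/fstab — same NAS share, two subdirs, each owned by a different app uid
//nas.local/docker/app1  /mnt/app1  cifs  credentials=/root/.smbcred,uid=1001,gid=1001  0  0
//nas.local/docker/app2  /mnt/app2  cifs  credentials=/root/.smbcred,uid=1002,gid=1002  0  0
```

Each container then bind-mounts only its own /mnt/appN, and the ownership lines up with the `user:` it runs as.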
(For OP: I’ve been on a self-hosting binge this past week, trying to work my way toward at least the general direction of best practice… For the container databases I’ve started using tiredofit/docker-db-backup (it does database dumps), and I also discovered jdfranel’s docker backup, which looks great as well. I save the dumps on a volume mounted from the NAS, which is btrfs, and there’s a folder replication (snapshots) tool. So far, so good.)
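If you’d rather not pull in a dedicated backup image, here’s roughly what a hand-rolled version of that dump job looks like as a compose sidecar — a sketch only, with assumed service names, credentials, and paths, and not the actual config of either image mentioned above:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example     # placeholder credential

  db-backup:
    image: postgres:16               # reuse the postgres image just for its pg_dump client
    depends_on: [db]
    environment:
      PGPASSWORD: example            # must match the db service's password
    volumes:
      - /mnt/nas/db-dumps:/dumps     # bind mount pointing at the NAS share
    # dump once a day; the NAS-side btrfs snapshots handle versioning.
    # note the $$ — compose would otherwise try to interpolate $(date ...)
    entrypoint: >
      sh -c 'while true; do
               pg_dump -h db -U postgres -f /dumps/db-$$(date +%F).sql postgres;
               sleep 86400;
             done'
```

The dedicated images add niceties on top of this (multiple engines, compression, retention), but the shape is the same: dump to a NAS-backed path, let snapshots do the rest.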
ಠ_ಠ