• 1 Post
  • 29 Comments
Joined 11 months ago
Cake day: July 29th, 2023


  • Your workload (a NAS and a handful of services) is going to be a very familiar one to members of the community, so you should get some great answers.

    My (I guess slightly wacky) solution for this sort of workload has ended up being a single Docker container inside an LXC container for each service on Proxmox. Docker for ease of management with compose and separate LXCs for each service for ease of snapshots/backups.

    Obviously there’s some overhead, but it doesn’t seem to be significant.

    On the subject of clustering, I actually purchased three machines to do this, but have ended up abandoning that idea - I can move a service (or restore it from a snapshot to a different machine) in a couple of minutes which provides all the redundancy I need for a home service. Now I keep the three machines as a production server, a backup (that I swap over to for a week or so every month or two) and a development machine. The NAS is separate to these.

    I love Proxmox, but most times it gets mentioned here, people pop up to boost Incus/LXD, so that’s something I’d like to investigate. My skills (and Ansible playbooks) are currently built around Proxmox though, so I’ve got a bit of inertia.



  • For light-touch monitoring this is my approach too. I have one instance on my network, and another on fly.io for the VPSs (my most common outage is my home internet). To make it a tiny bit stronger, I wrote a Go endpoint that exposes a server’s disk and memory usage, along with mem_okay and disk_okay keywords, and I have Kuma checking those (there’s a sketch of that kind of endpoint after this comment).

    I even have the two Kuma instances checking each other by making a status page and adding checks for each other’s ‘degraded’ state. I have ntfy set up on both so I get the Kuma change notifications on my iPhone. I love ntfy so much I donate to it.

    For my VPSs, this is probably not enough, so I’m considering the more complicated solutions (I’ve started wanting to know about things like an influx of fail2ban bans, etc.).
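
    A minimal sketch of what an endpoint like that could look like - the mem_okay/disk_okay names come from the comment above, but the route, port, thresholds, and the other JSON field names are just assumptions for illustration:

        package main

        import (
            "encoding/json"
            "net/http"
            "os"
            "strconv"
            "strings"
            "syscall"
        )

        // health is the JSON shape a keyword check can match against.
        type health struct {
            MemUsedPct  float64 `json:"mem_used_pct"`
            DiskUsedPct float64 `json:"disk_used_pct"`
            MemOkay     bool    `json:"mem_okay"`
            DiskOkay    bool    `json:"disk_okay"`
        }

        // memUsedPct reads /proc/meminfo (Linux) and returns used memory as a percentage.
        func memUsedPct() float64 {
            raw, err := os.ReadFile("/proc/meminfo")
            if err != nil {
                return -1
            }
            vals := map[string]float64{}
            for _, line := range strings.Split(string(raw), "\n") {
                f := strings.Fields(line)
                if len(f) >= 2 {
                    n, _ := strconv.ParseFloat(f[1], 64)
                    vals[strings.TrimSuffix(f[0], ":")] = n
                }
            }
            if vals["MemTotal"] == 0 {
                return -1
            }
            return 100 * (vals["MemTotal"] - vals["MemAvailable"]) / vals["MemTotal"]
        }

        // diskUsedPct returns the used percentage of the filesystem containing path.
        func diskUsedPct(path string) float64 {
            var st syscall.Statfs_t
            if err := syscall.Statfs(path, &st); err != nil {
                return -1
            }
            total := float64(st.Blocks) * float64(st.Bsize)
            if total == 0 {
                return -1
            }
            return 100 * (total - float64(st.Bavail)*float64(st.Bsize)) / total
        }

        func main() {
            http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
                h := health{MemUsedPct: memUsedPct(), DiskUsedPct: diskUsedPct("/")}
                h.MemOkay = h.MemUsedPct >= 0 && h.MemUsedPct < 90    // assumed threshold
                h.DiskOkay = h.DiskUsedPct >= 0 && h.DiskUsedPct < 85 // assumed threshold
                w.Header().Set("Content-Type", "application/json")
                json.NewEncoder(w).Encode(h)
            })
            http.ListenAndServe(":8080", nil)
        }

    In Uptime Kuma this would pair with an HTTP(s) - Keyword monitor pointed at the /health URL, with "mem_okay":true (or the disk equivalent) as the keyword to match.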


  • thirdBreakfast@lemmy.world to Selfhosted@lemmy.world · Kavita runners · edited, 5 months ago
    - fiction
        - Abbott, Edwin A_
            - Flatland
                - Flatland - Edwin A. Abbott.epub
                - Flatland - Edwin A. Abbott.jpg
                - Flatland - Edwin A. Abbott.opf
        - Achebe, Chinua
            - Things Fall Apart
                - Things Fall Apart - Chinua Achebe.epub
                - Things Fall Apart - Chinua Achebe.jpg
                - Things Fall Apart - Chinua Achebe.opf
    

    So in each directory that I use to delineate a library, I have a subdirectory for each author (in sort-order form). Within each author subdirectory is a subdirectory for each book, named with just the title, and inside that sits the book file itself, named [book name] - [author].[extension] (spelled out in the small sketch after this comment).

    I didn’t invent this, it’s just what Calibre spits out. When I buy a new book, I ingest it into Calibre, fix any metadata and export it to the NAS. Then I delete the Calibre library - I’m just using it to do the neatening up work.
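
    If it helps to see that naming convention spelled out, here’s a tiny illustrative sketch - the function and example values are hypothetical, but the layout matches the tree above:

        package main

        import (
            "fmt"
            "path/filepath"
        )

        // bookPath builds the Calibre-style export path described above:
        // [library]/[author sort]/[title]/[title] - [author].[extension]
        func bookPath(library, authorSort, author, title, ext string) string {
            file := fmt.Sprintf("%s - %s.%s", title, author, ext)
            return filepath.Join(library, authorSort, title, file)
        }

        func main() {
            fmt.Println(bookPath("fiction", "Abbott, Edwin A_", "Edwin A. Abbott", "Flatland", "epub"))
            // prints: fiction/Abbott, Edwin A_/Flatland/Flatland - Edwin A. Abbott.epub
        }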





  • Yo dawg, I put most of my services in a Docker container inside its own LXC container. It used to bug me that this seems like a less-than-optimal use of resources, but I love the management - all the VMs and containers in one pane of glass, super simple snapshots, dead easy to move a service between machines, and simple to instrument the LXC for monitoring.

    I see other people running, and I’m interested in, an even more generic system (maybe Cockpit or something), but I’ve been really happy with this. If OP’s dream is managing all the containers and VMs together, I’d back having a look at Proxmox.


  • This is where I landed on this decision. I run a Synology which just does NAS on spinning rust, and I don’t mess with it. Since you know rsync, this will all be a painless setup apart from the upfront cost. I’d trust any 2-bay Synology less than 10 years old (I think the last two digits of the model number are the year). Then, if your budget is tight, grab a couple of second-hand disks from different batches (or three if your budget stretches to it).

    I also endorse u/originalucifer’s comment about a real machine. Thin clients like the HP minis or Lenovos are a great step up.




  • I’ve just been down this exact journey, and ended up settling on Kavita. It has all the browse, search and library stuff you’d expect. You can download or read things in the web interface. I’m only using it for epub and PDF books, but its focus is comics and manga so I expect it to shine there.

    I don’t think it does mobi, but since I use Calibre on my laptop to neaten up covers and metadata before I drop books onto the server, it’s a simple matter to convert the odd mobi I end up with. Installation (using Docker inside an LXC) was easy.

    It’s been a really straightforward, good experience. Highly recommend. I like it better than AudioBookshelf (which I’m already hosting for audiobooks), which I also tried but didn’t like as much, for inexplicable reasons. I also considered Calibre-Web, but that seemed a bit messy, since I guess I’d use Calibre on my laptop to manage my books on a NAS share and then serve it headless from the server with Calibre-Web? I might have that completely wrong; I didn’t spend any time looking into it because Kavita was the second thing I tried and it did exactly what I wanted.



  • I don’t sew, but I follow several people who do (for vintage and modern clothing) on Instagram - just to emotionally vampire off their irrepressible happiness when it all comes together and they make something that comes out as great as they imagined (lots of “and it has pockets!!!” moments), or when they master a new skill they’d been struggling with - like sewing buttonholes in denim or whatever.

    It’s not for me, but I love the obvious satisfaction and joy other people are getting out of it.



  • Your head might be spinning from all the different advice you’re getting - don’t worry, there are a lot of options and lots of folk are jumping in with genuinely good (and well meaning) advice. I guess I’ll add my two cents, but try and explain the ‘why’ of my thinking.

    I’m assuming from your questions you know your way around a computer, can figure things out, but haven’t done much self-hosting. If I’m wrong about that, go ahead and skip this suggestion.

    • Jellyfin good - a common gateway drug to homelabbing, and the only thing you’ll do that non-tech friends will appreciate
    • Proxmox good - it makes the backups simple and provides a path forward for all sorts of things
    • Docker good - you’ve said it increases complexity; this is correct in that you’re adding more layers of stuff, but it reduces your complexity of management by removing a heap of dependency issues. There is a compute and memory overhead involved, but it’s small and the tradeoff is worth it.
    • VM good - yes an LXC is more efficient, but it’s harder to run docker in. Save that for a future project
    • Media data somewhere else good - I run a separate NAS with an SMB share. A NAS in a VM is a compromise, but like all things self-hosting, you start out with what you’ve got. I let Jellyfin keep its metadata in the VM that’s hosting Jellyfin though, since the NAS is over the network. That’s less of a consideration if you are virtualizing your NAS on the same machine, but I’d still do it my way for future-proofing.
    • Passthrough magic not yet - if your metal has QuickSync, it can be used to reduce the CPU load during transcodes, but that can also be a future project.


  • I have a very similar setup. Jellyfin in Docker on a Debian VM (2 cores, 8GB RAM), and all the media on the NAS. The CIFS/SMB share from the NAS is mounted via fstab. I keep all the metadata locally for speed - i.e. not on the NAS. I don’t like the extra layer of running Docker, but it works like a charm, whereas I had a few hassles running Jellyfin natively in the VM. I do have a special ‘media’ user, with the name and password in the mount command, which only has permissions for the media.

    Can’t comment on the arrs suite since I get all my linux distros on those disks attached to the front of magazines.