• 0 Posts
  • 76 Comments
Joined 1 year ago
Cake day: July 1st, 2023




  • Yeah, to be fair, when I was in my late teens I also had access to a full shop. The neighbors owned it, so if it was my mom, they’d take care of her. If it was me, I was told to stop bothering them because I knew where the tools were and how to note it for inventory.

    So now that I’ve got a local shop I trust, I don’t have to deal with it, and they do the kind of clean work I would do myself. Works out for the best.


  • That’s not really the issue.

    It’s all about stuffing more stuff in these days. Adding oil, for example, is easy, sure - but getting to the filter without a lift can be a nightmare of annoying little tasks.

    And I’m mostly referring to actual work, not basic maintenance stuff; the headlight was just an example of a ridiculous design approach that is all too common. It’s annoying enough that I have no patience for it anymore on modern cars. The mechanics of the actual work haven’t changed, but the effort required for those tasks has.


  • The fact that the whole thing could be worked on in a driveway with basic tools is what I miss.

    My dad did most work on his cars in the driveway. I did most of the work on my cars in the same driveway.

    My wife’s (then gf’s) 2005 Nissan Altima changed that. The very moment I had to remove an air intake TO REPLACE A HEADLIGHT BULB, I gave up on ever doing work on my own car again.

    Unless I somehow end up with a ton of disposable income to convert an older car to all-electric. By “older” I mean something like a Chrysler Town & Country barrel back wagon, a ’50s Buick Skylark, a ’56 Continental, etc. And by those cars I mean a lightweight shell in that style.


  • Anker and TP-Link tend to be well supported on Linux; I’d have a look there first.

    For the record, one 7th gen is running two Jellyfin instances, a DNS server, a Pi-hole, my MQTT server, a Docker host with misc nginx containers, my proxies, Prometheus and Grafana, and Home Assistant.

    Currently under 5% CPU load, with 14GB of RAM in use.

    Is it enough? Probably. But I also don’t transcode 4K (a holdover from when that wasn’t reasonable; I maintain a separate library). I do transcode HDR to SDR constantly, though, and PGS subtitles often.


  • I actually moved everything over to those little boxes during the pandemic too. There were a few things running on Pis, and I wanted to repurpose my Pi-holes (3Bs), but it was hard to get a Pi for a reasonable price…

    And then I realized I could be doing this smarter and moved everything to those machines lol

    Just now spinning up an additional Jellyfin LXC for family - this way I can simplify access; each LXC has read-only access to my media, no SSH access, etc.

    Almost all of them are 6th, 7th, and 8th gen, with one 4th gen i5 running just a bunch of lightweight LXCs. All in a 14RU rack next to Aruba 2960s, a DS1520+, a 1515+, etc. Every bit of tech I run here fits in a small rack!




  • Definitely unfortunate…

    What I’d recommend looking at, then, is an off-lease Lenovo/HP/Dell tiny/mini/micro - on eBay I’m seeing a Dell 3050 with a 6th gen i3 for ~$60, a 7050 with a 6th gen i5 and no HD for the same price (better purchase right there), or a 6th gen i7 with a 256GB drive and 16GB of RAM for $120.

    Anything like that will work; swap in the SSD if it will take it, then you could put Proxmox on one disk and dedicate the second SSD to your media server VM.

    Good luck with whichever route you go!




  • iptables is a solid choice for the regular Linux side (or an LXC). If you use a Docker container, though, you can just use the Docker network to restrict access - you can see a solid example of that here:

    https://tcpip.wtf/en/force-docker-containers-vpn-gluetun.htm
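    If you do go the iptables route, a minimal “kill switch” might look like this. This is purely a sketch - the tunnel interface name (tun0), the endpoint IP, and the port are all assumptions you’d swap for your VPN’s actual details:

    ```shell
    # Sketch only - every value below is a hypothetical placeholder.
    VPN_SERVER=203.0.113.10   # made-up VPN endpoint IP

    iptables -A OUTPUT -o lo -j ACCEPT      # keep loopback traffic working
    iptables -A OUTPUT -o tun0 -j ACCEPT    # anything through the tunnel is fine
    iptables -A OUTPUT -d "$VPN_SERVER" -p udp --dport 1194 -j ACCEPT  # allow the VPN handshake itself
    iptables -A OUTPUT -j DROP              # everything else outbound is blocked
    ```

    If the tunnel ever drops, nothing can leak out the regular interface.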

    Regarding your questions:

    • Good enough for direct play and for SRT subtitles, but any image subtitles (PGS, VobSub) will transcode. If you can use the GPU for transcoding it won’t matter; it’ll come down to how much simultaneous use you’ve got more than anything. You can view how it’s running from the summary page of the VM/LXC and adjust accordingly whenever. Just give it another core, shut down, start back up, and you’ll have more cores applied - ready to test again. (One of the reasons I like Proxmox.)
    • You can update any time; you just need to shut down and start up again to apply hardware changes! The only thing you can’t change easily is privileged/unprivileged LXCs. For now, don’t worry about that.
    • A privileged container can access the hardware on the host, but an unprivileged container can’t (without some extra shenanigans). I’d make it privileged for now; if you want to change it later, after you’ve gotten some experience, you’ll be able to do so much more quickly. The HD4000 will do pretty well with h264 video but won’t help with h265/HEVC, so stick with h264 for anything with subtitles.
    • Sure can! You can actually mount it to the LXC from proxmox with a simple command:

    pct set XXX -mpX /host/dir,mp=/container/mount/point

    Where XXX is the container number and mpX is the mount number: mount point 0 (mp0) is the first, the next directory you mount is mp1, and so on.

    • Check out tteck’s helper scripts for an idea on the things you can do. Personally I recommend making the LXCs yourself, but these scripts are good to use to get familiar with what you can do:

    https://tteck.github.io/Proxmox/
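    To make that mount command concrete - with made-up values throughout (container ID 201, host path /tank/media), mounting a media share read-only and bumping the core count would look something like:

    ```shell
    # All values here are hypothetical examples, not from a real setup.
    pct set 201 -mp0 /tank/media,mp=/mnt/media,ro=1   # first mount point; ro=1 makes it read-only inside the LXC
    pct set 201 -cores 4                              # give the container 4 cores
    ```

    The ro=1 option is handy for exactly the “read-only media access” setup mentioned above.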




  • Only your container with the VPN connection and Transmission would be using the VPN’s DNS.

    With regards to the Pi-hole, yeah, I’d run that as a container on Proxmox (there are some handy scripts by [tteck](https://tteck.github.io/Proxmox/) - though I’m not generally a fan of running shell scripts off the net like this, it is easy). But I wouldn’t get rid of the Pi; I’d keep it as your secondary. A single point of failure will drive you nuts: if you have to reboot the server, everything will be down.

    Outside of the Beelink, it’s just the tiny/mini/micro options from Lenovo/HP/Dell, and then by generation of CPU. The Beelink is a popular choice, but personally I like the power that an i5/i7 will give me, and I’ve got a couple of machines with 32GB and 64GB of RAM - throwing 16GB at a VM I can access remotely for Windows apps is super useful, and I can otherwise live in my Linux desktop.

    I’d also say you don’t have to toss the 2012 Mac mini; grab the bits you want (like the SSD), drop in a replacement, and make it another Proxmox host. You could even run your second Pi-hole there.

    In terms of guides, sorry, I don’t really have any for my specific setup. But there are ones out there on setting up an LXC as a Docker host, Docker networking, setting up the *arrs, etc.

    Such as this one for a VPN container, and docker-compose samples for having other containers use that network: https://www.naturalborncoder.com/linux/2021/02/19/making-a-docker-container-use-a-vpn/

    For a good start on how to set up (after the containers are running) sonarr/Radarr/etc, check out: https://trash-guides.info/

    And you can always ask questions in the various home server communities here (and elsewhere on the fediverse obviously).

    Good luck!



  • Use your VPN’s DNS, and make sure to test that the IP seen publicly is your VPN’s. There are a bunch of simple torrent tests out there for that.

    No need to set up cron/rsync if you use the *arrs. They will handle fetching, renaming, upgrading, moving, etc.

    Related to those two above, and the Proxmox recommendation, here is what I do:

    • Set up an LXC (another type of container - lightweight, but closer to a VM in use; really useful) and make it run all your Docker containers.
    • Instead of a container that combines the torrent client and the VPN, have a distinct VPN container that other containers can connect through. You can also set it up so that if the VPN connection goes down, traffic stops entirely - this is safer.
    • Your torrent client will connect to that VPN container for network access.
    • Prowlarr (connects to all of your indexers) can then be set to use that network to search for torrents as well.
    • Sonarr, Radarr, etc don’t need to connect to that VPN container, since Prowlarr is what they would be querying.
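    Those bullets translate pretty directly into a docker-compose file. Here’s a minimal sketch, assuming gluetun as the VPN container (its built-in firewall gives you the “stops if the VPN drops” behavior) - the provider, key, and image choices are all placeholders to swap for your own:

    ```yaml
    # Hedged sketch only - provider, credentials, and services are assumptions.
    services:
      gluetun:
        image: qmcgaw/gluetun
        cap_add:
          - NET_ADMIN
        environment:
          - VPN_SERVICE_PROVIDER=mullvad   # placeholder: your provider here
          - WIREGUARD_PRIVATE_KEY=changeme # placeholder: your key here
      transmission:
        image: lscr.io/linuxserver/transmission
        network_mode: "service:gluetun"    # all torrent traffic rides the VPN container
        depends_on:
          - gluetun
      prowlarr:
        image: lscr.io/linuxserver/prowlarr
        network_mode: "service:gluetun"    # indexer searches also go out via the VPN
        depends_on:
          - gluetun
    ```

    Anything without network_mode: "service:gluetun" (Sonarr, Radarr, etc.) stays on the normal Docker network, matching the last bullet above.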

    Your Mac Studio would definitely be better for Plex unless you’re going to avoid transcoding entirely. If you are going to do direct stream only (and that means some subtitle types will be a problem, btw), you can put Plex in a VM.

    Now, a Synology that can handle Plex and transcodes is an option, but in my opinion you’re better off with a cheap 6th gen or higher Intel machine (preferably 10th for the latest capabilities, 8th for more transcoding options than 6th, but 6th is good enough for most people). I have two Synology NASes that could be used for Plex, plus an xpenology VM, and I don’t go that route because a $100-$200 business desktop (tiny/mini/micro) is more capable.

    With 4K content, I’d lean toward a 10th gen chip, though there are some cheaper modern options with Quick Sync iGPUs that do a great job (the Beelink S12 Pro specifically, with the Alder Lake N100 - any machine with the N100 would be the same).

    This can seem like a TON of setup, but honestly, once you get the hang of managing a few containers, it is just so much easier than other options.