• 0 Posts
  • 27 Comments
Joined 3 months ago
Cake day: April 5th, 2024






  • I’ll be honest, OP: if it’s on a TV I use the newer Fire Sticks with the Jellyfin app. They already support various codecs and stream from my server just fine. They’re cheap too and come with a remote.

    If I were just trying to get a home-made client up, I would consider Debian Bookworm and just use the .deb from the GitHub link here…

    https://jellyfin.org/downloads/clients/

    Personally I’d throw on Cockpit to make remote administration a bit easier and set up an autostart at login for Jellyfin Media Player with the startup apps. You can even add a launch flag to start it full screen like…

    jellyfin --fullscreen
    

    The media player doesn’t really need special privileges, so you could create a basic user account just for Jellyfin.
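
    If it helps, a rough sketch of that on Debian could look like the following; the jellyfin-kiosk username, the autostart path, and the assumption that the client binary is on PATH and accepts --fullscreen (as above) are all placeholders to adjust for your install:

    # create an unprivileged account just for the media player (username is an assumption)
    sudo adduser --disabled-password --gecos "" jellyfin-kiosk
    # autostart entry for that user's desktop session
    sudo mkdir -p /home/jellyfin-kiosk/.config/autostart
    printf '%s\n' '[Desktop Entry]' 'Type=Application' \
      'Name=Jellyfin Media Player' 'Exec=jellyfin --fullscreen' |
      sudo tee /home/jellyfin-kiosk/.config/autostart/jellyfin.desktop
    sudo chown -R jellyfin-kiosk: /home/jellyfin-kiosk/.config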


  • Setups for hardware decoding depend on the underlying OS. A quite common example is Docker on Debian or Ubuntu. You will need to pass the appropriate /dev/ directories, and at times files, into your Jellyfin Docker container with the --device flag (or the devices option in compose). Commonly that would be /dev/dri

    It gets more complicated with a VM because you are likely going to be passing the hardware directly into the VM, which will prevent anything outside the VM from using it.

    You can get around this by placing Docker directly on the OS, or by placing Docker in a Linux container with appropriate permissions and the same devices passed into the Linux container. In this manner system devices and other services will still have access to the video card.

    All this to say, how you pass the hardware into Jellyfin depends on your setup and where you have Docker installed. Either way, Jellyfin on Docker needs you to pass the video card into the container with the --device flag (or the devices option in compose), and Docker needs to be able to see the device to do that.
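
    For a plain Docker host, a minimal sketch could look like this; the container name, host paths, and image tag are assumptions to adapt:

    # pass the render nodes into the container for hardware decoding
    docker run -d \
      --name jellyfin \
      --device /dev/dri:/dev/dri \
      -v /srv/jellyfin/config:/config \
      -v /srv/media:/media \
      -p 8096:8096 \
      jellyfin/jellyfin:latest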


  • You can do that, or you can use a reverse proxy to expose your services without opening ports for every service. With a reverse proxy you would point ports 80 and 443 at the reverse proxy once traffic hits your router/firewall. In the reverse proxy you would configure hostnames that point to the local service IPs/ports. Reverse proxy servers like nginx proxy manager then allow you to set up HTTPS certificates for every service you expose. They also allow you to disable access to them through a single interface.
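
    If you wanted to try nginx proxy manager, a rough sketch of running it in Docker might look like this (the volume paths are assumptions, and port 81 is its admin UI):

    docker run -d \
      --name nginx-proxy-manager \
      -p 80:80 -p 81:81 -p 443:443 \
      -v "$PWD/npm/data:/data" \
      -v "$PWD/npm/letsencrypt:/etc/letsencrypt" \
      jc21/nginx-proxy-manager:latest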

    I do this and have set up some blocklists on the OPNsense firewall. Specifically, you could set up the Spamhaus blocklists to drop any traffic that originates from those IPs. You can also use the Emerging Threats blocklist, which has Spamhaus and a few more integrated from DShield, etc. These can be made into simple firewall rules.

    If you want to block entire countries’ IP ranges you can set up the GeoIP blocklist in OPNsense. This requires a MaxMind account but allows you to pick and choose countries.

    You can also set up Suricata IPS in OPNsense to block detected traffic using daily-updated rule lists. It’s a bit more resource intensive than regular firewall rules but also far more advanced at detecting threats.

    I use both the firewall lists and IPS scanning on both the WAN and LAN in promiscuous mode. This heavily defends your network in ways that most modern networks can’t even take advantage of.

    If you want even more security you can set up Unbound with DNS over TLS. You could even set up OpenVPN and route all your internal traffic through that to a VPN provider. Personally I prefer having individual systems connect to a VPN service.

    Anyway, all this to say: no, you don’t need a VPN static IP. You may prefer instead a domain name you can point at your systems. If you’re worried about security here, identify providers that allow crypto and don’t care about identity. This is true for VPN providers as well.


  • This is a journey that will likely fill you with knowledge. During that process what you consider “easy” will change.

    So the answer right now for you is use what is interesting to you.

    Yes, there are plenty of ways to do the same thing. IMO though, right now, just jump in and install something. Then play with it.

    Just remember modern CPUs can host many services from a single box. How they do that can vary.


  • So, you mentioned using Proxmox as the underlying system, but when I asked about the Proxmox filesystem I was referring to whether you kept the defaults during installation, which would be LVM/ext4, or changed to ZFS as the underlying Proxmox filesystem. It sounds like you have additional drives that you used the Proxmox command line to “passthru” as SCSI devices. Just be aware this is not true passthrough. It is slightly virtualized, though it does hand the entire storage of the device to the VM. The only true passthrough without that slight virtualization would be PCI passthrough utilizing IOMMU.
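
    If you ever do go the PCI passthrough route, a quick sketch for checking that IOMMU is actually enabled on the Proxmox host would be something like:

    # Intel shows DMAR entries, AMD shows AMD-Vi
    dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
    # list the IOMMU groups; devices pass through in whole groups
    find /sys/kernel/iommu_groups/ -type l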

    I have some experience with this, specifically because of a client doing something similar with a TrueNAS VM. They discovered they couldn’t import their pool into another system because Proxmox had slightly virtualized the disks when they were added to the VM in this manner. In other words, ZFS wasn’t directly managing the disks; it was managing virtual disks.

    Anyway, it would still help to know the underlying filesystem of the slightly virtualized disks you gave to mergerfs. Are these ext4, XFS, btrfs? mergerfs is just a union filesystem that unifies storage across multiple mountpoints into a single virtual filesystem, which means you have another couple of layers of complexity in your setup.

    If you are worried about disk I/O you may consider letting the hypervisor manage these disks and storage a bit more directly, removing some of the filesystem layers.

    I’d recommend just making a single ZFS pool from these disks within Proxmox to do this. Obviously this is a pretty big transition on a production system. Another option would be creating a btrfs RAID from these disks within Proxmox and adding that mountpoint as storage to the hypervisor.
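
    Purely as an illustration (the pool name, mirror layout, and disk IDs are assumptions for your hardware), building that pool on the Proxmox host could look like:

    # create a mirrored pool directly on the hypervisor using stable by-id paths
    zpool create -o ashift=12 tank mirror \
      /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
    # register it as Proxmox storage so VM disks can live on it
    pvesm add zfspool tank-vmstore --pool tank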

    Personally I use ZFS, but btrfs works well enough. Regardless, this would allow you to just hand storage to VMs from the GUI, and the hypervisor would handle disk I/O much more efficiently.

    As for the error, it’s typically repaired by unmount/mount operations. As I mentioned before, the cause varies but is usually a loss of network connectivity or an inability to lock something that’s in use.

    My advice would be to investigate reducing your storage complexity. It will simplify administration and future transitions.


    Reposted to OP, as OP claims his comments are being purged



  • Most filesystems, as mentioned in the guide, that live inside qcow2 images, zvols, or even raw images sitting on a ZFS dataset would benefit from a ZFS recordsize of 64K. By default the recordsize is 128K.

    I would never use 1M for any dataset that has VM disks inside it.

    I would create a new dataset for media off the pool and set a recordsize of 1M. You can only really get away with this if you have media files directly inside this dataset, so pics, music, videos.

    The cool thing is you can set these options on a per-dataset basis, so one dataset can have one recordsize and another dataset can have another.
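
    As a quick sketch, with placeholder pool/dataset names:

    # dataset for VM images: smaller records suit their random I/O
    zfs create -o recordsize=64K tank/vms
    # dataset for large sequential media files
    zfs create -o recordsize=1M tank/media
    # confirm the per-dataset settings
    zfs get recordsize tank/vms tank/media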


  • It looks like you are using legacy BIOS; mine is using UEFI with a ZFS rpool

    proxmox-boot-tool status
    Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
    System currently booted with uefi
    31FA-87E2 is configured with: uefi (versions: 6.5.11-8-pve, 6.5.13-5-pve)
    

    However, as with everything, a method always exists to get it done. Or not, if you are concerned.

    If you are interested it would look like…

    Pool Upgrade

    sudo zpool upgrade <pool_name>
    

    Confirm Upgrade

    sudo zpool status
    
    

    Refresh boot config

    sudo proxmox-boot-tool refresh
    
    

    Confirm Boot configuration

    cat /boot/grub/grub.cfg
    

    You are looking for directives like this to see if they are indeed pointing at your existing rpool

    root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
    

    here is my file if it helps you compare…

    #
    # DO NOT EDIT THIS FILE
    #
    # It is automatically generated by grub-mkconfig using templates
    # from /etc/grub.d and settings from /etc/default/grub
    #
    
    ### BEGIN /etc/grub.d/000_proxmox_boot_header ###
    #
    # This system is booted via proxmox-boot-tool! The grub-config used when
    # booting from the disks configured with proxmox-boot-tool resides on the vfat
    # partitions with UUIDs listed in /etc/kernel/proxmox-boot-uuids.
    # /boot/grub/grub.cfg is NOT read when booting from those disk!
    ### END /etc/grub.d/000_proxmox_boot_header ###
    
    ### BEGIN /etc/grub.d/00_header ###
    if [ -s $prefix/grubenv ]; then
      set have_grubenv=true
      load_env
    fi
    if [ "${next_entry}" ] ; then
       set default="${next_entry}"
       set next_entry=
       save_env next_entry
       set boot_once=true
    else
       set default="0"
    fi
    
    if [ x"${feature_menuentry_id}" = xy ]; then
      menuentry_id_option="--id"
    else
      menuentry_id_option=""
    fi
    
    export menuentry_id_option
    
    if [ "${prev_saved_entry}" ]; then
      set saved_entry="${prev_saved_entry}"
      save_env saved_entry
      set prev_saved_entry=
      save_env prev_saved_entry
      set boot_once=true
    fi
    
    function savedefault {
      if [ -z "${boot_once}" ]; then
        saved_entry="${chosen}"
        save_env saved_entry
      fi
    }
    function load_video {
      if [ x$feature_all_video_module = xy ]; then
        insmod all_video
      else
        insmod efi_gop
        insmod efi_uga
        insmod ieee1275_fb
        insmod vbe
        insmod vga
        insmod video_bochs
        insmod video_cirrus
      fi
    }
    
    if loadfont unicode ; then
      set gfxmode=auto
      load_video
      insmod gfxterm
      set locale_dir=$prefix/locale
      set lang=en_US
      insmod gettext
    fi
    terminal_output gfxterm
    if [ "${recordfail}" = 1 ] ; then
      set timeout=30
    else
      if [ x$feature_timeout_style = xy ] ; then
        set timeout_style=menu
        set timeout=5
      # Fallback normal timeout code in case the timeout_style feature is
      # unavailable.
      else
        set timeout=5
      fi
    fi
    ### END /etc/grub.d/00_header ###
    
    ### BEGIN /etc/grub.d/05_debian_theme ###
    set menu_color_normal=cyan/blue
    set menu_color_highlight=white/blue
    ### END /etc/grub.d/05_debian_theme ###
    
    ### BEGIN /etc/grub.d/10_linux ###
    function gfxmode {
            set gfxpayload="${1}"
    }
    set linux_gfx_mode=
    export linux_gfx_mode
    menuentry 'Proxmox VE GNU/Linux' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-/dev/sdc3' {
            load_video
            insmod gzio
            if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
            insmod part_gpt
            echo    'Loading Linux 6.5.13-5-pve ...'
            linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
            echo    'Loading initial ramdisk ...'
            initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
    }
    submenu 'Advanced options for Proxmox VE GNU/Linux' $menuentry_id_option 'gnulinux-advanced-/dev/sdc3' {
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-advanced-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.13-5-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
            }
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.13-5-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.13-5-pve-recovery-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.13-5-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.13-5-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.13-5-pve
            }
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-advanced-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.11-8-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro       root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
            }
            menuentry 'Proxmox VE GNU/Linux, with Linux 6.5.11-8-pve (recovery mode)' --class proxmox --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-6.5.11-8-pve-recovery-/dev/sdc3' {
                    load_video
                    insmod gzio
                    if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
                    insmod part_gpt
                    echo    'Loading Linux 6.5.11-8-pve ...'
                    linux   /ROOT/pve-1@/boot/vmlinuz-6.5.11-8-pve root=ZFS=/ROOT/pve-1 ro single       root=ZFS=rpool/ROOT/pve-1 boot=zfs
                    echo    'Loading initial ramdisk ...'
                    initrd  /ROOT/pve-1@/boot/initrd.img-6.5.11-8-pve
            }
    }
    
    ### END /etc/grub.d/10_linux ###
    
    ### BEGIN /etc/grub.d/20_linux_xen ###
    
    ### END /etc/grub.d/20_linux_xen ###
    
    ### BEGIN /etc/grub.d/20_memtest86+ ###
    ### END /etc/grub.d/20_memtest86+ ###
    
    ### BEGIN /etc/grub.d/30_os-prober ###
    ### END /etc/grub.d/30_os-prober ###
    
    ### BEGIN /etc/grub.d/30_uefi-firmware ###
    menuentry 'UEFI Firmware Settings' $menuentry_id_option 'uefi-firmware' {
            fwsetup
    }
    ### END /etc/grub.d/30_uefi-firmware ###
    
    ### BEGIN /etc/grub.d/40_custom ###
    # This file provides an easy way to add custom menu entries.  Simply type the
    # menu entries you want to add after this comment.  Be careful not to change
    # the 'exec tail' line above.
    ### END /etc/grub.d/40_custom ###
    
    ### BEGIN /etc/grub.d/41_custom ###
    if [ -f  ${config_directory}/custom.cfg ]; then
      source ${config_directory}/custom.cfg
    elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
      source $prefix/custom.cfg
    fi
    ### END /etc/grub.d/41_custom ###
    

    You can see those lines in the linux sections.



  • Keep in mind it’s more an issue with writes, as others mentioned, when it comes to SSDs. I use two SSDs in a ZFS mirror that I installed Proxmox directly onto. It’s an option in the installer and it’s quite nice.

    As for combating writes, that’s actually easier than you think and applies to any filesystem. It just takes knowing what is write intensive. Most of the time for a Linux OS like Proxmox that’s going to be temp files and logs, both of which can easily be migrated to tmpfs. Doing this will dramatically increase the lifespan of any SSD. You just have to understand that restarting clears those locations, because they now exist in RAM.
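
    A minimal sketch of what that could look like in /etc/fstab, with sizes you’d tune to your own RAM:

    # keep temp files and logs in RAM instead of on the SSD
    tmpfs  /tmp      tmpfs  defaults,noatime,size=1g    0  0
    tmpfs  /var/tmp  tmpfs  defaults,noatime,size=512m  0  0
    tmpfs  /var/log  tmpfs  defaults,noatime,size=512m  0  0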

    As I mentioned elsewhere, OPNsense has an option within the GUI to migrate tmp files to memory.


  • I’m specifically referencing this little bit of info for optimizing zfs for various situations.

    VMs, for example, should exist in their own dataset with a tuned recordsize of 64K.

    Media should exist in its own dataset with a tuned recordsize of 1M.

    LZ4 is quick and should always be enabled. It also works efficiently with larger recordsizes.

    Anyway, all the little things add up with ZFS. When you have an underlying ZFS you can get away with simpler, more performant filesystems on zvols or qcow2. XFS, UFS, and ext4 all work well with 64K recordsizes on the underlying ZFS dataset/zvol.

    Btw, just changing the option on a dataset doesn’t immediately affect existing data. You have to move the data out and then back in for it to pick up the new recordsize.
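
    As a sketch, assuming a dataset called tank/media, that could look like:

    # the new recordsize only applies to newly written blocks
    zfs set recordsize=1M tank/media
    zfs get recordsize tank/media
    # hypothetical example: copy files into a fresh dataset so they get rewritten
    zfs create -o recordsize=1M tank/media-new
    rsync -a /tank/media/ /tank/media-new/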


  • Upgrading a ZFS pool itself shouldn’t make a system unbootable even if an rpool (root pool) exists on it.

    That could only happen if the upgrade took a shit during a power outage or something like that. The upgrade itself usually only takes a few seconds from the command line.

    If it makes you feel better, I upgraded mine with an rpool on it and it was painless. I do have everything backed up though, so I rarely worry. However, I understand being hesitant.



  • It looks like you could also do a zpool upgrade. This will just upgrade your legacy pools to the newer ZFS version. That command is fairly simple to run from the terminal if you are already examining the pool.

    Edit

    Btw, if you have run PVE updates, it may be expecting some newer ZFS feature flags for your pool. A pool upgrade may resolve the issue by enabling the new features.


  • Out of curiosity, what filesystem did you choose for your OPNsense VM? Also, can you tell if it’s a zvol, qcow2, or raw disk? FYI, if it’s a qcow2 or a raw, both would benefit from a recordsize of 64K if they live in a VM dataset. If it’s a zvol, 64K can still help.
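
    If you’re not sure what the disk actually is, a quick sketch for checking from the Proxmox host (the VMID 100 and the disk name are assumptions):

    # zvols show up as storage:vm-100-disk-0, file-based disks end in .qcow2 or .raw
    qm config 100 | grep -E '^(scsi|virtio|sata|ide)'
    # for a zvol check volblocksize; for file-based disks check the dataset's recordsize
    zfs get volblocksize,recordsize rpool/data/vm-100-disk-0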

    I also use a heavily optimized setup running OPNsense within Proxmox. My VM filesystem is UFS because it sits on top of Proxmox’s ZFS. You can always find some settings in your OPNsense VM to migrate log files to tmpfs, which places them in memory. That will heavily reduce disk writes from OPNsense.