About a year ago I switched to ZFS for Proxmox so that I wouldn’t be running a technology preview.

Btrfs gave me no issues for years, and I even replaced a dying disk without trouble. I use raid 1 for my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can’t downgrade the kernel, plus the performance on my hardware is abysmal: I get only around 50-100 MB/s vs the several hundred I would get with btrfs.

Any reason I shouldn’t go back to btrfs? There seems to be a community fear of btrfs eating data or having unexplainable errors. That is sad to hear, as btrfs has had lots of time to mature in the last 8 years. I would never have considered it 5-6 years ago, but now it seems like a solid choice.

Anyone else pondering the switch, or already using btrfs?

  • cmnybo@discuss.tchncs.de · 43 points · 1 month ago

    Don’t use btrfs if you need RAID 5 or 6.

    The RAID56 feature provides striping and parity over several devices, same as the traditional RAID5/6. There are some implementation and design deficiencies that make it unreliable for some corner cases and the feature should not be used in production, only for evaluation or testing. The power failure safety for metadata with RAID56 is not 100%.

    https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices

    • Anonymouse@lemmy.world · 2 points · 30 days ago

      I’ve got raid 6 at the base level, LVM for partitioning, and an ext4 filesystem on top for a k8s setup. Based on this, btrfs doesn’t provide me with any advantages that I don’t already have at a lower level.

      Additionally, on my system btrfs uses more bits per file or something, such that I was running out of disk space vs ext4. Yeah, I can go buy more disks, but I like to think that I’m running at peak efficiency, using all the bits, with no waste.

      • sugar_in_your_tea@sh.itjust.works · 3 points · 30 days ago

        btrfs doesn’t provide me with any advantages that I don’t already have at a lower level.

        Well yeah, because it’s supposed to replace those lower levels.

        Also, BTRFS does provide advantages over ext4, such as snapshots, which I think are fantastic since I can recover if things go sideways. I don’t know what your use-case is, so I don’t know if the features BTRFS provides would be valuable to you.

        • Anonymouse@lemmy.world · 1 point · 29 days ago

          Generally, if a lower level can do a thing, I prefer to have the lower level do it. It’s not really a reason, just a rule of thumb; I like to think that the lower level is more efficient at doing the thing.

          I use LVM snapshots to do my backups. I don’t have any other reason for it.
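
That LVM snapshot workflow boils down to something like this. A sketch only: the volume group, LV, mount point, and backup destination names are all placeholders.

```shell
# Create a 5G copy-on-write snapshot of the root LV
lvcreate --snapshot --size 5G --name root_snap vg0/root

# Mount it read-only and back it up with any file-level tool
mount -o ro /dev/vg0/root_snap /mnt/snap
rsync -a /mnt/snap/ /backup/root/

# Clean up: the snapshot consumes space as the origin LV changes
umount /mnt/snap
lvremove -y vg0/root_snap
```

The snapshot gives a consistent point-in-time view, so the rsync doesn’t race against live writes.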

          That all being said, I’m using btrfs on one system and if I really like it, I may migrate to it. It does seem a whole lot simpler to have one thing to learn than all the layers.

          • sugar_in_your_tea@sh.itjust.works · 1 point · 29 days ago

            Yup, I used to use LVM, but the two big NAS filesystems have a ton of nice features, and they expect to control the disk management. I looked into BTRFS and ZFS, and since BTRFS is native to Linux (some of my SW doesn’t support BSD) and I don’t need anything other than a RAID mirror, that’s what I picked.

            I used LVM at work for simple RAID 0 systems where long term uptime was crucial and hardware swaps wouldn’t likely happen (these were treated like IOT devices), and snapshots weren’t important. It works well. But if you want extra features (file-level snapshots, compression, volume quotas, etc), BTRFS and ZFS make that way easier.

            • Anonymouse@lemmy.world · 2 points · 28 days ago

              I am interested in compression. I may give it a try when I swap out my desktop system. I did try btrfs in its early, post-alpha stage, but found that the support was not ready yet. I think I had a VM system that complained. It is older now and more mature, and maybe it’s worth another look.

  • vividspecter@lemm.ee · 24 points · 1 month ago

    No reason not to. Old reputations die hard, but it’s been many, many years since I’ve had an issue.

    I also like that btrfs is a lot more flexible than ZFS, which is pretty strict about the size and number of disks, whereas you can upgrade a btrfs array ad hoc.

    I’ll add to avoid RAID5/6 as that is still not considered safe, but you mentioned RAID1 which has no issues.

      • vividspecter@lemm.ee · 5 points · 1 month ago

        Check status here. It looks like it may be a little better than the past, but I’m not sure I’d trust it.

        An alternative approach I use is mergerfs + snapraid + snapraid-btrfs. This isn’t the best idea for a system drive, but if it’s something like a NAS it works well and snapraid-btrfs doesn’t have the write hole issues that normal snapraid does since it operates on r/o snapshots instead of raw data.

      • sntx@lemm.ee · 2 points · 30 days ago

        It’s affected by the write-hole phenomenon. In BTRFS’s case, that can mean perfectly good old data might get corrupted without any notice.

    • Possibly linux@lemmy.zip (OP) · +7/−12 · edited · 1 month ago

      What’s up is ZFS. It is solid but the architecture is very dated at this point.

      There are about a hundred different settings I could try to change but at some point it is easier to go btrfs where it works out of the box.

      • prenatal_confusion@feddit.org · 17 points · 1 month ago

        Since most people with decently simple setups don’t have the problem you describe, something is likely up with your setup.

        Yes it’s old and yes it’s complicated, but it doesn’t have to be to get decent performance.

        • Possibly linux@lemmy.zip (OP) · 3 points · 30 days ago

          I have been trying to get ZFS working well for months. Also, I am not the only one having issues, as I have seen lots of other posts about similar problems.

          • prenatal_confusion@feddit.org · 2 points · 29 days ago

            I don’t doubt that you have problems with your setup. Given the large number of (simple) ZFS setups that are working flawlessly, there are bound to be a large number of issues to be found on the Internet. People who are discontent voice their opinions more often and loudly compared to the people who are satisfied.

        • Avid Amoeba@lemmy.ca · 1 point · 30 days ago

          I used to run a mirror for a while with WD USB disks. Didn’t notice any performance problems. Used Ubuntu LTS, which has a built-in ZFS module rather than DKMS, although I doubt there are performance problems stemming from DKMS.

      • Avid Amoeba@lemmy.ca · 4 points · 30 days ago

        What seems dated in its architecture? Last time I looked at it, it struck me as pretty modern compared to what’s in use today.

        • Possibly linux@lemmy.zip (OP) · +3/−1 · edited · 30 days ago

          It doesn’t share well. Anytime anything IO-heavy happens, the system completely locks up.

          That doesn’t happen on other systems.

          • Avid Amoeba@lemmy.ca · 2 points · edited · 30 days ago

            That doesn’t say much about the architecture. Also, it’s really odd. Not denying what you’re seeing is happening, just that it seems odd based on the setups I run with ZFS. My main server is in fact a shared machine that I use as a workstation and for games alongside being a server. All works in parallel. I used to have a mirror, then a 4-disk RAIDz, and now an 8-disk RAIDz2. I have multiple applications constantly using the pool. I don’t notice any performance slowdowns on the desktop, or in-game when IO goes high. The only time I notice anything is when something like multiple Plex transcoders hit the CPU hard. Sequential performance is around 1.3GB/s, which is limited by the data bus speeds (USB DAS boxes). Random performance is very good, although I don’t have any numbers off the top of my head. I’m using mostly WD Elements shucked disks and a couple of IronWolfs. No enterprise-grade disks on this system.

            I’m also not saying that you have to keep fucking around with it instead of going Btrfs. Simply adding another anecdote to the picture. If I had a serious problem like that and couldn’t figure it out, I’d be on LVMRAID+Ext4, which is what I used prior to ZFS.

        • Possibly linux@lemmy.zip (OP) · 4 points · edited · 29 days ago

          I have gotten a ton of people to help me. Sometimes it is easier to piss people off to gather info and usage tips.

  • exu@feditown.com · 13 points · 1 month ago

    Did you set the correct block size for your disk? Especially modern SSDs like to pretend they have 512B sectors for some compatibility reason, while the hardware can only do 4k sectors. Make sure to set ashift=12.

    Proxmox also uses a very small volblocksize by default. This mostly applies to RAIDz, but try using a higher value like 64k. (The default on Proxmox is 8k, or 16k on newer versions.)

    https://discourse.practicalzfs.com/t/psa-raidz2-proxmox-efficiency-performance/1694
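
Roughly like this, assuming hypothetical device names and a pool called tank. Note that ashift is fixed at pool creation and cannot be changed afterwards.

```shell
# See what sector sizes the drive reports (many SSDs report 512B logical
# sectors over 4K physical ones)
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda

# Force 4K sectors at pool creation with ashift=12, then verify
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
zpool get ashift tank

# Zvols (what Proxmox puts VM disks on) also get their block size at
# creation time; volblocksize cannot be changed on an existing zvol
zfs create -V 32G -o volblocksize=64k tank/vm-100-disk-0
```

On Proxmox itself, the zvol block size for new VM disks comes from the storage configuration rather than manual `zfs create` calls, but the underlying property is the same.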

    • randombullet@programming.dev · 3 points · 29 days ago

      I’m thinking of bumping mine up to 128k since I do mostly photography and videography, but I’ve heard that 1M can increase write speeds but decrease read speeds?

      I’ll have a RAIDZ1 and a RAIDZ2 pool for hot storage and warm storage.

  • Domi@lemmy.secnd.me · 10 points · 29 days ago

    btrfs has been the default file system for Fedora Workstation since Fedora 33, so there’s not much reason not to use it.

  • Moonrise2473@feddit.it · 4 points · 1 month ago

    One day I had a power outage and I wasn’t able to mount the btrfs system disk anymore. I could mount it on another Linux machine, but I wasn’t able to boot from it anymore. I was very pissed; I lost a whole day of work.

  • tripflag@lemmy.world · 2 points · 1 month ago

    Not proxmox-specific, but I’ve been using btrfs on my servers and laptops for the past 6 years with zero issues. The only times it’s bugged out were due to bad hardware, and having the filesystem shout at me to make me aware of that was fantastic.

    The only place I don’t use btrfs is my nas data drives (since I want raidz2, and btrfs raid5 is hella shady), but the nas rootfs is btrfs.

  • poVoq@slrpnk.net · +2/−1 · 30 days ago

    I have been using btrfs on raid1 for a few years now with no major issues.

    It’s a bit annoying that a system with a degraded raid doesn’t boot up without manual intervention though.

    Also, not sure why, but I recently broke a system installation on btrfs by taking out the drive and accessing it (and writing to it) from another PC via a USB adapter. But I guess that is not a common scenario.

    • blackstrat@lemmy.fwgx.uk · 1 point · 29 days ago

      The whole point of RAID redundancy is uptime. The fact that btrfs doesn’t boot with a degraded disk is utterly ridiculous and speaks volumes about the developers.

      • horse_battery_staple@lemmy.world · 2 points · 30 days ago

        Are you backing up files from the FS or are you backing up the snapshots? I had a corrupted journal from a power outage that borked my install. I could not get to the snapshots on boot, so I booted into a live disk and recovered the snapshot that way. It would’ve taken hours to restore from a standard backup; restoring the snapshot took minutes.

        If you’re not backing up BTRFS snapshots and just backing up files you’re better off just using ext4.

        https://github.com/digint/btrbk
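
For context, what btrbk automates is essentially the btrfs snapshot/send workflow. A rough sketch with placeholder paths (the backup target must itself be a btrfs filesystem):

```shell
# Snapshots must be read-only (-r) to be usable with btrfs send
btrfs subvolume snapshot -r /home /.snapshots/home.1

# Initial full transfer to the backup drive
btrfs send /.snapshots/home.1 | btrfs receive /mnt/backup

# On the next run, send only the delta against the previous snapshot (-p)
btrfs subvolume snapshot -r /home /.snapshots/home.2
btrfs send -p /.snapshots/home.1 /.snapshots/home.2 | btrfs receive /mnt/backup
```

Because the backup is a real subvolume, restoring is a snapshot/send in the other direction rather than a file-by-file copy, which is why it's so much faster.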

  • TFO Winder@lemmy.ml · 1 point · 1 month ago

    Used it in a development environment. I didn’t need the snapshot feature, and it didn’t have a straightforward swap setup, which led to performance issues because of frequent writes to swap.

    Not a big issue but annoyed me a bit.

  • interdimensionalmeme@lemmy.ml · +1/−1 · 29 days ago

    For my jbod array, I use ext4 on gpt partitions. Fast, efficient, mature.

    For anything else I use ext4 on lvm thinpools.

      • interdimensionalmeme@lemmy.ml · 1 point · edited · 29 days ago

        There is error detection via CRC checks, and LVM does snapshots and offline deduplication.

        However, I also run offline sha256 checks and keep PAR files for forward error correction.
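
That checksum-plus-parity routine can be sketched roughly as follows. The `data/` directory and file names are placeholders, and the par2 step assumes par2cmdline is installed (guarded so the sketch still runs without it):

```shell
# Illustrative sample data (in practice this is the real data directory)
mkdir -p data && echo "example" > data/file.txt

# Build a checksum manifest of every file under data/
find data -type f -exec sha256sum {} + > checksums.sha256

# Verify later: exits non-zero if any file has silently changed
sha256sum -c --quiet checksums.sha256

# Forward error correction: ~10% parity, enough to repair small corruptions
command -v par2 >/dev/null && par2 create -r10 data.par2 data/file.txt \
  || echo "par2 not installed"
```

If verification ever fails, `par2 repair data.par2` can reconstruct the damaged blocks from the parity files.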

  • Lem453@lemmy.ca · +1/−1 · 1 month ago

    Btrfs only has issues with raid 5/6. It works well for raid 1 and 0. No reason to change if it works for you.