  • Since we don’t know what server or VM tech you’re using, the advice will be pretty generic. For self-hosting you can likely get away with your iSCSI traffic sharing the LAN interface with your usual VM traffic, but if you need high throughput you will want iSCSI-optimized NICs and jumbo frames enabled (an MTU of 9000 is the standard here; a quick way to verify it is sketched at the end of this comment). This requires a switch that supports jumbo frames as well.

    For Windows, I find the iSCSI support to be very lacking. Every time I have used it I have had sporadic loss of connectivity, failure to mount on boot, and other issues. I would avoid it.

    For ESXi you can map an iSCSI LUN as a datastore and create VMDKs on top. This functions the same whether you use actual FC LUNs or NFS mounts, and I have had no issues with reliability. There’s also RDM (raw device mapping), which mounts the iSCSI LUN directly as a disk of the VM. If you’re using vSphere I would advise against this, as you lose the ability to vMotion or use DRS.
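
    Not part of the original question, but if you do turn on jumbo frames, a minimal sanity check on a Linux host looks like the sketch below. The interface name is a placeholder for whatever your storage-facing NIC is called:

        # Quick MTU check on Linux; "eth1" is a placeholder for the
        # iSCSI-facing interface and will likely differ on your host.
        from pathlib import Path

        IFACE = "eth1"          # hypothetical storage NIC
        EXPECTED_MTU = 9000     # jumbo frames

        mtu = int(Path(f"/sys/class/net/{IFACE}/mtu").read_text().strip())
        if mtu == EXPECTED_MTU:
            print(f"{IFACE}: jumbo frames enabled (MTU {mtu})")
        else:
            print(f"{IFACE}: MTU is {mtu}, expected {EXPECTED_MTU} end to end")

    Keep in mind the switch ports and the target’s interface need the same MTU end to end, or a mismatch can show up as odd stalls rather than a clean failure.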



  • Cool. Yeah, as a professional I am constantly aware of data integrity and have most of my shit stored on redundant drives. I had a WoW guild officer who shared his home setup: 8x12TB drives in Windows Storage Spaces with no redundancy, about 80% full. I had to ask how he slept at night knowing he could lose ~80TB of data at any time.

    Personally, my TrueNAS has 5x1.92TB SSDs set up as two mirror vdevs plus a hot spare for my iSCSI LUNs, and 8x1.2TB 10K drives in a raidz2 (two-disk parity) for my NAS storage.
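
    For anyone sizing something similar, the rough usable capacity of that layout works out as in this back-of-envelope sketch (it ignores ZFS overhead and the TB/TiB difference):

        # Approximate usable capacity for the pools described above.
        mirror_vdevs = 2 * 1.92        # two 2-way mirrors of 1.92 TB SSDs -> one drive's worth each
        # the fifth 1.92 TB SSD sits idle as the hot spare
        raidz2 = (8 - 2) * 1.2         # raidz2 gives up two drives' worth to parity

        print(f"SSD pool (iSCSI LUNs): ~{mirror_vdevs:.2f} TB usable")
        print(f"HDD pool (NAS):        ~{raidz2:.2f} TB usable")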


  • I believe ZFS works best when it has direct access to the disks, so putting an md device underneath it is not best practice. I’m not sure how well ZFS handles external disks, but that is something to consider. As for the drive sizes and redundancy, each drive size should have its own vdev. So you should be looking at a vdev of the 2x6TB in a mirror and a vdev of the 2x12TB in a mirror for maximum redundancy against drive failure, totaling 18TB usable in your pool (a rough sketch of that layout is after this comment). Later on, if you need more space, you can create new vdevs and add them to the pool.

    If you’re not worried about redundancy, then you could bypass ZFS and just set up a RAID-0 through mdadm, or add the disks to an LVM VG to use all the capacity, but remember that you might lose the whole volume if a disk dies. Keep in mind that this would include accidentally unplugging an external disk.
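
    Here’s roughly what that two-mirror-vdev layout looks like, wrapped in Python purely for illustration. The pool name and device nodes are placeholders; for real disks you’d want /dev/disk/by-id paths:

        # Sketch of the suggested pool: one mirror vdev per drive size.
        # Pool name and device paths are hypothetical.
        import subprocess

        subprocess.run([
            "zpool", "create", "tank",
            "mirror", "/dev/sda", "/dev/sdb",   # 2x 6 TB
            "mirror", "/dev/sdc", "/dev/sdd",   # 2x 12 TB
        ], check=True)

        # Growing the pool later is another vdev added the same way:
        # subprocess.run(["zpool", "add", "tank", "mirror", "/dev/sde", "/dev/sdf"], check=True)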



  • Based on your edit about getting the public IP: most firewalls/routers are not configured to do this by default (it’s called hairpinning, or NAT loopback). If you request your firewall/router’s external IP address from the internal network, you won’t get a response unless hairpinning is enabled, and some devices don’t let you enable it at all. If you have an internal DNS server, you should override the record internally to return the private IP address, so traffic goes to your nginx reverse proxy instead of the firewall/router.
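
    If you want to confirm the override is working from a LAN client, a quick check looks like this (the hostname and expected address are placeholders):

        # See what a LAN client actually resolves the public hostname to.
        # Hostname and reverse-proxy address are hypothetical examples.
        import socket

        HOSTNAME = "myservice.example.com"
        EXPECTED_PRIVATE_IP = "192.168.1.10"   # the nginx reverse proxy

        resolved = sorted({info[4][0] for info in socket.getaddrinfo(HOSTNAME, 443)})
        print(f"{HOSTNAME} resolves to: {', '.join(resolved)}")
        if EXPECTED_PRIVATE_IP in resolved:
            print("Internal DNS override is active; traffic stays on the LAN.")
        else:
            print("Still resolving to the public IP; expect hairpinning (or a timeout).")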



  • Since you’ve probably been using the SMB protocol to access the NAS, there are a few things to understand about NFS, which works differently. An NFS export is mapped for the whole client system rather than authenticated as a specific user, so if user IDs differ between the systems you may get access errors. For example, the default user in Synology has a uid of 1024, but most client systems default to 1000. This means your user may not have access to the share or its files even though you have it mounted on the client (a quick client-side check is sketched after this comment).

    One thing to check is what your shared folder’s NFS squash setting is. This is found in Control Panel > Shared Folder, under the NFS Permissions tab. If it’s set to “no mapping”, then uids must match. The easiest setup is “map all users to admin”, but you may run into issues later if you switch back to SMB, since new files will be owned by admin.
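
    A quick way to see the uid mismatch from the client side (the mount point path is a placeholder for wherever the share is mounted):

        # Compare the client user's uid with the owner uid of the NFS mount.
        # The mount point is a hypothetical example.
        import os

        MOUNT_POINT = "/mnt/nas"

        client_uid = os.getuid()
        owner_uid = os.stat(MOUNT_POINT).st_uid
        print(f"client uid: {client_uid}, share owner uid: {owner_uid}")
        if client_uid != owner_uid:
            print("uids differ; with 'no mapping' squash, expect permission errors")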