• 0 Posts
  • 22 Comments
Joined 9 months ago
Cake day: October 16th, 2023



  • I’ll add that sometimes the self-hosted version does something that the “official, paid”[1] version doesn’t, or at the very least lets you try to hack it together yourself.

    A problem with commercial offerings is that their idea of a complete product is different from yours, and depending on the feature there isn’t enough $$$ incentive to pursue it. This is the major problem with Google: search is such an ocean of income that no other project will ever stand up to it.

    [1] I say “official” because quite a few self-hosted versions are clones of some paid product.


  • Is there a typo in the HDD prices? They look super expensive. For that amount you could get 48TB of NVMe drives (assuming you have a controller for that many sticks, but you could probably buy one with the money left over from the sticks).

    In general you shouldn’t spend more on newer/larger-capacity HDDs unless you absolutely need that much storage per slot (that is, you are space constrained), as the prices invariably go down in less than a year. Smaller 12~16TB drives are a much better deal per TB. Check diskprices.com for current trends.
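    To make “better deal per TB” concrete, here’s a rough sketch of the math (the prices below are made-up placeholders, not quotes; plug in whatever diskprices.com shows today):

    ```python
    # Hypothetical list prices -- substitute current ones from diskprices.com.
    drives = {
        "12TB": (12, 180.0),  # (capacity in TB, price in $)
        "16TB": (16, 260.0),
        "22TB": (22, 500.0),
    }

    for name, (tb, price) in drives.items():
        print(f"{name}: ${price / tb:.2f}/TB")
    # The newest/largest capacities usually lose on $/TB, so only pay that
    # premium when you are genuinely slot-constrained.
    ```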

    If you care about speed, relying on HDDs for it is a bad proposition. You are better served by an SSD cache in front of the HDDs. If the SSD fails, it’s much cheaper and faster to replace it than to rebuild a degraded RAID array.

    As for CPU performance, pretty much any recent Intel chip with QuickSync will handle your needs. If money isn’t an issue, I’d go with a 12th-gen or newer Intel NUC. I have one and it handles anything I throw at it, including the most demanding 4K transcodes. The NUC plus a 4-disk Synology draws less than 50W with the disks active.



  • As an experienced software developer, I can confidently say that no amount of technology will fix a bad user workflow. Your best plan of action is to sit down with your mother and come up with a consistent workflow she is happy with. If she doesn’t know or can’t come up with one, find out what the industry standard is (which seems to be year-month-day folders? I’m not a photographer), or ask an experienced photographer friend. Remember you can adjust the workflow later, but it’s important to have something stable while she is learning how she really likes to work.
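    If she does land on date-based folders, the ingest step is simple enough to script. A minimal sketch, assuming hypothetical folder names and sorting by file modification time (real ingest tools read the EXIF capture date instead, e.g. via the exifread library):

    ```python
    # Move files from a dump folder into a year/month/day library layout.
    import shutil
    from datetime import datetime
    from pathlib import Path

    INBOX = Path("~/inbox").expanduser()      # hypothetical dump folder
    LIBRARY = Path("~/photos").expanduser()   # hypothetical library root

    for photo in INBOX.iterdir():
        if not photo.is_file():
            continue
        taken = datetime.fromtimestamp(photo.stat().st_mtime)
        dest = LIBRARY / f"{taken:%Y/%m/%d}"
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(photo), str(dest / photo.name))
    ```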

    Storing the photos on a NAS is a good choice. As for the slowness, I suppose she would need to keep the most recent photos on her main computer/laptop, then move/sync them to the NAS once editing is done? I don’t see why one would need to edit old photos every day, so keeping only the most recent/active work on the computer seems smart.

    “adobe Lightroom performs bad with network drives”

    Most software performs worse with network drives because the host OS can’t optimize as much as it can with a local drive, especially with the random access patterns editing software uses (contrast with copying/streaming a file, which is sequential). If performance is an issue, the only real solution is to copy files locally and then sync back to the NAS. You can reduce the latency of network drives by putting an SSD in the NAS and using a faster link between the machines. Gigabit is a good start, but I’d go with 10Gbit: although the files are only 70MB each, Lightroom is probably fetching several photos at the same time, and that will easily saturate a Gigabit link.
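    For a sense of scale, here’s the back-of-the-envelope math on those 70MB files (it ignores protocol overhead, so real numbers will be somewhat worse):

    ```python
    # Transfer time for 70MB RAW files over different links.
    FILE_MB = 70
    links = {"1 Gbit/s": 125, "10 Gbit/s": 1250}  # usable MB/s, roughly

    for name, mb_per_s in links.items():
        one = FILE_MB / mb_per_s
        batch = 8 * FILE_MB / mb_per_s  # e.g. Lightroom prefetching 8 previews
        print(f"{name}: {one:.2f}s per photo, {batch:.2f}s for a batch of 8")
    # 1 Gbit/s: 0.56s per photo, 4.48s for a batch of 8
    # 10 Gbit/s: 0.06s per photo, 0.45s for a batch of 8
    ```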

    Good luck!



  • If you had monitoring, it wouldn’t have taken you 6 hours to catch it.

    I’d say learn HA anyway because it’s a good skill, but that doesn’t prevent you from having the other parts I mentioned. I say this because, again, unless you are experienced with HA, there will be edge cases where it doesn’t do what you thought it would, and your service will be down all the same. Monitoring/alerting and a one-click/shell-script install will be much more valuable in the short-to-mid term.



  • HA involves many factors: service uptime, link uptime, DB uptime, etc. I’d probably put a reverse proxy in front and use the servers as upstreams. Web servers tend to be more reliable, so in your case a single instance ought to suffice.

    Aside from actual HA tools, your most important assets at this stage are an uptime check service that pings your server every n seconds, a reliable backup/restore procedure, and a one-button deployment strategy.
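    The uptime check really can start out this dumb. A minimal sketch with a placeholder URL and interval (hosted tools like Uptime Kuma or healthchecks.io do this properly, with real alerting; this just shows the idea):

    ```python
    # Poll a health endpoint every INTERVAL seconds and complain on failure.
    import time
    import urllib.request

    URL = "https://example.com/health"  # hypothetical endpoint
    INTERVAL = 30                       # seconds between checks

    while True:
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                if resp.status != 200:
                    print(f"ALERT: {URL} returned {resp.status}")
        except OSError as exc:  # URLError/HTTPError are OSError subclasses
            print(f"ALERT: {URL} unreachable: {exc}")
        time.sleep(INTERVAL)
    ```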

    Shit can and will happen. What are you going to do when it does? And how fast can you respond? I say this because you most likely won’t get HA right the first, second, or third time unless you already have tons of experience behind you. Embrace failure and plan accordingly.


  • Your main dividing factor here is whether you want to do transcoding or not. If so, you need to pick a CPU with a good iGPU, and for Intel that starts with the 8th gen. Older gens work well for 1080p, but they aren’t great for 4K. I have an i5-7500 that couldn’t do 4K HEVC without lag (although I was using it as an HTPC; maybe headless would be enough?).

    For anything else, pretty much any computer will do. Most of the stuff you host will be idle most of the time, so your CPU only needs to be powerful enough for the apps you are actually using at any given moment.