• 2 Posts
  • 10 Comments
Joined 3 years ago
Cake day: November 29th, 2021

  • Yes, I am using PersistentVolumes. I have played around with different tools that have backup/snapshot abilities, but I haven’t seen a way to integrate that functionality with a CD tool. With enough time I could probably put together something that allows the CD tool to take a snapshot, but having it also handle rollbacks would be a bit too much for me to manage without assistance.
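    Something like an Argo CD resource hook is roughly what I have in mind for the snapshot half. A rough sketch, assuming a CSI driver with snapshot support (the snapshot class and PVC names below are placeholders):

    ```yaml
    # Hypothetical PreSync hook: Argo CD creates this VolumeSnapshot
    # before each sync, leaving a point-in-time copy to restore from
    # if the upgrade goes wrong.
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      generateName: myapp-data-presync-       # hooks can use generateName
      annotations:
        argocd.argoproj.io/hook: PreSync
    spec:
      volumeSnapshotClassName: csi-snapclass   # placeholder; must match your CSI driver
      source:
        persistentVolumeClaimName: myapp-data  # placeholder PVC name
    ```

    The restore side would still be manual (create a PVC from the snapshot and swap it in), which is the part I don’t think I can automate on my own.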


  • Thanks for the reply! I am currently looking to do this for a Kubernetes cluster running various services to more reliably (and frequently) perform upgrades with automated rollbacks when necessary. At some point in the future, it may include services I am developing, but at the moment that is not the intended use case.

    I am not currently familiar enough with the CI/CD pipeline (currently Renovatebot and ArgoCD) to reliably accomplish automated rollbacks, but I believe I can get everything working with the exception of rolling back a data backup (especially for upgrades that contain backwards-incompatible database changes). In terms of storage, I am open to using various selfhosted services/platforms even if it means drastically changing the setup (e.g., moving from TrueNAS to Longhorn, moving from Ceph to Proxmox, etc.), as long as I can accomplish this without a noticeable performance degradation to any of the services.

    I understand that it can be challenging (or maybe impossible) to reliably generate backups while the services are running. I also understand that the best way to do this for databases would be to stop the service and perform a database dump. However, I’m not too concerned with losing <10 seconds of data (or however long the backup jobs take) if the backups can be performed in a way that does not result in corrupted data. Realistically, the most common use cases for the rollbacks would be invalid Kubernetes resources/application configuration as a result of the upgrade or the removal/change of a feature that I depend on.
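    To illustrate, the kind of backup job I have in mind is something like the following — a rough sketch assuming PostgreSQL, with placeholder names for the secret and PVC:

    ```yaml
    # Hypothetical nightly backup: pg_dump takes a consistent dump of a
    # running PostgreSQL database, so the service does not need to stop.
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: postgres-backup
    spec:
      schedule: "0 3 * * *"
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
                - name: pg-dump
                  image: postgres:16
                  command: ["/bin/sh", "-c"]
                  args:
                    - pg_dump "$DATABASE_URL" > /backups/db-$(date +%F).sql
                  env:
                    - name: DATABASE_URL
                      valueFrom:
                        secretKeyRef:
                          name: postgres-credentials  # placeholder secret
                          key: url
                  volumeMounts:
                    - name: backups
                      mountPath: /backups
              volumes:
                - name: backups
                  persistentVolumeClaim:
                    claimName: backup-storage         # placeholder PVC
    ```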





  • Congrats on getting everything working - it looks great!

    One piece of (unprovoked, potentially unwanted) advice is to set up SSL. I know you’re running your services behind WireGuard, so there isn’t too much of a security concern with serving them over HTTP. However, as the number of services or users (family, friends, etc.) grows, you’re more likely to run into issues with services not running on HTTPS.

    The creation and renewal of SSL certificates can be done for free (assuming you already have a domain name) and automatically with certain reverse proxy services like Nginx Proxy Manager or Traefik, both of which can run in Docker. If you set everything up with a wildcard certificate via a DNS challenge, you can keep the services you run hidden from people scanning certificate transparency logs for your domain (i.e., nobody will see that an SSL certificate was issued for immich.your.domain). How you set up the DNS challenge will vary by DNS provider and reverse proxy, but regardless of which services you use, the only additional thing you will likely need for a wildcard certificate is an email address (again, assuming you have a domain name).
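    As a rough sketch of what this looks like in Traefik (assuming Cloudflare as the DNS provider; the resolver name, email, and domain are placeholders):

    ```yaml
    # Hypothetical Traefik static configuration (traefik.yml):
    # wildcard certificate via Let's Encrypt DNS-01 challenge.
    certificatesResolvers:
      letsencrypt:
        acme:
          email: you@your.domain        # the one extra thing you need
          storage: /letsencrypt/acme.json
          dnsChallenge:
            provider: cloudflare        # varies by DNS provider
    ```

    A router can then request the wildcard in the dynamic configuration:

    ```yaml
    http:
      routers:
        immich:
          rule: "Host(`immich.your.domain`)"
          service: immich               # placeholder service name
          tls:
            certResolver: letsencrypt
            domains:
              - main: "your.domain"
                sans:
                  - "*.your.domain"
    ```

    (Your DNS provider’s API credentials also have to be passed to Traefik, typically as environment variables; how exactly varies by provider.)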




  • rhymepurple@lemmy.ml to Selfhosted@lemmy.world · Protectli FW6B

    Some additional ideas for the Protectli device:

    • backup/redundant OPNsense instance for high availability
    • backup NAS/storage
      • set it up at a family/friend’s house
    • a test/QA device for new services or architecture changes
    • travel router/firewall
    • home theater PC
    • Proxmox/virtualization host
      • Kubernetes cluster
    • Tor, I2P, cryptocurrency, etc. node
    • Home Assistant
      • dedicated local STT/TTS/conversation agent
    • NVR
    • low-powered desktop PC

    There are so many options. It really depends on what you want, your other devices, the Protectli’s specs, your budget, etc.




  • tl;dr: Multiple browsers and browser engines must each maintain a notable market share in order to properly ensure and maintain truly open web standards.

    It is important that Firefox and its components, like Gecko and SpiderMonkey, exist and maintain a notable market share. Likewise, it is important for WebKit and its components to exist and maintain a notable market share. The same is true for any other browser, rendering engine, or JavaScript engine.

    While it is great that we have so many non-Google Chrome alternatives like Chromium, Edge, Vivaldi, etc., they all use the same or very similar engines. This means that they all display and interact with websites nearly identically.

    When Google decides that a certain implementation or interpretation of web standards, formats, behavior, etc. should be included in Google Chrome (and consequently all Chromium-based browsers), the majority of the browser market will behave that way. If Chrome/Chromium-based browsers reach a nearly unanimous market share, then Google can ignore any or all open web standards, force its will in deciding and implementing new open web standards, or simply let Chrome’s behavior become the de facto web standard.

    When any one entity has that much control over the open web standards, the web standards are no longer truly “open” and in this case become “Google’s web standards”. In some (or maybe even many) cases, this may be fine. However, we saw with Internet Explorer in the past that this is not something the market should allow. We are already seeing evidence that we shouldn’t allow Google this much influence, with things like the adoption of JPEG XL or the implementation of FLoC.

    With three or more browser engines, rendering engines, and browsers each holding a notable market share, web developers are forced to develop in adherence to the accepted open web standards. With enough market share spread across those engines and browsers, each of them is incentivized to maintain compatibility with open web standards. As long as the open web standards are designed and maintained without overt influence from any single entity (or small group of entities), and the open standards are actively used, the interests of all internet users are best served.

    Otherwise, only the interests of a few entities (in this case, Google) are served.