

That’s not exactly true; Synology doesn’t do anything you can’t access from an off-the-shelf Linux install (it’s the usual mdraid and btrfs). But you’d better know what you’re doing if you go that route.
Conduit is in no way compact either. I tuned its caches because two gigs of RAM seemed ridiculous for a single-user instance, but all I got as a result was mobile client sync lag.
XMPP used to be so much nicer…
This is the best answer. Your router protects you from the outside, but a local firewall can protect you from someone prodding your LAN from a hacked camera or some other IoT device. Having a firewall locally just minimizes the attack surface further.
Another alternative: https://pushover.net/
You don’t need `-it` because you aren’t running an interactive session in Docker. It might be failing because you’re asking for a pseudo-terminal in an environment where that doesn’t make sense.
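Roughly, the difference (image names here are just placeholders):

```
# one-off interactive shell: -i (keep stdin open) and -t (allocate a TTY) make sense
docker run -it --rm alpine sh

# long-running service: no TTY, no stdin; run detached instead
docker run -d --name myapp myimage:latest
```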
Seq is expecting structured logs, which yours aren’t. So you want to either convert your app’s logs into a structured format (which is generally hard for a random third-party application) or use a log collector that’s fine with non-structured logs (e.g. Loki+Grafana don’t care about the shape of your logs and you can format the output while querying).
I have a dedicated vm for things that are crucial to the home network, either latency-critical or network related.
That’d be my DNS resolver (I enforce it across VLANs by hijacking anything that tries to talk to other resolvers, like random IoT devices), homebridge for the less important home automation, and my own Matter controller for the most important home automation (controlling the lights).
My router of choice is RouterOS in another VM. I tried opnsense, pfsense, vyatta, and a bunch of others (even a containerized Cisco router), and I settled on ROS because it was the only one that could do IPv6 properly (apart from Cisco, but that has other issues).
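The DNS hijack, for reference, is just a dst-nat rule per IoT VLAN; on ROS it looks roughly like this (interface name and resolver address are made up, and you’d want a matching rule for tcp/53 too):

```
/ip firewall nat add chain=dstnat in-interface=vlan-iot protocol=udp dst-port=53 \
    dst-address=!10.0.10.53 action=dst-nat to-addresses=10.0.10.53
```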
For the less important things I run them on k8s and really, there are only two bits worth mentioning as essential: ArgoCD and nixhelm. Together, they provide effortless and mostly automated software updates with very easy rollbacks. I don’t have to go and manually update every single bit of software and that saves huge amounts of time.
I wonder if NixOS is a vacuum coffee maker for how confusing nix looks when you see it for the first time or instant coffee for how reproducible it is…
That’s just Slackware.
I would absolutely recommend a file system with snapshot capabilities for a home server. A btrfs mirror, dm-raid (RAID 5) with btrfs on top, or ZFS would all work. The practical differences are negligible at this scale, so just pick whatever you fancy.
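Snapshots are basically free in day-to-day use; on btrfs, for example, it’s just (paths and names here are only an illustration):

```
# create a read-only snapshot of the data subvolume
btrfs subvolume snapshot -r /srv/data /srv/.snapshots/data-2024-01-01

# read-only snapshots can also be shipped off-box for backups
btrfs send /srv/.snapshots/data-2024-01-01 | btrfs receive /mnt/backup
```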
If tailscale inside a container lets you talk to it via a “direct” connection and not a DERP relay, then it will offer better service isolation (you can set tailscale ACLs for that specific service) without sacrificing performance.
Tailscale pushes for it because it ties you in more. It allows you to utilize the ACLs better and to see your thing in their service mesh, and every service will count against the free node limit.
In practice, I often do both. E.g. I’ll have my http ingress exposed to tailscale and route a bunch of different services through it as a single tailscale node, where access control is done by the services individually. But I’ll also run pod-to-pod tailscale between two k8s clusters, because tailscale ACLs are just convenient.
I don’t think your question relates to the language as much as to the platform. The language of choice is somewhat irrelevant and what you care about is what actually happens under the hood.
For the likes of Java and Go you want some understanding of what the runtime does for memory allocations and how their GCs work. For Python you sometimes end up in spots where you need to understand what limitations the GIL imposes (even more important now that they’re trying to get rid of it). When you run C (or C++ or Rust) on embedded hardware it really helps to understand what exactly flipping a bit in a specific register does and what DMA means for how you write your code.
You don’t really have to know it all. You can absolutely write code without caring about any of that, and I know plenty of software engineers who do. Some people write amazing functional things in Java without ever questioning what it does to the machine and what resources it needs to run.
If you start questioning it, that will only expand your understanding. It’s not a lateral move from e.g. C to Rust where you need to learn a lot to write your code in a memory-safe way, it’s a movement deeper into the stack and what you learn there will be applicable to any language you use for this stack.
Answering your question: I always feel bad about not understanding some low-level concept. I have stacks of MCU reference docs lying around, printed, highlighted; I have archives of sample code and hand-annotated CMSIS reversing notes. The embedded world is hard because you can’t just know C and be done with it. You have to know the hardware, too.
Here’s my advice for you. Make notes of the things you learn from people smarter than you. Create a web of those notes and see where your gaps are. Then work on learning something in those gaps in particular, and see if you can make a blog post or something of your own out of it. When you share what you learn, you become one of those people with deep understanding that others look up to. There’s always someone struggling with something that you either know or know how to figure out.
given time in lieu
after squadron 42 ships*
Actual public services run there, yeah. If any of them is compromised, it can only access limited internal resources, and the attacker would have to fully compromise the cluster to get the secrets needed to access those in the first place.
I really like garage. I remember when minio was straightforward and easy to work with. Garage is that thing now. I use it because it’s just so much easier to handle file serving when you have S3-compatible uploads, even when you don’t do any real clustering.
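The nice bit is that any stock S3 client works against it, e.g. (endpoint and bucket names made up; 3900 is garage’s default S3 API port):

```
aws --endpoint-url http://garage.lan:3900 s3 mb s3://media
aws --endpoint-url http://garage.lan:3900 s3 cp ./photo.jpg s3://media/photo.jpg
```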
I’ve dealt with exactly the same dilemma in my homelab. I used to have 3 clusters, because you’d always want to have an “infra” cluster which others can talk to (for monitoring, logs, docker registry, etc. workloads). In the end, I decided it’s not worth it.
I separated on the public/private boundary and moved everything publicly facing to a separate cluster. It can only talk to my primary cluster via specific endpoints (via tailscale ingress), and I no longer do a multi-cluster mesh (I used to have istio for that, then cilium). This way, the public cluster doesn’t have to be too large capacity-wise, e.g. all the S3 api needs are served by garage from the private cluster, but the public cluster will reverse-proxy into it for specific needs.
I would not recommend unifi for a mature solution. It works nicely as a glass panel, but it gets limiting if you ever want to hack around your network. Their APs are solid, though; it’s just the USG/Dream Machine that I wouldn’t recommend.
Mikrotik software is very capable and hackable and you can run it in a vm if you feel like bringing your own hardware.
Apparently traefik might be better if you run docker compose and such, as it does auto-discovery, which reduces the amount of manual configuration required.
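The discovery is driven by container labels, so the per-service config is roughly a couple of labels (router name and hostname below are made up, and this assumes traefik itself runs with the docker provider enabled):

```
docker run -d --name whoami \
  --label "traefik.enable=true" \
  --label "traefik.http.routers.whoami.rule=Host(`whoami.home.lan`)" \
  traefik/whoami
```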
and swap Prometheus for VictoriaMetrics, or your homelab RAM usage becomes Prometheus RAM usage.
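If anyone wants to try it, the single-node flavor is a drop-in remote_write target; something along these lines (retention and names are just examples):

```
docker run -d --name vmsingle -p 8428:8428 \
  -v vmdata:/victoria-metrics-data \
  victoriametrics/victoria-metrics -retentionPeriod=12
# then point Prometheus (or vmagent) remote_write at http://vmsingle:8428/api/v1/write
```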
I’ll second conduit. You can tune its caching down, reducing the RAM usage significantly. It does make syncing the mobile clients a bit painful, but at least it’s not gigabytes of RAM wasted.
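From memory, the knobs are the cache capacity settings in conduit’s config; something like the below, though double-check the option names against the current example config (the values are just where I’d start, not a recommendation):

```
[global]
db_cache_capacity_mb = 64.0
conduit_cache_capacity_modifier = 0.3
```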
Isn’t kagi’s point that they store very little about you, to the point that there’s no search history, and you have to pay for the service instead?