I’m currently using the blocklists included with Unbound in OPNsense on a mini PC, and I previously used Pi-hole on a Pi that now runs my 3D printers instead. I haven’t tried any of the other network-wide options. Has anyone written any blog posts or similar detailing performance testing of the different options?
I have an eight-person household, with each person having at least a phone and a computer, plus probably some consoles or other devices. I haven’t noticed any obvious differences, but whitelisting seemingly can’t be done in bulk efficiently with my current setup.
We are all going to be moving in the coming months, so I am revisiting different aspects of the home network and trying to figure out what can be improved, and whether anything is irritating enough in its current state to tolerate a potential performance loss.
Most self-hosted DNS-level blocking will be very fast, since it’s easy to keep the whole blocklist in RAM. I’ve hosted Pi-hole on an RPi 3 and on an over-provisioned VM (4 cores and 4 GB of RAM, lol). The only difference I’ve noticed comes down to whether the device is hardwired: when my RPi was hardwired into the network, there was no notable difference between the two.
Firstly, I absolutely agree you should be hardwiring any kind of infrastructure. But honestly, even over WiFi your main latency is going to come from the WAN hop to whatever upstream DNS you’ve configured.
This is more or less what I was hoping would be the case. I’ll be pulling down some of the drywall to run Ethernet between floors, so I won’t need to worry about wireless being slower at least. I figure I’ll just try the other blocking options and go with the one I find most pleasant.
Performance is usually pretty similar. I use AdGuard Home (mostly for malware blocking) and it stores the cache in RAM, so the only thing that’s potentially slow is the first lookup for a domain. That’s affected by the size of your blocklist, but all of these solutions likely use hashing to speed up the check, so lookup time won’t grow linearly with list size. Once a domain is cached, serving it is very fast. The others work similarly.
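To make the hashing point concrete, here’s a minimal Python sketch (not taken from any of these projects, purely illustrative) comparing a linear scan of a blocklist against a hashed set lookup:

```python
import time

# Fake blocklist of 100,000 domains, purely illustrative.
blocklist = [f"ads-{i}.example.com" for i in range(100_000)]
blockset = set(blocklist)  # hashed structure: O(1) average-time membership test

queries = [f"ads-{i * 199}.example.com" for i in range(500)]

start = time.perf_counter()
linear_hits = sum(1 for q in queries if q in blocklist)  # scans the list for each query
linear_s = time.perf_counter() - start

start = time.perf_counter()
hashed_hits = sum(1 for q in queries if q in blockset)   # one hash lookup per query
hashed_s = time.perf_counter() - start

print(f"linear: {linear_s:.3f}s  hashed: {hashed_s:.6f}s  (same {hashed_hits} hits)")
```

Even with a six-figure blocklist, the hashed lookup stays in the microsecond range, which is why blocklist size mostly costs RAM rather than latency.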
AdGuard Home has an option to serve a stale cached record and refresh it in the background, meaning DNS lookups will practically always be served from RAM, except right after a restart when the cache is empty.
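If you manage AdGuard Home through its YAML file rather than the web UI, the cache behaviour lives under the dns: section of AdGuardHome.yaml; the key names below are from memory, so double-check them against your own config:

```yaml
dns:
  cache_size: 4194304     # in-memory DNS cache size, in bytes
  cache_ttl_min: 60       # floor for upstream TTLs, in seconds
  cache_optimistic: true  # serve stale answers immediately and refresh them in the background
```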
I also like AdGuard Home because it supports DNS over HTTPS and DNS over TLS, and uses it by default. There’s also a separate piece of software called AdGuardHome-Sync to keep the config in sync between multiple instances. I run two of them (one on a home server in Docker, and one on a Raspberry Pi in Docker) so I can take one down without breaking the internet for my wife.
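For anyone wanting to try it, a rough docker-compose sketch for one instance, based on the official adguard/adguardhome image (ports trimmed to plain DNS and the web UI; adjust host paths to taste):

```yaml
services:
  adguardhome:
    image: adguard/adguardhome
    restart: unless-stopped
    ports:
      - "53:53/tcp"      # DNS
      - "53:53/udp"      # DNS
      - "3000:3000/tcp"  # web UI / first-run setup wizard
    volumes:
      - ./work:/opt/adguardhome/work  # runtime data (query log, stats)
      - ./conf:/opt/adguardhome/conf  # AdGuardHome.yaml lives here
```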
I’ll have to look into that. I really need to look into redundancy for a lot of things, actually.
Redundancy is really important when it affects other people, IMO. Personally, I use two Pi-holes kept in sync with gravity-sync.
DNS is one of the easiest things to make redundant, since each server runs independently of the others, and clients automatically handle falling back to the other server if one of them goes down (modern OSes will send roughly half the queries to the primary server and half to the secondary, but they handle outages well too).
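On the client side (or in whatever your DHCP server hands out), that just means listing both resolvers; a plain resolv.conf example with made-up LAN addresses:

```
# /etc/resolv.conf (usually written by DHCP or systemd-resolved rather than by hand)
nameserver 192.168.1.2                # primary blocking resolver
nameserver 192.168.1.3                # secondary blocking resolver
options timeout:1 attempts:2 rotate   # fail over quickly and spread queries across both
```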
All DNS blocking is going to be very fast; the block check itself adds essentially no real time to a lookup.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
DNS: Domain Name Service/System
HTTP: Hypertext Transfer Protocol, the Web
HTTPS: HTTP over SSL
RPi: Raspberry Pi brand of SBC
SBC: Single-Board Computer
SSL: Secure Sockets Layer, for transparent encryption
TLS: Transport Layer Security, supersedes SSL
4 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.
Why not benchmark it yourself and find out?
I’m not sure how, but if nobody has already done it I’ll probably try to figure it out. There are night shifters in the household, so I would probably need additional hardware and a separate testing network, since any downtime at all will get the complaining going.
You should be able to host another one in parallel with whatever you’re running now and do some tests based on your typical use cases. Set one client to use that specific server for DNS.
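If you want actual numbers, a quick sketch using the third-party dnspython package could time lookups against each resolver; the IPs and domains below are placeholders to swap for your own:

```python
import statistics
import time

import dns.exception
import dns.resolver  # pip install dnspython

def time_queries(server_ip, domains, runs=5):
    """Return per-query latencies in milliseconds against one DNS server."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server_ip]
    resolver.lifetime = 2.0  # give up on a query after 2 seconds
    latencies = []
    for _ in range(runs):
        for name in domains:
            start = time.perf_counter()
            try:
                resolver.resolve(name, "A")
            except (dns.resolver.NXDOMAIN, dns.exception.Timeout):
                pass  # a blocked or failed name still tells us how long the server took
            latencies.append((time.perf_counter() - start) * 1000)
    return latencies

domains = ["example.com", "wikipedia.org", "github.com"]  # use domains from your real traffic
for server in ["192.168.1.2", "192.168.1.3"]:             # current resolver vs. the candidate
    results = time_queries(server, domains)
    print(f"{server}: median {statistics.median(results):.1f} ms, max {max(results):.1f} ms")
```

Run it from a wired client against both the current resolver and the test one; if the medians land within a few milliseconds of each other, the difference is noise.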
Honestly, though, I doubt you’ll see much difference. Clients make a DNS request and cache the result, so it’s not like it’ll affect download speeds. Unless DNS responses are delayed by human-observable amounts (half a second, whole seconds, or more), a millisecond or two in either direction isn’t going to make a noticeable difference.