Benchmark for PiHole that is _Meaningful_ on Pogoplugs, low-RAM Rpi's, lightweight headless servers

I don't know if this is of any interest to anyone else, but I thought that I'd share it. For kicks I used dnsperf to flood my pihole (running on a Pogoplug V2 ARMV5te armel box, w/ Debian Bookworm) and got it to bend but not break:

These are 1GHz SoC's w/ 256MB RAM, 4 USB2.0 ports and Gb enet.

dnsperf -d dnsperf-2.14.0/src/test/datafile -s 192.168.11.50 -c 200 -n 20000

yielded

Statistics:

  Queries sent:         40000
  Queries completed:    39345 (98.36%)
  Queries lost:         655 (1.64%)

  Response codes:       REFUSED 39345 (100.00%)
  Average packet size:  request 28, response 28
  Run time (s):         98.333494
  Queries per second:   400.117990

  Average Latency (s):  0.168309 (min 0.003269, max 0.323745)
  Latency StdDev (s):   0.011802

This is for home use only ... keeping websites loading faster, etc.

Is there a more realistic measurement we can get? Maybe an off-the-shelf (as a Debian package, or easily compiled), open-source app that would benchmark what a Pi-hole does more faithfully?

==================================

Afterthought: I tried adjusting the Q/s limit and found that 100 Q/sec was easily within its grasp... but what about other bottlenecks, like the 600Mb/30Mb down/up connection we have with the internet here?

In retrospect maybe all it is telling me is that even flooding it w/ an unreasonable Q/s load, the program keeps chugging and the box didn't crash. Is there something more here?

Is it just me, or did pihole drop close to 99% of the queries?

You might want to run that test without the rate limit enabled

The 400Q/sec readings are from running it unlimited.

Not for us in general.
But there certainly are more realistic ways to test your personal DNS performance limits for your specific environment, see Benchmarking - Pi-hole documentation.

There are quite a few tools labeled dnsperf, but from your output, I'm going to assume you are using the DNS-OARC one.

Note that DNS-OARC's dnsperf was designed primarily to test authoritative DNS servers, which would be expected to receive only DNS requests for the domains they are authoritative for.
By contrast, Pi-hole is a caching resolver, which can be expected to receive requests for really any domain.

That doesn't make for a meaningful test.

As the input file you are using is likely DNS-OARC's /src/test/datafile, that file would contain only two requests, for google.com's A and AAAA records - not something that would be representative of your usual client activities.
In contrast, the linked Pi-hole suggestions would extract a set of DNS queries from your Pi-hole's database, which at least would use data as actually observed (though depending on the period of extraction, it may not be representative either, as DNS requests can be expected to fluctuate over time, even wildly at times).

REFUSED could indicate that Pi-hole's rate limiting (default: 1,000 queries per minute per client) has kicked in immediately for that test, suggesting that there already was some load on your Pi-hole prior to starting the test, or else at least the first 1,000 queries would have succeeded.

When testing, you'd ideally want to make sure that there are no other clients using your Pi-hole besides the designated test nodes, as any additional queries would impact and distort measurement results.

If you want to know how many queries your Pi-hole would be able to handle without the rate limit being imposed, you'd either have to run your tests from additional nodes, or temporarily lift Pi-hole's rate limiting.
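For reference, lifting the limit might look like the sketch below on a v5-style install. The RATE_LIMIT setting in /etc/pihole/pihole-FTL.conf is the documented knob; note that v6 moved FTL's settings into /etc/pihole/pihole.toml, so the exact file and syntax on your box may differ.

```
# /etc/pihole/pihole-FTL.conf (Pi-hole v5)
# Default is 1000 queries per 60 seconds per client;
# 0/0 disables rate limiting entirely.
RATE_LIMIT=0/0
# Then restart FTL, e.g.: sudo systemctl restart pihole-FTL
```

Remember to restore the default after the test, since the limit is a useful guard against misbehaving clients.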

The test as you've run it only tells you that Pi-hole would rate-limit clients that excessively request resolution of google.com.

Even a more realistic approach would probably give you just a theoretical number in the range of some hundred queries per second that you will, in all likelihood, never observe in day-to-day usage, as Pi-hole's rate limiting may trigger well before that limit is reached.

@Bucking_Horn, thank you, this indeed was what I was trying to verbalize... something meaningful in the context of Pihole's function in a Small Office or Home Office.

The article Benchmarking - Pi-hole documentation does indeed hit the desired nail on the head for me. I never even considered extracting the domains from the db, but it makes sense.

Again, thanks and I'll try this.

@sml156 I do believe that the test I was running was, in effect, useless for my case...

@Bucking_Horn : I've thought about it and I can clarify what I wanted to know in the first place:

  1. What level of DNS traffic can my current Pihole setup handle before it degrades/drops in performance?
  2. Is there an accessible method for us to test this?
  3. What are some things that can be done on a memory-constrained device (i.e., one of my Piholes - the failover unit - is a Pogoplug V4 with only 128MB RAM; also Rpi's, BananaPi's, etc.) to prevent it from becoming swamped?
  4. Would V6 be more prone to degrading under heavy use in my own SOHO network?

There are other questions to think about, like web server eating resources, swap, etc., but best to keep it as a narrower set of questions.
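On question 3, one hedged suggestion (not something covered in this thread, so verify against your version's docs): FTL's long-term query database is often the heaviest consumer on small boxes, and v5's pihole-FTL.conf exposes settings to shrink it. A sketch:

```
# /etc/pihole/pihole-FTL.conf (Pi-hole v5; v6 keeps equivalents in pihole.toml)
MAXDBDAYS=7      # keep only a week of query history (default is 365)
DBINTERVAL=5.0   # flush query data to disk every 5 minutes instead of every minute
```

These mostly help with disk/flash footprint and write frequency; whether they ease RAM pressure on a 128MB box is something only testing on the device will show.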

Your pointer to the Pihole Benchmarking page was an excellent starter. What I eventually did was the following:

  • extract the domains from my main Pihole's database: sqlite3 /etc/pihole/pihole-FTL.db "SELECT domain FROM queries LIMIT 100000;" > domains-PhA1B2.list and label it, in this case with the last 4 digits of its MAC address.
  • scp domains-PhA1B2.list [several-linux-hosts-in-my-house]
  • from each of those hosts, execute several instances of dig @<ip-of-pihole-to-test> -f domains-PhA1B2.list +noall +answer > /dev/null &
  • try to get a real-world feel for how much traffic just one of my Piholes running V6 on our network could handle, and whether it would bog down under a certain amount of load. While this was all running, I opened browsers on my desktop, laptop and iPhone and checked whether I could still surf the net with good performance and ad blockage.
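The fan-out step above can be wrapped in a small POSIX-shell helper that also reports a rough queries-per-second figure. This is a hypothetical sketch, not from the thread; the IP address, list filename, and worker count are placeholders to adjust for your setup.

```shell
run_load() {
    # $1 = Pi-hole IP, $2 = domain list file, $3 = number of parallel dig workers
    start=$(date +%s)
    i=0
    while [ "$i" -lt "$3" ]; do
        # each worker replays the whole list against the target resolver
        dig @"$1" -f "$2" +noall +answer > /dev/null 2>&1 &
        i=$((i + 1))
    done
    wait  # block until every worker has drained its list
    end=$(date +%s)
    elapsed=$((end - start))
    [ "$elapsed" -gt 0 ] || elapsed=1   # avoid divide-by-zero on tiny runs
    total=$(( $3 * $(wc -l < "$2") ))
    echo "sent $total queries in ${elapsed}s (~$((total / elapsed)) q/s)"
}

# Example (run from a client host, not the Pi-hole itself):
# run_load 192.168.11.50 domains-PhA1B2.list 4
```

The q/s figure is only approximate, since dig timeouts and retries inflate the elapsed time; for cleaner numbers, compare against the rate Pi-hole itself reports in its dashboard.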

Short answer, after a day of sustained spamming it w/ requests like this, to me it appears:

  • under normal usage, it is unlikely to become saturated. At up to 250Q/sec I couldn't feel any slowdown.
  • up to about 300Q/sec, it shows 25-65% RAM usage* and CPU load < 65%
  • above 300Q/sec it starts to push 65% or greater RAM usage and load is > 90%

I decided that for my purposes, I'd skip the adjustments to rate limit and logging, since I normally just leave those alone.

Also, it points out to me that 128MB RAM is indeed a pretty tight constraint, but 256MB seems fine. This gives me some confidence about moving forward to V6 on the boxes w/ 256MB RAM, but makes me think that my 128MB box may be approaching EOS.

I think the best way to test a specific machine in a real life scenario is to install and use Pi-hole.

Your use case (number of devices, number of lists, number of regex entries, queries per minute/second) will be different from other users.

I also think you are correct to point out the 128MB RAM as a main constraint, but only a real-life test will tell you if this is enough.