Hello,
I want to test how capable my Pi-hole is at the moment.
Is there any tool/software I can use to send a huge number of requests to my DNS server (hundreds of thousands of requests a day)?
Regards
This would keep your Pi-hole busy for a while - dig for every domain in your gravity list:
dig -f /etc/pihole/gravity.list +noall +answer
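If you want a rough throughput number out of that run, a small wrapper can time it. This is only a sketch of mine - bench_list and the RESOLVER_CMD override are names I made up, not Pi-hole tools; the override just makes dry runs possible:

```shell
# Sketch: time a bulk lookup over a domain list and print an
# approximate query rate. RESOLVER_CMD can be overridden (e.g. for a
# dry run); it defaults to "dig -f" as in the command above.
bench_list() {
    list=$1
    resolver=${RESOLVER_CMD:-"dig -f"}
    start=$(date +%s)
    $resolver "$list" +noall +answer > /dev/null 2>&1
    end=$(date +%s)
    count=$(( $(wc -l < "$list") ))
    elapsed=$((end - start))
    # avoid division by zero on sub-second runs
    [ "$elapsed" -gt 0 ] || elapsed=1
    echo "$count domains in ${elapsed}s (~$((count / elapsed)) q/s)"
}
```

Usage: `bench_list /etc/pihole/gravity.list`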
Use dnsblast.
Install:
#!/bin/bash
# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
    echo "This script must be run as root" 1>&2
    exit 1
fi
cd /home/pi
git clone https://github.com/jedisct1/dnsblast
cd /home/pi/dnsblast
make
then execute:
cd /home/pi/dnsblast
./dnsblast 127.0.0.1 1000 50
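For reference, my reading of the dnsblast README is that the arguments are target host, total number of queries, and packets per second - so the command above sends 1,000 queries at 50 packets/second. To push harder, something like:

```shell
# Assumed argument order (verify against the dnsblast README):
#   ./dnsblast <host> <number of queries> <packets per second>
./dnsblast 127.0.0.1 100000 500   # 100,000 queries at 500 pps
```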
Edit: or, to test from a remote Windows machine, use dpt.
It should be pointed out that the solutions of @jfb and @jpgpi250 could not be more different:
@jfb's dig run over gravity.list will cause massive querying of blocked domains, i.e., it will only hit Pi-hole's cache. This will not produce any traffic to the upstream DNS servers.
@jpgpi250's dnsblast will cause massive querying of random domains, i.e., everything will be sent upstream. This will generate a lot of upstream traffic and does not test caching performance.
Both are artificial and should not be considered even close to benchmarking real-life situations.
If you want to simulate more realistic traffic, it is important to know what you want to achieve with your test. I assume you want to know how many queries a day your Pi-hole can handle (using the given hardware) to estimate how many clients you could serve.
In this case, I'd suggest using the domains stored in your long-term database and using these domains to benchmark your system. As they reflect real-life usage, they should give much better output. You can extract the domains using, e.g.,
sqlite3 /etc/pihole/pihole-FTL.db "SELECT domain FROM queries LIMIT 100000;" > domains.list
This will generate a list file with (at most) 100,000 domains. You can modify this list if you like; however, I'd suggest starting with a moderate amount of data to get realistic results.
The domains are put into the list as they appear in the database, i.e., there will be duplicates of frequently queried domains (maybe google.com
or similar) as well as some blocked ad domains. This list will serve as a good testing bench for realistic DNS queries as typically seen in your particular network. Use @jfb's command with the new file for mass querying the domains.
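Before replaying, it may be worth checking how repetitive the extracted list actually is - a high duplicate ratio means the replay will exercise the cache, just as real traffic does. A sketch (list_stats is just a name I made up):

```shell
# Print total vs. unique domain counts for a list file, plus the
# three most frequently repeated entries.
list_stats() {
    list=$1
    total=$(( $(wc -l < "$list") ))
    unique=$(( $(sort "$list" | uniq | wc -l) ))
    echo "total=$total unique=$unique"
    sort "$list" | uniq -c | sort -rn | head -n 3
}
```

Usage: `list_stats domains.list`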
P.S.: Two more things to make the benchmark more realistic:
I'd suggest disabling the long-term database during the test. The file would grow unnecessarily - possibly by several hundred megabytes - and your statistics would be distorted by the artificial test you are doing.
You can do this by setting
DBFILE=
in /etc/pihole/pihole-FTL.conf
and running sudo pihole restartdns.
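Sketched as commands (assuming DBFILE is not already set in the file - otherwise edit the existing line instead of appending; back up the file first):

```shell
# Disable FTL's long-term database for the test.
# An empty DBFILE= value turns the database off.
echo "DBFILE=" | sudo tee -a /etc/pihole/pihole-FTL.conf
sudo pihole restartdns
```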
I'd also suggest increasing the DNS cache for the testing. The rather low default is fine for typical use, as domains expire at some point and make room for new ones. If, however, you artificially increase the query rate to such high values, there will be no time for the domains to expire naturally. This would dramatically hurt caching performance in a way you would never see in any real use case.
Set cache-size
to a rather high value (maybe 25,000 - as a rough guess, one-eighth to one-fourth of the number of domains you extracted from the database) in /etc/dnsmasq.d/01-pihole.conf
and run sudo pihole restartdns
afterwards.
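As a sketch (assuming a cache-size= line already exists in that file; note that Pi-hole may regenerate this file, so revert the change after testing):

```shell
# Raise the dnsmasq cache for the duration of the benchmark.
sudo sed -i 's/^cache-size=.*/cache-size=25000/' /etc/dnsmasq.d/01-pihole.conf
sudo pihole restartdns
```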
Thank you guys.
Perfect solutions.
This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.