There are still lots of errors in the FTL log, including
CRIT Corrupt binary detected - this may lead to unexpected behaviour!
ERROR SQLite3: database corruption at line 96760 of [17144570b0] (11)
ERROR Cannot receive UDP DNS reply: Timeout - no response from upstream DNS server.
If both the binary and the database file are corrupted, I suspect a power issue and, after that, a dying SD card.
Check the link below and the systemd journals for power issues:
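In case it helps, a sketch of how I'd grep the kernel logs for those power events (the `Under-voltage detected!` wording and the `vcgencmd` tool are Raspberry Pi OS specifics, so adjust as needed):

```shell
# Look for under-voltage / throttling events in the kernel ring buffer
# and in the persistent systemd journals. On a Pi, the firmware logs
# lines such as "Under-voltage detected! (0x00050005)".
pattern='under-?voltage|throttl'

dmesg 2>/dev/null | grep -iE "$pattern" || echo "dmesg: no power events found"
journalctl -k 2>/dev/null | grep -iE "$pattern" || echo "journal: no power events found"

# The Pi firmware also keeps a throttle bitmask;
# throttled=0x0 means no under-voltage or throttling ever occurred.
command -v vcgencmd >/dev/null && vcgencmd get_throttled || true
```

Note the journal only covers boots it has persisted, so a crash-and-corrupt event may not leave a trace there.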
There are no voltage warnings in the systemd journals. Is there a way to check the SD card?
I have removed the FTL database again, and this time it was recreated with no errors, but I am still getting DNS upstream errors:
2025-06-25 08:57:24.104 ERROR Cannot receive UDP DNS reply: Timeout - no response from upstream DNS server
2025-06-25 08:57:24.104 INFO Tried to resolve PTR "8.8.8.8.in-addr.arpa" on 127.0.0.1#53 (UDP)
2025-06-25 08:58:10.504 ERROR Cannot receive UDP DNS reply: Timeout - no response from upstream DNS server
2025-06-25 08:58:10.504 INFO Tried to resolve PTR "8.8.8.8.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.6.8.4.0.6.8.4.1.0.0.2.ip6.arpa" on 127.0.0.1#53 (UDP)
2025-06-25 08:58:35.704 ERROR Cannot receive UDP DNS reply: Timeout - no response from upstream DNS server
2025-06-25 08:58:35.704 INFO Tried to resolve PTR "4.4.8.8.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.6.8.4.0.6.8.4.1.0.0.2.ip6.arpa" on 127.0.0.1#53 (UDP)
New error messages:
2025-06-25 09:03:00.490 WARNING Long-term load (15min avg) larger than number of processors: 1.0 > 1
2025-06-25 09:03:01.521 ERROR add_message(type=6, message=excessive load) - SQL error step DELETE: database is locked
2025-06-25 09:03:01.521 ERROR Error while trying to close database: database is locked
2025-06-25 09:03:02.504 ERROR Cannot receive UDP DNS reply: Timeout - no response from upstream DNS server
Is this an indication that my old RPi2B with just one core isn't cutting it any more?
Running a filesystem check (fsck) on a live mounted filesystem is a bit tricky.
Do you have another Linux host with an SD card slot where you can perform the fsck?
You could try the command below on the Pi (read-only, no fixing) to see if it detects anything wrong:
sudo fsck -n -f /dev/mmcblk0p2
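As a side note, the `-n` flag answers "no" to every repair prompt, so the check itself is safe on a mounted filesystem (the results may just be unreliable). If you want to see what a clean run looks like first, a throwaway loopback image works; this is a sketch assuming e2fsprogs is installed, and the file paths are arbitrary:

```shell
PATH="$PATH:/usr/sbin:/sbin"  # mkfs/fsck often live in sbin

# Build a small ext4 image in a regular file, then run the same
# read-only, forced check you'd run against /dev/mmcblk0p2.
truncate -s 16M /tmp/fsck-demo.img
mkfs.ext4 -q -F /tmp/fsck-demo.img

# -n: read-only, answer "no" to all prompts; -f: check even if marked clean
fsck.ext4 -n -f /tmp/fsck-demo.img && echo "filesystem is clean"
```

On the real card, a clean exit status (0) means no errors were found; anything else is worth taking seriously.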
I still run Pi-hole on a Pi 1B.
The first Pi with Ethernet.
And I've seen folks run Pi-hole on even less like a Pogoplug/stick.
The two above are conflicting!
Which is it, if you run the command below?
cat /proc/device-tree/model; echo
Because, per that Raspberry Pi link, the Pi 1B (without the +) doesn't have the "low-voltage detection circuitry":
On all models of Raspberry Pi since the Raspberry Pi B+ (2014) except the Zero range, there is low-voltage detection circuitry that will detect if the supply voltage drops below 4.63V (±5%).
From the above two, I suspect a DNS loop or partial loop is troubling your setup.
What does the command below show for upstreams?
dig +short @localhost servers.bind chaos txt
And did you configure the router WAN/Internet DNS settings to point to the Pi-hole IP?
;; communications error to ::1#53: timed out
;; communications error to ::1#53: timed out
;; communications error to ::1#53: timed out
;; communications error to 127.0.0.1#53: timed out
;; no servers could be reached
I haven't currently got the router configured to use the Pi-hole for DNS as I need internet access.
I'm going to scan the SD card now and will post the results.
That's the Pi 1B without the voltage check.
Below, a Pi 1B+:
$ cat /proc/device-tree/model; echo
Raspberry Pi Model B Rev 2
So it might be worth testing with another AC adapter or USB power cable.
Pis are infamous for crashing and corrupting storage when the power isn't right.
What's the output for the below instead?
sudo grep server= /etc/pihole/dnsmasq.conf
Pending the outcome of the fsck: if there are many errors, I'd recommend re-flashing the SD card rather than trying to fix it with fsck.
$ dig +noall +comments +answer +ad @127.0.0.1 -p 5335 cloudflare.com
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59669
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; ANSWER SECTION:
cloudflare.com. 300 IN A 104.16.132.229
cloudflare.com. 300 IN A 104.16.133.229
The first command should return a SERVFAIL status and no IP address in the ANSWER section.
The second should return a NOERROR status plus an IP address in the ANSWER section, in addition to an ad flag.
PS: those digs are from the pull request below, which adjusts the ones in the official Pi-hole guide:
Is it pihole-FTL that's causing the high load?
You can check with the top or htop commands when experiencing those messages.
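If top/htop is awkward to catch at the right moment, a one-shot `ps` snapshot sorted by CPU works for logging too (assuming the usual procps `ps`):

```shell
# One-shot snapshot of the top CPU consumers; pihole-FTL should appear
# near the top here if it's the process driving the load.
ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -n 11
```

You could run that from cron or a loop around the times the load warnings appear.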
EDIT: Oh, did you run a Pi-hole gravity update manually at those times?
That can cause a bit of excessive load on a Pi 1B.
But this one is scheduled for early Sunday morning, so it shouldn't trouble you:
$ cat /etc/cron.d/pihole
[..]
# Pi-hole: Update the ad sources once a week on Sunday at a random time in the
# early morning. Download any updates from the adlists
# Squash output to log, then splat the log to stdout on error to allow for
# standard crontab job error handling.
41 3 * * 7 root PATH="$PATH:/usr/sbin:/usr/local/bin/" pihole updateGravity >/var/log/pihole/pihole_updateGravity.log || cat /var/log/pihole/pihole_updateGravity.log
FYI, I've installed Unbound on top and configured the Pi to provide DHCP for my LAN.
It's live now and getting hammered by a Samsung TV (a query every second):
I reinstalled Unbound and Pi-hole, removed PiVPN from that device, and added a second Pi-hole, and things are running OK again now. The load is still quite high, but manageable I think, and the database seems stable.
I hooked up the Samsung TV again, and it hammered my Pi 1B for a good 5 minutes with Netflix-related queries, even though I don't have a subscription.
Some were blocked and some were not.
After that I've been monitoring with the below, and the max I saw was 0.55, but that was for the 1-minute interval.
The 15-minute interval max was around 0.20.
$ man proc
[..]
/proc/loadavg
The first three fields in this file are load average figures
giving the number of jobs in the run queue (state R) or
waiting for disk I/O (state D) averaged over 1, 5, and 15
minutes. They are the same as the load average numbers
given by uptime(1) and other programs. The fourth field
consists of two numbers separated by a slash (/). The first
of these is the number of currently runnable kernel schedul‐
ing entities (processes, threads). The value after the
slash is the number of kernel scheduling entities that cur‐
rently exist on the system. The fifth field is the PID of
the process that was most recently created on the system.
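For completeness, a minimal sketch of sampling those first three fields directly, with the field layout as in the man page excerpt above:

```shell
# Take a few samples of the 1/5/15-minute load averages from /proc/loadavg.
# On a single-core Pi, a sustained 15-minute average near 1.0 is what
# triggers FTL's "Long-term load" warning.
for i in 1 2 3; do
    cut -d ' ' -f 1-3 /proc/loadavg
    sleep 2
done
echo "processors: $(nproc)"
```

Comparing the third field against `nproc` is essentially the same check FTL performs.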