Pihole-FTL stops DNS resolution randomly (gets resolved by restarting the FTL service)

Expected Behaviour:

Pihole resolves DNS, and the Pihole admin is accessible

  • Ubuntu Linux
  • Oracle Cloud - Micro Instance
  • Unbound as a DNS resolver
  • Wireguard (split tunnel) for DNS resolution for clients (iDevices, Android Phones, Windows Laptop)

Actual Behaviour:

The Pihole randomly stops responding to DNS queries. Most of the time, a restart is required to restore DNS resolution; at other times, it appears to recover and start resolving on its own.

  • DNS stops responding (including dig @127.0.0.1)

  • Pi-hole Admin UI becomes inaccessible

  • FTL process remains running

Observations (based on checks I performed):

  • ssh to the server works fine

  • tcpdump shows DNS packets arriving on wg0 and lo

  • ss -lupn confirms FTL is listening on :53

  • dig to 127.0.0.1:53 (FTL) times out, while dig to 127.0.0.1:5335 (unbound) works normally

  • echo ">stats" | nc 127.0.0.1 4711 returns nothing when broken

  • strace shows FTL stuck in a nanosleep loop:

ubuntu@pihole-vpn:~$ sudo strace -p $(pidof pihole-FTL)
strace: Process 75885 attached
nanosleep({tv_sec=0, tv_nsec=10000000}, 0x7fff67daf770) = 0
nanosleep({tv_sec=0, tv_nsec=10000000}, 0x7fff67daf770) = 0
nanosleep({tv_sec=0, tv_nsec=10000000}, 0x7fff67daf770) = 0
nanosleep({tv_sec=0, tv_nsec=10000000}, 0x7fff67daf770) = 0
[... the same nanosleep call repeats for the rest of the capture ...]

ubuntu@pihole-vpn:~$ dig @127.0.0.1 google.com
;; communications error to 127.0.0.1#53: timed out
;; communications error to 127.0.0.1#53: timed out
;; communications error to 127.0.0.1#53: timed out

; <<>> DiG 9.18.39-0ubuntu0.24.04.3-Ubuntu <<>> @127.0.0.1 google.com
; (1 server found)
;; global options: +cmd
;; no servers could be reached
ubuntu@pihole-vpn:~$ dig @127.0.0.1 -p 5335 google.com

; <<>> DiG 9.18.39-0ubuntu0.24.04.3-Ubuntu <<>> @127.0.0.1 -p 5335 google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21161
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;google.com. IN A

;; ANSWER SECTION:
google.com. 30 IN A 142.251.220.110

;; Query time: 79 msec
;; SERVER: 127.0.0.1#5335(127.0.0.1) (UDP)
;; WHEN: Mon Mar 30 09:47:11 UTC 2026
;; MSG SIZE rcvd: 55

ubuntu@pihole-vpn:~$

Debug Token:

https://tricorder.pi-hole.net/bLVdOR3u/

The nanosleep loop in strace plus the unresponsive telnet API (nc to port 4711 returning nothing) both point to the same thing: FTL's internal processing thread is stuck in a deadlock, not a network or iptables issue. If iptables were blocking loopback, tcpdump on lo would show packets being dropped; but unbound on 5335 still works fine over the same loopback, so the kernel path is clear.

The most common trigger for this on Oracle Cloud with WireGuard is FTL getting caught mid-operation when the wg0 interface flaps. FTL holds an internal mutex when processing interface changes, and if the link goes down unexpectedly, that mutex can be left held. The nanosleep loop is the symptom: FTL's worker is stuck waiting for a lock that never gets released.

A few things worth trying:

First, enable FTL debug logging before the next failure happens. Edit /etc/pihole/pihole-FTL.conf and add MAXDBDAYS=7 and confirm the log level is set to debug. After the next failure, check /var/log/pihole/FTL.log for WARN or ERROR entries in the minutes before it stopped responding.

Second, add a systemd watchdog to auto-recover. Create /etc/systemd/system/pihole-FTL.service.d/watchdog.conf with:

[Service]
WatchdogSec=60s
Restart=on-failure
RestartSec=5s

This means that if FTL stops responding to systemd's watchdog pings for 60 seconds, it gets restarted automatically without you needing to SSH in.

Third, check if the issue correlates with your WireGuard clients connecting or disconnecting. If it does, the fix is usually to increase keepalive on your client configs (PersistentKeepalive = 25) so the tunnel stays stable rather than tearing down and re-establishing.

On Oracle Cloud Micro specifically, also worth checking memory pressure at the time of failure. The ARM instances only have 1GB and FTL plus unbound can push it close to the edge. Run free -m when it next happens before you restart anything.

Ehm…

My Raspberry Pi 3B has 1 GB of RAM while running Pi-Hole + Unbound and never uses more than 25% of it so far?! :face_with_raised_eyebrow:

2 Likes

@RianKellyIT, was your reply composed using AI or something?
Because most of what you posted applies to the old Pi-hole v5 release and not to the current v6.

The v6 release doesn't listen on TCP 4711 for the API anymore:

$ sudo ss -nltup | grep 'Netid\|pihole'
Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp   UNCONN 0      0            0.0.0.0:123       0.0.0.0:*    users:(("pihole-FTL",pid=465,fd=40))
udp   UNCONN 0      0            0.0.0.0:53        0.0.0.0:*    users:(("pihole-FTL",pid=465,fd=20))
udp   UNCONN 0      0               [::]:123          [::]:*    users:(("pihole-FTL",pid=465,fd=41))
udp   UNCONN 0      0               [::]:53           [::]:*    users:(("pihole-FTL",pid=465,fd=22))
tcp   LISTEN 0      200          0.0.0.0:80        0.0.0.0:*    users:(("pihole-FTL",pid=465,fd=34))
tcp   LISTEN 0      32           0.0.0.0:53        0.0.0.0:*    users:(("pihole-FTL",pid=465,fd=21))
tcp   LISTEN 0      200          0.0.0.0:443       0.0.0.0:*    users:(("pihole-FTL",pid=465,fd=35))
tcp   LISTEN 0      200             [::]:80           [::]:*    users:(("pihole-FTL",pid=465,fd=36))
tcp   LISTEN 0      32              [::]:53           [::]:*    users:(("pihole-FTL",pid=465,fd=23))
tcp   LISTEN 0      200             [::]:443          [::]:*    users:(("pihole-FTL",pid=465,fd=37))

Instead, it's got a REST API via the web ports now. And the old pihole-FTL.conf file doesn't exist anymore:
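
For anyone poking at it, the new API can be probed over plain HTTP. This is just a sketch: the /api/info/ftl endpoint name and the default web port are assumptions for a stock v6 install, and most endpoints need a session, so even an authentication failure proves the API is listening:

```shell
# Probe Pi-hole v6's REST API (replaces the old telnet API on TCP 4711).
# NOTE: the /api/info/ftl endpoint and default port 80 are assumptions
# for a stock install; adjust for your own web server settings.
probe_api() {
  if command -v curl >/dev/null 2>&1 &&
     curl -fsS --max-time 2 "http://127.0.0.1/api/info/ftl" >/dev/null 2>&1; then
    echo "FTL API reachable"
  else
    echo "FTL API not reachable (or authentication required)"
  fi
}

probe_api
```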

$ stat /etc/pihole/pihole-FTL.conf
stat: cannot statx '/etc/pihole/pihole-FTL.conf': No such file or directory

Everything is configured in a single .toml file now:

$ sudo cat /etc/pihole/pihole.toml
[..]
  # How long should queries be stored in the database [days]?
  #
  # Allowed values are:
  #     A positive integer value in days, or 0 to disable the database
  maxDBdays = 91
[..]
[debug]
  # Print debugging information about database actions. This prints performed SQL
  # statements as well as some general information such as the time it took to store the
  # queries and how many have been saved to the database.
  #
  # Allowed values are:
  #     true or false
  database = false

  # Prints a list of the detected interfaces on the startup of pihole-FTL. Also, prints
  # whether these interfaces are IPv4 or IPv6 interfaces.
  #
  # Allowed values are:
  #     true or false
  networking = false

  # Print information about shared memory locks. Messages will be generated when waiting,
  # obtaining, and releasing a lock.
  #
  # Allowed values are:
  #     true or false
  locks = false
[..]
$ sudo pihole-FTL --config database.maxDBdays
91
$ sudo pihole-FTL --config debug.database true
true

This is my Pi-hole+Unbound combo (times two):

$ cat /proc/device-tree/model
Raspberry Pi Model B Rev 1
$ uptime
 22:09:12 up 273 days, 18:04,  1 user,  load average: 0.07, 0.20, 0.62
$ vcgencmd get_mem gpu; vcgencmd get_mem arm
gpu=16M
arm=240M
$ free -h
               total        used        free      shared  buff/cache   available
Mem:           221Mi       157Mi        28Mi       7.1Mi        92Mi        64Mi
Swap:          511Mi        58Mi       453Mi

64MB available free memory is plenty.

2 Likes

@Anant, could you post a fresh debug token captured just after the issue occurs?
Retention on the tricorder server for those uploaded logs is 48 hours.

I kind of had the same impression, but I can’t be bothered with fixing the weird broken world we live in these days :rofl:

Funnily, I tried doing this, and for some reason Pihole simply skipped uploading it the first time:

[✓] ** FINISHED DEBUGGING! **

* The debug log can be uploaded to tricorder.pi-hole.net for sharing with developers only.

[?] Would you like to upload the log? [y/N] y
* Log will NOT be uploaded to tricorder.
* A local copy of the debug log can be found at: /var/log/pihole/pihole_debug.log

Dunno why this happened. Anyway, I ran it again and it worked the 2nd time.

Updated debug token: https://tricorder.pi-hole.net/fk1TlJOl/

The issue is currently ongoing. I don't know for how long, but I suspect at least the last 4-5 hours. I haven't been able to access the Internet with the WireGuard VPN active (on my iPhone, as I've been on the move), but I can access the Internet with the WireGuard tunnel turned off.

@RianKellyIT :

ubuntu@pihole-vpn:~$ sudo pihole-FTL --config database.maxDBdays

91

ubuntu@pihole-vpn:~$

and

ubuntu@pihole-vpn:~$ uptime

19:11:10 up 18 days, 10:36, 1 user, load average: 2.16, 2.23, 1.67

and

ubuntu@pihole-vpn:~$ sudo pihole-FTL --config debug.database true

true

Interestingly,

ubuntu@pihole-vpn:~$ sudo systemctl restart pihole-FTL

is not working either. I waited ~45 seconds with no response and eventually killed the restart attempt using Ctrl+C. This is very odd.

@deHakkelaar

A little trick is Bash completion (double TAB) eg:

$ sudo pihole-FTL
arp-scan           idn2               sha256sum
branch             --list-dhcp4       sqlite3
--config           --list-dhcp6       sqlite3_rsync
debug              --lua              tag
--default-gateway  lua                --teleporter
dhcp-discover      --luac             test
dnsmasq-test       luac               --tls-ciphers
-f                 no-daemon          --totp
--gen-x509         ntp                --v
gravity            --perf             -v
gzip               ptr                verify
-h                 --read-x509        version
--help             --read-x509-key    -vv
help               regex-test
$ sudo pihole-FTL --config
database.DBimport
database.DBinterval
database.maxDBdays
[..]

And if I double tab again with the desired setting, it even auto-completes to show the current value:

$ sudo pihole-FTL --config database.maxDBdays 91

Which you can change, of course.

Eg for the debug settings if I double tab:

$ sudo pihole-FTL --config debug.
debug.aliasclients  debug.extra         debug.regex
debug.all           debug.flags         debug.reserved
debug.api           debug.gc            debug.resolver
debug.arp           debug.helper        debug.shmem
debug.caps          debug.inotify       debug.status
debug.clients       debug.locks         debug.timing
debug.config        debug.netlink       debug.tls
debug.database      debug.networking    debug.vectors
debug.dnssec        debug.ntp           debug.webserver
debug.edns0         debug.overtime
debug.events        debug.queries

But first let's wait and see if the mods/devs have time to inspect the uploaded debug log!

Sorry, those are formatting issues from copy-pasting. It's past 1 AM and I'm a little loopy, so I missed fixing the formatting errors.

1 Like

I guess the debug token has expired again.
@Bucking_Horn Would you be able to help? My issue appears to be similar to Pi-hole, DNS service, Web UI, Logging, randomly hangs

It's not the same, as I don't use Proxmox, but the symptoms are suspiciously similar.

Hmm, a likely place for Pi-hole to wait would be upstream DNS replies, which would in turn suggest that your upstreams are unresponsive.
But that would just repeatedly wait for 10,000 nanoseconds at a time, and also, you are using a co-located unbound which is responding, so that seems unlikely.

Could you please verify that tv_nsec value really is 10,000,000 as quoted in your original post?

The automatic self-healing you observe may indicate that Pi-hole could be starving on I/Os until they become available again, but I can't find an explicit wait for 10,000,000 nanos (10ms), and the only implicit value seems to be used for NTP purposes.
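
For reference, converting the tv_nsec value quoted in the trace is simple arithmetic:

```shell
# Convert the tv_nsec value from the strace output into milliseconds and
# the implied wakeup rate of the loop (1 ms = 1,000,000 ns).
awk 'BEGIN {
  ns = 10000000                      # tv_nsec as quoted in the trace
  ms = ns / 1000000
  printf "%d ns = %d ms -> about %d wakeups per second\n", ns, ms, 1000 / ms
}'
```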

What are your micro instance's maximum limits on sockets and file descriptors?
Does it expose an RTC, and if so, are you allowed to manipulate it?

Run from your Pi-hole machine, what's the result of:

pihole-FTL --config ntp.ipv4.active
pihole-FTL --config ntp.ipv6.active
pihole-FTL --config ntp.sync.active

Also, please provide a fresh debug token, as your previous has indeed expired.

Sorry, it took me a couple of days to be around at a time this issue occurred.

ubuntu@pihole-vpn:~$ pihole-FTL --config ntp.ipv4.active
true
ubuntu@pihole-vpn:~$ pihole-FTL --config ntp.ipv6.active
true
ubuntu@pihole-vpn:~$ pihole-FTL --config ntp.sync.active
true

Debug token:

[✓] Your debug token is: https://tricorder.pi-hole.net/zw5xCmgN/


ubuntu@pihole-vpn:~$ sudo strace -fp $(pidof pihole-FTL) -e nanosleep
strace: Process 517351 attached with 8 threads
[pid 517351] nanosleep({tv_sec=0, tv_nsec=10000000}, 0x7ffdb8590790) = 0
[pid 517351] nanosleep({tv_sec=0, tv_nsec=10000000}, 0x7ffdb8590790) = 0
[pid 517351] nanosleep({tv_sec=0, tv_nsec=10000000}, 0x7ffdb8590790) = 0
[... identical nanosleep calls repeat until interrupted ...]
[pid 517351] nanosleep({tv_sec=0, tv_nsec=10000000}, ^Cstrace: Process 517351 detached
 <detached ...>
strace: Process 549540 detached
strace: Process 549541 detached
strace: Process 549544 detached
strace: Process 549545 detached
strace: Process 553959 detached
strace: Process 553963 detached
strace: Process 549479 detached

ubuntu@pihole-vpn:~$

strace shows repeated nanosleep calls with tv_nsec=10000000 (~10ms)
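
If it happens again, per-thread state via ps is a cheap way to see what each FTL thread is doing without attaching strace. This is only a sketch, demonstrated on the current shell as a stand-in; substitute FTL's PID. A thread parked in a futex wait under WCHAN would point at a contended lock, while a nanosleep-related function matches the loop above:

```shell
# List every thread (LWP) of a process with its scheduler state (STAT)
# and the kernel function it is currently blocked in (WCHAN).
thread_states() {
  ps -L -o tid,stat,wchan:32,comm -p "$1"
}

# Demo on the current shell; for FTL use:
#   thread_states "$(pidof pihole-FTL)"
thread_states "$$"
```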

Max open files

ubuntu@pihole-vpn:~$ ulimit -n

1024
ubuntu@pihole-vpn:~$

System-wide file-max:

ubuntu@pihole-vpn:~$ cat /proc/sys/fs/file-max

9223372036854775807
ubuntu@pihole-vpn:~$

Max connections:

ubuntu@pihole-vpn:~$ cat /proc/sys/net/core/somaxconn
4096
ubuntu@pihole-vpn:~$

ubuntu@pihole-vpn:~$ echo -n "count: "; cat /proc/sys/net/netfilter/nf_conntrack_count
count: 0
ubuntu@pihole-vpn:~$ echo -n "max: "; cat /proc/sys/net/netfilter/nf_conntrack_max
max: 7680
ubuntu@pihole-vpn:~$

RTC:

ubuntu@pihole-vpn:~$ timedatectl
Local time: Fri 2026-04-10 06:48:09 UTC
Universal time: Fri 2026-04-10 06:48:09 UTC
RTC time: Fri 2026-04-10 06:48:09
Time zone: Etc/UTC (UTC, +0000)
System clock synchronized: yes
NTP service: active
RTC in local TZ: no
ubuntu@pihole-vpn:~$

RTC is present and system clock is synchronized via NTP.

Full summary of my situation:

strace shows FTL repeatedly calling:
nanosleep({tv_sec=0, tv_nsec=10000000}, ...)

This occurs continuously while the issue is present.

During this time:

  • dig @127.0.0.1:53 (FTL) times out
  • dig @127.0.0.1:5335 (unbound) works normally
  • FTL socket (port 4711) is unresponsive
  • tcpdump shows DNS packets arriving on wg0 and lo

System/resource state:

  • Open file limit (ulimit -n): 1024
  • FD usage: low (~37)
  • file-max: 9223372036854775807
  • somaxconn: 4096
  • conntrack usage: 0 / 7680
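
For completeness, the FD usage figure can be reproduced by counting entries under /proc (shown here against the current shell as a stand-in; substitute FTL's PID):

```shell
# Count a process's open file descriptors via /proc; compare the result
# against the 1024 soft limit reported by `ulimit -n`.
fd_count() {
  ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# Demo on the current shell; for FTL use: fd_count "$(pidof pihole-FTL)"
fd_count "$$"
```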

Time/RTC:
timedatectl shows:

  • RTC present
  • System clock synchronized via NTP
  • No timezone or drift issues

Restarting FTL immediately restores functionality, but the issue reoccurs at random times under normal usage (WireGuard clients: iOS, Android, Windows). The issue is temporarily resolved by restarting FTL or the entire OCI micro instance.

@Bucking_Horn : I tried to restart FTL but even that’s not working right now:

ubuntu@pihole-vpn:~$ sudo systemctl restart pihole-FTL

Nothing at all (I ran the command ~60 seconds ago).

Edit: A few more seconds after I posted the above, FTL did restart, and now Pihole is working normally.

I was going to suggest disabling Pi-hole's NTP server and client support, but your debug log shows them to be disabled already?

     [ntp.ipv4]
       active = false ### CHANGED, default = true
       address = ""
     [ntp.ipv6]
       active = false ### CHANGED, default = true
       address = ""
     [ntp.sync]
       active = false ### CHANGED, default = true
       server = "pool.ntp.org"
       interval = 3600
       count = 8
       [ntp.sync.rtc]
         set = false
         device = ""
         utc = false ### CHANGED, default = true

Also:

   2026-04-10 00:30:00.836 UTC [517351/T544734] INFO: Ignoring file etc/pihole/pihole.toml in Teleporter archive (not in import list)
   2026-04-10 00:30:00.836 UTC [517351/T544734] INFO: Skipping file etc/hosts in Teleporter archive
   2026-04-10 00:30:00.836 UTC [517351/T544734] INFO: Ignoring file etc/pihole/dhcp.leases in Teleporter archive (not in import list)
   2026-04-10 00:30:00.837 UTC [517351/T544734] INFO: Ignoring table client in etc/pihole/gravity.db (not in import list)
   2026-04-10 00:30:00.837 UTC [517351/T544734] INFO: Ignoring table client_by_group in etc/pihole/gravity.db (not in import list)
   2026-04-10 00:30:00.855 UTC [517351/T544734] INFO: Skipping file etc/pihole/pihole-FTL.db in Teleporter archive

Do you run any third-party software that would regularly import settings, like nebula-sync?

Only older Pi-holes (5 and lower) may have been using that port for their telnet API.
Pi-hole v6 replaced that with a new REST API, as already mentioned by deHakkelaar.
Your debug log shows you are running Pi-hole v6.

It also suggests you'd run netdata on your Pi-hole machine.
(How) is that configured to monitor your Pi-hole?

Bucking_Horn already asked about third party app or script executing an Import process, but I have a complementary question.

Your log shows, at 0:18, an attempt to request an invalid API endpoint /api/v2/static/not.found.

This is not part of Pi-hole, and this may indicate that some software is sending queries to the wrong server, or an app is trying to connect to Pi-hole but requesting invalid information:

   2026-04-10 00:18:52.877 UTC [517351/T544734] WARNING: API: Not found (key: not_found, hint: /api/v2/static/not.found)
   2026-04-10 00:19:07.789 UTC [517351/T544821] WARNING: API: Not found (key: not_found, hint: /api/v2/static/not.found)

And another one later:

   2026-04-10 06:30:05.176 UTC [517351/T553961] WARNING: API: Invalid request (key: bad_request, hint: Failed to commit transaction: database is locked)

Do you know what is the origin of these requests?

I've disabled Pihole's NTP based on a suggestion on Reddit (https://www.reddit.com/r/pihole/comments/1r577el/how_do_i_stop_this_error/). I was receiving that error pretty frequently, and the consensus there was that Pihole acting as an NTP server is not necessary for its functioning.

Yes, I sync my local RPi Pihole (as the primary) with the OCI instance. The sync occurs once a week, in the early morning on Sundays.

It also suggests you'd run netdata on your Pi-hole machine.
(How) is that configured to monitor your Pi-hole?

I used it only a couple of times, and it isn't configured to monitor Pihole specifically but the overall instance. TBH, netdata has a very complicated interface, and I only need it when this issue occurs. I haven't been using it and plan to remove it soon; I just haven't gotten around to it yet.

Two possible ways:

  1. I use Void app to monitor and manage the pihole from my phone ( ‎Void: Pi-hole Manager App - App Store )

  2. It is a crawler trying some random paths on my Pihole admin domain

I have no clue about this. Could this be due to an attempt to blacklist/whitelist a domain from the Void app?

Are you exposing your Pi-hole on the Internet?

No. Only the admin interface is accessible over the Internet, not the DNS server. The DNS server is only accessible through the WireGuard VPN interface.

I just received this “message” in the OCI pihole instance. I’m not sure what caused this:

Edit: Ran htop on the server. It shows pihole-FTL taking a huge amount of CPU. I noticed this as I was using the web interface to block some domains, and saw this message pop up a few minutes AFTER I was done with the blocking.