pihole-FTL with regularly high CPU and DISK I/O usage

I was wondering if this behavior is "normal" and what task pihole-FTL might be triggering to consume these resources:

Usage

  • CPU: 17-23 % for roughly 5 to 10 seconds (about 7 on average), every roughly 45 to 50 seconds (so not exactly one minute, but always a bit below that mark)
  • DISK I/O: up to 350 MB/s for about the same amount of time, in the same interval as the CPU usage

Between those intervals, usage is normal (down to almost 0 % CPU and 0 KB/s DISK I/O).

What I'm especially worried about is the DISK I/O. Reading almost 4 GB every minute (see the notes below regarding pihole-FTL.db) works out to roughly 5.7 TB a day (!), which will certainly have a negative impact on the lifetime of the NVMe.
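Just to make the arithmetic explicit, here is a quick sanity check of that daily figure (the 4 GB/min read volume is taken from the observations above):

```python
# Back-of-the-envelope check of the daily read volume (illustrative only;
# the ~4 GB per minute figure comes from the observations above).
GB_PER_MINUTE = 4          # approximate read volume per batch interval
MINUTES_PER_DAY = 24 * 60

tb_per_day = GB_PER_MINUTE * MINUTES_PER_DAY / 1000  # decimal TB
print(f"{tb_per_day:.2f} TB/day")  # ~5.76 TB/day
```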

Looking for root causes

Now, summarizing this information (especially because of that ~7-second mark), I wonder whether this might be related to Loading query log takes several seconds since core 6.3.

Are there any regular maintenance jobs matching the rough 45-to-50-second interval, e.g. statistics generation?

Information

  • The system is a quite fast Pi 5 with an NVMe; otherwise I guess this whole operation would take much longer (e.g. on an SD card).
  • I also have the Pi-hole integration in Home Assistant, but there the API request and sensor update interval is hard-coded to every 5 minutes.

If it is Loading query log takes several seconds since core 6.3, then this will hopefully be sorted out soon; otherwise I'm curious to see what makes Pi-hole use those machine resources so excessively. Because of the high DISK read usage, I am quite sure it is related to pihole-FTL.db (currently 3.9 GB).

Once every minute (unless configured otherwise), queries are stored in the long-term database in one batch. This avoids many small writes and was verified to use much less I/O back when it was developed (already quite some time ago). Reading in the entire database file (seemingly, at least) is definitely not what is expected to happen here.
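The effect of such batching can be sketched with a small standalone example (the `queries` schema below is hypothetical, not FTL's actual table layout): wrapping all rows in a single transaction means SQLite syncs the journal once per batch instead of once per row.

```python
import sqlite3
import time

# Sketch of batched inserts (hypothetical "queries" schema, not FTL's
# actual one). One transaction -> one batch of writes to disk.
rows = [(time.time() + i, "example.com", 1) for i in range(1000)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE queries (timestamp REAL, domain TEXT, status INT)")

with con:  # single transaction for the whole batch
    con.executemany("INSERT INTO queries VALUES (?, ?, ?)", rows)

n = con.execute("SELECT COUNT(*) FROM queries").fetchone()[0]
print(n)  # 1000
con.close()
```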

First off, I can reassure you that these reads are not always served by the NVMe: the kernel will have this data cached to reduce actual disk I/O. This caching is, however, transparent to the application layer, so metrics gathering cannot tell whether data actually came from disk or straight from the cache (in RAM). This, of course, assumes you have enough RAM. The kernel will use "free" memory for this (it is the yellow part of the bar shown, e.g., by htop).
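To illustrate how cache counts toward usable memory, here is a small parser over a sample /proc/meminfo excerpt (the numbers are made up; on a real system you would read the file itself). Buffers and Cached pages are reclaimable, so usable memory is far more than MemFree alone suggests:

```python
# Illustrative only: page-cache memory (Buffers/Cached) is reclaimable,
# so "MemFree" alone badly understates how much memory is usable.
sample_meminfo = """\
MemTotal:        8000000 kB
MemFree:          500000 kB
Buffers:          200000 kB
Cached:          4000000 kB
"""

info = {}
for line in sample_meminfo.splitlines():
    key, value = line.split(":")
    info[key] = int(value.strip().split()[0])  # value in kB

usable_kb = info["MemFree"] + info["Buffers"] + info["Cached"]
print(f"free alone: {info['MemFree']} kB, roughly usable: {usable_kb} kB")
```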

I'd suggest you enable FTL's debug flags, as everything in FTL can be traced. To begin with, let's start with

sudo pihole-FTL --config debug.database true

and then keep a close eye on /var/log/pihole/FTL.log during the times of high I/O pressure.


I meanwhile tend to say it actually might be memory-related. During my observations in the original post, the machine had been running for roughly 60 days, and for unknown reasons (maybe "that's how Linux works and handles memory"), while RAM seemed to be available (I don't recall the "free -h" output with buffers and actually free memory), the SWAP file was indeed almost completely full. That is very likely not due to Pi-hole but to other services the same machine provides.

I am mentioning this because, after your statement, I checked the system again: now, a few days after a reboot, there is still regular pihole-FTL CPU usage (quite noticeable), but only very little DISK I/O usage, near to nothing compared to what I originally discovered and documented above.

I'm curious to see whether that DISK I/O pressure returns once RAM (or SWAP) fills up. If I find the time, I could certainly provoke this situation synthetically (and then enable the FTL debug flag as suggested); otherwise I'll wait some time (days to weeks) for the memory to "naturally" fill up.
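One deliberately crude way to provoke memory pressure synthetically would be to allocate RAM in chunks until the kernel starts evicting cache and swapping. The chunk size and count below are placeholders kept tiny for safety; sized to a real system, this can trigger the OOM killer or make the machine unresponsive, so use with care:

```python
# Crude memory-pressure generator (illustration only). Allocates
# fixed-size chunks and keeps references so they cannot be freed.
# CHUNK_MB / NUM_CHUNKS are placeholders; tune them to your system.
CHUNK_MB = 1       # kept tiny here for safety
NUM_CHUNKS = 8

hog = []
for _ in range(NUM_CHUNKS):
    # bytearray zero-fills the buffer, so the pages are actually touched
    hog.append(bytearray(CHUNK_MB * 1024 * 1024))

print(f"holding {CHUNK_MB * NUM_CHUNKS} MiB")
```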

For now, I have enabled graphical long-term logging of DISK I/O at the system level. That way I can catch the situation once it pops up naturally.
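If a full graphing stack is ever unavailable, a minimal sampler over /proc/diskstats can serve a similar purpose. This sketch parses two hard-coded sample lines (on Linux you would read "/proc/diskstats" twice instead); the 6th field is sectors read, conventionally 512 bytes each:

```python
# Minimal read-throughput estimate from two /proc/diskstats samples
# (lines hard-coded for illustration; read "/proc/diskstats" on Linux).
SECTOR_BYTES = 512  # sectors in diskstats are conventionally 512 bytes

def sectors_read(diskstats_line: str) -> int:
    fields = diskstats_line.split()
    return int(fields[5])  # 0-based index 5 == 6th field: sectors read

# Two made-up samples for the same device, 10 seconds apart
sample_t0 = "259 0 nvme0n1 1000 0 2000000 500 300 0 4000 100 0 600 700"
sample_t1 = "259 0 nvme0n1 1400 0 2700000 600 310 0 4100 110 0 650 760"
interval_s = 10

delta_bytes = (sectors_read(sample_t1) - sectors_read(sample_t0)) * SECTOR_BYTES
rate_mb_s = delta_bytes / interval_s / 1e6
print(f"{rate_mb_s:.1f} MB/s read")  # 35.8 MB/s read
```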


That makes your description of the periodic, intense disk activity much more plausible.

If you haven't already, and are not otherwise CPU-bound, it may be worth investigating zswap or possibly zram on that system. Much in-memory data is eminently compressible, and using these can reduce or even eliminate the need for swapping to disk.