Just updated one of my production Pi-Holes from 5.x to 6.0. I have a very large gravity list (>9M items). v5.x had no issues handling this gravity list and was able to start handling DNS (and web admin) requests immediately after startup. However, v6.0 won't handle DNS or web admin requests for ~20-30 seconds after startup. I have another set of Pi-Holes in my lab that were running on the development branch. They have gravity lists of ~1M entries that do not appear to exhibit this behavior. I have not updated them to the 6.0 release yet. Not sure what changed in how the lists are pulled in and processed, but it's a noticeable degradation from 5.x. I'm more than happy to help the dev team troubleshoot this.
During the Pi-Hole v6 Beta period there was a short discussion about the amount of RAM vs. the size of Gravity Lists and it turned out that it's better to have 1 GB of RAM or more in order to process them within a reasonable period.
Soo...
How much RAM does your Pi-Hole v6 machine have?
I have 4GB of RAM allocated per Pi-Hole VM. Here's the current output from 'free':
              total        used        free      shared  buff/cache   available
Mem:        4005796      300952      120064       20632     3584780     3428516
Swap:             0           0           0
I also have 2 processor cores allocated as well. They're VMs running on a proxmox cluster, and the underlying CPU is a Ryzen 5 PRO 2400GE, so plenty of horsepower there.
OK, next option:
What happens when you raise the number of threads/cores in /etc/pihole/pihole.toml from 1 to 2 or more?
Threads was set to 0 (auto), should I force it to 2?
Hmm... I thought it was 1 by default...
You can try 2 or more and see if it helps
Hmmm, still resolution timeouts after an FTL restart. Slightly faster this time, but still a delay, maybe about 10-15 seconds. I'll try allocating more cores to the VM real quick. I'd be interested to find out why there's such a significant difference after the upgrade, though.
Same behavior after bumping up the core count to 4 and upping threads in the config to match.
Actually, after restart, took FTL about 1 minute to become responsive.
DAMN... not what I was hoping for...
Time for one of the developers to take a look I guess
Turned some debugging on, and found at least where the time hit takes place:
2025-02-20 18:11:18.802 UTC [708M] DEBUG_DATABASE: Imported 2 rows from disk.sqlite_sequence
2025-02-20 18:15:39.971 UTC [708M] INFO: Imported 160655 queries from the on-disk database (it has 112633405 rows)
You can see it takes just over 4 minutes for FTL to read in the on-disk database. (The first log line is not the database being read in.) Not sure why it's so slow in v6 compared to v5.
Okay, strangely enough, it reads this in instantly after just stopping and starting the FTL service. It only takes this time hit after a full system restart.
Okay, figured it out. For anyone else experiencing this issue, it was hanging while pulling in my pihole-FTL.db. The database was 6.8GB in size, over 100M rows. I ran the following to clean up all but the last 30 days' worth of data:
NOTE: You may need to install sqlite3 on your system first for this.
sudo systemctl stop pihole-FTL.service
sudo sqlite3 /etc/pihole/pihole-FTL.db
Then, once in the database, I did:
DELETE FROM query_storage WHERE timestamp <= strftime('%s', datetime('now','-30 day'));
VACUUM;
.quit
This took several minutes to complete, but reduced the database size down to ~270MB. I restarted the VM and, after it rebooted, DNS resolution was already available.
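For anyone who wants to script the steps above, the same DELETE and VACUUM can be run non-interactively in a single sqlite3 call. Here's a minimal sketch that rehearses it on a throwaway scratch database first (the table name and timestamp column mimic FTL's query_storage; the sqlite3 CLI is assumed to be installed). To do it for real, stop pihole-FTL and point the DB variable at /etc/pihole/pihole-FTL.db instead.

```shell
#!/bin/sh
# Rehearse the cleanup on a scratch database before touching the real one.
# For the real database: stop pihole-FTL first, then set DB=/etc/pihole/pihole-FTL.db
DB=$(mktemp /tmp/ftl-demo.XXXXXX)

# Build a stand-in query_storage table: one 60-day-old row, one fresh row.
sqlite3 "$DB" "CREATE TABLE query_storage (id INTEGER PRIMARY KEY, timestamp INTEGER);
INSERT INTO query_storage (timestamp) VALUES
  (strftime('%s', datetime('now','-60 day'))),
  (strftime('%s', 'now'));"

# The same statements as above, run non-interactively in one call.
sqlite3 "$DB" "DELETE FROM query_storage WHERE timestamp <= strftime('%s', datetime('now','-30 day'));
VACUUM;"

# Only the fresh row should survive the 30-day purge.
sqlite3 "$DB" "SELECT COUNT(*) FROM query_storage;"   # prints 1
rm -f "$DB"
```

Running it against a scratch copy first is cheap insurance; VACUUM rewrites the whole file, so make sure the disk has room for a second copy of the database while it runs.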
All that said, for those upgrading from a long-standing Pi-Hole 5.x install, you might want to purge old data from your pihole-FTL.db database BEFORE upgrading to 6 to save yourself some headache.
For a little more context, it appears that FTL runs a SELECT against this database (I don't know the exact statement being executed), extracts a subset of the data (explaining the 'Imported 160655 queries from the on-disk database' notice in the log line posted earlier), and drops it into a temporary database in /dev/shm. That would explain the fast start-up times after the first start: it's not running that SELECT again, just pulling the data from what's already in /dev/shm. But since /dev/shm is ephemeral, this temporary database disappears on system restart.
What was the duration of your database (the MAXDBDAYS parameter in the old FTL configuration file)?
Thanks to @binderth and your thread, my issue has been resolved, methinks. The pendulum has stopped swinging and connection has been stable for the last TEN minutes (unlike a few reboots ago).
Where do I drop a contribution for a beverage of your choice? Thanks.
Regards.
Just did a rollback to find out. Looks like that entry was missing altogether from that server. So, I assume that meant "keep forever"?
If the value was missing, it was using the default value. Probably 365, not forever.
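For anyone looking for the retention knob mentioned above, here is a config sketch showing where it lives in each generation. The key names are taken from the Pi-hole docs as I understand them, so verify them against your own install: in v5.x it was MAXDBDAYS in /etc/pihole/pihole-FTL.conf, while in v6 it moved to the [database] section of /etc/pihole/pihole.toml.

```
# v5.x — /etc/pihole/pihole-FTL.conf
MAXDBDAYS=30

# v6.x — /etc/pihole/pihole.toml, [database] section
maxDBdays = 30
```

In v6 the same setting should also be settable from the CLI via pihole-FTL --config (e.g. database.maxDBdays). Either way, restart pihole-FTL afterwards; with a sane retention in place the database can't grow back to the 100M-row state that caused the slow startup.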
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.