Pi-hole DNS stops responding after the SQLite database reaches 200 MB

The issue I am facing:

I need to rotate the data after 1 day (Query database - Pi-hole documentation), and that makes the collected data useless (To use MySQL as database - #43 by DL6ER), because after the SQLite database reaches 200 MB, DNS stops responding, even without any CPU spike. I have to manually split the DB and restart the system, and then DNS starts responding again. Is there any way to automate the rotation, or move the data into some other RDBMS so we can still look into it? Our client base is small, only 100-200 clients.
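A minimal sketch of the manual rotation described above, under assumptions: it presumes a `queries` table with a UNIX-epoch `timestamp` column (as in FTL's schema), and it runs against a throwaway copy rather than the live `/etc/pihole/pihole-FTL.db`, which you would only touch with `pihole-FTL` stopped.

```shell
#!/bin/sh
# Sketch: trim rows older than 24 hours from a query database, then
# VACUUM to shrink the file. Demonstrated on a throwaway database.
DB=$(mktemp)

# Stand-in table: one row older than 24 h, one recent row.
sqlite3 "$DB" "
CREATE TABLE queries (id INTEGER PRIMARY KEY, timestamp INTEGER);
INSERT INTO queries (timestamp) VALUES (strftime('%s','now') - 172800);
INSERT INTO queries (timestamp) VALUES (strftime('%s','now'));"

# Rotate: delete rows older than 24 h, then reclaim the file space.
sqlite3 "$DB" "DELETE FROM queries WHERE timestamp < strftime('%s','now') - 86400;"
sqlite3 "$DB" "VACUUM;"

# Only the recent row should survive.
REMAINING=$(sqlite3 "$DB" "SELECT COUNT(*) FROM queries;")
echo "$REMAINING"
rm -f "$DB"
```

Note that `DELETE` alone does not shrink an SQLite file; the `VACUUM` step is what actually returns the freed pages to the filesystem.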

Details about my system:
Pi-hole on Ubuntu x86, Proxmox VM, 4 cores (native socket)
AMD Ryzen Threadripper 1920X 12-Core Processor
4 GB RAM
PHP CGI limit 1024 MB (I don't think this really matters, as the DNS service stops entirely)

What I have changed since installing Pi-hole:
Nothing, it's a normal install as-is.

What does "small" mean in number of queries within 24 hours? 200 MB per day sounds like multi-million queries a day.

This should not happen. Are you sure this is related to the database and not the large number of queries? I'm asking because a restart will reload the queries from disk, potentially bringing you into the same situation again, whereas a database rotation/deletion lets you start afresh.

While you have a pretty beefy CPU, 4 GB of RAM is really on the low side for Ubuntu with Pi-hole, especially considering multi-million queries per day. Hence, my assumption is that you are simply running out of memory and this is causing the issues for you.

My suggestion is to disable the database for testing and check whether you run into the same issue after setting

MAXDBDAYS=0

in /etc/pihole/pihole-FTL.conf (create if it does not exist).
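A sketch of applying that setting, assuming a standard systemd-based install where FTL runs as the `pihole-FTL` service:

```shell
# Disable long-term database storage for testing (MAXDBDAYS=0),
# creating /etc/pihole/pihole-FTL.conf if it does not exist yet,
# then restart FTL so the setting takes effect.
echo "MAXDBDAYS=0" | sudo tee -a /etc/pihole/pihole-FTL.conf
sudo systemctl restart pihole-FTL
```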

Other than that, please provide the last lines from /var/log/pihole/FTL.log from when DNS stopped working. I'm pretty sure we'll see warnings about memory running low a few minutes/hours before DNS eventually stops responding.

It would also help us if you generate a debug log, upload the log when prompted and post the token URL here.
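For reference, the two diagnostics requested above can be gathered roughly like this (paths and commands as in a standard install; `pihole -d` interactively asks whether to upload):

```shell
# Last lines of the FTL log from around the time DNS stopped:
tail -n 200 /var/log/pihole/FTL.log

# Generate a debug log; confirm the upload prompt, then post the
# token URL it prints at the end:
pihole -d
```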

Hello, sorry for the late reply. It seems your guess was right. We had a wrong network configuration that caused an outside NAT network to flood the DNS with about 2 million requests in under 10 minutes. It happened after I backed up the data and ran manual queries on the SQLite DB.

The strange thing is, Pi-hole doesn't crash, it just stops responding. I checked the logs thoroughly and saw no OOM or low-memory errors, and in the Proxmox logs the used memory stayed between 1 GB and 2.5 GB, never beyond that.

I can't post the log inline; it seems it's too big. The whole log is here

After limiting and disabling the NAT DNS, it's working now; the WebUI just seems slow when rendering the dashboard. But at least it works.

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.