The previous debug log has expired. Please provide a new one.
Do not delete/flush any logs or restart the VM before generating that debug log.
It will show the current state of your Pi-hole and tell us whether there is a storage problem.
Also, when DL6ER asked about the hardware, you explained that Pi-hole runs in a VM. How much memory is assigned to that VM?
And have you been able to trigger that warning using nslookups for unknown domains, as suggested earlier?
Tailing logs (e.g. via pihole -t) is meant as a short-term measure, useful when you're actively watching the output. That's especially true when log files are rotated regularly (as is the case for Pi-hole's logs).
Note that I suggested running pihole -t in exactly that short-term context, to check the aforementioned lookups for unknown domains.
For analysing logs, just use the log files directly.
When you observe a freeze again, the following commands may reveal overly active clients or excessively requested domains:
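As a sketch of what such commands could look like: the standard dnsmasq query-log format records each lookup as `query[TYPE] domain from client-IP`, so a short grep/awk/sort pipeline can rank clients and domains by query count. The log path varies by install (e.g. /var/log/pihole.log); the tiny inline sample below is only there to make the example self-contained — point the pipeline at your actual log instead.

```shell
# Illustrative sample in dnsmasq query-log format; replace
# /tmp/pihole-sample.log with your real log file (path may differ).
cat > /tmp/pihole-sample.log <<'EOF'
Jan  1 00:00:01 dnsmasq[123]: query[A] example.com from 192.168.1.10
Jan  1 00:00:02 dnsmasq[123]: query[A] example.com from 192.168.1.10
Jan  1 00:00:03 dnsmasq[123]: query[AAAA] cdn.example.net from 192.168.1.20
EOF

# Most active clients (query count per source IP, descending):
grep ' query\[' /tmp/pihole-sample.log | awk '{print $NF}' | sort | uniq -c | sort -rn | head

# Most requested domains (query count per domain, descending):
grep ' query\[' /tmp/pihole-sample.log | awk '{print $(NF-2)}' | sort | uniq -c | sort -rn | head
```

Run these right after a freeze, before the logs rotate, so the offending window is still on disk.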
Just saw this on the dnsmasq-discuss mailing list:
A couple of posts on the OpenWrt forum indicate that users are seeing DNS issues that are resolved by raising the forward limit from OpenWrt's default of 150 to 500:
dns-forward-max=500
Assuming that there are indeed a lot of queries going on simultaneously, what does this affect? Does it increase the transient storage used during these peaks or is the memory footprint of dnsmasq increased overall? My experiments say, "it's only transient", but I'm curious if the experts could weigh in.
I did a quick test with both settings on a box with dnsmasq 2.86, loaded 100 pages in the old web browser, full of cdn refs and such, and couldn't see any functional difference in CPU or RAM usage between the two tests.
So, I guess the real question is, is there any reason not to raise the default for the typically small-memory, limited-CPU boxes that run OpenWrt?
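For anyone who wants to experiment with that suggestion on a Pi-hole host: pihole-FTL (Pi-hole's embedded dnsmasq) reads additional configuration from /etc/dnsmasq.d/ on a standard install, so the option can go into its own file there. This is just a sketch — the file name is arbitrary, and whether you need it at all depends on your query load:

```
# /etc/dnsmasq.d/99-forward-max.conf
# Raise the limit on concurrent forwarded queries (dnsmasq default: 150)
dns-forward-max=500
```

Restart the resolver afterwards (e.g. sudo service pihole-FTL restart) for the change to take effect.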