Version 2.9.1 error: DataTables warning

Hello,

Thank you very much for this amazing ad killer.

Version: 2.9.1, Web Interface: 1.4

When I open "Query Log" on the admin website, I get an error:
DataTables warning: table id=all-queries - Ajax error. For more information about this error, please see http://datatables.net/tn/7

I have another strange problem: all the dashboard charts are empty.

Yesterday I was using version 2.9 of Pi-hole and everything worked well, but since the 2.9.1 update it works less well.

Thanks for your help.

Sorry for my English, I'm French.

I have this exact same issue. I didn't get a count from the dashboard when it died, but it was probably near 100,000 queries in just about an hour. Does the AJAX integration have a limit on the number of objects in a certain time period?

Update on this: it seems PHP's memory limit is set to 128 MB. If too many queries are run through Pi-hole, the log grows beyond what PHP can load, resulting in the errors above. I increased PHP's memory limit to 768 MB; the dashboard now parses EXTREMELY slowly, and the query log is blank with no error. Watching as I write this, the live counters at the top are still incrementing, but the pie charts are not filling in as expected. There are 400,000 records in the pihole.log file, and I expect this problem showed up around 100-150k records. Is there a way to truncate the pihole.log file in /var/log to prevent this type of problem? Possibly make it show hourly data instead of open-ended?
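For anyone else who wants to try raising the limit, here's roughly the approach as a sketch. The php.ini path below is an assumption (it varies by distro and PHP version; `php --ini` will show yours), and I've wrapped the edit in a hypothetical helper so the sed pattern is clear:

```shell
# set_php_memory_limit FILE LIMIT -- rewrite the memory_limit line in a php.ini.
# Sketch only: the paths in the comments below are assumptions for a typical
# Pi-hole lighttpd setup, not confirmed for every install.
set_php_memory_limit() {
    sed -i "s/^memory_limit *=.*/memory_limit = $2/" "$1"
}

# On a real install it would look something like:
#   sudo sed -i 's/^memory_limit *=.*/memory_limit = 768M/' /etc/php5/cgi/php.ini
#   sudo service lighttpd restart    # restart the web server so PHP picks it up
```

Note this only treats the symptom: the more memory you give PHP, the bigger a log it can parse, but parsing still gets slower as the log grows.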

EDIT: I configured the machine with 1 GB of memory and it continued to peg the limit, so I upped it to 4 GB. The end result is that the system uses almost 50% of its memory loading a 442k-record file, and after loading, memory usage drops to around 1 GB. The admin page loads almost in real time since the memory upgrade. Suggestion to other users: if you intend to use this in a mid-to-large business or at provider level, put it on its own machine and give the system a way to clean up its log from time to time, so you don't overflow the disk and the log file stays manageable.
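One way to give it that cleanup path is a logrotate stanza along these lines; this is only a sketch, and the size and rotation count are guesses you'd tune for your own traffic:

```
# /etc/logrotate.d/pihole (hypothetical file name)
/var/log/pihole.log {
    size 100M        # rotate once the log passes ~100 MB (assumed threshold)
    rotate 3         # keep three old logs, then discard
    compress
    missingok
    notifempty
    copytruncate     # truncate in place so dnsmasq keeps its open file handle
}
```

`copytruncate` matters here: dnsmasq keeps the log file open, so truncating in place avoids having to signal the daemon to reopen it.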

You can run pihole -f to flush the log file. This is an issue with PHP and how long it takes on a Pi to parse the log file. We're moving to a database solution with the Python rewrite, which should remedy most of these issues. I would suggest an hourly cron that flushes the log, or something that checks the size every once in a while and flushes if it's too big. If Pi-hole is being run on a Pi with large logs, PHP will have trouble handling the stats generation.
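As a rough sketch of the size-check approach, something like the following could run from an hourly cron. The helper name and the 50 MB threshold are assumptions to tune, not part of Pi-hole itself:

```shell
# flush_if_big FILE MAX_KB -- run `pihole -f` when FILE exceeds MAX_KB kilobytes.
# Sketch only: the threshold in the cron example below (~50 MB) is arbitrary.
flush_if_big() {
    size_kb=$(du -k "$1" | cut -f1)    # du -k reports size in 1 KB units
    if [ "$size_kb" -gt "$2" ]; then
        pihole -f                      # flush the Pi-hole log
    fi
}

# Hypothetical hourly cron entry (root crontab), if the above is saved
# as /usr/local/bin/flush-if-big.sh:
#   0 * * * * /usr/local/bin/flush-if-big.sh /var/log/pihole.log 51200
```

A plain hourly `0 * * * * pihole -f` entry works too if you'd rather not keep any history between flushes.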

I found this while looking around last night. It also appears that on my virtual machine, since cron wasn't installed, the installer failed to set up the flush job, which is why my file was growing into the gigabytes.