Querying long-term data fails due to PHP's memory limit, resulting in "An unknown error occurred with loading the data"

Please follow the below template, it will help us to help you!

Please ensure that you are running the latest version of the beta code.
Run pihole -up to update to the latest, then verify that the problem still exists before reporting it.

Problem with Beta 5.0:
I am not sure if this is a problem or if it's by design; maybe a better error message is needed if that's the case. Under the Long Term Data menu, you have three options to search by: Graphics, Query Log, and Top Lists. In Graphics, if you choose a time frame longer than your Pi has data for, it still displays a chart that simply ends sooner based on the data in the database. On the other two options, if you perform the same action you get the error "An unknown error occurred with loading the data". My initial reaction was that I should have received the same result as on the Graphics page, but if I am wrong, maybe change the error so the end user understands it is caused by a lack of data for the chosen lookup period. The way it is now implies a problem with the system.

Debug Token:
https://tricorder.pi-hole.net/vity25u79z

I have a 12-hour-old fresh install of Pi-hole 5.0 beta, and here is what I see on these screens (it does not match what you see). In each case, I selected "Last 7 days" from the date and time range. So, it is not by design.

Check.

I ran a query from the day before yesterday to some day next week or even the week after. I see no errors.

Everyone, I ran an update on my Pi-hole and tried again. I am now able to choose any time frame for Graphics and Top Lists. I still get the error on Query Log, even though I am choosing 7 days and I have 6 days' worth of data. Not sure what is so unique about my configuration.

How long does it take until the message comes up? Maybe you're simply running out of maximum allowed processing time for the script. Please check /var/log/lighttpd/error.log for any error messages.

Can you please confirm that path? Also, the error pops up in less than 5 seconds. Doesn't feel like a timeout.

Path corrected above.

I get this error repeatedly....

(mod_fastcgi.c.2543) FastCGI-stderr: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 20480 bytes) in /var/www/$

That's 128 MB. Either your device doesn't have enough memory available or (more likely) there is some configuration somewhere on your device that limits the amount of memory PHP can use. When it hits this barrier, PHP is killed and you see the error on your web interface.

Look at

http://php.net/memory_limit
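
For anyone who wants to check quickly: something like the snippet below (the file name check_mem.php is only an example) shows which memory_limit PHP is actually running with and how much a request peaked at. Keep in mind the CLI php.ini can differ from the CGI one lighttpd uses, so save it somewhere under /var/www/html/ and open it in the browser to test the right configuration.

<?php
// check_mem.php - example only: report the effective memory_limit and the
// peak memory this request allocated from the system.
echo 'memory_limit: ' . ini_get('memory_limit') . "\n";
echo 'peak usage:   ' . round(memory_get_peak_usage(true) / 1048576, 1) . " MiB\n";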


I'm running Pi-hole in an Ubuntu 18.04 LTS server VM. I will investigate where I can make this change. Thanks for the feedback.

I had to bump my VM to 4 GB of RAM and set PHP to consume up to 2 GB. Once I made this change, everything started to work. This implies to me that my 1 GB RAM Pi 3B+ won't be able to run this in the manner I would like from a logging perspective. I wonder whether this is a bug or the intended operation of the new Pi-hole update?

Just wanted to jump in here and say that I recently ran into the same problem.

My setup has 14 devices in total and generates an average of 60k to 80k queries per day (with some days peaking at 100k). The 1 GB memory_limit I have set now seems to be working, but performance is lacking: loading "this year" takes almost 30 seconds to respond and download the data, plus another 10-15 seconds for the browser to start responding again. It needs to fetch ~83 MB of data for the 1.4M+ queries in that time period.
This is not ideal, especially when the number of clients starts growing.

Thankfully (?) I'm running on a Raspberry Pi 4 that is pretty much only doing Pi-hole, so I was able to push PHP's memory to 1 GB, and I have 4 cores for the processing and another 3 GB of RAM for the Pi. But this doesn't apply to everyone (like Gordon, who is using a Pi 3B+, or me last month, running it on an "original" 3B).

Would you consider, for the next release, server-side (in SQLite) query paging as a requirement?
A big change would be in the search function, which would then need to make AJAX requests to the backend (and those would need to be implemented as LIKE and ORDER BY queries, or the backend would run into the exact same bottleneck) instead of returning everything to the client.
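
Just to sketch what I have in mind (a rough illustration only, not a patch; the column names follow the queries table used elsewhere in this thread, and the page size and search string are made up), a paged backend request could look roughly like this:

<?php
// Sketch of a server-side paged lookup against the long-term database.
$db     = new SQLite3('/etc/pihole/pihole-FTL.db', SQLITE3_OPEN_READONLY);
$search = '%google%'; // whatever the datatable search box sends via AJAX
$limit  = 100;        // rows per page
$offset = 0;          // page number * $limit

$stmt = $db->prepare(
    'SELECT timestamp, domain, client FROM queries
     WHERE domain LIKE :search
     ORDER BY timestamp DESC
     LIMIT :limit OFFSET :offset'
);
$stmt->bindValue(':search', $search, SQLITE3_TEXT);
$stmt->bindValue(':limit',  $limit,  SQLITE3_INTEGER);
$stmt->bindValue(':offset', $offset, SQLITE3_INTEGER);

$result = $stmt->execute();
while ($row = $result->fetchArray(SQLITE3_ASSOC)) {
    // Only one page of rows is ever held in PHP memory at a time.
    echo date('Y-m-d H:i:s', $row['timestamp']) . "  {$row['domain']}  {$row['client']}\n";
}

That way the memory needed on the PHP side is bounded by the page size instead of by the full result set.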

We have server-side pagination already implemented for our next-gen API, which is currently scheduled for Pi-hole v6.0.


I'm encountering the same issue.

Using the long-term database to query 30 days results in "An unknown error occurred with loading the data" and

(mod_fastcgi.c.421) FastCGI-stderr: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 16777224 bytes) in /var/www/html/admin/api_db.php on line 148

in /var/log/lighttpd/error.log

I'm running vanilla Armbian on a NanoPi Neo Plus 2 (1 GB RAM, 128 MB allowed for PHP) with only Pi-hole installed and running. I have a small setup with at most 15 clients and <15k queries a day.

Changing memory_limit in /etc/php/7.3/cgi/php.ini to 512M and

sudo service lighttpd force-reload

solved the issue partially.

Querying 30 days now returns 442,000 queries; "this year" and "all time" still failed despite the 512 MB.

I also saw that only one CPU core is used during processing.

I'm aware this issue is due to memory constraints, but is it possible to change the warning in the web GUI to something more meaningful ("Error: Not enough RAM allocated to PHP")? The issue will likely pop up more often once v5.0 is released.


Set the memory limit to -1 (unlimited) and it will work.


Doesn't that risk the device running out of memory completely?



It does.
It worked for "this year" (1.3M queries), but the device became totally unresponsive when querying "all time". It ran out of memory and out of swap, and Pi-hole stopped resolving DNS. I hard-reset it after a few minutes.

I would not recommend setting it to -1.

Works fine on the NanoPi Neo LTS that I'm running it on: 512 MB RAM / 256 MB swap.

Maybe your database is much smaller? How many queries do you retrieve for 'all time'?

I don't think 30 seconds is a bad number at all for 1.4M queries! I'd actually have anticipated this to be much slower. Consider what you are doing: you are requesting the data from PHP, a high-level language not really meant for data processing. Dividing your 30 seconds by the 1.4M queries means your Pi-hole is only spending about 21 µs (yes, micro, not milli) per query. This is actually very, very fast.

But I still see your point, and server-side pagination is definitely something we already have implemented and tested for the next generation of Pi-hole (where it is written in C and compiled to native machine code for maximum speed). However, you'll also understand that we will not backport this into PHP for Pi-hole v5.x.

It all comes down to saying: don't do this :wink: Don't query a year's worth of domains. If you want to do research spanning large time windows, I recommend interacting directly with the long-term database.

For instance,


sqlite3 /etc/pihole/pihole-FTL.db "SELECT COUNT(*) FROM queries WHERE domain = 'google.com' AND client = '192.168.0.1';"

will compute the number of times google.com has been requested by 192.168.0.1 within your long-term data. This query will be both faster and much, much less memory-consuming than doing the same thing through the Pi-hole frontend.

Well, 442,000 queries. That's quite a lot. Assume the average domain is 20 bytes long; then there is the timestamp, the client IP, the reply type and delay, etc., easily making 100 bytes for each result row.

While this is already a quite conservative estimate, it already adds up to 442,000 × 100 bytes ≈ 44 megabytes of raw payload. Add on top of that the bookkeeping data PHP needs to build the result array: PHP arrays cost far more per element than the raw values they hold, so the real figure is a multiple of that and quickly exceeds a 128 MB limit. Furthermore, there is also the SQLite interface querying this data, which needs memory of its own so it can process the request with all the filtering by time and maybe status. There isn't much we can do about this raw amount of data.
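
If anyone wants to see the PHP bookkeeping overhead for themselves, a tiny experiment like the one below (the row shape only imitates a result row; exact numbers vary by PHP version) shows that each row costs far more than the raw bytes it carries:

<?php
// Measure the approximate memory cost per row of a PHP result array.
$before = memory_get_usage();
$rows = [];
for ($i = 0; $i < 100000; $i++) {
    // timestamp, type, domain, client, status - shape is illustrative only
    $rows[] = [time(), 1, "some.domain.example.com", "192.168.0.1", 2];
}
$after = memory_get_usage();
// String literals are interned here, so this even understates the real case,
// where every domain coming out of SQLite is a fresh allocation.
echo round(($after - $before) / 100000) . " bytes per row\n";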

FTL can handle even much larger amounts of queries; however, that's because it uses a clever (are you allowed to say this about your own thing?) compression/memory-optimization scheme that ensures not only minimal overhead but also a reduced memory footprint when a given string (like a domain name) comes up several times. With PHP, we cannot do this.

I don't think that this has changed from v4.x -> v5.0.

There isn't anything we can do about this, as it is simply how PHP works. PHP and SQLite aren't able to parallelize this task.

Well, not really. The browser merely sees an error 500 being returned from the server when PHP is killed due to memory exhaustion. The error message that gets logged by PHP into /var/log/lighttpd/error.log is not available to the client.

Just printing the message

"Error: Not enough RAM allocated to PHP"

because we assume that this may be the reason can easily be misleading: just imagine a user has modified their local source code and the error 500 is actually caused by a syntax error.

However, I see two possible solutions here:

  1. Extend the text we show

    An unknown error occurred with loading the data

    to something like

    An unknown error occurred with loading the data.
    Check the server's log files (/var/log/lighttpd/error.log when you're using the default Pi-hole web server) for details. You may need to increase the memory available for Pi-hole in case you requested a lot of data.

  2. While it is not possible to catch fatal errors (as they are fatal and not simple exceptions), it is actually possible to implement our own error handling routines which could handle this kind of error (a minimal sketch follows below). However, this is somewhat lower-level and should be explored for v5.1.
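
For the record, option 2 could look something like the sketch below; this only illustrates the register_shutdown_function() pattern and is not actual Pi-hole code (the message text is made up):

<?php
// Example shutdown handler: turn a fatal "Allowed memory size ... exhausted"
// abort into a readable JSON error instead of a bare 500 page.
register_shutdown_function(function () {
    $err = error_get_last();
    if ($err !== null && $err['type'] === E_ERROR
        && strpos($err['message'], 'Allowed memory size') === 0) {
        if (!headers_sent()) {
            http_response_code(500);
            header('Content-Type: application/json');
        }
        echo json_encode([
            'error' => 'PHP ran out of memory while building the result. '
                     . 'Try a shorter time range or raise memory_limit in php.ini.'
        ]);
    }
});

In practice one would probably also reserve a small chunk of memory up front and free it inside the handler, so the handler itself has room to run after the limit has been hit.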

I'm assigning myself to this topic to remind myself to have a look.
