Querying long-term data failed due to the limited memory allowed for PHP, resulting in "An unknown error occurred with loading the data"

This limit is typically 128 MB. Either your device doesn't have enough memory available or (more likely) there is some configuration on your device that limits the amount of memory PHP may use. When it hits this barrier, PHP is killed and you see the error on your web interface.

Look at

http://php.net/memory_limit
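
If you are not sure which limit currently applies, you can check it with something like the following (the paths assume a Debian/Ubuntu-style PHP layout and may differ on your system):

# show the configured memory_limit for every installed PHP SAPI
grep -R "^memory_limit" /etc/php/

# the limit as seen by the PHP CLI, if it is installed (the web server's SAPI may use a different php.ini)
php -r 'echo ini_get("memory_limit"), PHP_EOL;'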

I’m running Pi-hole in an Ubuntu 18.04 LTS server VM. I'll investigate where I can make this change. Thanks for the feedback.

I had to bump my VM to 4 GB of RAM and set PHP to consume up to 2 GB. Once I made this change, everything started to work. This implies to me that my 1 GB RAM Pi 3B+ won't be able to run this in the manner I would like from a logging perspective. I wonder: is this a bug, or is this the intended behaviour of the new Pi-hole update?

Just wanted to jump in here and say that I recently ran into the same problem.

My setup has 14 devices in total and generates an average of 60k to 80k queries per day (with some days peaking at 100k). A 1 GB memory_limit, which is what I've set now, seems to be working, but the performance is lacking: loading "this year" takes almost 30 seconds to respond and download the data, plus another 10-15 seconds for the browser to start responding again. It needs to fetch ~83 MB of data for the 1.4M+ queries in that time period.
This is not ideal, especially when the number of clients starts growing.

Thankfully (?) I'm running on a Raspberry Pi 4 that is pretty much only doing Pi-hole, so I was able to push PHP's memory to 1 GB, and I have 4 cores for the processing plus another 3 GB of RAM for the Pi. But this doesn't apply to everyone (like Gordon, who is using a Pi 3B+, or me last month, running it on an "original" 3B).

Would you consider, for the next release, server-side (in SQLite) query paging as a requirement?
A big change would be in the search function, which would now need to do AJAX requests to the backend (and those would need to be implemented as LIKE and ORDER BY queries, or the backend would run into the same exact bottleneck) instead of returning everything to the client.
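
For illustration only (this is not Pi-hole's actual implementation, and the column names are just what the current queries table appears to use), one page of such a search would boil down to something like:

# page 3 (100 rows per page) of all 2020 queries matching a domain filter, newest first
sqlite3 /etc/pihole/pihole-FTL.db "SELECT timestamp, type, status, domain, client FROM queries WHERE timestamp BETWEEN 1577836800 AND 1609459199 AND domain LIKE '%google%' ORDER BY timestamp DESC LIMIT 100 OFFSET 200;"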

We have server-side pagination already implemented for our next-gen API, which is currently scheduled for Pi-hole v6.0.

I'm encountering the same issue.

Using the long-term database to query 30 days results in "An unknown error occurred with loading the data" and

(mod_fastcgi.c.421) FastCGI-stderr: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 16777224 bytes) in /var/www/html/admin/api_db.php on line 148

in /var/log/lighttpd/error.log

I'm running vanilla Armbian on a NanoPi Neo Plus 2 (1 GB RAM, 128M allowed for PHP) with only Pi-hole installed and running. I have a small setup with at most 15 clients and <15k queries a day.

Changing memory_limit in /etc/php/7.3/cgi/php.ini to 512M and

sudo service lighttpd force-reload

solved the issue partially.

Querying 30 days now returns 442,000 queries; "this year" and "all time" still failed despite the 512M limit.

I also saw that only 1 CPU is used during processing.

I'm aware this issue is due to memory constraints, but is it possible to change the warning in the web GUI to something more meaningful ("Error: Not enough RAM allocated to PHP")? The issue will likely pop up more often when v5.0 is released.

Set the memory_limit to -1 (unlimited) and it will work.

Doesn't that risk that the device runs out of memory completely?


It does.
It worked for "this year" (1.3M queries), but the device became totally unresponsive when querying "all time". It ran out of memory, ran out of swap, and Pi-hole didn't resolve DNS anymore. I had to hard-reset it after a few minutes.

I would not recommend setting it to -1.

Works fine for the NanoPi NEO LTS that I'm running on. 512 MB RAM / 256 MB swap.

Maybe your database is much smaller? How many queries do you retrieve for 'all time'?

I don't think 30 seconds is a bad number at all for 1.4M queries! I'd actually have anticipated this to be much slower. Imagine what you are doing: you are requesting the data from PHP, a high-level language not really meant for data processing. Dividing your 30 seconds by the 1.4M queries means your Pi-hole is only spending 21 µs (yes, micro, not milli) per query. This is actually very, very fast.

But I still see your point, and server-side pagination is definitely something we have already implemented and tested for the next generation of Pi-hole (where it is written in C and compiled to native machine code for maximum speed). However, you'll also understand that we will not backport this into PHP for Pi-hole v5.x.

It all comes down to saying: don't do this :wink: Don't query a year's worth of domains. If you want to do research spanning large time windows, I recommend interacting directly with the long-term database.

For instance,


sqlite3 /etc/pihole/pihole-FTL.db "SELECT COUNT(*) FROM queries WHERE domain = 'google.com' AND client = '192.168.0.1';"

will compute the number of times google.com has been requested by 192.168.0.1 within your long-term data. This query will be both faster and far less memory-consuming than doing something like this through the Pi-hole frontend.
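
In the same spirit, larger analyses can often be pushed entirely into SQLite. As a sketch (again assuming the standard queries table with Unix timestamps), a per-day query count over the last year:

# count queries per day for the last 365 days, entirely inside SQLite
sqlite3 /etc/pihole/pihole-FTL.db "SELECT date(timestamp, 'unixepoch') AS day, COUNT(*) FROM queries WHERE timestamp > CAST(strftime('%s', 'now', '-365 days') AS INTEGER) GROUP BY day ORDER BY day;"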

Well, 442,000 queries. That's quite a lot. Assume the average domain is 20 bytes in length; then there is the timestamp, the client IP, the reply type and delay, etc., easily making 100 bytes for each result row.

While this is already a quite conservative estimate, the raw payload alone adds up to 442,000 × 100 bytes ≈ 44 megabytes. Add on top the bookkeeping data PHP needs to build the result array, which easily multiplies this figure several times over. Furthermore, there is also the SQL interface querying this data, which needs memory of its own so it can process the request with all the filtering by time and maybe status. There isn't much we can do about this raw amount of data.

FTL can handle even much larger amounts of queries; however, that's because it uses a clever (are you allowed to say this about your own thing?) compression/memory-optimization algorithm that ensures not only minimal overhead but also a reduced need for memory when a given string (like a domain name) comes up several times. With PHP, we cannot do this.

I don't think that this has changed from v4.x -> v5.0

There isn't anything we can do about this, as it is simply how PHP works. PHP and SQLite aren't able to parallelize this task.

Well, not really. The browser merely sees an error 500 being returned from the server when PHP is killed due to memory exhaustion. The error message that gets logged by PHP into /var/log/lighttpd/error.log is not available to the client.
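
If you want to confirm that memory exhaustion really is the cause, the fatal error can be pulled straight out of that log (the path assumes the default Pi-hole lighttpd setup):

# show the most recent PHP memory-exhaustion errors logged by lighttpd
sudo grep "Allowed memory size" /var/log/lighttpd/error.log | tail -n 5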

Just printing the message

"Error: Not enough RAM allocated to PHP"

because we assume that this may be the reason can easily be misleading; just imagine a user has modified their local source code and the error 500 is actually caused by a syntax error.

However, I see two possible solutions here:

  1. Extend the text we show

    An unknown error occurred with loading the data

    to something like

    An unknown error occurred with loading the data.
    Check the server's log files (/var/log/lighttpd/error.log when you're using the default Pi-hole web server) for details. You may need to increase the memory available for Pi-hole in case you requested a lot of data.

  2. While it is not possible to catch fatal errors (as they are fatal and not simple exceptions), it is actually possible to implement our own error-handling routines that could handle this kind of error. However, this is somewhat lower-level and should be explored for v5.1.

I'm assigning myself to this topic to remind myself to have a look.

Seeing those actual numbers makes it quite clear why the device runs out of memory...

... maybe I should just restrain myself from executing huge queries.

You are if it's true :wink:

I think number one is sufficient; number two might not be worth the effort just for error handling.

I thought about an (additional) third possibility: warning the user if a huge number of queries has to be processed. This could be done quickly by counting the number of queries between the selected timestamps. As the database has an index on timestamp, you could easily subtract the corresponding ROWIDs. After selecting the time range, this calculation could be done and a dialog presented to the user ("Your selection will retrieve >100,000 queries. There is a risk that your device runs out of memory. Do you really want to proceed?").
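
Something along these lines, as a rough sketch (it assumes the timestamp index and that ROWIDs grow monotonically with time; May 2020 is used as an example range):

# exact count of queries in the selected time range
sqlite3 /etc/pihole/pihole-FTL.db "SELECT COUNT(*) FROM queries WHERE timestamp BETWEEN 1588291200 AND 1590969599;"

# or the cheaper ROWID-difference estimate mentioned above
sqlite3 /etc/pihole/pihole-FTL.db "SELECT MAX(rowid) - MIN(rowid) FROM queries WHERE timestamp BETWEEN 1588291200 AND 1590969599;"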

This would need a ping-pong approach where we do extra work to send something to the user. Also the limit is fairly arbitrary. 100,000 may already be too much for a 128 MB SoC.

We'll likely go with option 1

Oh I absolutely agree :smiley:
I was trying to think from a normal-user perspective though: seeing their browser (or at least tab) completely stuck for 30+ seconds is somewhat unpleasant.
As you said, though, it's unavoidable.
FWIW, I've been running 5.x for a couple of months now (due to an unrelated problem) and I'm very happy with it. If push comes to shove, I can definitely either use SQL or push everything into my Elasticsearch instance and do the analysis there.

That's great to hear! I didn't think this was going to end up in 5.x, but the fact this is already implemented in 6.x is awesome. Keep up the great work! When 6.x is stable-ish, even if alpha, I'll definitely help testing!

I guess I figured out one thing my constantly-overheating RPi 4 is good for..? :stuck_out_tongue:

Thanks again for the info!

Hi All,

I understand this is not an issue that was resolved with v5.0 (non-beta), as I encountered the same "An unknown error occurred while loading the data" message when trying to fetch the query log even just for the current month (so from May 1st till the 13th, with ~25k queries per day).

The /var/log/lighttpd/error.log also shows

2020-05-13 22:31:03: (mod_fastcgi.c.421) FastCGI-stderr: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4096 bytes) in /var/www/html/admin/api_db.php on line 112

or

2020-05-13 22:58:16: (mod_fastcgi.c.421) FastCGI-stderr: PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 14684160 bytes) in /var/www/html/admin/api_db.php on line 422

Pi-hole v5.0 is running here on a Raspberry Pi 4 with 2 GB RAM - nothing else on it.

Are there any recommendations for this memory_limit value in /etc/php/7.3/cgi/php.ini for this particular case or would you have any other advice on how to proceed with this?

Thanks a lot for your help!

Chris

This issue cannot be solved easily. It will be solved with Pi-hole v6.0, which targets server-side pagination to reduce the workload on the PHP engine. For now, you can set the memory limit to whatever seems appropriate for your individual machine.

For instance:

  • You have 2 GB of memory
  • Typically, only 200 MB are used by the system and running processes
  • (2 GB - 200 MB) * 75% safety margin = 1350 MB as upper limit for PHP
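
Applying such a value then only takes editing the limit and reloading the web server (the path assumes PHP 7.3 with the CGI SAPI, as used by the default lighttpd setup; adjust it to your version):

# raise the PHP memory limit to the value calculated above
sudo sed -i 's/^memory_limit = .*/memory_limit = 1350M/' /etc/php/7.3/cgi/php.ini
sudo service lighttpd force-reload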

2 posts were split to a new topic: How do I increase PHP's memory?