You may need to increase the memory available...

I get the error shown in the attached screenshot.

Long-term data / Query Log queries further back than 'Yesterday' are not possible.
I'm running Docker and can't find an error.log file to read.

My Portainer shows the following, among much other info, but these entries point to /lighttpd/, which I can't find:

```
PHP_ENV_CONFIG   /etc/lighttpd/conf-enabled/15-fastcgi-php.conf
PHP_ERROR_LOG    /var/log/lighttpd/error.log
```
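Since that path lives inside the container, you can read it from the host with docker exec; a sketch, assuming your container is named pihole:

```shell
# Print lighttpd's PHP error log from inside the running container
# ("pihole" is the assumed container name; adjust to yours)
docker exec pihole cat /var/log/lighttpd/error.log
```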

My docker-compose .yml:

```yaml
services:
  pihole:
    container_name: pihole
    cap_add:
      - NET_ADMIN
    dns:
      - ${NS1}
      - ${NS2}
    environment:
      - DNS1=${NS1}
      - DNS2=${NS2}
      - ServerIP=${PIHOLE_SERVERIP}
      - TZ=${TZ}
    logging:
      driver: json-file
      options:
        max-file: 10
        max-size: 200k
    restart: unless-stopped
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKERCONFDIR}/pihole/dnsmasq.d:/etc/dnsmasq.d
      - ${DOCKERCONFDIR}/pihole/pihole:/etc/pihole
      - ${DOCKERSHAREDDIR}:/shared
```

If you are talking about the

```yaml
logging:
  driver: json-file
  options:
    max-file: 10
    max-size: 200k
```

section: I have already raised it to 500k and still get the same error, though that seemed like the easy solution. But I still don't see an error.log of any kind. Is there a line that should be added or edited to actually create a log file so I can try to debug?

I think that web interface error comes from Docker's small default /dev/shm (this applies to all Docker containers, not just ours). shm_size can be customized in docker-compose as described in that link.
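A minimal sketch of what that could look like in docker-compose (the 256mb value is an arbitrary example; Docker's default is 64 MB):

```yaml
services:
  pihole:
    # Enlarge /dev/shm beyond Docker's 64 MB default
    shm_size: '256mb'
```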

Same issue here.
Anything beyond "yesterday" and the error pops up.

I've bind mounted /dev/shm to the container, and (via Portainer) there is no set limit on memory.

```
# docker exec pihole df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           7.8G  207M  7.6G   3% /dev/shm
```

The lighttpd error log shows:

```
2020-09-28 10:54:43: (log.c.217) server started
2020-09-28 10:55:57: (mod_fastcgi.c.2543) FastCGI-stderr: PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 20480 bytes) in /var/www/html/admin/api_db.php on line 112
```

/dev/shm is RAM, don't bind mount it.

The problem you have is PHP allocation: you need to increase the amount of memory PHP is allowed to use. You currently allow PHP only 128M.

If using default php-cgi and not php-fpm:
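That php.ini edit can be sketched like this (a hedged example: the container name pihole, the PHP version, and the php.ini path are assumptions; check /etc/php/*/cgi/ on your system):

```shell
# Raise PHP's memory_limit inside the container.
# Assumptions: container named "pihole", PHP 7.3, default php-cgi php.ini path.
docker exec pihole sed -i 's/memory_limit = 128M/memory_limit = 512M/' /etc/php/7.3/cgi/php.ini

# Verify the change took
docker exec pihole grep memory_limit /etc/php/7.3/cgi/php.ini
```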

Not sure how to make that persistent in Docker.


You can copy the php.ini (or the folder) out of the container, and bind mount it back in, which should make it persist.
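For example, after copying the file out with docker cp pihole:/etc/php/7.3/cgi/php.ini ./pihole/php.ini and editing memory_limit there (container name, PHP version, and paths are assumptions), the bind mount could look like:

```yaml
services:
  pihole:
    volumes:
      # Mount the edited php.ini back over the container's copy
      - ./pihole/php.ini:/etc/php/7.3/cgi/php.ini:ro
```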

I do note that after a restart/flush logs, the error doesn't occur; it's only after a day's logs, when clearly something got large. My FTL.db was 1.6 GB when I last checked, and it doesn't shrink when "Flush logs" is run (not sure if that's expected?).

Also, the Docker image doesn't have vi or nano installed, and "service X restart" doesn't work within the containers either. pihole restartdns kind of works, and kill -9'ing the php-cgi processes works, as they are "supervised" and get restarted.

Looks like that one only flushes the log files and not the data in the database:

> When invoked manually, this command will allow you to empty Pi-hole's log, which is located at /var/log/pihole.log. The command also serves to rotate the log daily, if the logrotate application is installed.

That is correct on both counts. The containers are not meant to be edited while in use. There are no extra packages inside them, and there is no traditional init system; s6-init handles some of the init system's functions, but Docker as a concept does not use in-container inits.

Well for the time being I've kludged it to 256M and will see what occurs tomorrow :slight_smile:

It should also remove the last 24 h from the ftl.db, but this is not reflected in the file size, as deleted entries do not free space; they are overwritten by new entries. Unless you vacuum the database, it will only ever grow.
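On a bare-metal install, vacuuming could look like the sketch below (stop FTL first so the database file isn't being written to; the database path shown is the default one):

```shell
# Stop the FTL daemon so the database is not in use
sudo service pihole-FTL stop

# Rewrite the database file, reclaiming space freed by deleted entries
sudo sqlite3 /etc/pihole/pihole-FTL.db "VACUUM;"

sudo service pihole-FTL start
```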

So, how does this get fixed? I am interested in being able to see info past yesterday.
Thanks in advance!


Does anybody have a solution here? I have the same problem, but I'm not using Docker.
If I increase the PHP memory limit, the system freezes, even though I have 2 GB of free RAM available.


Did you increase the php memory limit already?

Yes, I tried it in several steps:

  1. 256M
  2. 512M
  3. 1024M

...but it did not fix the problem. So I did a clean new install of Ubuntu Core, then updated and upgraded everything. Now it is Ubuntu 20.04.2 LTS.

After this, I installed Pi-hole and directly set the memory limit to 1000M.
Today it's about one month since I did this installation, and it seems to work fine. Now I can retrieve 25+ days of data :slight_smile:

So for all others out there with the same problem: maybe an OS update fixes it :wink:
For me it also helped with core temperature, which is about 5 °C lower now.

The most realistic fix at this stage is probably to wait for v6.0, which will include, among other things, server-side pagination for the datatables, making OOM exceptions a thing of the past!


<1a62da6bfbb5> is the name of my Pi-hole Docker container (replace it with the name of your own Pi-hole container).

<2048M> is the new memory limit I set for my Pi-hole container (replace this with the amount of memory you want).

I used this to get the name of my pi-hole docker:

```shell
docker container list
```

Results of the command:

```
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS                   PORTS               NAMES
1a62da6bfbb5        pihole/pihole:latest   "/s6-init"               21 minutes ago      Up 6 minutes (healthy)                       pihole-template
```

I used this to change the php.ini file:

```shell
docker exec 1a62da6bfbb5 sed -i 's/memory_limit = 128M/memory_limit = 2048M/g' /etc/php/7.3/cgi/php.ini
```

I used this to confirm the changes took:

```shell
docker exec 1a62da6bfbb5 cat /etc/php/7.3/cgi/php.ini | grep memory_limit
```

I then restarted the docker container for pi-hole.

Which would have restarted the container and wiped out any changes you made.

I fixed the wording in my previous post.

From: I then restarted the docker
To: I then restarted the docker container for pi-hole