You may need to increase the memory available...

[Screenshot attached: Annotation 2020-08-29 163222]
I get the error shown in the attached screenshot.

Viewing Long-term data / the Query Log further back than 'Yesterday' is not possible.
I'm running Docker and can't find an error.log file to read.

Portainer shows the following, among a lot of other info, but it points to /lighttpd/, which I can't find.

ENV
PHP_ENV_CONFIG	/etc/lighttpd/conf-enabled/15-fastcgi-php.conf
PHP_ERROR_LOG	/var/log/lighttpd/error.log
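
(For reference, those paths are inside the container rather than on the host; a sketch for reading that log from the host, assuming lighttpd has actually written the file in this image:)

docker exec pihole cat /var/log/lighttpd/error.log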

Docker Compose .yml:

services:
  pihole:
    cap_add:
      - NET_ADMIN
    container_name: pihole
    dns:
      - 127.0.0.1
      - ${NS1}
      - ${NS2}
    environment:
      - DNS1=${NS1}
      - DNS2=${NS2}
      - ServerIP=${PIHOLE_SERVERIP}
      - TZ=${TZ}
      - WEBPASSWORD=${PIHOLE_WEBPASSWORD}
    image: pihole/pihole:latest
    logging:
      driver: json-file
      options:
        max-file: "10"
        max-size: "200k"
    restart: unless-stopped
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${DOCKERCONFDIR}/pihole/dnsmasq.d:/etc/dnsmasq.d
      - ${DOCKERCONFDIR}/pihole/pihole:/etc/pihole
      - ${DOCKERSHAREDDIR}:/shared

diginc
If you are talking about the

logging:
  driver: json-file
  options:
    max-file: "10"
    max-size: "200k"

I have already increased it to 500k and still get the same error, even though that seemed like the easy solution. I still don't see an error.log of any kind. Is there a line that should be added or edited to actually create a log file so I can try to debug this?

I think that web interface error comes from Docker giving containers a small /dev/shm by default (all Docker containers, not just ours). shm_size can be customized in docker-compose as described in that link.
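
For example, something like this in the service definition (a sketch only; the size value is an arbitrary illustration, not a tested recommendation):

services:
  pihole:
    shm_size: "256mb"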

Same issue here.
Anything beyond "yesterday" and the error pops up.

I've bind-mounted /dev/shm into the container, and (via Portainer) there is no memory limit set on the container.

# docker exec pihole df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           7.8G  207M  7.6G   3% /dev/shm
2020-09-28 10:54:43: (log.c.217) server started
2020-09-28 10:55:57: (mod_fastcgi.c.2543) FastCGI-stderr: PHP Fatal error:  Allowed memory size of 134217728 bytes exhausted (tried to allocate 20480 bytes) in /var/www/html/admin/api_db.php on line 112

/dev/shm is RAM-backed; don't bind mount it.

The problem you have is PHP's memory allocation; you need to increase the amount of memory allowed to PHP. You currently only allow PHP 128M (the 134217728 bytes in the error).

If using default php-cgi and not php-fpm:
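
Something along these lines should raise the limit inside the running container (a sketch; the php.ini path depends on the PHP version shipped in the image, and sed/pidof are assumed to be present):

docker exec pihole sh -c "sed -i 's/^memory_limit = .*/memory_limit = 256M/' /etc/php/*/cgi/php.ini"
# php-cgi is supervised, so killing it makes it come back with the new limit
docker exec pihole sh -c 'kill -9 $(pidof php-cgi)'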

Not sure how to make that persistent in Docker.

You can copy the php.ini (or the whole folder) out of the container and bind mount it back in, which should make the change persist.
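
Roughly like this (a sketch; the in-container PHP version and the host location are assumptions based on the compose file above, so check the real path first with docker exec pihole ls /etc/php):

docker cp pihole:/etc/php/7.3/cgi/php.ini ${DOCKERCONFDIR}/pihole/php.ini
# edit memory_limit in that copy, then add it to the volumes: section of the compose file:
#   - ${DOCKERCONFDIR}/pihole/php.ini:/etc/php/7.3/cgi/php.ini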

I do note that after a restart/log flush the error doesn't occur - it only appears after a day's worth of logs, when something has clearly grown large. My FTL.db was 1.6 GB when I last checked, and it doesn't shrink when "Flush logs" is run (not sure whether that's expected).

Also, the Docker image doesn't have vi or nano installed, and "service X restart" doesn't work inside the container either - pihole restartdns sort of works, and kill -9'ing the php-cgi processes works because they are "supervised" and get restarted.

Looks like that one only flushes the log files and not the data in the database:

When invoked manually, this command will allow you to empty Pi-hole's log, which is located at /var/log/pihole.log. The command also serves to rotate the log daily, if the logrotate application is installed.

https://docs.pi-hole.net/core/pihole-command/#log-flush
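
From the host that would be something like (assuming the standard pihole CLI is on the container's PATH):

docker exec pihole pihole flush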

That is correct on both counts. The containers are not meant to be edited while in use. There are no extra packages inside them, and there is no init system. s6-init handles some of the init-system functions, but Docker as a concept does not use in-container inits.

Well, for the time being I've kludged it to 256M and will see what happens tomorrow :slight_smile:

It should also remove the last 24 hours from the ftl.db, but this is not reflected in the file size, as deleted entries do not free space; they are simply overwritten by new entries. Unless you vacuum the database, it will only ever grow in size.
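
If the file size bothers you, something like this should compact it (a sketch; it assumes sqlite3 on the host and the database path implied by the bind mount in the compose file above, and the container should be stopped so FTL isn't writing to the database):

docker stop pihole
sqlite3 ${DOCKERCONFDIR}/pihole/pihole/pihole-FTL.db "VACUUM;"
docker start pihole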

So, how does this get fixed? I'm interested in being able to see info from before yesterday.
Thanks in advance!


Does anybody have a solution here? I have the same problem, but I'm not using Docker.
If I increase the PHP memory, the system freezes, even though I have 2 GB of free RAM available.

Did you increase the php memory limit already?