Docker image run-time (internal) process errors

After being away for a week, I came home to find my Pi-hole instance not working correctly. It still appeared to be resolving DNS, but the admin portal was not accepting connections. This seems to have been a common issue in the past.

On my Raspberry Pi, port 80 was in use on the host, bound to the Docker container as per the run script.
Inside the running container, nothing was listening on port 80.
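For reference, this is roughly how the two views can be compared (a sketch: it assumes ss is available on the host, while the container image may ship neither ss nor netstat, hence the ps fallback inside it):

# On the host: what is bound to port 80?
sudo ss -tlnp | grep ':80'

# Inside the container: is anything listening, and what is running?
docker exec pihole ss -tlnp 2>/dev/null || docker exec pihole ps -ef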

Rather than troubleshooting the httpd process, I removed my pihole/pihole image and pulled it again. The image had last been updated 13 days ago.

However, the same symptoms occurred.
Looking at the runtime processes inside the container, it looks like there may be shell scripting errors in the startup scripts, as the ps output shows "if/then" logic.
For example:
if /etc/s6/init/init-stage2-redirfd foreground if if s6-echo -n -- [s6-init] making user provided files available at /var/run/s6/etc...

"ps -ef" listing:

pi@raspberrypi:~ $ docker exec -it pihole /bin/bash
root@25655f99589f:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 01:40 ? 00:00:00 s6-svscan -t0 /var/run/s6/services
root 29 1 0 01:40 ? 00:00:00 foreground if /etc/s6/init/init-stage2-redirfd foreground if if s6-echo -n -- [s6-init] making user provided files available at /va
root 30 1 0 01:40 ? 00:00:00 s6-supervise s6-fdholderd
root 37 29 0 01:40 ? 00:00:00 if /etc/s6/init/init-stage2-redirfd foreground if if s6-echo -n -- [s6-init] making user provided files available at /var/run/s6/etc...
root 38 37 0 01:40 ? 00:00:00 foreground if if s6-echo -n -- [s6-init] making user provided files available at /var/run/s6/etc... foreground backtick -n S6_RUNTIME_
root 43 38 0 01:40 ? 00:00:00 if if -t s6-test -d /var/run/s6/etc/cont-init.d if s6-echo [cont-init.d] executing container initialization scripts... if pipeline s6-ls
root 212 43 0 01:40 ? 00:00:00 if pipeline s6-ls -0 -- /var/run/s6/etc/cont-init.d pipeline s6-sort -0 -- forstdin -o 0 -0 -- i importas -u i i if s6-echo --
root 215 212 0 01:40 ? 00:00:00 forstdin -o 0 -0 -- i importas -u i i if s6-echo -- [cont-init.d] ${i}: executing... foreground /var/run/s6/etc/cont-init.d/${i} importas -u ? ? if s6-echo
root 216 215 0 01:40 ? 00:00:00 [s6-ls]
root 217 215 0 01:40 ? 00:00:00 [s6-sort]
root 218 215 0 01:40 ? 00:00:00 foreground /var/run/s6/etc/cont-init.d/20-start.sh importas -u ? ? if s6-echo -- [cont-init.d] 20-start.sh: exited ${?}. ifelse s6-test 2 -eq 0 exit 0
root 220 218 0 01:40 ? 00:00:00 bash /var/run/s6/etc/cont-init.d/20-start.sh
root 330 1 3 01:40 ? 00:00:01 pihole-FTL test
root 345 220 0 01:40 ? 00:00:00 bash /opt/pihole/gravity.sh
root 410 0 2 01:41 pts/0 00:00:00 /bin/bash
root 417 345 0 01:41 ? 00:00:00 timeout 1 getent hosts pi.hole
root 418 417 0 01:41 ? 00:00:00 getent hosts pi.hole
root 419 410 0 01:41 pts/0 00:00:00 ps -ef
root@25655f99589f:/#
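Note the two processes at the bottom of that listing: gravity.sh has spawned timeout 1 getent hosts pi.hole, i.e. it is waiting on name resolution. That check can be repeated by hand (assuming the container is named pihole as above):

# Does the container resolve its own hostname?
docker exec pihole getent hosts pi.hole; echo "exit: $?"
# A non-zero exit, or a one-second stall per attempt, would suggest lookups
# are failing inside the container and gravity.sh is hanging on them.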

Expected Behaviour:

The admin page should load and accept connections.

Actual Behaviour:

The admin page is not accepting requests.
The process listing inside the Docker container looks wrong.

Debug Token:

pihole -d generated a script error at the end of the report process.

[?] Would you like to upload the log? [y/N] y
* Using curl for transmission.
/opt/pihole/piholeDebug.sh: line 1151: warning: command substitution: ignored null byte in input

The debug log did show:
*** [ DIAGNOSING ]: Pi-hole processes

[✗] lighttpd daemon is inactive

[✗] pihole-FTL daemon is inactive
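Those two daemons can also be double-checked from the host; a quick sanity check (plain ps is used since pgrep may not be present in the image):

docker exec pihole sh -c 'ps -ef | grep -E "lighttpd|pihole-FTL" | grep -v grep'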

It also seems like the Docker container continually restarts:

pi@raspberrypi:~ $ date
Sat Jun 15 14:09:29 UTC 2019
pi@raspberrypi:~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25655f99589f pihole/pihole:latest "/s6-init" 13 hours ago Up 2 minutes (healthy) 0.0.0.0:53->53/tcp, 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:53->53/udp, 67/udp pihole
pi@raspberrypi:~ $ date; docker ps
Sat Jun 15 14:45:44 UTC 2019
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25655f99589f pihole/pihole:latest "/s6-init" 14 hours ago Up 4 minutes (healthy) 0.0.0.0:53->53/tcp, 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:53->53/udp, 67/udp pihole
pi@raspberrypi:~ $
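One way to confirm the restarts without watching docker ps in a loop, assuming Docker's standard inspect fields:

docker inspect -f 'restarts={{.RestartCount}} health={{.State.Health.Status}} started={{.State.StartedAt}}' pihole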

Either the health check is force-restarting the container because it is unhealthy, or it is crashing. docker logs pihole should tell you what is happening. We don't need to see all of it, just the part around a restart, so you might add | tail -200 to avoid printing too much.
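For example (with 2>&1 added since some of the output may go to stderr):

docker logs pihole 2>&1 | tail -200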

You might try starting a fresh container without any pre-existing volume data as a control to see if we should be looking closer at your volume configs for corruption.
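Something like this, as a rough sketch (no volumes mounted, a throwaway name, and alternate host ports so it can run next to the existing container; the TZ value is illustrative):

docker run -d --name pihole-test \
  -e TZ=UTC \
  -p 5353:53/tcp -p 5353:53/udp -p 8080:80/tcp \
  pihole/pihole:latest

# then see whether the control container's admin page responds
curl -I http://localhost:8080/admin/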

Thanks for the reply. I did pull the latest image yesterday (it had been updated approximately 2 weeks ago).

I will attach the log files later tonight.

pihole_docker.logs.txt (42.6 KB)

Looks like you're hitting a pretty standard bug:

WARNING Misconfigured DNS in /etc/resolv.conf: Primary DNS should be 127.0.0.1 (found 192.168.1.197)

The workaround is to set up your docker run to include --dns 127.0.0.1 --dns 1.1.1.1
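With that, the relevant part of the run script would look something like this (container name and port mappings taken from the docker ps output above; everything else unchanged from your existing script):

docker run -d --name pihole \
  --dns 127.0.0.1 --dns 1.1.1.1 \
  -p 53:53/tcp -p 53:53/udp -p 80:80/tcp -p 443:443/tcp \
  pihole/pihole:latest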

Hello!

Adding the --dns options to the run script fixed the problem!

I was skeptical, but it worked. It's just that my Pi-hole ran for over 2 weeks with no problems, and then all of a sudden it stopped working. (The internal Docker ps -ef output is nice and clean now.)

Oh well.

Thanks for the help.
