Pi-Hole doesn't start again after stopping

I use Pi-hole on a Raspberry Pi as a Docker container. When I stop the Pi-hole container manually, I can't start it again; I always get the error "Request failed with status code 403". If I click on Duplicate and then deploy the container without changing anything, Pi-hole starts again.

Pi-hole also doesn't start again automatically when I restart the Raspberry Pi or when Watchtower has just replaced the Pi-hole image. What can I do to enable an automatic restart? The restart policy is set to Always, of course.
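
For reference, the restart policy Docker has actually stored for the container can be checked, and changed without recreating the container, from the CLI. A minimal sketch, assuming the container is named pihole as it is later in this thread:

$ docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' pihole   # prints e.g. "always"
$ docker update --restart=always pihole                                 # set it without recreating the container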

That sounds as if you aren't managing Docker via the CLI, but are using some third-party web UI instead.

Since 403 is an HTTP response code, your observation would be specific to that web UI rather than the Pi-hole container.

That's right, I use Portainer. If I stop the container in the terminal and try to start it, I get this:

$ docker start pihole
Error response from daemon: cannot create endpoint on configuration-only network
Error: failed to start containers: pihole

If I recreate the container, Pi-hole starts, but after stopping it I can't start it again without recreating the container.

I don't use Portainer, but that error message seems to suggest you can't simply manage the container via Docker's CLI when it was created through Portainer, unless Portainer somehow exposes its per-container configuration for Docker to consume directly.
Either that, or something could be off with your container's network configuration.

EDIT:
In the latter case, I'd have expected Portainer to show the same "Error response from daemon" you've encountered via the CLI.
The 403 response code you see instead points to some kind of permission issue, which makes it look more like a Portainer-specific problem.

Maybe someone running Portainer will be able to help you further.
You could also consider consulting Portainer's support for more knowledgeable advice.
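
If you want to rule out the network side from the CLI, something like the following shows which networks the container is attached to and whether any of them is a configuration-only template. A sketch only, using the container name pihole from your docker start command and a placeholder network name:

$ docker inspect pihole --format '{{json .NetworkSettings.Networks}}'
$ docker network inspect <network-name> --format 'driver={{.Driver}} config-only={{.ConfigOnly}}'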

Tian,

Let's try to see some logs from your container.

Please stop and start the container again, then go to the container's Log screen in Portainer:
[screenshot: Log icon (highlighted)]

Then, on the Log screen, click the Copy button and post the logs here.

I see only entries for stopping the container in the log, even though I tried to start it, without success. So again I clicked on Duplicate/Edit, deployed the container, overwriting everything without any change, and then the container started again:

[i] List stayed unchanged

[i] Building tree...
[✓] Building tree
[i] Swapping databases...
[✓] Swapping databases
[✓] The old database remains available
[i] Number of gravity domains: 2177800 (1831257 unique domains)
[i] Number of exact blacklisted domains: 0
[i] Number of regex blacklist filters: 0
[i] Number of exact whitelisted domains: 16
[i] Number of regex whitelist filters: 0
[i] Flushing DNS cache...
[✓] Flushing DNS cache
[i] Cleaning up stray matter...
[✓] Cleaning up stray matter

[✓] FTL is listening on port 53
[✓] UDP (IPv4)
[✓] TCP (IPv4)
[✓] UDP (IPv6)
[✓] TCP (IPv6)

[✓] Pi-hole blocking is enabled

Pi-hole version is v5.17.1 (Latest: v5.17.1)
AdminLTE version is v5.20.1 (Latest: v5.20.1)
FTL version is v5.23 (Latest: v5.23)
Container tag is: 2023.05.2

s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service _postFTL: stopping
s6-rc: info: service _postFTL successfully stopped
s6-rc: info: service lighttpd: stopping
Stopping lighttpd
s6-rc: info: service lighttpd successfully stopped
s6-rc: info: service pihole-FTL: stopping
Stopping pihole-FTL
s6-rc: info: service pihole-FTL successfully stopped
s6-rc: info: service _startup: stopping
s6-rc: info: service _startup successfully stopped
s6-rc: info: service _uid-gid-changer: stopping
s6-rc: info: service _uid-gid-changer successfully stopped
s6-rc: info: service cron: stopping
Stopping cron
s6-rc: info: service cron successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped

Next time it refuses to start, please copy all the logs and paste them here. I hope we find the reason in the logs.
You can also turn on the "Display timestamps" control; this will show the time when the container stopped.
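
For anyone following along without Portainer, the same log with timestamps is available via the Docker CLI. A sketch, assuming the container is named pihole:

$ docker logs --timestamps --tail 200 pihole
$ docker logs --timestamps --follow pihole    # keep watching while you try to start the container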


The log shows the container was stopped.

Did you stop the container on purpose or did it stop by itself?

2023-06-02T18:41:08.928497760Z   [i] Target: xxx
2023-06-02T18:41:10.304690539Z   [i] Status: Pending...
  [✓] Status: Retrieval successful
2023-06-02T18:41:17.689530087Z 
  [i] Processed 1% of downloaded list
  (...)
  [i] Processed 99% of downloaded list
  [✓] Parsed 826133 exact domains and 0 ABP-style domains (ignored 43 non-domain entries)
2023-06-02T18:41:17.689780916Z       Sample of non-domain entries:
2023-06-02T18:41:17.689809546Z         - "xxx"
2023-06-02T18:41:17.689827805Z         - "xxx"
2023-06-02T18:41:17.689845175Z         - "xxx"
2023-06-02T18:41:17.689861823Z         - "xxx"
2023-06-02T18:41:17.689878415Z         - "xxx"
2023-06-02T18:41:17.689895174Z 
2023-06-02T18:41:17.799466324Z   [i] List stayed unchanged
2023-06-02T18:41:17.818762154Z 
2023-06-02T18:41:17.819104482Z   [i] Target: xxx
2023-06-02T18:41:18.290705454Z   [i] Status: Pending...
  [✓] Status: Retrieval successful
2023-06-02T18:41:18.331878407Z 
  [✓] Parsed 0 exact domains and 0 ABP-style domains (ignored 55 non-domain entries)
2023-06-02T18:41:18.331944295Z       Sample of non-domain entries:
2023-06-02T18:41:18.331963961Z         - "xxx"
2023-06-02T18:41:18.332230550Z         - "xxx"
2023-06-02T18:41:18.332309363Z         - "xxx"
2023-06-02T18:41:18.332328085Z         - "xxx"
2023-06-02T18:41:18.332345455Z         - "xxx"
2023-06-02T18:41:18.332362085Z 
2023-06-02T18:41:18.336372300Z   [i] List stayed unchanged
2023-06-02T18:41:18.357635859Z 
2023-06-02T18:41:18.357776579Z   [i] Target: xxx
2023-06-02T18:41:18.966054002Z   [i] Status: Pending...
  [✓] Status: Retrieval successful
2023-06-02T18:41:19.523335848Z 
  [✓] Parsed 72893 exact domains and 0 ABP-style domains (ignored 0 non-domain entries)
2023-06-02T18:41:19.547883041Z   [i] List has been updated
2023-06-02T18:41:19.587731755Z 
2023-06-02T18:41:19.588177822Z   [i] Target: xxx
2023-06-02T18:41:20.123967409Z   [i] Status: Pending...
  [✓] Status: Retrieval successful
2023-06-02T18:41:20.269361874Z 
  [✓] Parsed 4 exact domains and 7323 ABP-style domains (ignored 6676 non-domain entries)
2023-06-02T18:41:20.269453872Z       Sample of non-domain entries:
2023-06-02T18:41:20.269473650Z         - "-*-*-*-*.panda^$script,third-party"
2023-06-02T18:41:20.269491242Z         - "-1688-wp-media/ads/"
2023-06-02T18:41:20.269508520Z         - "-90mh-gg"
2023-06-02T18:41:20.269525612Z         - ".bid/ads/"
2023-06-02T18:41:20.269542630Z         - ".blog/ads/"
2023-06-02T18:41:20.269558852Z 
2023-06-02T18:41:20.281086747Z   [i] List has been updated
2023-06-02T18:41:20.300475965Z 
2023-06-02T18:41:20.300942680Z   [i] Target: xxx
2023-06-02T18:41:20.737611491Z   [i] Status: Pending...
  [✓] Status: Retrieval successful
2023-06-02T18:41:21.235220005Z 
  [✓] Parsed 2 exact domains and 52603 ABP-style domains (ignored 576 non-domain entries)
2023-06-02T18:41:21.235314022Z       Sample of non-domain entries:
2023-06-02T18:41:21.235336300Z         - "||xxx^"
2023-06-02T18:41:21.235354262Z         - "||xxx^"
2023-06-02T18:41:21.235371281Z         - "||fxhpaoxqyajvmdg"
2023-06-02T18:41:21.235388095Z         - "||rgfftupf"
2023-06-02T18:41:21.235404743Z         - "||xxx^"
2023-06-02T18:41:21.235421002Z 
2023-06-02T18:41:21.256396788Z   [i] List has been updated
2023-06-02T18:41:21.292089974Z 
2023-06-02T18:41:29.370397019Z   [i] Building tree...
  [✓] Building tree
2023-06-02T18:41:29.401172005Z   [i] Swapping databases...
  [✓] Swapping databases
2023-06-02T18:41:29.401401186Z   [✓] The old database remains available
2023-06-02T18:41:30.348552322Z   [i] Number of gravity domains: 2177130 (1831084 unique domains)
2023-06-02T18:41:30.377865960Z   [i] Number of exact blacklisted domains: 0
2023-06-02T18:41:30.385870521Z   [i] Number of regex blacklist filters: 0
2023-06-02T18:41:30.393784990Z   [i] Number of exact whitelisted domains: 16
2023-06-02T18:41:30.401999122Z   [i] Number of regex whitelist filters: 0
2023-06-02T18:41:30.436069538Z   [i] Flushing DNS cache...
  [✓] Flushing DNS cache
2023-06-02T18:41:30.455428089Z   [i] Cleaning up stray matter...
  [✓] Cleaning up stray matter
2023-06-02T18:41:30.467047465Z 
2023-06-02T18:41:30.516676046Z   [✓] FTL is listening on port 53
2023-06-02T18:41:30.533070644Z      [✓] UDP (IPv4)
2023-06-02T18:41:30.536588830Z      [✓] TCP (IPv4)
2023-06-02T18:41:30.552293346Z      [✓] UDP (IPv6)
2023-06-02T18:41:30.555830883Z      [✓] TCP (IPv6)
2023-06-02T18:41:30.555904604Z 
2023-06-02T18:41:30.562702425Z   [✓] Pi-hole blocking is enabled
2023-06-02T18:41:30.564246160Z 
2023-06-02T18:41:33.795166235Z   Pi-hole version is v5.17.1 (Latest: v5.17.1)
2023-06-02T18:41:33.797422941Z   AdminLTE version is v5.20.1 (Latest: v5.20.1)
2023-06-02T18:41:33.799454724Z   FTL version is v5.23 (Latest: v5.23)
2023-06-02T18:41:33.802999891Z   Container tag is: 2023.05.2
2023-06-02T18:41:33.803111167Z 
2023-06-02T21:38:47.539721381Z s6-rc: info: service legacy-services: stopping
2023-06-02T21:38:47.548452380Z s6-rc: info: service legacy-services successfully stopped
2023-06-02T21:38:47.548867559Z s6-rc: info: service _postFTL: stopping
2023-06-02T21:38:47.553417510Z s6-rc: info: service _postFTL successfully stopped
2023-06-02T21:38:47.554210239Z s6-rc: info: service lighttpd: stopping
2023-06-02T21:38:47.569039257Z Stopping lighttpd
2023-06-02T21:38:47.601865674Z s6-rc: info: service lighttpd successfully stopped
2023-06-02T21:38:47.602331704Z s6-rc: info: service pihole-FTL: stopping
2023-06-02T21:38:47.616983485Z Stopping pihole-FTL
2023-06-02T21:38:47.625953461Z s6-rc: info: service pihole-FTL successfully stopped
2023-06-02T21:38:47.626632544Z s6-rc: info: service _startup: stopping
2023-06-02T21:38:47.631089162Z s6-rc: info: service _startup successfully stopped
2023-06-02T21:38:47.631673654Z s6-rc: info: service _uid-gid-changer: stopping
2023-06-02T21:38:47.636591284Z s6-rc: info: service _uid-gid-changer successfully stopped
2023-06-02T21:38:47.637555158Z s6-rc: info: service cron: stopping
2023-06-02T21:38:47.651231935Z Stopping cron
2023-06-02T21:38:47.659617624Z s6-rc: info: service cron successfully stopped
2023-06-02T21:38:47.660589110Z s6-rc: info: service legacy-cont-init: stopping
2023-06-02T21:38:47.675088319Z s6-rc: info: service legacy-cont-init successfully stopped
2023-06-02T21:38:47.675939380Z s6-rc: info: service fix-attrs: stopping
2023-06-02T21:38:47.680185261Z s6-rc: info: service fix-attrs successfully stopped
2023-06-02T21:38:47.681142673Z s6-rc: info: service s6rc-oneshot-runner: stopping
2023-06-02T21:38:47.688683541Z s6-rc: info: service s6rc-oneshot-runner successfully stopped

In this case I stopped the container myself, for demonstration. But I guess it's the same problem when Watchtower stops the container, replaces the image and tries to start the container again.

Wait... if you stop your container, this is the expected log output.

I don't use Watchtower (and we don't recommend using it for updating Pi-hole), but I think you have set Watchtower to stop the container after the update.

It can't be a Watchtower problem; it's the same when the whole Raspberry Pi restarts. All containers start correctly after that, but Pi-hole doesn't:

2023-06-04T16:31:10.255118794Z s6-rc: info: service legacy-services: stopping
2023-06-04T16:31:10.307460894Z s6-rc: info: service legacy-services successfully stopped
2023-06-04T16:31:10.307541708Z s6-rc: info: service _postFTL: stopping
2023-06-04T16:31:10.323862820Z s6-rc: info: service _postFTL successfully stopped
2023-06-04T16:31:10.323944633Z s6-rc: info: service lighttpd: stopping
2023-06-04T16:31:10.401341569Z Stopping lighttpd
2023-06-04T16:31:10.457902422Z s6-rc: info: service lighttpd successfully stopped
2023-06-04T16:31:10.458000050Z s6-rc: info: service pihole-FTL: stopping
2023-06-04T16:31:10.503894930Z Stopping pihole-FTL
2023-06-04T16:31:10.520756886Z s6-rc: info: service pihole-FTL successfully stopped
2023-06-04T16:31:10.524756642Z s6-rc: info: service _startup: stopping
2023-06-04T16:31:10.538606920Z s6-rc: info: service _startup successfully stopped
2023-06-04T16:31:10.538775713Z s6-rc: info: service _uid-gid-changer: stopping
2023-06-04T16:31:10.548619069Z s6-rc: info: service _uid-gid-changer successfully stopped
2023-06-04T16:31:10.548761029Z s6-rc: info: service cron: stopping
2023-06-04T16:31:10.589584429Z Stopping cron
2023-06-04T16:31:10.603182951Z s6-rc: info: service cron successfully stopped
2023-06-04T16:31:10.603630778Z s6-rc: info: service legacy-cont-init: stopping
2023-06-04T16:31:10.637163636Z s6-rc: info: service legacy-cont-init successfully stopped
2023-06-04T16:31:10.637976495Z s6-rc: info: service fix-attrs: stopping
2023-06-04T16:31:10.651347650Z s6-rc: info: service fix-attrs successfully stopped
2023-06-04T16:31:10.651438574Z s6-rc: info: service s6rc-oneshot-runner: stopping
2023-06-04T16:31:10.703588826Z s6-rc: info: service s6rc-oneshot-runner successfully stopped

And it stays in status "exited" until I redeploy the container.

I re-read the whole topic and noticed you never said how you started the container (we assumed you started it using Portainer).

We need more details about your config.

If you started the container using a compose file or docker run command, please post it here.

If you started it using just the Portainer interface, please let us know which options you are using (see the sketch after this list for the kind of detail that helps):

  • Are you using volumes? Which ones?
  • Which network mode are you using? host, bridge, macvlan?
  • Which ports are you mapping?
  • Are you setting any environment variables? Which ones?
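
For reference, an equivalent docker run command would carry all of that information in one place. A sketch only, with placeholder values rather than your actual configuration:

$ docker run -d --name pihole \
    --restart=always \
    --network <your-network> \
    -v /path/on/host/pihole:/etc/pihole \
    -v /path/on/host/dnsmasq.d:/etc/dnsmasq.d \
    -e TZ=Europe/Berlin \
    -e WEBPASSWORD=<password> \
    pihole/pihole:latest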

I use the Portainer interface to start Pi-hole.

ENV:

|DNS1|xxx|
|DNS2|xxx|
|DNSMASQ_USER|pihole|
|FTL_CMD|no-daemon|
|FTLCONF_LOCAL_IPV4|0.0.0.0|
|IPv6|True|
|PATH|/opt/pihole:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin|
|PHP_ERROR_LOG|/var/log/lighttpd/error-pihole.log|
|phpver|php|
|S6_BEHAVIOUR_IF_STAGE2_FAILS|2|
|S6_CMD_WAIT_FOR_SERVICES_MAXTIME|0|
|S6_KEEP_ENV|1|
|ServerIP|xxx|
|ServerIPv6|xxx|
|TZ|Europe/Berlin|
|WEBPASSWORD|xxx|

Volumes:

container /etc/dnsmasq.d -> host /home/pi/pihole/dnsmasq.d
container /etc/pihole -> host /home/pi/pihole/pihole

Ports:

53:53 TCP
80:80 TCP
443:443 TCP
53:53 UDP
67:67 UDP

Network:
net and netConf. net is set to macvlan.


Are you using both networks???

I think (90% sure) this is what is causing the error message.

I still don't know how the container manages to work when you initially start it, but this is very likely the issue.

You should not use the netConf network.

Looking at the image:

  • net is the macvlan network (this is the one you should use), and
  • netConf is a configuration-only template for that network. There is no network driver attached to it (the Driver column shows null).

Recreate the container using only the macvlan (net) network.
After the container finishes starting, stop it and then verify if it starts again.
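
From the CLI, the equivalent verification would look roughly like this. A sketch, using the container and network names from your setup:

# confirm which of the two networks is the configuration-only template
$ docker network inspect netConf --format '{{.ConfigOnly}}'   # expected: true
$ docker network inspect net --format '{{.ConfigOnly}}'       # expected: false

# after recreating the container attached only to "net", this cycle should succeed:
$ docker stop pihole
$ docker start pihole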


Note:
One more thing: you don't need to set ports when using macvlan; port mappings are ignored in that mode.
This network mode gives the container its own IP, so every port is already reachable directly.
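
To illustrate, a container attached to a macvlan network is addressed directly by its own IP, so a run command needs no -p flags at all. A sketch with a placeholder address:

# no port mappings: DNS (53) and the web UI (80/443) are reached directly on the container's IP
$ docker run -d --name pihole --network net --ip 192.168.1.250 pihole/pihole:latest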

I used this tutorial for the configuration; maybe that explains the network part. netConf just contains the configuration for net.

There is only the "net" network set in the container:

How should I configure "net"? The same as "netConf"?


The image you posted shows two different networks in the "Connected networks" list.

Is there a "Leave network" button to remove the netConf network? Like this?

There is, but when I try to leave the network I get "Failure container xxx is not connected to network netConf". So I guess I have to stop the container, then delete netConf and net and create a new net?

I'm not sure what is causing these issues, but apparently there is something wrong with your macvlan config.

When you create a macvlan network in Portainer, you usually need to create a config/template network first and then create the real network.

In other words, first create a config network by selecting Configuration, then create the real network by selecting Creation (and set the previously created config network in the field below):
[screenshot: Portainer network creation]

Did you create your network using these steps?
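
For comparison, the CLI equivalent of those two Portainer steps is a configuration-only template plus a macvlan network that references it. A sketch only; subnet, gateway and parent interface are placeholders:

# step 1: the configuration-only template (no driver; it only holds subnet/gateway/parent)
$ docker network create --config-only --subnet 192.168.1.0/24 --gateway 192.168.1.1 -o parent=eth0 netConf

# step 2: the real macvlan network that consumes the template
$ docker network create -d macvlan --config-from netConf net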

It was half a year ago, but according to the tutorial I mentioned before, I did that.

Edit: I guess I solved it. I deleted the old net and netConf networks and created both anew with exactly the same details. Then I redeployed the Pi-hole container, and now I can start and stop the container as expected. Thank you for your help.
