I run Pi-hole as a Docker container on a Raspberry Pi. When I stop the Pi-hole container manually, I can't start it again; I always get the failure "Request failed with status code 403". I have to click Duplicate/Edit and then deploy the container without changing anything, and then Pi-hole starts again.
Pi-hole also doesn't start again automatically when I restart the Raspberry Pi or when Watchtower has just updated the Pi-hole image. What can I do to enable an automatic restart? The restart policy is set to Always, of course.
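For reference, you can verify the restart policy that actually reached the Docker daemon from the CLI. This is just a sketch; `pihole` is a placeholder for your real container name:

```shell
# Print the restart policy Docker has on record for the container.
# "pihole" is an assumed container name -- substitute your own.
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' pihole
```

If this prints anything other than `always` (or `unless-stopped`), the policy set in the UI never made it to the daemon.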
I don't have Portainer, but that error message seems to suggest you wouldn't be able to just use Docker's CLI alongside Portainer, unless Portainer somehow exposed a container's configuration details for Docker to consume directly.
Either that, or something could be off with your container's network configuration.
EDIT:
In the latter case, I'd have expected Portainer to show the same "Error response from daemon" you encountered via the CLI.
The 403 response code you see instead points to some kind of permission issue, which makes this look more like a Portainer-specific problem.
Maybe someone running Portainer will be able to help you further.
You could also consider consulting Portainer's support for more knowledgeable advice.
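If you can SSH into the Pi, it may also be worth attempting the start directly via the Docker CLI, so you see the daemon's own error instead of Portainer's 403. A minimal sketch, assuming the container is named `pihole`:

```shell
# Try starting the container directly; any daemon error is printed here.
docker start pihole

# Then check the recorded state and last exit code.
docker inspect -f '{{.State.Status}} (exit code {{.State.ExitCode}})' pihole
```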
I only see entries for stopping the container in the log, but I did try to start it, without success. So again I clicked Duplicate/Edit, deployed the container, overwrote everything without any change, and the container started again:
[i] List stayed unchanged
[i] Building tree...
[✓] Building tree
[i] Swapping databases...
[✓] Swapping databases
[✓] The old database remains available
[i] Number of gravity domains: 2177800 (1831257 unique domains)
[i] Number of exact blacklisted domains: 0
[i] Number of regex blacklist filters: 0
[i] Number of exact whitelisted domains: 16
[i] Number of regex whitelist filters: 0
[i] Flushing DNS cache...
[✓] Flushing DNS cache
[i] Cleaning up stray matter...
[✓] Cleaning up stray matter
[✓] FTL is listening on port 53
[✓] UDP (IPv4)
[✓] TCP (IPv4)
[✓] UDP (IPv6)
[✓] TCP (IPv6)
[✓] Pi-hole blocking is enabled
Pi-hole version is v5.17.1 (Latest: v5.17.1)
AdminLTE version is v5.20.1 (Latest: v5.20.1)
FTL version is v5.23 (Latest: v5.23)
Container tag is: 2023.05.2
s6-rc: info: service legacy-services: stopping
s6-rc: info: service legacy-services successfully stopped
s6-rc: info: service _postFTL: stopping
s6-rc: info: service _postFTL successfully stopped
s6-rc: info: service lighttpd: stopping
Stopping lighttpd
s6-rc: info: service lighttpd successfully stopped
s6-rc: info: service pihole-FTL: stopping
Stopping pihole-FTL
s6-rc: info: service pihole-FTL successfully stopped
s6-rc: info: service _startup: stopping
s6-rc: info: service _startup successfully stopped
s6-rc: info: service _uid-gid-changer: stopping
s6-rc: info: service _uid-gid-changer successfully stopped
s6-rc: info: service cron: stopping
Stopping cron
s6-rc: info: service cron successfully stopped
s6-rc: info: service legacy-cont-init: stopping
s6-rc: info: service legacy-cont-init successfully stopped
s6-rc: info: service fix-attrs: stopping
s6-rc: info: service fix-attrs successfully stopped
s6-rc: info: service s6rc-oneshot-runner: stopping
s6-rc: info: service s6rc-oneshot-runner successfully stopped
Next time it refuses to start, please copy all the logs and paste them here. I hope we can find the reason in the logs.
You can also turn on the "Display timestamps" control. This would show the time when the container stopped.
The log shows the container was stopped.
Did you stop the container on purpose or did it stop by itself?
2023-06-02T18:41:08.928497760Z [i] Target: xxx
2023-06-02T18:41:10.304690539Z [i] Status: Pending...
[✓] Status: Retrieval successful
2023-06-02T18:41:17.689530087Z
[i] Processed 1% of downloaded list
(...)
[i] Processed 99% of downloaded list
[✓] Parsed 826133 exact domains and 0 ABP-style domains (ignored 43 non-domain entries)
2023-06-02T18:41:17.689780916Z Sample of non-domain entries:
2023-06-02T18:41:17.689809546Z - "xxx"
2023-06-02T18:41:17.689827805Z - "xxx"
2023-06-02T18:41:17.689845175Z - "xxx"
2023-06-02T18:41:17.689861823Z - "xxx"
2023-06-02T18:41:17.689878415Z - "xxx"
2023-06-02T18:41:17.689895174Z
2023-06-02T18:41:17.799466324Z [i] List stayed unchanged
2023-06-02T18:41:17.818762154Z
2023-06-02T18:41:17.819104482Z [i] Target: xxx
2023-06-02T18:41:18.290705454Z [i] Status: Pending...
[✓] Status: Retrieval successful
2023-06-02T18:41:18.331878407Z
[✓] Parsed 0 exact domains and 0 ABP-style domains (ignored 55 non-domain entries)
2023-06-02T18:41:18.331944295Z Sample of non-domain entries:
2023-06-02T18:41:18.331963961Z - "xxx"
2023-06-02T18:41:18.332230550Z - "xxx"
2023-06-02T18:41:18.332309363Z - "xxx"
2023-06-02T18:41:18.332328085Z - "xxx"
2023-06-02T18:41:18.332345455Z - "xxx"
2023-06-02T18:41:18.332362085Z
2023-06-02T18:41:18.336372300Z [i] List stayed unchanged
2023-06-02T18:41:18.357635859Z
2023-06-02T18:41:18.357776579Z [i] Target: xxx
2023-06-02T18:41:18.966054002Z [i] Status: Pending...
[✓] Status: Retrieval successful
2023-06-02T18:41:19.523335848Z
[✓] Parsed 72893 exact domains and 0 ABP-style domains (ignored 0 non-domain entries)
2023-06-02T18:41:19.547883041Z [i] List has been updated
2023-06-02T18:41:19.587731755Z
2023-06-02T18:41:19.588177822Z [i] Target: xxx
2023-06-02T18:41:20.123967409Z [i] Status: Pending...
[✓] Status: Retrieval successful
2023-06-02T18:41:20.269361874Z
[✓] Parsed 4 exact domains and 7323 ABP-style domains (ignored 6676 non-domain entries)
2023-06-02T18:41:20.269453872Z Sample of non-domain entries:
2023-06-02T18:41:20.269473650Z - "-*-*-*-*.panda^$script,third-party"
2023-06-02T18:41:20.269491242Z - "-1688-wp-media/ads/"
2023-06-02T18:41:20.269508520Z - "-90mh-gg"
2023-06-02T18:41:20.269525612Z - ".bid/ads/"
2023-06-02T18:41:20.269542630Z - ".blog/ads/"
2023-06-02T18:41:20.269558852Z
2023-06-02T18:41:20.281086747Z [i] List has been updated
2023-06-02T18:41:20.300475965Z
2023-06-02T18:41:20.300942680Z [i] Target: xxx
2023-06-02T18:41:20.737611491Z [i] Status: Pending...
[✓] Status: Retrieval successful
2023-06-02T18:41:21.235220005Z
[✓] Parsed 2 exact domains and 52603 ABP-style domains (ignored 576 non-domain entries)
2023-06-02T18:41:21.235314022Z Sample of non-domain entries:
2023-06-02T18:41:21.235336300Z - "||xxx^"
2023-06-02T18:41:21.235354262Z - "||xxx^"
2023-06-02T18:41:21.235371281Z - "||fxhpaoxqyajvmdg"
2023-06-02T18:41:21.235388095Z - "||rgfftupf"
2023-06-02T18:41:21.235404743Z - "||xxx^"
2023-06-02T18:41:21.235421002Z
2023-06-02T18:41:21.256396788Z [i] List has been updated
2023-06-02T18:41:21.292089974Z
2023-06-02T18:41:29.370397019Z [i] Building tree...
[✓] Building tree
2023-06-02T18:41:29.401172005Z [i] Swapping databases...
[✓] Swapping databases
2023-06-02T18:41:29.401401186Z [✓] The old database remains available
2023-06-02T18:41:30.348552322Z [i] Number of gravity domains: 2177130 (1831084 unique domains)
2023-06-02T18:41:30.377865960Z [i] Number of exact blacklisted domains: 0
2023-06-02T18:41:30.385870521Z [i] Number of regex blacklist filters: 0
2023-06-02T18:41:30.393784990Z [i] Number of exact whitelisted domains: 16
2023-06-02T18:41:30.401999122Z [i] Number of regex whitelist filters: 0
2023-06-02T18:41:30.436069538Z [i] Flushing DNS cache...
[✓] Flushing DNS cache
2023-06-02T18:41:30.455428089Z [i] Cleaning up stray matter...
[✓] Cleaning up stray matter
2023-06-02T18:41:30.467047465Z
2023-06-02T18:41:30.516676046Z [✓] FTL is listening on port 53
2023-06-02T18:41:30.533070644Z [✓] UDP (IPv4)
2023-06-02T18:41:30.536588830Z [✓] TCP (IPv4)
2023-06-02T18:41:30.552293346Z [✓] UDP (IPv6)
2023-06-02T18:41:30.555830883Z [✓] TCP (IPv6)
2023-06-02T18:41:30.555904604Z
2023-06-02T18:41:30.562702425Z [✓] Pi-hole blocking is enabled
2023-06-02T18:41:30.564246160Z
2023-06-02T18:41:33.795166235Z Pi-hole version is v5.17.1 (Latest: v5.17.1)
2023-06-02T18:41:33.797422941Z AdminLTE version is v5.20.1 (Latest: v5.20.1)
2023-06-02T18:41:33.799454724Z FTL version is v5.23 (Latest: v5.23)
2023-06-02T18:41:33.802999891Z Container tag is: 2023.05.2
2023-06-02T18:41:33.803111167Z
2023-06-02T21:38:47.539721381Z s6-rc: info: service legacy-services: stopping
2023-06-02T21:38:47.548452380Z s6-rc: info: service legacy-services successfully stopped
2023-06-02T21:38:47.548867559Z s6-rc: info: service _postFTL: stopping
2023-06-02T21:38:47.553417510Z s6-rc: info: service _postFTL successfully stopped
2023-06-02T21:38:47.554210239Z s6-rc: info: service lighttpd: stopping
2023-06-02T21:38:47.569039257Z Stopping lighttpd
2023-06-02T21:38:47.601865674Z s6-rc: info: service lighttpd successfully stopped
2023-06-02T21:38:47.602331704Z s6-rc: info: service pihole-FTL: stopping
2023-06-02T21:38:47.616983485Z Stopping pihole-FTL
2023-06-02T21:38:47.625953461Z s6-rc: info: service pihole-FTL successfully stopped
2023-06-02T21:38:47.626632544Z s6-rc: info: service _startup: stopping
2023-06-02T21:38:47.631089162Z s6-rc: info: service _startup successfully stopped
2023-06-02T21:38:47.631673654Z s6-rc: info: service _uid-gid-changer: stopping
2023-06-02T21:38:47.636591284Z s6-rc: info: service _uid-gid-changer successfully stopped
2023-06-02T21:38:47.637555158Z s6-rc: info: service cron: stopping
2023-06-02T21:38:47.651231935Z Stopping cron
2023-06-02T21:38:47.659617624Z s6-rc: info: service cron successfully stopped
2023-06-02T21:38:47.660589110Z s6-rc: info: service legacy-cont-init: stopping
2023-06-02T21:38:47.675088319Z s6-rc: info: service legacy-cont-init successfully stopped
2023-06-02T21:38:47.675939380Z s6-rc: info: service fix-attrs: stopping
2023-06-02T21:38:47.680185261Z s6-rc: info: service fix-attrs successfully stopped
2023-06-02T21:38:47.681142673Z s6-rc: info: service s6rc-oneshot-runner: stopping
2023-06-02T21:38:47.688683541Z s6-rc: info: service s6rc-oneshot-runner successfully stopped
In this case I stopped the container myself for demonstration. But I guess it would be the same problem if Watchtower stopped the container, replaced the image and tried to start the container again.
I think (90% sure) this is causing the error message:
I still don't know how the container is working when you initially start it, but this is very likely the issue.
You should not use the netConf network.
Looking at the image:
net is the macvlan network (you should use this one) and
netConf is a template configuration for the other network. There is no network driver attached to it (the driver column shows null).
Recreate the container using only the macvlan (net) network.
After the container finishes starting, stop it and then verify that it starts again.
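From the CLI, that check could look like the following sketch (again assuming the container is named `pihole`):

```shell
# Stop and restart the recreated container, then confirm it is running.
docker stop pihole
docker start pihole
docker inspect -f '{{.State.Status}}' pihole   # should print "running"
```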
Note: You don't need to publish ports when using macvlan; they are actually ignored. In this network mode the container gets its own IP, and every port is already reachable on that IP.
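For illustration, a macvlan-attached Pi-hole would be started without any -p flags. The IP address below is an example value; the network name matches the `net` network from this thread:

```shell
# No -p flags: with macvlan the container has its own IP on the LAN,
# so all of Pi-hole's ports (53, 80, ...) are reachable on that IP directly.
docker run -d --name pihole \
  --network net \
  --ip 192.168.1.250 \
  pihole/pihole:latest
```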
It is, but when I try to leave the network I get "Failure container xxx is not connected to network netConf". So I guess I have to stop the container, then delete netConf and net and create a new net?
I'm not sure what is causing these issues, but apparently something is wrong with your macvlan config.
When you create a macvlan network in Portainer, you usually need to create a config/template network first, and then create the real network.
In other words, first create a config network by selecting Configuration, then create the real network by selecting Creation (and set the previously created config in the field below):
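This two-step setup corresponds to Docker's `--config-only` / `--config-from` mechanism. A rough CLI equivalent, where the subnet, gateway and parent interface are example values you'd replace with your own:

```shell
# Step 1: a configuration-only "network" that just holds the
# IPAM and driver settings (this is the netConf template).
docker network create --config-only \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  -o parent=eth0 \
  netConf

# Step 2: the real macvlan network that consumes that configuration.
docker network create -d macvlan \
  --config-from netConf \
  net
```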
It was half a year ago, but according to the tutorial I mentioned before, that is what I did.
Edit: I guess I solved it. I deleted the old net and netConf networks and created both anew with exactly the same details. Then I redeployed the Pi-hole container, and now I can start and stop the container as it should be. Thank you for your help.