Unable to access UI

I have Pi-hole running in a Docker container. Everything appears to be running correctly, but I am not able to access the UI. I have tried http://pi.hole and http:///admin but neither works. Nothing jumps out at me as wrong when I run the debugger. Any help is appreciated, thanks!

Debug token: https://tricorder.pi-hole.net/gIaTYcGk/

Please post the compose file or the docker run command used to start the container.

It was set up by Private Router as part of a WireGuard server. The full install script is here:

The docker run command specifically for Pi-hole is:

docker run -d \
  --name=pihole \
  -e TZ=America/New_York \
  -e WEBPASSWORD=${PIHOLE_PASS} \
  -e FTLCONF_REPLY_ADDR4=172.28.5.253 \
  -v /root/.pihole/etc:/etc/pihole \
  -v /root/.pihole/dnsmasq.d:/etc/dnsmasq.d \
  --restart=unless-stopped \
  --network privaterouter \
  --ip=172.28.5.253 \
  pihole/pihole

That script is not part of Pi-hole.
Your first point of contact should have been the maintainer of that script.

You have set your Pi-hole container's local IPv4 to a Docker-internal IPv4.
I don't know whether that's applied or intended by that script, but it means that only other containers on that Docker network (and the Docker host itself) will be able to communicate with that IP.
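
To see this for yourself, here are a couple of quick checks from the Docker host (a sketch; the container and network names are taken from your docker run command above):

# subnet Docker assigned to the privaterouter bridge network
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' privaterouter

# address the pihole container actually received
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pihole

# the host owns the bridge interface, so this will likely answer;
# the same request from another LAN machine would time out without a route
curl -sI http://172.28.5.253/admin/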

Run from the machine hosting Docker, what's the output of:

ip -4 address

ip -4 address

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 192.168.1.45/24 metric 100 brd 192.168.1.255 scope global dynamic enp2s0
       valid_lft 43921sec preferred_lft 43921sec
4: br-b9689e29e55a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    inet 172.28.5.254/16 brd 172.28.255.255 scope global br-b9689e29e55a
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

I have no way of knowing whether that script's maintainer intentionally restricted Pi-hole to be reachable only from within Docker.

enp2s0 seems to be your host's genuine interface.

If you want clients from your 192.168.1.0/24 network to be able to access Pi-hole's UI, you should set the recommended FTLCONF_LOCAL_IPV4 environment variable to that host IP.
Your debug log shows you've also set FTLCONF_REPLY_ADDR4, which has been replaced by the former and is now considered deprecated; you can omit it entirely.
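
For illustration only, and not what your script currently does: exposing Pi-hole directly to the LAN would typically mean publishing the DNS and web ports and setting the recommended variable, roughly like this (the IP is taken from your ip -4 address output above):

docker run -d \
  --name=pihole \
  -e TZ=America/New_York \
  -e WEBPASSWORD=${PIHOLE_PASS} \
  -e FTLCONF_LOCAL_IPV4=192.168.1.45 \
  -p 53:53/tcp -p 53:53/udp \
  -p 80:80/tcp \
  -v /root/.pihole/etc:/etc/pihole \
  -v /root/.pihole/dnsmasq.d:/etc/dnsmasq.d \
  --restart=unless-stopped \
  pihole/pihole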

You should probably discuss this with the script maintainer.
They either should be able to explain why they would limit access in such a way, or they would have to adjust their script (assuming it is their script that sets that docker-internal IP).

The point is for Pi-hole to be reachable only when connected to the WireGuard VPN. The host machine runs a WireGuard server (also in a Docker container) along with Pi-hole and a few other apps. I have had this setup for over a year, and until very recently it was all working just fine. Their support told me to reach out to Pi-hole support, but I will reach back out to them for additional help troubleshooting.

What client did you use to access Pi-hole's UI?

If 172.28.5.0/16 happens to be the WireGuard network, traffic from those WireGuard IPs should be able to reach your Pi-hole at 172.28.5.253.
Traffic from clients on 192.168.1.0/24 would not.
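
A quick way to check this from a connected client (Linux syntax; wg0 is an assumed interface name, yours may differ):

# which interface would carry traffic to the Pi-hole address?
ip route get 172.28.5.253

# routes the tunnel actually installed
ip route show dev wg0

If the first command doesn't show the tunnel interface, the WireGuard client isn't routing that subnet.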

Thank you - that helped me figure it out. My WireGuard client was previously configured to allow all IPs (0.0.0.0/0), and that's when it was working, but I recently recreated the client and tightened the allowed IPs. I had put in 192.168.1.0/24 but not 172.28.5.0/16. Adding the latter to the AllowedIPs list and restarting the WireGuard tunnel fixed the issue, and now I am able to access the Pi-hole UI again.
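
For anyone who lands here later, the change amounts to one line in the client's [Peer] section (a sketch; keys and endpoints elided, and wg0 is an assumed interface name):

# before: only the LAN subnet was routed through the tunnel
#   AllowedIPs = 192.168.1.0/24
# after: the Docker bridge subnet is routed as well
#   AllowedIPs = 192.168.1.0/24, 172.28.5.0/16
# then restart the tunnel, e.g. with wg-quick:
wg-quick down wg0 && wg-quick up wg0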

I really appreciate your help and your responses. Thanks!

I just realised that the script you mentioned seems to be part of a paid VPN subscription.

I also note that their support seems to offload its burden by pointing you to a component's open-source community for an issue that isn't even caused by that component.

One would imagine that they'd be capable of nudging you in the right direction much quicker than us here, given that they are privy to all the petty details of their components' configuration.

Anyway, glad that it's working for you again. :wink:
