Implementing a Second Pihole - Can't Access Admin Page

Summary
I have a primary Pihole up and running just fine. It's installed via Docker, and NGINX Proxy Manager manages its admin connection - more on this later. I have a second RPi4 on my IoT network, and I would like to create a High Availability Pihole setup. This would mean running Gravity Sync across VLANs.

  • I created a docker-compose.yaml file for my second Pihole instance, using exactly the same config as my primary Pihole.
  • Verified the container is running.
  • Created a new proxy host (call it pihole2) in NGINX Proxy Manager to forward to the second Pihole via IP address and port.
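For reference, a minimal sketch of what that docker-compose.yaml looks like - the image tag, timezone, password, and volume paths below are placeholder values, not my exact config:

    version: "3"
    services:
      pihole:
        container_name: pihole2
        image: pihole/pihole:latest
        ports:
          - "53:53/tcp"
          - "53:53/udp"
          - "80:80/tcp"       # admin web UI
        environment:
          TZ: "America/Chicago"       # placeholder
          WEBPASSWORD: "changeme"     # placeholder
        volumes:
          - "./etc-pihole:/etc/pihole"
          - "./etc-dnsmasq.d:/etc/dnsmasq.d"
        restart: unless-stopped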

The issue I am facing:
I can't access the admin panel. I've tried numerous URLs, such as:

  • https://<my_nginx_subdomain_url>/
  • http://<IP_address_of_second_Pihole>:80/
  • http://<IP_address_of_second_Pihole>:80/admin/
  • http://<pi.hole>/admin/ (I did not expect this to work anyway, since my first Pihole presumably already answers for that name)

Details about my system:
Hardware: RPi4 - 4GB
Installation: Docker

Additional Investigation
I've always had local DNS issues since I implemented my first Pihole across VLANs. If I try to ssh or ping via hostname, sometimes it works and sometimes it doesn't. That's why I haven't relied on hostnames in my troubleshooting, but rather on IP addresses.

Debug Tokens
Primary Pihole Debug https://tricorder.pi-hole.net/CMfS6a2L/
Second Pihole Debug https://tricorder.pi-hole.net/TN3PAxIn/

Apparently your second Raspberry Pi (or just the container) is not connecting to the internet:

[✗] dig return code: 9
[✗] dig response: ;; connection timed out; no servers could be reached
[✗] Error: dig command failed - Unable to check OS

and

[✗] Failed to resolve doubleclick.com via a remote, public DNS server (8.8.8.8)

For that to work, your nginx has to correctly handle the forward to http://<2nd-pihole-host>/admin.

Note that you have to specify /admin for access via IP address.

It would work, but you'd end up on the UI of the Pi-hole that the requesting client would have used for DNS, as that would have translated pi.hole to its IP.

Your debug log shows that you have enabled Conditional Forwarding to allow local resolution through your router's DNS resolver, but you haven't configured a local domain name to go along with it.
Without that detail, Pi-hole would forward a client's requests for <hostname>.<search.domain> to its public upstreams, which naturally would not know anything about your local private network.
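(For reference, on a v5 install the Conditional Forwarding settings, including that local domain, end up in /etc/pihole/setupVars.conf along these lines - the CIDR, router IP, and domain below are example values, not taken from your debug log:)

    REV_SERVER=true
    REV_SERVER_CIDR=192.168.1.0/24
    REV_SERVER_TARGET=192.168.1.1
    REV_SERVER_DOMAIN=lan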

A failure to resolve may also suggest routing issues, i.e. some VLANs may not be allowed inter-VLAN communications to Pi-hole.

To analyse further how DNS is involved, use dig or nslookup from a client you suspect to have DNS issues.
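For example, you can query a specific Pi-hole directly and compare the result with whatever resolver the client is actually using:

    # Ask the second Pi-hole directly, bypassing the client's configured resolver
    dig @<IP_address_of_second_Pihole> doubleclick.com

    # Same lookup via the client's default resolver
    nslookup doubleclick.com

    # Reverse lookup of a local client, to exercise Conditional Forwarding
    dig @<IP_address_of_second_Pihole> -x <IP_of_a_local_client>

If the direct query times out while the default one works, that points at routing or firewall rules rather than Pi-hole itself.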

Oh, maybe because my second RPi is pointing to the first Pi for DNS. I block any other DNS providers that are not my RPi, like 8.8.8.8. This is a good find. I guess before implementing Pihole on the second RPi, I should change its local network config to not rely on the primary RPi for DNS.

Alright, I think I jumped the gun on this setup without making further necessary changes. I guess I need to do the following:

  • Change /etc/dhcpcd.conf on the second RPi to the following:

    # Example static IP configuration:
    interface eth0
    static ip_address=<second_RPi_IP_address>
    #static ip6_address=fd51:42f8:caae:d92e::ff/64
    static routers=<gateway>
    static domain_name_servers=127.0.0.1
    
  • Verify less /etc/resolv.conf shows

    # Generated by resolvconf
    nameserver 127.0.0.1
    
  • Make sure all my networks' DHCP name-server settings include the second Pihole's IP address

  • Verify firewall rules allow DNS requests made to both Piholes

Thanks @rdwebdesign for this catch.

I've applied the changes and I'm still having issues accessing the admin web page.

Debug Token
https://tricorder.pi-hole.net/RvtwsjTx/

For that to work, your nginx has to correctly handle the forward to http://<2nd-pihole-host>/admin.
Note that you have to specify /admin for access via IP address.

Ah, I didn't realize the need to include "/admin" when using the IP address @Bucking_Horn. And now that you mention it, it seems I never had this properly configured in my NGINX config. I think I may have shoehorned a fix by creating a bookmark that included the redirect to "/admin" - I honestly can't remember what I did. But now that I try to use NGINX's forward location, it isn't working - which is good, because given the missing config that is to be expected.

Research
Looking at several forums/posts about this subject - like this one. It seems NGINX provides a simple fix to handle redirects for situations like this by applying a "custom location".
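Roughly, the custom location I applied translates to nginx config along these lines (the IP is a placeholder, and I'm assuming the container's web UI is still on port 80):

    location /admin/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # no URI on proxy_pass, so the original /admin/... path is preserved
        proxy_pass http://<IP_address_of_second_Pihole>:80;
    }

    # optionally bounce the bare root to the admin path as well
    location = / {
        return 301 /admin/;
    }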

I applied this change, but it is still not working. I know this forum is for Pihole, but anyone who has experience with NGINX's config to accommodate redirects and can educate me what I am still doing wrong I would be very grateful.

Your debug log shows that you have enabled Conditional Forwarding to allow local resolution through your router's DNS resolver, but you haven't configured a local domain name to go along with it.
Without that detail, Pi-hole would forward a client's requests for <hostname>.<search.domain> to its public upstreams, which naturally would not know anything about your local private network.

I guess the wording on Pihole's settings page made it sound optional. But you are telling me I do need to include this?

Does your router provide a local <search.domain>?

If it does, a client may append <search.domain> (as defined by your router) to its DNS requests. If it chooses to do so, your current configuration would forward such requests to a public upstream.

Usually, you'd see client software try to resolve the plain, non-FQDN hostname first before appending <search.domain> in another request (and some may send both in parallel, like nslookup does).

If you'd ticked 'Never forward non-FQDN...', that would have Pi-hole return NXDOMAIN for the plain hostname (unless Pi-hole held a local DNS record for it), while <hostname>.<search.domain> would still be forwarded and answered.

I think the easiest fix for me was to change the container's default port so that it and NPM would not overlap. In the past this was not a problem, because I was using the container's name to proxy the traffic; but once you start using the host's IP address, ports come into play.
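Concretely, that means remapping the container's web port in docker-compose.yaml - 8080 here is an arbitrary choice - and then pointing the NPM proxy host at <IP_address_of_second_Pihole>:8080 instead of port 80:

    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"   # host port 8080 -> container's admin UI on 80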

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.