PiHole on Raspberry Pi with dedicated IP

Expected Behaviour:

PiHole Web interface exposed on dedicated IP

Actual Behaviour:

ports blocked --> no access
[✗] DNS resolution is currently unavailable

Debug Token:

pihole -d
pihole: command not found
(same result for pi.hole and variants, so no debug token yet)

Summary:
Briefly:
The RPi is configured to provide an IoT environment on Docker + Portainer (MQTT --> Node-RED --> InfluxDB 1.x --> Grafana) using IoTstack.
Pi-hole is already successfully deployed on a Synology NAS DS916+ using Portainer. It is used as the primary DNS and is working very well.
I am adding Pi-hole to the Raspberry Pi as a fallback/secondary DNS in case of primary DNS failure.
(Note: the Synology NAS Pi-hole installation followed https://www.wundertech.net/how-to-install-pi-hole-on-portainer/)
I was hoping to replicate this for the RPi, but the problem appears to be the use of the macvlan network, which causes no problems on the Synology NAS.

NAS:
Fixed IP: 192.168.1.15 (ports exposed only for existing services, not for PiHole)
macvlan network on 192.168.1.16/32 (only the macvlan ports exposed, as expected: 53, 80)
(port 67 not needed as DHCP served by router on 192.168.1.1)
Web Interface to PiHole available on port 80

RPi:
After numerous failed attempts to follow the same Wundertech script, I opted to install Pi-hole using IoTstack.
Uses "iotstack_default" network
The IoTstack install of PiHole functions correctly when router DNS set to 192.168.1.50 (the fixed IP of the Raspberry Pi):

  • blocks queries on port 53, and the Web Interface is available on port 80, as expected
  • i.e., normal functional behaviour (see the quick check below)
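
For what it's worth, a quick way to confirm this from any LAN client is to query the Pi directly (a sketch; the domains are just examples, with flurry.com assumed to be on the default blocklist):

dig @192.168.1.50 flurry.com        # assumed blocked: should answer 0.0.0.0
dig @192.168.1.50 raspberrypi.com   # ordinary domain: should resolve normally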

RPi - different IP:
The ONLY change made is to switch from the "iotstack_default" network to the "ph_network" network (as advocated in the Wundertech tutorial): the same network that I've used successfully on the NAS. Here's how "ph_network" is created:

sudo docker network create -d macvlan -o parent=wlan0 --subnet=192.168.1.0/24 --gateway=192.168.1.1 --ip-range=192.168.1.17/32 ph_network

Note the restrictive --ip-range: a /32, which pins the network to the single address 192.168.1.17.
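
As a sanity check, the network can be inspected to confirm the driver, parent interface and range; the --format string here is just one way to pull out those fields:

docker network inspect ph_network \
  --format '{{.Driver}} {{.Options.parent}} {{(index .IPAM.Config 0).IPRange}}'
# expected here: macvlan wlan0 192.168.1.17/32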

Here's the output of the PiHole Log on Portainer:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service cron: starting
s6-rc: info: service cron successfully started
s6-rc: info: service _uid-gid-changer: starting
s6-rc: info: service _uid-gid-changer successfully started
s6-rc: info: service _startup: starting
  [i] Starting docker specific checks & setup for docker pihole/pihole
  [i] Setting capabilities on pihole-FTL where possible
  [i] Applying the following caps to pihole-FTL:
        * CAP_CHOWN
        * CAP_NET_BIND_SERVICE
        * CAP_NET_RAW
        * CAP_NET_ADMIN
  [i] Ensuring basic configuration by re-running select functions from basic-install.sh
  [i] Installing configs from /etc/.pihole...
  [i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
  [i] Installing /etc/dnsmasq.d/01-pihole.conf...
  [✓] Installed /etc/dnsmasq.d/01-pihole.conf
  [i] Installing /etc/.pihole/advanced/06-rfc6761.conf...
  [✓] Installed /etc/dnsmasq.d/06-rfc6761.conf
  [i] Installing latest logrotate script...
	[i] Existing logrotate file found. No changes made.
  [i] Assigning password defined by Environment Variable
  [✓] Password Removed
  [i] Added ENV to php:
                    "TZ" => "Etc/UTC",
                    "PIHOLE_DOCKER_TAG" => "",
                    "PHP_ERROR_LOG" => "/var/log/lighttpd/error-pihole.log",
                    "CORS_HOSTS" => "",
                    "VIRTUAL_HOST" => "Pi-Hole-DNS-ph_net",
  [i] Using IPv4 and IPv6
  [i] Installing latest Cron script...
  [✓] Installing latest Cron script
  [i] Preexisting ad list /etc/pihole/adlists.list detected (exiting setup_blocklists early)
  [i] Setting DNS servers based on PIHOLE_DNS_ variable
  [i] Applying pihole-FTL.conf setting LOCAL_IPV4=0.0.0.0
  [i] Applying pihole-FTL.conf setting MAXDBDAYS=365
  [i] FTL binding to default interface: eth0
  [i] Enabling Query Logging
  [i] Testing lighttpd config: Syntax OK
  [i] All config checks passed, cleared for startup ...
  [i] Docker start setup complete
  [i] pihole-FTL (no-daemon) will be started as pihole
s6-rc: info: service _startup successfully started
s6-rc: info: service pihole-FTL: starting
s6-rc: info: service pihole-FTL successfully started
s6-rc: info: service lighttpd: starting
s6-rc: info: service lighttpd successfully started
s6-rc: info: service _postFTL: starting
s6-rc: info: service _postFTL successfully started
s6-rc: info: service legacy-services: starting
  Checking if custom gravity.db is set in /etc/pihole/pihole-FTL.conf
s6-rc: info: service legacy-services successfully started
  [✗] DNS resolution is currently unavailable

I'm really struggling with DNS concepts (quite new to me) and am quite happy to believe that the RPi implements its network differently from the Synology NAS. I don't know what diagnostics to run, and I'm finding the outputs of the various checks I have made (netstat and ifconfig -a, for example) difficult to interpret.

That's enough info for now but glad to provide results of any other checks you deem necessary.

Any suggestions how to move forward greatly appreciated.

Thanks,
Ric

Please share your docker-compose or docker run script for your Pi-hole container.

Also, please upload a debug log and post just the token URL that is generated after the log is uploaded by running the following command from the Pi-hole host terminal:

pihole -d

or do it through the Web interface:

Tools > Generate Debug Log
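
For a containerised Pi-hole the same command is run through Docker, e.g. (assuming the container is named pihole):

docker exec -it pihole pihole -d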

Pi-hole is installed via IoTstack using the latest script version. There is no standalone docker-compose file; the nearest thing is the pihole template:

pihole:
  container_name: pihole
  image: pihole/pihole:latest
  ports:
    - "80:80/tcp"
    - "53:53/tcp"
    - "53:53/udp"
    - "67:67/udp"
  environment:
    - TZ=${TZ:-Etc/UTC}
    - WEBPASSWORD=
    # see https://sensorsiot.github.io/IOTstack/Containers/Pi-hole/#adminPassword
    - INTERFACE=eth0
    - FTLCONF_MAXDBDAYS=365
    - PIHOLE_DNS_=8.8.8.8;8.8.4.4
    # see https://github.com/pi-hole/docker-pi-hole#environment-variables
  volumes:
    - ./volumes/pihole/etc-pihole:/etc/pihole
    - ./volumes/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
  dns:
    - 127.0.0.1
    - 1.1.1.1
  cap_add:
    - NET_ADMIN
  restart: unless-stopped

IoTstack uses its own network, "iotstack_default". Pi-hole works perfectly using the static address assigned to the Pi (192.168.1.50), against which all the other applications (Mosquitto, Node-RED, etc.) are exposed.

On the Synology NAS, also with Docker/Portainer, the "ph_network" network described in the Wundertech tutorial works fine. But not on the Pi.

I ran docker exec pihole pihole -r twice: once using 'iotstack_default' with IP address 192.168.1.50, the other using 'ph_network' with IP address 192.168.1.17.

In neither instance was a log saved. I did, however, capture both to a text file and can still supply these (but they are too long to paste here).

I have just read that pihole -r can be used to reconfigure the network, but I have not yet been able to find the associated documentation. (Edit: just tried it, but "Function not supported in Docker images". In any case, pretty much everything can be changed in the Portainer interface.)

Nonetheless, I feel it should be possible to have Mosquitto, Node-RED, etc. remain on IP 192.168.1.50 with their respective ports, and Pi-hole on 192.168.1.17 using ports 53 and 80 (see the sketch below).
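
A sketch only (untested), adapted from the IoTstack template above plus the ph_network created earlier; with a macvlan attachment the ports: mappings should become unnecessary, since the container gets its own address:

pihole:
  container_name: pihole
  image: pihole/pihole:latest
  environment:
    - TZ=${TZ:-Etc/UTC}
    - PIHOLE_DNS_=8.8.8.8;8.8.4.4
  volumes:
    - ./volumes/pihole/etc-pihole:/etc/pihole
    - ./volumes/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
  cap_add:
    - NET_ADMIN
  restart: unless-stopped
  networks:
    ph_network:
      # hypothetical fixed address inside the network's --ip-range
      ipv4_address: 192.168.1.17

networks:
  # declared external because it was created with docker network create
  ph_network:
    external: true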

Regards,
Ric

What you've shared is a docker-compose file. Are you saying it's fictional, made up by manually filling in values from Pi-hole's template that you believe to be set?

If there is no way to share your actual configuration, it will be harder to help you

Hmm, I was asking for a debug token, generated either by using Pi-hole's web UI or by running pihole -d, not pihole -r. A typo, perhaps?

As for your reconfiguration attempts:
Configuration of a container should be done by setting environment variables, or via Pi-hole's UI. The former is guaranteed to be persistent, while the latter would require creating the required volumes (e.g. as suggested by Pi-hole's template).

Let's check for the most common reason for Pi-hole's failure to start: another DNS service already claiming port 53.
Run from the machine hosting your Pi-hole container, what's the result of:

sudo ss -tulpn sport = 53

Hello and thanks for your reply.

There is no standalone docker-compose file; the nearest thing is the pihole template:

This is the actual template that IoTstack provides and uses to build its containers.

Yep, a typo, apologies. I ran docker exec pihole pihole -d twice. (I can't use the web UI when 'ph_network' is in use: ports 53, 80, 67 and 443, all tried at various times, are blocked according to a port scan over the network. When the 'iotstack_default' network is used, ports 53, 80, etc. are open.) But see below.
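
(The port scan was nothing exotic; something like the following from a separate machine on the LAN, since a macvlan address is typically not reachable from its own Docker host:)

nmap -p 53,80,443 192.168.1.17   # these came back closed/filtered while ph_network was in use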

Configuration of a container should be done by setting environment variables, or via Pi-hole's UI. The former is guaranteed to be persistent

That's my understanding too and that's what I have been doing. Portainer makes this really easy.

On the running instance of Pi-hole (192.168.1.50, using the iotstack_default network) I can obtain a debug log. Here's the token: https://tricorder.pi-hole.net/e3nBHoSp/

Portainer also shows a Pi-hole log at startup. Here it is:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service cron: starting
s6-rc: info: service cron successfully started
s6-rc: info: service _uid-gid-changer: starting
s6-rc: info: service _uid-gid-changer successfully started
s6-rc: info: service _startup: starting
  [i] Starting docker specific checks & setup for docker pihole/pihole
  [i] Setting capabilities on pihole-FTL where possible
  [i] Applying the following caps to pihole-FTL:
        * CAP_CHOWN
        * CAP_NET_BIND_SERVICE
        * CAP_NET_RAW
        * CAP_NET_ADMIN
  [i] Ensuring basic configuration by re-running select functions from basic-install.sh
  [i] Installing configs from /etc/.pihole...
  [i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
  [i] Installing /etc/dnsmasq.d/01-pihole.conf...
  [✓] Installed /etc/dnsmasq.d/01-pihole.conf
  [i] Installing /etc/.pihole/advanced/06-rfc6761.conf...
  [✓] Installed /etc/dnsmasq.d/06-rfc6761.conf
  [i] Installing latest logrotate script...
	[i] Existing logrotate file found. No changes made.
  [i] Assigning password defined by Environment Variable
  [✓] Password Removed
  [i] Added ENV to php:
                    "TZ" => "Etc/UTC",
                    "PIHOLE_DOCKER_TAG" => "",
                    "PHP_ERROR_LOG" => "/var/log/lighttpd/error-pihole.log",
                    "CORS_HOSTS" => "",
                    "VIRTUAL_HOST" => "Pi-Hole-DNS-ph_net",
  [i] Using IPv4 and IPv6
  [i] Installing latest Cron script...
  [✓] Installing latest Cron script
  [i] Preexisting ad list /etc/pihole/adlists.list detected (exiting setup_blocklists early)
  [i] Setting DNS servers based on PIHOLE_DNS_ variable
  [i] Applying pihole-FTL.conf setting LOCAL_IPV4=0.0.0.0
  [i] Applying pihole-FTL.conf setting MAXDBDAYS=365
  [i] FTL binding to default interface: eth0
  [i] Enabling Query Logging
  [i] Testing lighttpd config: Syntax OK
  [i] All config checks passed, cleared for startup ...
  [i] Docker start setup complete
  [i] pihole-FTL (no-daemon) will be started as pihole
s6-rc: info: service _startup successfully started
s6-rc: info: service pihole-FTL: starting
s6-rc: info: service pihole-FTL successfully started
s6-rc: info: service lighttpd: starting
s6-rc: info: service lighttpd successfully started
s6-rc: info: service _postFTL: starting
  Checking if custom gravity.db is set in /etc/pihole/pihole-FTL.conf
s6-rc: info: service _postFTL successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
  [i] Neutrino emissions detected...
  [✓] Pulling blocklist source list into range
  [i] Preparing new gravity database...
  [✓] Preparing new gravity database
  [i] Creating new gravity databases...
  [✓] Creating new gravity databases
  [i] Using libz compression
  [i] Target: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
  [i] Status: Pending...
  [✓] Status: Retrieval successful
  [✓] Parsed 168786 exact domains and 0 ABP-style domains (ignored 1 non-domain entries)
      Sample of non-domain entries:
        - "0.0.0.0"
  [i] List stayed unchanged
  [i] Building tree...
  [✓] Building tree
  [i] Swapping databases...
  [✓] Swapping databases
  [✓] The old database remains available
  [i] Number of gravity domains: 168786 (168786 unique domains)
  [i] Number of exact blacklisted domains: 0
  [i] Number of regex blacklist filters: 0
  [i] Number of exact whitelisted domains: 0
  [i] Number of regex whitelist filters: 0
  [i] Cleaning up stray matter...
  [✓] Cleaning up stray matter
  [✓] FTL is listening on port 53
     [✓] UDP (IPv4)
     [✓] TCP (IPv4)
     [✓] UDP (IPv6)
     [✓] TCP (IPv6)
  [✓] Pi-hole blocking is enabled
  Pi-hole version is v5.17.3 (Latest: v5.17.3)
  web version is v5.21 (Latest: v5.21)
  FTL version is v5.25.1 (Latest: v5.25.1)
  Container tag is: 2024.02.2

Your request:

sudo ss -tulpn sport = 53
Netid    State     Recv-Q    Send-Q       Local Address:Port       Peer Address:Port   Process                                      
udp      UNCONN    0         0                  0.0.0.0:53              0.0.0.0:*       users:(("docker-proxy",pid=429902,fd=4))    
udp      UNCONN    0         0                        *:53                    *:*       users:(("docker-proxy",pid=429910,fd=4))    
tcp      LISTEN    0         4096               0.0.0.0:53              0.0.0.0:*       users:(("docker-proxy",pid=429879,fd=4))    
tcp      LISTEN    0         4096                  [::]:53                 [::]:*       users:(("docker-proxy",pid=429886,fd=4))   

regards,
Ric

Are you trying to access the web interface from the host machine itself or from a different device?

That process is binding port 53 and preventing pihole-FTL from starting.
You need to disable or uninstall it, or move it to a different port and/or interface.
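
If you're unsure which container that docker-proxy instance is forwarding for, the published port shows up in the owning container's port list, e.g.:

docker ps --format '{{.Names}}\t{{.Ports}}' | grep ':53'
# whichever container is listed here is the one claiming port 53 on the host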

I can access the working version of Pi-hole (using the iotstack_default network on 192.168.1.50) from any machine on the network, including the host. When I switch to ph_network, ports 53 and 80 disappear from 192.168.1.50 (as expected) and nothing appears on 192.168.1.17. Those ports appear to be blocked; my guess is they're not being exposed rather than blocked.
R

About the other network (iotstack_default):

  • is this also a macvlan network?
  • is this parent interface also wlan0?

docker network inspect iotstack_default --format='{{.Driver}} {{.Options.parent}}' should return these values.

OK, that explains why the Pihole.log (live) fills up over time, showing that blocking IS working, while the FTL.log remains stubbornly empty? Pi-hole is blocking queries but without the long-term database? But if I access Long-term Data in the menu and select a time period, I can see the bar chart and the query log stretching back... so is this data from elsewhere?

... and if I check, /etc/pihole/pihole-FTL.db doesn't exist. In fact, there is no /etc/pihole/ folder.
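
(Checked along these lines; the host-side path comes from the template's volumes: section, and ~/IOTstack is an assumption about where the stack lives:)

ls -l ~/IOTstack/volumes/pihole/etc-pihole   # host side of the bind mount
docker exec pihole ls -l /etc/pihole         # the same folder inside the container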

OK, out of my depth right now. This is a Docker issue, not a Pi-hole configuration issue. Correct?

Regards,
Ric

Hi,
This also confused me greatly... At the moment the RPi is sitting on my desk using wlan0 (WiFi).
ifconfig shows this is connected to the local network:

wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.50  netmask 255.255.255.0  broadcast 192.168.1.255

In Portainer, the ENV section clearly shows:
INTERFACE eth0
I did change this to 'wlan0' but then nothing seemed to work at all... so it's now set back to 'eth0'. All the other containers seem to use eth0 as well.

I wasn't too worried because, once configured, I planned to house the RPi in an enclosure and connect it to the wired network, which is most definitely eth0.

Your suggestion:

docker network inspect iotstack_default --format='{{.Driver}} {{.Options.parent}}'
bridge <no value>

... and now I'm really out of my depth. I'll try to catch up.
R

Your previous network type is bridge.

Looks like your issue is related to your new network type: macvlan.

As far as I know, macvlan networks usually don't work with WiFi interfaces, because IEEE 802.11 doesn’t like multiple MAC addresses on a single client (reference: https://hicu.be/macvlan-vs-ipvlan).

If your device has an Ethernet interface, you could try to use it, along the lines sketched below.
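
i.e., something along these lines, reusing the same addressing (a sketch; detach any container still using the network before removing it):

sudo docker network rm ph_network
sudo docker network create -d macvlan -o parent=eth0 \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    --ip-range=192.168.1.17/32 ph_network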

Note:

This is expected.
The compose file you posted above (you call it a "Template") is explicitly setting this variable to eth0.

IoTstack calls them templates; I've simply mirrored their terminology.

But yes, exactly, and that's why I tried to reset the interface to 'wlan0': I assumed IoTstack itself expected an RPi connected to a wired network, which didn't apply to my use case.

I went down the IoTstack route because I believed it would give an environment where all the essential parameters had been configured and where all the IoT applications I wanted to use (MQTT, InfluxDB, Grafana) would sit together, harmonised. You know, it's hard to make choices in the absence of hindsight, so you do your reading and do your best.

I am left with a number of issues:

  1. the existing, apparently working, instance of Pi-hole may not be functioning correctly because of the conflict with docker-proxy. I don't yet know how to resolve that.

  2. the same problem may exist for my (apparently) working version of PiHole on the Synology NAS for exactly the same reason.

  3. I still need to resolve the issue of having PiHole on the Raspberry Pi having its own dedicated IP (for comparison, the Synology NAS has IP 192.168.1.15 and its PiHole in docker uses 192.168.1.16. The dual IP solution avoids all the port conflicts resulting from the existing applications running on the NAS).

  4. If I cannot configure the RPi to use two different IP addresses (192.168.1.50 for IoT and 192.168.1.17 for DNS/Pi-hole) then I am faced with using a second RPi and configuring that to use 192.168.1.17. But that would mean two RPi 5 devices, both under-utilised and not doing very much at all, at some expense.

Again, when starting out, it's difficult to know which decisions are the best ones.

So the focus is:

  • is the IoTstack installation somehow flawed? If so: how, where, and how is it resolved?
  • how do I resolve the docker-proxy issue?
  • how do I get the RPi to use a different IP just for Pi-hole?

Thanks for staying with me so far,
Ric

Yes, perhaps facilitated by that IoT stack.
As docker-proxy should be a (dated) Docker internal, that may suggest that some other container deployed in that stack is claiming that port before Pi-hole can.

As mentioned, you could probably try to map the port against a specific interface IP.
If you can't identify the offending process or container, you may try that with your Pi-hole container's docker-compose, e.g.:

    - "192.168.1.x:53:53/tcp"
    - "192.168.1.x:53:53/udp"

Worth a try, but this may still fail, as your ss output shows that docker-proxy itself binds to 0.0.0.0 (i.e. all interfaces), which could still cause conflicts.

If it doesn't work, you should consider also raising an issue with the guide's creator or with IoTstack.
They may be in a better position to help you mitigate potential conflicts with different services in their stack.

This is as interesting as it is perplexing...

I created a fresh install of Raspberry Pi OS 64-bit on a spare microSD card. The installation took the same IP as in the previous posts because of the reservation in the router (192.168.1.50).

I installed Docker without recourse either to PiBuilder (Andreas Spiess' five-stage Docker install) or to IoTstack.

I installed Portainer and then installed Pi-hole (CLI install).

The new instance of Pi-hole was immediately available, and accessing it via port 80 showed it was up and running. The debug log was as posted previously.

So I ran sudo ss -tulpn sport = 53

And the result:

sudo ss -tulpn sport = 53
Netid        State         Recv-Q        Send-Q               Local Address:Port               Peer Address:Port       Process                                         
udp          UNCONN        0             0                          0.0.0.0:53                      0.0.0.0:*           users:(("docker-proxy",pid=57305,fd=4))        
udp          UNCONN        0             0                                *:53                            *:*           users:(("docker-proxy",pid=57313,fd=4))        
tcp          LISTEN        0             4096                       0.0.0.0:53                      0.0.0.0:*           users:(("docker-proxy",pid=57286,fd=4))        
tcp          LISTEN        0             4096                          [::]:53                         [::]:*           users:(("docker-proxy",pid=57293,fd=4))

Given the above, it's hard to see how the 'docker-proxy' issue has anything to do with PiBuilder or IoTstack.

How can Pi-hole function and block ads (as it appears to do) and yet remain "UNCONN" (which I assume means "unconnected")?

The network is docker-pi-hole_default.

Thoughts?

Thanks for the suggestions in your last post.

Regards,
Ric
<smile>...and all I wanted to do was block ads on the home network...</smile>
