Pi-hole setup with unbound: queries time out

I have a Raspberry Pi 4 running Pi-hole in a Docker container - I used GitHub - geerlingguy/internet-pi: Raspberry Pi config for all things Internet. to set it up. That part seemed to work fine, and I just pointed the upstream to my existing Zentyal host (which has firewall access to get out of the house). That works great: I can browse, dig, nslookup, etc.

I read the guide (unbound - Pi-hole documentation) to set up unbound, intending to retire the Zentyal host, but when I set it up as directed, everything times out. During the setup, the dig tests succeeded and worked fine. As soon as I changed the upstream DNS via the Pi-hole web interface, everything started timing out.

Managed to troubleshoot a bit and found that I had forgotten I only allowed the Zentyal host out, with replies coming back to its internally NATed IP address. I adjusted my Juniper firewall to allow the Pi-hole as well, and I no longer see firewall deny messages in Splunk. So I'm fairly confident that the request to whatever upstream is the default is going out, and that the return traffic should be coming back as well.

When I look at the query log via the web UI, I see the entry "forwarded to localhost#5335", so I know that part is working, but I never see an entry saying "OK (cached)", which I would expect if it gets a good answer.
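For reference, the guide's dig test can be repeated both on the host and from inside the container; a sketch (the container name pihole is an assumption - check docker ps - and it presumes dig is available inside the image). Note that 127.0.0.1 inside a container is the container's own loopback, not the host's:

```shell
# Query Unbound directly on the host, bypassing Pi-hole
dig pi-hole.net @127.0.0.1 -p 5335 +short

# The same test from inside the container ("pihole" is a guessed container name);
# here 127.0.0.1 refers to the container's own loopback, not the host's
docker exec pihole dig pi-hole.net @127.0.0.1 -p 5335 +short
```

If the first succeeds and the second times out, the problem is the Docker network path rather than Unbound itself.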

I searched here and found one other recent thread about unbound on Pi-hole, but it didn't help my issue. Time should be good, as I have my own NTP source (another Raspberry Pi with a GPS connection) and it shows as synced.

If I set the upstream DNS to include my zentyal host (.172), then everything works great and I can see in the log that host is answering the queries.

Any help pointing me in another direction to look would be appreciated.

Guessing I did a poor job of explaining the problem, since no one has replied. I have to assume it's something simple that I just can't see - yet.

First, check whether you're affected by a recent change in the openresolv package that ships with Pi-OS Bullseye:

pi@ph5b:~ $ lsb_release -d
Description:    Raspbian GNU/Linux 11 (bullseye)
pi@ph5b:~ $ apt policy openresolv
  Installed: 3.12.0-1

Does the file below exist, and what is its content?

cat /etc/unbound/unbound.conf.d/resolvconf_resolvers.conf

If it exists, delete it with the command below, because it configures Unbound to act as a regular forwarding DNS server instead of a recursive DNS server, and it can also create a DNS loop, which will cause things to break:

sudo rm /etc/unbound/unbound.conf.d/resolvconf_resolvers.conf

And run the command below to make sure the file above does not get re-created when rebooting.
It places a hash/comment sign # in front of the config line that creates that file:

sudo sed -i 's/^unbound_conf=/#unbound_conf=/' /etc/resolvconf.conf
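On a scratch copy of the file, the substitution behaves like this (demo paths only; this does not touch the real config):

```shell
# Demo: comment out the unbound_conf= line on a scratch file
printf 'resolv_conf=/run/resolvconf/resolv.conf\nunbound_conf=/etc/unbound/unbound.conf.d/resolvconf_resolvers.conf\n' > /tmp/resolvconf.demo
sed -i 's/^unbound_conf=/#unbound_conf=/' /tmp/resolvconf.demo
grep unbound /tmp/resolvconf.demo
# -> #unbound_conf=/etc/unbound/unbound.conf.d/resolvconf_resolvers.conf
```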

Run the command below to apply, or reboot:

sudo service unbound restart

And test again.

Thanks for the reply. I do have the version of Raspbian and openresolv that you indicated, so is that an error/issue, and if so, how do I correct it?

I had already removed and disabled unbound-resolvconf, but thanks for pointing it out.

I've included the resolv.conf as well.

pi@inet-pi:~$ lsb_release -d
Description:	Debian GNU/Linux 11 (bullseye)
pi@inet-pi:~$ sudo apt policy openresolv
  Installed: 3.12.0-1
  Candidate: 3.12.0-1
  Version table:
 *** 3.12.0-1 500
        500 http://deb.debian.org/debian bullseye/main arm64 Packages
        500 http://deb.debian.org/debian bullseye/main armhf Packages
        100 /var/lib/dpkg/status
pi@inet-pi:~$ ls -l /etc/unbound/unbound.conf.d/
total 8
-rw-r--r-- 1 root root 3048 Sep  7 16:12 pi-hole.conf
-rw-r--r-- 1 root root  190 Feb  9  2021 root-auto-trust-anchor-file.conf
pi@inet-pi:~$ cat /etc/resolvconf.conf
# Configuration for resolvconf(8)
# See resolvconf.conf(5) for details

# If you run a local name server, you should uncomment the below line and
# configure your subscribers configuration files below.

# Mirror the Debian package defaults for the below resolvers
# so that resolvconf integrates seemlessly.

pi@inet-pi:~$ cat /etc/resolv.conf
# Generated by resolvconf
search home-lan.net
nameserver fe80::cac7:50ff:fef3:2525%wlan0

Also, here's the dhcpcd.conf:

pi@inet-pi:~$ sudo cat /etc/dhcpcd.conf
# A sample configuration for dhcpcd.
# See dhcpcd.conf(5) for details.

# Allow users of this group to interact with dhcpcd via the control socket.
#controlgroup wheel

# Inform the DHCP server of our hostname for DDNS.

# Use the hardware address of the interface for the Client ID.
# or
# Use the same DUID + IAID as set in DHCPv6 for DHCPv4 ClientID as per RFC4361.
# Some non-RFC compliant DHCP servers do not reply with this set.
# In this case, comment out duid and enable clientid above.

# Persist interface configuration when dhcpcd exits.

# Rapid commit support.
# Safe to enable by default because it requires the equivalent option set
# on the server to actually work.
option rapid_commit

# A list of options to request from the DHCP server.
option domain_name_servers, domain_name, domain_search, host_name
option classless_static_routes
# Respect the network MTU. This is applied to DHCP routes.
option interface_mtu

# Most distributions have NTP support.
#option ntp_servers

# A ServerID is required by RFC2131.
require dhcp_server_identifier

# Generate SLAAC address using the Hardware Address of the interface
#slaac hwaddr
# OR generate Stable Private IPv6 Addresses based from the DUID
slaac private

# Example static IP configuration:
interface eth0
static ip_address=
static routers=
static domain_name_servers=
static search=home-lan.net
static domain_search=home-lan.net

# It is possible to fall back to a static IP if DHCP fails:
# define static profile
#profile static_eth0
#static ip_address=
#static routers=
#static domain_name_servers=

# fallback to static profile on eth0
#interface eth0
#fallback static_eth0

No - if you fixed /etc/resolvconf.conf, that should take care of that rogue Unbound configuration file, which is not desired.
The rest looks good.
Sounds like some Docker-networking-specific issue, of which I know little.

From what I understand, you run Pi-hole in a Docker container but run Unbound bare metal on the same host that's running Docker - am I correct?
If so, why not run Unbound in a container as well?
The one below comes to mind; out of the box it does recursive lookups, just like the Pi-hole guide:

Thanks for clarifying.

Yes, Pi-hole is running in a Docker container. The Ansible build I referenced above is how it was built/installed, and that happened to use Docker. I don't know enough about containers yet to know how to run Unbound in one. The link you provided doesn't look like it would work directly with Pi-hole, and I don't know that I'd be able to troubleshoot that configuration any better than my current one.

I'm suspecting it may still be a firewall issue: maybe the outbound request is getting out but not the reply. I don't know where Unbound is going to try to get an answer, nor do I know what IP it would use - the primary eth0 interface? I may have to temporarily disable all firewall rules to verify, then turn on debugging, which is VERY chatty.

Thanks for the look!

As previously mentioned, I suspect it to be a Docker networking setup issue.
Try searching here, or DuckDuckGo/Google, for the phrase "macvlan" in combination with Docker and Pi-hole.
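As a rough sketch of that approach (every address, the parent NIC, and the names below are placeholders, not values tested against this setup):

```shell
# Hypothetical macvlan setup: give the Pi-hole container its own LAN address,
# so DNS traffic is not translated through the Docker bridge.
# Subnet, gateway, parent interface and IPs are placeholders - adjust to your LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  pihole_macvlan

docker run -d --name pihole \
  --network pihole_macvlan --ip 192.168.1.53 \
  pihole/pihole:latest
```

One caveat of macvlan worth knowing up front: by default the host itself cannot reach the container's macvlan address, which matters if the host's own resolver should point at Pi-hole.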

But why would you run one in a Docker container and the other bare metal?
Why not install both bare metal ... or, as previously suggested, run both in Docker.
When adding Docker to the mix, be prepared to do some more investigating into best practices etc.

Also, when you suspect firewall blocking: most firewalls let you enable logging for specific firewall rules, so you should be able to see whether anything gets dropped by the FW.

I don't disagree about the Docker setup, but as I mentioned before, the Ansible build I referenced in my original post is how it was built/installed, and that happened to use Docker. I don't know enough about containers yet to know how to run Unbound in one.

I did turn on all of my trace logging and flow debugging on my Juniper and could see the outbound packets heading off to the internet on port 53. I also see the firewall accept packets in Splunk, which is where I send my Juniper syslogs. So I'm pretty confident that it's something on the Pi itself.

So I turned off unbound as an upstream, checked the two Google IPv4 boxes, and it works. Further troubleshooting shows a query coming from my desktop to Pi-hole looking for bolt.dropbox.com. That request then goes out to (this is with the two Google IPv4 checkboxes off and unbound just turned on), and shortly after I see the packet come back in.

pi@inet-pi:~$ sudo tcpdump -nn port 5335 or port 53
07:14:48.699512 IP > 59636+ A? bolt.dropbox.com. (34)
07:14:49.703916 IP > 59636+ A? bolt.dropbox.com. (34)
07:14:51.707041 IP > 59636+ A? bolt.dropbox.com. (34)
07:14:55.711211 IP > 59636+ A? bolt.dropbox.com. (34)
07:15:02.643189 IP > 61962+ PTR? (38)
07:15:02.659272 IP > 61962 1/0/0 PTR dns.google. (62)
07:15:03.714237 IP > 59636+ A? bolt.dropbox.com. (34)
07:15:16.671592 IP > 59765+ PTR? (38)
07:15:16.688305 IP > 59765 1/0/0 PTR dns.google. (62)

I noticed that I never see the port 5335 packet, even though it shows up in the Pi-hole query log.

I am wondering if there should be an entry in the Pi-hole's iptables to allow the port 5335 traffic? Currently there are none, just entries for port 53:

pi@inet-pi:~$ sudo iptables --list --line-number --numeric
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination

Chain FORWARD (policy DROP)
num  target     prot opt source               destination
1    DOCKER-USER  all  --  
2    DOCKER-ISOLATION-STAGE-1  all  --  
3    ACCEPT     all  --              ctstate RELATED,ESTABLISHED
4    DOCKER     all  --  
5    ACCEPT     all  --  
6    ACCEPT     all  --  
7    ACCEPT     all  --              ctstate RELATED,ESTABLISHED
8    DOCKER     all  --  
9    ACCEPT     all  --  
10   ACCEPT     all  --  
11   ACCEPT     all  --              ctstate RELATED,ESTABLISHED
12   DOCKER     all  --  
13   ACCEPT     all  --  
14   ACCEPT     all  --  
15   ACCEPT     all  --              ctstate RELATED,ESTABLISHED
16   DOCKER     all  --  
17   ACCEPT     all  --  
18   ACCEPT     all  --  

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination

Chain DOCKER (4 references)
num  target     prot opt source               destination
1    ACCEPT     tcp  --             tcp dpt:9100
2    ACCEPT     tcp  --             tcp dpt:443
3    ACCEPT     tcp  --             tcp dpt:9115
4    ACCEPT     tcp  --             tcp dpt:80
5    ACCEPT     udp  --             udp dpt:67
6    ACCEPT     tcp  --             tcp dpt:9090
7    ACCEPT     tcp  --             tcp dpt:53
8    ACCEPT     tcp  --             tcp dpt:9798
9    ACCEPT     udp  --             udp dpt:53
10   ACCEPT     tcp  --             tcp dpt:3000

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
num  target     prot opt source               destination
1    DOCKER-ISOLATION-STAGE-2  all  --  
2    DOCKER-ISOLATION-STAGE-2  all  --  
3    DOCKER-ISOLATION-STAGE-2  all  --  
4    DOCKER-ISOLATION-STAGE-2  all  --  
5    RETURN     all  --  

Chain DOCKER-ISOLATION-STAGE-2 (4 references)
num  target     prot opt source               destination
1    DROP       all  --  
2    DROP       all  --  
3    DROP       all  --  
4    DROP       all  --  
5    RETURN     all  --  

Chain DOCKER-USER (1 references)
num  target     prot opt source               destination
1    RETURN     all  --  
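One way to reason about the 5335 question: traffic from the host to its own loopback never hits eth0, and whether any INPUT rule is needed depends on where Unbound actually listens. That can be checked with something like the sketch below (assumes iproute2's ss is installed, as on stock Raspberry Pi OS):

```shell
# Show Unbound's listening sockets; per the Pi-hole guide this should report
# 127.0.0.1:5335. A container on the default bridge network cannot reach that
# address, because 127.0.0.1 inside the container is the container's own loopback.
sudo ss -ulpn | grep -i unbound
```

If it only listens on 127.0.0.1, the containerized Pi-hole's "localhost#5335" upstream would never leave the container, which would match the tcpdump never showing 5335 traffic.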

The other odd thing is that if I manually look up something via dig pointed at unbound, it works!

pi@inet-pi:~$ dig @ -p5335 www.google.com

; <<>> DiG 9.16.27-Debian <<>> @ -p5335 www.google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50429
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 1232
;www.google.com.			IN	A

www.google.com.		300	IN	A

;; Query time: 39 msec
;; WHEN: Sat Sep 10 07:48:20 EDT 2022
;; MSG SIZE  rcvd: 59

So I'm still confused, but convinced it's something on the Pi and not in my network.

FYI, I noticed later that Pi-hole also has some Docker networking bits documented:

To keep things KISS at first, my advice is to run both Pi-hole and Unbound directly on the Pi, instead of one in a container and the other bare metal.
Both are well documented below:

Adding Docker to the mix only complicates matters if you're not at least a bit familiar with its networking modes and best practices.
Plus, it makes no sense to complicate matters by adding Docker unless you intend to run more Docker containers on it than just the Pi-hole one.

I believe you have to supply the -i any argument to sniff on all interfaces, including the loopback interface named lo, e.g.:

sudo tcpdump -ni any port 53 or port 5335

Also, it might be handy to tail the Pi-hole logs live while running those dig queries.
It will show you whether the query was received, to whom it was forwarded, and whether a reply came back from upstream:

pihole -t

I determined the issue was definitely with the Docker network config/setup for the container, and since I know less about that than I do about images/containers themselves, I went and found an image that has everything in it - the pihole-unbound image - and it's working great. I recommend it to anyone looking to run Pi-hole with unbound.
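For anyone landing here later, a combined image is typically started along these lines (the image name below is a placeholder, not a specific project - check the README of whichever pihole-unbound image you choose; TZ and WEBPASSWORD are standard Pi-hole container variables):

```shell
# Hypothetical invocation of a combined Pi-hole + Unbound image.
# <user>/pihole-unbound is a placeholder image name - substitute the real one.
docker run -d --name pihole-unbound \
  -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
  -e TZ=America/New_York \
  -e WEBPASSWORD=changeme \
  <user>/pihole-unbound:latest
```

Because Unbound runs inside the same container, Pi-hole's upstream of 127.0.0.1#5335 resolves to the right loopback, which sidesteps the bridge-networking problem above.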

