Pi-hole setup with unbound - queries time out

I have an rPi 4 running Pi-hole in a Docker container - I used GitHub - geerlingguy/internet-pi: Raspberry Pi config for all things Internet to set it up. That part seemed to work fine, and I just pointed the upstream to my existing Zentyal host (which has firewall access to get out of the house). That works great: I can browse, dig, nslookup, etc.

I read the guide (unbound - Pi-hole documentation) to set up unbound with the intention of retiring the Zentyal host, but when I set it up as directed, everything times out. During the setup I did succeed in running the dig tests and they worked fine. As soon as I changed the upstream DNS via the Pi-hole web interface, everything started timing out.

I managed to troubleshoot a bit and found that I had forgotten I had only allowed the Zentyal host out (and replies back to its internally NATted IP address). I adjusted my Juniper firewall to allow the Pi-hole as well and no longer see firewall deny messages in Splunk. So I'm fairly confident the requests to whatever upstream is being used are going out and the return traffic should be coming back as well.

When I look at the query log via the web UI, I see the entry "forwarded to localhost#5335", so I know that part is working, but I never see an entry saying "OK (cached)", which I would expect if it got a good answer.

I searched here and found one other recent thread about unbound on Pi-hole, but it didn't help with my issue. Time should be good, as I have my own NTP source (another rPi with a GPS connection) and it shows as synced.

If I set the upstream DNS to include my Zentyal host (.172), then everything works great, and I can see in the log that that host is answering the queries.

Any help pointing me in another direction to look would be appreciated.

I'm guessing I did a poor job of explaining the problem, since no one has replied. I have to assume it's something simple, but I obviously can't see it - yet.

First, check if you're affected by a recent change that was made in the openresolv package that comes with Pi-OS Bullseye:

pi@ph5b:~ $ lsb_release -d
Description:    Raspbian GNU/Linux 11 (bullseye)
pi@ph5b:~ $ apt policy openresolv
openresolv:
  Installed: 3.12.0-1

Does the file below exist, and if so, what is its content?

cat /etc/unbound/unbound.conf.d/resolvconf_resolvers.conf

If it exists, delete it with the command below, because it configures Unbound to function like a regular forwarding DNS server instead of a recursive DNS server, and it could also create a DNS loop, which will cause things to break:

sudo rm /etc/unbound/unbound.conf.d/resolvconf_resolvers.conf
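
For reference, when that file exists it typically contains a forward-zone pointing at whatever resolvers DHCP handed out - something along the lines of the sketch below (addresses are just illustrative). A forward-zone for "." makes Unbound forward everything instead of recursing on its own:

# example content of /etc/unbound/unbound.conf.d/resolvconf_resolvers.conf
# (generated by resolvconf; the address shown is only an illustration)
forward-zone:
        name: "."
        forward-addr: 192.168.1.1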

And run the command below to make sure the file does not get re-created on reboot.
It places a hash/comment sign (#) in front of the config line that creates that file:

sudo sed -i 's/^unbound_conf=/#unbound_conf=/' /etc/resolvconf.conf
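
You can verify that the line is now commented out with, for example:

grep unbound_conf /etc/resolvconf.conf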

Run the command below to apply the change, or reboot:

sudo service unbound restart

And test again.
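
For example, the same check from the guide (assuming Unbound is listening on 127.0.0.1 port 5335 as configured there):

dig pi-hole.net @127.0.0.1 -p 5335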

Thanks for the reply. I do have the Bullseye and openresolv versions that you indicated, so is that an error/issue, and if so, how do I correct it?

I had already removed and disabled unbound-resolvconf, but thanks for pointing it out.

I've included the resolv.conf as well.

pi@inet-pi:~$ lsb_release -d
Description:	Debian GNU/Linux 11 (bullseye)
pi@inet-pi:~$ sudo apt policy openresolv
openresolv:
  Installed: 3.12.0-1
  Candidate: 3.12.0-1
  Version table:
 *** 3.12.0-1 500
        500 http://deb.debian.org/debian bullseye/main arm64 Packages
        500 http://deb.debian.org/debian bullseye/main armhf Packages
        100 /var/lib/dpkg/status
pi@inet-pi:~$ ls -l /etc/unbound/unbound.conf.d/
total 8
-rw-r--r-- 1 root root 3048 Sep  7 16:12 pi-hole.conf
-rw-r--r-- 1 root root  190 Feb  9  2021 root-auto-trust-anchor-file.conf
pi@inet-pi:~$ cat /etc/resolvconf.conf
# Configuration for resolvconf(8)
# See resolvconf.conf(5) for details

resolv_conf=/etc/resolv.conf
# If you run a local name server, you should uncomment the below line and
# configure your subscribers configuration files below.
#name_servers=127.0.0.1


# Mirror the Debian package defaults for the below resolvers
# so that resolvconf integrates seemlessly.
dnsmasq_resolv=/var/run/dnsmasq/resolv.conf
pdnsd_conf=/etc/pdnsd.conf
#unbound_conf=/etc/unbound/unbound.conf.d/resolvconf_resolvers.conf

pi@inet-pi:~$ cat /etc/resolv.conf
# Generated by resolvconf
search home-lan.net
nameserver 10.20.15.176
nameserver fe80::cac7:50ff:fef3:2525%wlan0

Also, here's the dhcpcd.conf:

pi@inet-pi:~$ sudo cat /etc/dhcpcd.conf
# A sample configuration for dhcpcd.
# See dhcpcd.conf(5) for details.

# Allow users of this group to interact with dhcpcd via the control socket.
#controlgroup wheel

# Inform the DHCP server of our hostname for DDNS.
hostname

# Use the hardware address of the interface for the Client ID.
clientid
# or
# Use the same DUID + IAID as set in DHCPv6 for DHCPv4 ClientID as per RFC4361.
# Some non-RFC compliant DHCP servers do not reply with this set.
# In this case, comment out duid and enable clientid above.
#duid

# Persist interface configuration when dhcpcd exits.
persistent

# Rapid commit support.
# Safe to enable by default because it requires the equivalent option set
# on the server to actually work.
option rapid_commit

# A list of options to request from the DHCP server.
option domain_name_servers, domain_name, domain_search, host_name
option classless_static_routes
# Respect the network MTU. This is applied to DHCP routes.
option interface_mtu

# Most distributions have NTP support.
#option ntp_servers

# A ServerID is required by RFC2131.
require dhcp_server_identifier

# Generate SLAAC address using the Hardware Address of the interface
#slaac hwaddr
# OR generate Stable Private IPv6 Addresses based from the DUID
slaac private

# Example static IP configuration:
interface eth0
static ip_address=10.20.15.176/24
static routers=10.20.15.254
static domain_name_servers=10.20.15.176
static search=home-lan.net
static domain_search=home-lan.net


# It is possible to fall back to a static IP if DHCP fails:
# define static profile
#profile static_eth0
#static ip_address=192.168.1.23/24
#static routers=192.168.1.1
#static domain_name_servers=192.168.1.1

# fallback to static profile on eth0
#interface eth0
#fallback static_eth0

No, if you fixed /etc/resolvconf.conf, that should take care of that rogue Unbound configuration file, which is not desired.
The rest looks good.
Sounds like some Docker-networking-specific issue, which I know little about.

From what I understand, you run Pi-hole in a Docker container but you run Unbound bare metal on the same host that's running Docker - am I correct?
If so, why not run Unbound in a container as well?
The one below comes to mind, which OOTB does recursive lookups just like the Pi-hole guide:
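
Whichever image you pick, roughly speaking the two containers could share a user-defined Docker network along the lines of the compose sketch below. The Unbound image name is a placeholder and the subnet/addresses are only illustrative, so adjust to whatever you actually use:

# Rough sketch only: Pi-hole + Unbound as two containers on one Docker network.
# "some-unbound-image" is a placeholder, not a specific image recommendation.
version: "3"

networks:
  dns_net:
    ipam:
      config:
        - subnet: 172.28.0.0/24

services:
  unbound:
    image: some-unbound-image          # placeholder Unbound container image
    container_name: unbound
    networks:
      dns_net:
        ipv4_address: 172.28.0.3

  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    environment:
      # point Pi-hole's upstream at the Unbound container's fixed address
      PIHOLE_DNS_: "172.28.0.3#53"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"
    networks:
      dns_net:
        ipv4_address: 172.28.0.2
    depends_on:
      - unbound

With something like that, Pi-hole talks to Unbound over the Docker network instead of trying to reach a service on the host's loopback.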

Thanks for clarifying.

Yes, Pi-hole is running in a Docker container. The Ansible build I referenced above is how it was built/installed, and that happened to use Docker. I don't know enough about containers yet to know how to run Unbound in one. The link you provided doesn't look like it would work directly with Pi-hole, and I don't know that I would be able to troubleshoot that configuration any better than my current one.

I'm suspecting it may still be a firewall issue - maybe the outbound request is getting out but not the reply. I don't know where unbound is going to try to get an answer, nor do I know what IP it would use - the primary eth0 interface? I may have to temporarily disable all firewall rules to verify, then turn on debugging, which is VERY chatty.
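
(I'm guessing something like the command below would at least show which source address the Pi picks for outbound traffic; 8.8.8.8 is just an arbitrary external address here.)

ip route get 8.8.8.8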

Thanks for the look!

As previously mentioned, I suspect it to be a Docker networking setup issue.
Try searching here, or DuckDuckGo/Google, for the phrase "macvlan" in combination with Docker and Pi-hole.

But why would you run one in a Docker container and the other bare metal?
Why not install both bare metal ... or as previously suggested, run both in Docker.
When adding Docker to the mix, be prepared to do some more investigating into best practices etc.

Also, when you suspect a firewall is blocking, most firewalls allow you to enable logging for specific firewall rules, where you should be able to see if anything gets dropped by the FW.

I don't disagree about the Docker setup, but as I mentioned before, the Ansible build I referenced in my original post is how it was built/installed, and that happened to use Docker. I don't know enough about containers yet to know how to run Unbound in one.

I did turn on all of my trace logging and flow debugging on my Juniper and could see the outbound packets heading off to the internet on port 53. I also see the firewall accept packets in Splunk, which is where I send my Juniper syslogs. So I'm pretty confident that it's something on the Pi itself.

So I turned off unbound as an upstream, checked the two Google IPv4 boxes, and it works. Further troubleshooting shows a query coming from my desktop to Pi-hole looking for bolt.dropbox.com. That request then goes out to 8.8.8.8:53 (this is with the two Google IPv4 checkboxes off and just 127.0.0.1#5335 turned on), and shortly after I see the packet come back in.

pi@inet-pi:~$ sudo tcpdump -nn port 5335 or port 53
07:14:48.699512 IP 10.20.15.132.49194 > 10.20.15.176.53: 59636+ A? bolt.dropbox.com. (34)
07:14:49.703916 IP 10.20.15.132.49194 > 10.20.15.176.53: 59636+ A? bolt.dropbox.com. (34)
07:14:51.707041 IP 10.20.15.132.49194 > 10.20.15.176.53: 59636+ A? bolt.dropbox.com. (34)
07:14:55.711211 IP 10.20.15.132.49194 > 10.20.15.176.53: 59636+ A? bolt.dropbox.com. (34)
07:15:02.643189 IP 10.20.15.176.38102 > 8.8.8.8.53: 61962+ PTR? 8.8.8.8.in-addr.arpa. (38)
07:15:02.659272 IP 8.8.8.8.53 > 10.20.15.176.38102: 61962 1/0/0 PTR dns.google. (62)
07:15:03.714237 IP 10.20.15.132.49194 > 10.20.15.176.53: 59636+ A? bolt.dropbox.com. (34)
07:15:16.671592 IP 10.20.15.176.59564 > 8.8.8.8.53: 59765+ PTR? 4.4.8.8.in-addr.arpa. (38)
07:15:16.688305 IP 8.8.8.8.53 > 10.20.15.176.59564: 59765 1/0/0 PTR dns.google. (62)

I noticed that I never see the port 5335 packets, even though the forward shows up in the Pi-hole query log.

I am wondering if there should be an entry in the Pi-hole's iptables to allow the port 5335 traffic? Currently there is none, just entries for port 53:

pi@inet-pi:~$ sudo iptables --list --line-number --numeric
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination

Chain FORWARD (policy DROP)
num  target     prot opt source               destination
1    DOCKER-USER  all  --  0.0.0.0/0            0.0.0.0/0
2    DOCKER-ISOLATION-STAGE-1  all  --  0.0.0.0/0            0.0.0.0/0
3    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
4    DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
5    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
6    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
7    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
8    DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
9    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
10   ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
11   ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
12   DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
13   ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
14   ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
15   ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
16   DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
17   ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
18   ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination

Chain DOCKER (4 references)
num  target     prot opt source               destination
1    ACCEPT     tcp  --  0.0.0.0/0            172.18.0.2           tcp dpt:9100
2    ACCEPT     tcp  --  0.0.0.0/0            172.21.0.2           tcp dpt:443
3    ACCEPT     tcp  --  0.0.0.0/0            172.18.0.3           tcp dpt:9115
4    ACCEPT     tcp  --  0.0.0.0/0            172.21.0.2           tcp dpt:80
5    ACCEPT     udp  --  0.0.0.0/0            172.21.0.2           udp dpt:67
6    ACCEPT     tcp  --  0.0.0.0/0            172.18.0.5           tcp dpt:9090
7    ACCEPT     tcp  --  0.0.0.0/0            172.21.0.2           tcp dpt:53
8    ACCEPT     tcp  --  0.0.0.0/0            172.18.0.6           tcp dpt:9798
9    ACCEPT     udp  --  0.0.0.0/0            172.21.0.2           udp dpt:53
10   ACCEPT     tcp  --  0.0.0.0/0            172.18.0.4           tcp dpt:3000

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
num  target     prot opt source               destination
1    DOCKER-ISOLATION-STAGE-2  all  --  0.0.0.0/0            0.0.0.0/0
2    DOCKER-ISOLATION-STAGE-2  all  --  0.0.0.0/0            0.0.0.0/0
3    DOCKER-ISOLATION-STAGE-2  all  --  0.0.0.0/0            0.0.0.0/0
4    DOCKER-ISOLATION-STAGE-2  all  --  0.0.0.0/0            0.0.0.0/0
5    RETURN     all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (4 references)
num  target     prot opt source               destination
1    DROP       all  --  0.0.0.0/0            0.0.0.0/0
2    DROP       all  --  0.0.0.0/0            0.0.0.0/0
3    DROP       all  --  0.0.0.0/0            0.0.0.0/0
4    DROP       all  --  0.0.0.0/0            0.0.0.0/0
5    RETURN     all  --  0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (1 references)
num  target     prot opt source               destination
1    RETURN     all  --  0.0.0.0/0            0.0.0.0/0

The other odd thing is that if I manually look up something via dig pointing at unbound, it works!

pi@inet-pi:~$ dig @127.0.0.1 -p5335 www.google.com

; <<>> DiG 9.16.27-Debian <<>> @127.0.0.1 -p5335 www.google.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50429
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;www.google.com.			IN	A

;; ANSWER SECTION:
www.google.com.		300	IN	A	142.250.73.228

;; Query time: 39 msec
;; SERVER: 127.0.0.1#5335(127.0.0.1)
;; WHEN: Sat Sep 10 07:48:20 EDT 2022
;; MSG SIZE  rcvd: 59

So I'm still confused, but convinced it's something on the Pi and not in my network.

FYI, I noticed later that Pi-hole also has some Docker networking bits documented:

To keep things KISS at first, my advice is to run both Pi-hole and Unbound directly on the Pi instead of one in a container and the other bare metal.
Both are well documented below:

Adding Docker to the mix only complicates matters if you're not at least a bit familiar with its networking modes and best practices.
Plus, it makes little sense to complicate matters with Docker if you don't intend to run more containers on it beyond just the Pi-hole one.

I believe you have to supply the -i any argument to sniff on all interfaces, including the loopback interface named lo, e.g.:

sudo tcpdump -ni any port 53 or port 5335

Also, it might be handy to tail the Pi-hole logs live while running those dig queries.
They will show you whether the query was received, to whom it was forwarded, and whether it got a reply from upstream:

pihole -t
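
One more quick check, assuming Unbound on the host is bound to 127.0.0.1 port 5335 as in the guide - see what address it is actually listening on:

sudo ss -tulpn | grep 5335

If it only shows 127.0.0.1, keep in mind that "localhost" inside the Pi-hole container is the container's own loopback, not the host's, which would fit a Docker networking issue.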

I determined the issue was definitely with the Docker network config/setup for the container, and since I know less about that than I do about images/containers themselves, I went and found an image that has everything in it - a pihole-unbound image - and it's working great. I recommend it to anyone looking to run Pi-hole with unbound.

Thanks!

