PTR lookups for Docker networks

Problem with Beta 5.0:
With the most recent version of the v5.0 beta running in Docker, it appears that PTR lookups for the Docker-based networks are failing.

Included are screenshots of before and after with v4 and v5.

It appears that, for whatever reason, the upstream DNS provider in this case does not display the name from its PTR record, and most, if not all, of the actual containers in the 172.18.0.x network also do not return their names from their PTR records.

Manually running an nslookup or dig within the pihole container returns the appropriate results:

```
root@pihole:/# nslookup

Non-authoritative answer:       name = dotproxy.nerv_net.

Authoritative answers can be found from:
root@pihole:/# nslookup

Non-authoritative answer:        name = logstash-elastiflow.nerv_net.

Authoritative answers can be found from:
root@pihole:/# dig dotproxy.nerv_net

; <<>> DiG 9.10.3-P4-Debian <<>> dotproxy.nerv_net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60319
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;dotproxy.nerv_net.             IN      A

dotproxy.nerv_net.      600     IN      A

;; Query time: 0 msec
;; WHEN: Wed Feb 26 15:00:14 AEDT 2020
;; MSG SIZE  rcvd: 68
root@pihole:/# dig logstash-elastiflow.nerv_net

; <<>> DiG 9.10.3-P4-Debian <<>> logstash-elastiflow.nerv_net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10585
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;logstash-elastiflow.nerv_net.  IN      A

logstash-elastiflow.nerv_net. 600 IN    A

;; Query time: 0 msec
;; WHEN: Wed Feb 26 15:01:56 AEDT 2020
;; MSG SIZE  rcvd: 90
```

I have attempted to flush my network table, suspecting that something in there was causing issues, but to no avail.
Rolling back to v4 results in the correct names being displayed again.

Debug Token:
v4: 14t8kwpu29
v5: s48mn3t5q4

It’s also worth calling out that this issue did not occur on the very first v5.0 beta container; it is recent, and seems to have begun a day or so before I actually posted this.

Oddly, as I mentioned, some containers from the subnet do resolve to their names. Is there a way to force Pi-hole to refresh and look up addresses in the “Queries answered by” and “Top Clients (total)” views?

Previously, Settings -> Restart DNS resolver would force a refresh for all of these, but this does not seem consistent for the Docker subnet.

Did you specify this server as one of your upstreams for Pi-hole?

If you mean that server: yes, it’s my only upstream DNS set up in Pi-hole (outside of some custom dnsmasq configuration), which also resolves all internet-bound DNS via Cloudflare.

If you mean the default DNS server that’s generated into /etc/resolv.conf for all containers within a Docker network: it is the Docker network’s DNS resolver.

Yes, I mean this. This is the issue here: only the Docker-internal resolver knows the right answer to your question. FTL (running inside the container) cannot know the answer if it doesn’t know it can ask that resolver. This has changed from v4.x to v5.0, as we have changed a few things in the background, like not overwriting resolv.conf on the Pi-hole itself (we did this before to force the local machine to use Pi-hole for its own name resolution).
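For context, Docker’s embedded DNS server listens on 127.0.0.11 inside each container attached to a user-defined network, so a container’s generated /etc/resolv.conf typically looks roughly like the illustrative fragment below (the options line may vary with your host configuration):

```
# /etc/resolv.conf inside a container on a user-defined Docker network
# (generated by Docker; the embedded resolver answers container-name queries)
nameserver 127.0.0.11
options ndots:0
```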

Does adding the Docker-internal resolver as a custom DNS server for Pi-hole resolve the issue for you?
Please remember that name resolution is not instantaneous after a restart of the DNS resolver and it may take up to a few minutes to pick up all the names.

Sorry, what has changed between 4.x and 5.x that makes this not work? Even though the Pi-hole container itself, where FTL is running, can resolve these, are you saying that FTL no longer utilises what is specified in /etc/resolv.conf?

My observations suggested that the /etc/resolv.conf files for 4.x and 5.x are identical, and that 4.x did not do any overwriting of said file; my apologies if I’ve misunderstood what you’ve said.

Including the Docker resolver as an upstream server will display the correct information, but this was not necessary before. If this is required I’m happy to do it, but I want to try and understand why this doesn’t work anymore.

Yes. We had to overwrite resolv.conf to get name resolution for names FTL knows (maybe through Custom DNS, or by being the DHCP server, or …).

We overwrote this file, but the Docker version may be special and didn’t do this.

Sure, hopefully my explanation above is okay for you; if not, you’re invited to ask more questions. We’ll need to discuss how to go forward with this change for the Docker image with @diginc.

This changes the landscape for what’s recommended in Docker too, for example the recommendation for what to set the Docker DNS parameters to.

It doesn’t quite explain the odd intermittent behaviour whereby some of those IP addresses do return a name: if FTL was indeed always pointing to itself instead of what is in /etc/resolv.conf, these realistically should never have returned a PTR.

Last question: is there any chance we will have the opportunity to change this behaviour back via a configurable parameter or something?

The recommended configuration for Docker technically queries Pi-hole via the Docker DNS resolver anyway, so sticking to resolv.conf would be preferable.
I’ll be able to configure the Docker resolver as an upstream workaround in the interim. Thanks for the information and keep up the great work!
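As a sketch of that interim workaround (assuming the docker-pi-hole image’s DNS1/DNS2 environment variables of that era; the service name and the public upstream are illustrative, not this user’s actual setup), pointing Pi-hole at Docker’s embedded resolver alongside a regular upstream might look like:

```yaml
# docker-compose sketch (hypothetical values)
services:
  pihole:
    image: pihole/pihole:latest
    environment:
      DNS1: "127.0.0.11"   # Docker's embedded resolver, so container PTR names resolve
      DNS2: "1.1.1.1"      # example public upstream; replace with your own
```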

Yes, that’s possible and not much work, I will look into it.

@nightah Please try

```
pihole checkout ftl tweak/FORCE_LOCAL_RESOLVER
```

This does two things:

  1. It adds a new config flag FORCE_LOCAL_RESOLVER, defaulting to true. You can set it to false in /etc/pihole/pihole-FTL.conf to prevent FTL from forcing itself as the DNS resolver for its internal name resolution. This will make FTL use only the servers specified in /etc/resolv.conf. edit: Removed this setting.
  2. I changed the behavior so that FTL now inserts itself as the last DNS server in /etc/resolv.conf instead of replacing the first one. As libc typically only tracks the first three (regardless of how many nameservers you put in there!), it should lead to the same result; however, it will, at least, preserve the first two existing entries.

Point 2 should make point 1 superfluous, but it still seems a sensible option to have. Furthermore, it may turn out that point 2 prevents FTL from working as intended and needs to be rolled back, in which case you’d want the new FORCE_LOCAL_RESOLVER setting.
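The two resolv.conf strategies in point 2 can be sketched with a small model (a simplified illustration, not FTL’s actual C code; `LOOPBACK` and the addresses are placeholders):

```python
# Simplified model of the two resolv.conf rewrite strategies discussed above.
# Not FTL's actual implementation -- just an illustration of why appending
# preserves existing entries while replacing the first one loses it.

MAXNS = 3  # glibc only consults the first three "nameserver" entries
LOOPBACK = "127.0.0.1"  # stand-in for the address FTL inserts for itself

def replace_first(nameservers):
    """Old behaviour: FTL overwrites the first nameserver entry."""
    return [LOOPBACK] + nameservers[1:]

def append_last(nameservers):
    """New behaviour: FTL appends itself after the existing entries."""
    return nameservers + [LOOPBACK]

def consulted(nameservers):
    """glibc ignores everything past the first MAXNS entries."""
    return nameservers[:MAXNS]

# Typical single-entry Docker resolv.conf (the embedded resolver):
docker_conf = ["127.0.0.11"]
print(consulted(replace_first(docker_conf)))  # embedded resolver is lost
print(consulted(append_last(docker_conf)))    # both entries are kept
```

Note the caveat this model also exposes: if resolv.conf already lists three nameservers, the appended loopback entry falls outside the first three and is never consulted, which is why the rollback scenario in the paragraph above is a real possibility.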


I’ve just tested this and it reverts to the pre-5.0 behaviour.

Thanks for this @DL6ER.

ToDo for myself: Test whether clients defined by Custom DNS still resolve to their names with #710.

So I can definitely confirm that custom DNS works with these changes because I use that myself.

Having said that, I only have a single Custom DNS entry set, so I haven’t checked whether this works with multiple.

Thanks for testing. However, you said above

so you wouldn’t even notice this change in a Docker environment if I’m not mistaken.

Sorry, it might have been unclear.

So in a Docker environment, resolv.conf includes only the Docker network’s internal DNS resolver, but any DNS servers that have been passed into the container via the --dns flag are iterated through if the hostname cannot be resolved through the internal resolver.

So all internet-bound traffic, for example, will fall back to the first defined DNS server if you follow the recommended setup guide, and my understanding is that if you have Custom DNS set up, Pi-hole will forward queries to said server.

At least in my case all of my internet based resolutions are definitely going to what I have defined in my custom DNS.
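The behaviour described above can be summarised in a short sketch (hypothetical network name and forwarder address; 127.0.0.11 is Docker’s documented embedded-resolver address):

```
# Hypothetical example of how --dns interacts with a user-defined network:
docker network create nerv_net
docker run --dns 192.168.1.2 --network nerv_net some-image

# Inside the container, /etc/resolv.conf still only lists 127.0.0.11.
# Container names are answered by the embedded resolver directly;
# everything else is forwarded by it to the server(s) given with --dns.
```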

Thanks, I was able to confirm the proper function of my changes in the proposed PR in various scenarios. Fingers crossed that the other core developers will find the time to review it soon so the changes can be merged into the regular release/v5.0 branch for final testing in a greater audience.

Short update: This has been merged into the regular release/v5.0 code. Time to go back on track @nightah. And thanks for using the Pi-hole!

Thanks for all your assistance on this @DL6ER, your and the team’s efforts are much appreciated!