Cannot use DNS name for Custom Upstream DNS Server

Versions

  • Pi-hole: 5.8.1
  • AdminLTE: 5.10.1
  • FTL: 5.13

Platform

  • OS and version: Debian 10 Buster (official pihole container)
  • Platform: Raspberry Pi / Docker Swarm

Expected behavior

I expected the Custom Upstream DNS Servers field to accept a local DNS name.

Actual behavior / bug

Addresses which are not IP addresses are reported as invalid and rejected.

Steps to reproduce

Steps to reproduce the behavior:

  1. Go to 'Settings > DNS'
  2. Tick 'Custom IPv4 1'
  3. Enter {dns_address}#{port}
  4. Click save

Screenshots

Additional context

I understand why this is the case (the field is even marked as IPv4), but I don't think it needs to be, and being able to use an internal DNS name would be really helpful.
In my case, I am running pihole in a docker swarm with another container running cloudflared. Both containers are attached to a small subnet that is internal to docker only, over which they communicate, so pihole is the only machine able to make requests to the cloudflared container. Unfortunately, in Docker Swarm and Compose v3 you cannot assign a static IP address to your containers, so every time I redeploy the service or the cloudflared container is migrated to another host, it gets a new IP and pihole doesn't know this.

Containers within docker are able to resolve the IPs of other containers by their container name, in this case "cloudflared". Given the limitations in Docker at the moment, I tried to use "cloudflared#5053" as the value for my custom upstream DNS server, only to find that it cannot be used.
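
For illustration, this is easy to confirm from a shell inside the pihole container; a minimal check (assuming getent is available in the image, which it should be on a Debian base) looks like:

# run inside the pihole container: prints the current IP of the cloudflared container
getent hosts cloudflared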

I think this would be a really useful and powerful feature for use in docker deployments.

Am I understanding correctly that you want to enter a name instead of an IP address?
Think about it for a moment: if the cache is still empty, how would Pi-hole know the DNS server's IP address to connect to if it only knows a name?


Yes, that is what I want to do. I know it initially sounds strange.

The situation is that I'm deploying Pihole in a Docker Swarm where it can spin up on any node and be moved between nodes, as can the container running cloudflared. Unfortunately I cannot set a static IP for either container in Docker Swarm.

When the Pihole container is deployed, environment variables are used to set the IP of the upstream server (i.e. the cloudflared container); however, this IP is unknown at that time and may change.

Within Docker, containers can resolve each other's IPs using the container name. In this case the Pihole container can always reach the cloudflared container using "cloudflared", regardless of the IP address.

The point that deHakkelaar is trying to make is this:
You want to use a domain name to configure Pi-hole's upstream DNS resolver.
Yet there is no upstream DNS resolver that Pi-hole can ask to resolve that domain name.

And if you provided Docker's internal DNS server IP to be able to resolve Docker-internal names, then Pi-hole would obviously bypass your dnscrypt resolver.


I understand what you're both saying, but in this scenario that isn't how docker works.

My compose file is:

version: "3.5"

services:
  cloudflared:
    image: crazymax/cloudflared:latest
    command: proxy-dns
    environment:
      TUNNEL_DNS_UPSTREAM: "https://1.1.1.1/dns-query,https://1.0.0.1/dns-query,https://9.9.9.9/dns-query,https://149.112.112.9/dns-query"
      TUNNEL_DNS_PORT: 5053
      TUNNEL_DNS_ADDRESS: "0.0.0.0"
    networks:
      internal:
        ipv4_address: 172.30.9.2
  
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "80:80/tcp"
      - "67:67/tcp"
    environment:
      TZ: 'Europe/London'
      WEBPASSWORD: ${WEBPASSWORD}
      DNS1: '172.30.9.2#5053'
      DNS2: 'no'
      DNSMASQ_LISTENING: 'all'
    volumes:
      - 'pihole:/etc/pihole'
      - 'dnsmasq:/etc/dnsmasq.d'
    networks:
      internal:
        ipv4_address: 172.30.9.3
    deploy:
      mode: replicated
      replicas: 1
    depends_on:
      - cloudflared

networks:
  internal:
    ipam:
      config:
        - subnet: 172.30.9.0/29
 
volumes:
  pihole:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/data/pihole'
  dnsmasq:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/data/dnsmasq'

(Ignore the static IP settings, they don't work)

When these containers are started their resolv.conf files both look like this:

nameserver 127.0.0.11
options ndots:0

If you create two containers on a custom network then they use the Docker embedded DNS server and can resolve each other's IPs from their container names.
(As opposed to using the default bridge network, which inherits the docker host's DNS settings.)

So I already know that the name resolution works in this scenario, and it's the simplest way to spin up a pihole/cloudflared stack on a docker swarm while keeping the cloudflared server unreachable from outside, without having to manually configure the upstream settings after deployment or after the cloudflared container is automatically moved to another node.

The only alternative I can think of is to forget the custom internal network, expose the cloudflared port on the host directly, and then configure every node to run keepalived to provide a single VIP for the Pihole configuration, plus iptables rules to block traffic from any device except the pihole device, whose IP I would have to figure out and use to update the iptables somehow. This is... messy. And it means I can't use the same port for testing new deployments.

The request to use static IPs in a docker stack has been open since 2016 so there's little hope of that coming to fruition. I have suggested being able to expand/resolve the hostname in the docker-compose file, so this is not my only avenue, but it is the one for which I am most hopeful.

Even if this was an experimental feature that could be toggled on but defaulted to off, that would be great. And afterwards I can give you a docker-compose file which spins up a truly highly available pihole. I also have an ansible playbook to deploy the hosts, including replicated storage.

The host system's DNS resolution is irrelevant for Pi-hole's operation, regardless of whether that is a VM, Docker or bare metal: Pi-hole only uses the upstream DNS servers that you explicitly configure.

EDIT: You could probably give the following approach a try:
I'm not sure whether this is possible, but if you were able to attach a Docker-specific search domain exclusively to Docker's internal network, you could try a custom dnsmasq configuration like

server=/docker.internal/172.30.9.x

where you substitute docker.internal with aforementioned Docker internal search domain and 172.30.9.x with Docker's respective internal IP address.

But to be able to test this, you may also have to adapt the web page validation.

EDIT: As dnsmasq only accepts IPs for specifying an upstream resolver, the above cannot work without changing dnsmasq accordingly.


Simply put, without an initial IP connection to a DNS server, you wouldn't be able to resolve any names to IPs other than those stored in the /etc/hosts file.
With emphasis on IP connection, because the Internet (the network part) doesn't connect with names but with IPs.
DNS only comes into play later.

Not being able to assign a static IP/reservation to the container sounds more like an issue you should take up with Docker Swarm support.

This really is a problem of Docker Swarm because of

This is neither possible in the DNS server dnsmasq we embed in Pi-hole nor in any other DNS server I'm aware of (unbound, bind, pdns or said dnsmasq).

But rest assured there is no problem Pi-hole does not try to solve :slight_smile: My proposal to circumvent the inability of assigning static addresses in swarm: add an extra step that runs after cloudflared has started but before pihole is started, doing something like

sed -i "/^server=/d" /etc/dnsmasq.d/01-pihole.conf
echo "server=$(dig +short cloudflared @127.0.0.11)" >> /etc/dnsmasq.d/01-pihole.conf

to replace the upstream server IP address dynamically. You can do this with another intermediate container or maybe there is some more fancy magic that allows you to trigger this on the host in between container starts. This is something you will be able to answer better because my experience with swarm is exactly zero.
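
A minimal sketch of such an intermediate step, assuming the /etc/dnsmasq.d volume is shared with the pihole container, dig is available, and cloudflared listens on port 5053 as in the compose file above:

#!/bin/sh
# sketch: wait until Docker's embedded DNS (127.0.0.11) can resolve the service name ...
until CF_IP="$(dig +short cloudflared @127.0.0.11 | head -n1)" && [ -n "$CF_IP" ]; do
  sleep 1
done
# ... then replace the upstream server line in Pi-hole's dnsmasq config
sed -i "/^server=/d" /etc/dnsmasq.d/01-pihole.conf
echo "server=${CF_IP}#5053" >> /etc/dnsmasq.d/01-pihole.conf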


@deHakkelaar So it turns out I didn't understand what you were saying. Despite the fact that the machine can resolve the IP, the Pihole application will only use the upstream DNS server, and therefore this is impossible.

You are all correct, this is a Docker issue. It was only when I realised Docker were not going to fix that issue that I started looking at this route.

@DL6ER thank you for your suggestion. This is definitely easier than using keepalived etc. Does Pihole periodically read the data in 01-pihole.conf? Or is it only read at launch and on saves?
If it is read regularly then I can periodically check the IP of cloudflared and update the config, and this can be an extremely lightweight container that's always running. If it isn't, then I have some ideas I haven't fully fleshed out.

This seemed like such a 'simple' and obvious thing to do originally, but it has turned into a nightmare of limitations and workarounds.

Thanks again, all of you, for your time, answers and patience.

Only on startup, which is why I suggested an intermediate step that runs before the pihole container starts. Maybe you can find another cloudflared+dnsmasq swarm configuration and check how they do it.
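
If you do want to go down the always-running route you mentioned, a rough sketch would be a small loop inside the pihole container that re-resolves the name and reloads the configuration whenever the IP changes (pihole restartdns re-reads the dnsmasq config files; names, port and interval are just examples):

#!/bin/sh
# sketch: watch the cloudflared IP and reload Pi-hole's DNS configuration when it changes
LAST=""
while true; do
  IP="$(dig +short cloudflared @127.0.0.11 | head -n1)"
  if [ -n "$IP" ] && [ "$IP" != "$LAST" ]; then
    sed -i "/^server=/d" /etc/dnsmasq.d/01-pihole.conf
    echo "server=${IP}#5053" >> /etc/dnsmasq.d/01-pihole.conf
    pihole restartdns
    LAST="$IP"
  fi
  sleep 30
done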

But guys, while it may be very rare, it is a totally legitimate and reasonable request, or rather a limitation of Pi-hole.

Pi-hole's upstream DNS has nothing to do with the Pi-hole host system's DNS. It is perfectly possible that the system, and hence Pi-hole, can resolve cloudflared via an /etc/hosts or /etc/resolv.conf IPv4 entry, either directly or via a local DNS server which knows this hostname. Pi-hole could then send its clients' queries to the now resolved IP of the upstream DNS.

The question is only how Pi-hole internally queries the upstream DNS: whether or not it uses a method that involves (host) system DNS resolution when required.

However, such use cases are obviously very rare, and indeed a (plain) DNS server with a dynamic IP sounds strange in general.

`dnsmasq` could well use resolvers as provided by the host system, but that wouldn't address the issue.

Pi-hole's configuration suppresses this via dnsmasq's no-resolv option, in order to enforce usage of the explicitly intended upstream servers only.
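
For reference, the relevant lines in the generated /etc/dnsmasq.d/01-pihole.conf look roughly like this (IP taken from the compose file above):

no-resolv
server=172.30.9.2#5053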

However, even if you dropped this option, any host-side resolvers would be treated as just another additional upstream, on par with the upstream DNS servers that you can configure via Pi-hole's UI - and that would conflict with the OP's intention:
sikotic wants to use cloudflared for encrypted upstream DNS services exclusively.
Providing any additional DNS resolvers would allow Pi-hole to forego cloudflared as upstream. In fact, it is likely that Pi-hole's preference for the fastest responding upstream would always pick the local resolver over cloudflared.

To meet the requirements of this edge case within pihole-FTL/dnsmasq, you could probably define a conditional forwarding rule for Docker's internal network (which seems possible with a couple of configuration options similar to the one I've mentioned above) - but you'd then also have to change dnsmasq's implementation of only accepting IP addresses as upstream (which I had failed to realise in my post above).

So DL6ER's suggestion currently seems the most promising approach towards an immediate solution or workaround.

Another solution would be to add special handling of hostname upstreams to the docker start scripts, so they take care of resolving the name to an IP address before populating the config file from ENV vars. We will discuss this.

edit: This is also how other docker containers offering dnsmasq and this feature work, e.g.

Another idea would be for Pi-hole/dnsmasq to use the host's DNS resolver only for resolving the upstream DNS configured for client requests, but not for resolving the client requests themselves. I'm not sure whether this is (easily) possible though. Also, a loop must be prevented, i.e. the upstream DNS hostname must not be sent upstream, e.g. when the host itself is configured to use Pi-hole, which is a chicken-and-egg issue anyway.

Generally it makes sense to treat Pi-hole's internal and host DNS resolution as completely independent from the resolution of client requests. Usually it does not make any sense to use Pi-hole as its own host's DNS resolver, as long as you do not use a GUI and browser on the server itself as well, or for monitoring reasons. So client requests strictly use the upstream DNS configured in Pi-hole/dnsmasq, but for its own/internal queries the host can use the host system's DNS, which of course can be cloudflared, Unbound or dnscrypt-proxy etc. for additional privacy (while with those you have the same issue with dynamic upstream IPs or hostnames).

but you'd then also have to change dnsmasq's implementation of only accepting IP addresses as upstream (which I had failed to realise in my post above).

Okay, then it would likely be too much effort to implement for such edge cases.

@DL6ER
[[ ! -z "${items[1]}" ]] == [[ -n "${items[1]}" ]] == [[ "${items[1]}" ]], no need for the double negation :wink:.

To avoid 4 pipes, awk can be used:

server=$(ping -4 -c 1 ${items[0]} | awk -F'[()]' '{print $2;exit}')

This isn't my code, I just wanted to show how others work around this issue. I assume awk may not be available in the container either.

I very much disagree with this. It feels wrong to me that we say "a network-wide ad blocker" (and tracking protector, or even more depending on your lists) but then exclude this central point of your network from said "protection" (quotes intended), just to ensure repairing works when FTL has crashed for some reason. The repair script could have covered this automatically.
Compare the millions of installations of Pi-hole (guessing only from the public docker stats, we don't actually have real numbers) to the few cases where it was necessary to reset the host's resolver to something like 8.8.8.8. I don't think this can be expressed in percent with any sensible number of decimals.

Leaving this aside, I do agree on the idea to support specifying a server by a hostname, I just don't know where we would want to add it:

  1. In the docker entry/start scripts: This is where the few dnsmasq containers I found that support this have it. They (try to) resolve the hostname and then add the result as a server=<ipaddress> line to the dnsmasq config (a rough sketch follows after this list).
  2. We could also add this into dnsmasq itself, it wouldn't be too hard (I already checked this). A loop cannot happen because there can be no loop while dnsmasq is still parsing its config (it doesn't listen to queries at this point). Things can be wrong when the system resolver is indeed 127.0.0.1 but then this will simply end up in a timeout and we should be fine with mentioning this in the man page.
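
A rough, purely illustrative sketch of what (1) could look like in the container start script; the variable handling here is made up, not taken from the actual script, and it assumes the DNS1 value uses the host#port notation from this thread:

# sketch: resolve a hostname upstream before writing the dnsmasq config
upstream="${DNS1%%#*}"   # e.g. "cloudflared" from DNS1="cloudflared#5053"
port="${DNS1##*#}"       # e.g. "5053"
if ! echo "$upstream" | grep -qE '^[0-9]+(\.[0-9]+){3}$'; then
  # not an IPv4 address: resolve it via Docker's embedded DNS first
  upstream="$(dig +short "$upstream" @127.0.0.11 | head -n1)"
fi
echo "server=${upstream}#${port}" >> /etc/dnsmasq.d/01-pihole.conf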

(1) would be reasonably simple to do. I'd be ready to write and submit a patch for (2) to the mailing list, but we might want to collect some more concise (!) arguments here to support this feature (which may not be liked too much upstream).

@sikotic In preparation of sending the new feature of supporting hostnames as upstream servers to the dnsmasq project, I'd appreciate it if you could test my solution. For this, please use the nightly container and run

pihole checkout ftl new/dnsmasq_server_hostnames

Anyone else who wants to help with testing can switch to the same branch, of course.

This version of FTL supports server=localhost etc. syntax.
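
For the setup discussed in this thread, that should allow a drop-in file such as the following (assuming the branch accepts the same host#port notation it accepts for IP addresses; the file name is chosen arbitrarily):

# e.g. /etc/dnsmasq.d/99-test-hostname-upstream.conf
server=cloudflared#5053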

Final comment: There is no guarantee whatsoever that this feature will be liked and accepted upstream, and I don't think it'd be a good idea to maintain this ourselves in case it gets rejected upstream.


Thank you so much.
I understand what you're saying about the request possibly being rejected upstream.

I'm not in a position to test this tonight as I have plans but if I get a chance I will, otherwise tomorrow.

Ah sorry, dnsmasq Docker container, I should have seen it :sweat_smile:.

But which ads and trackers would ever be loaded on a Linux server, as long as you do not visit a website with an HTML-interpreting, JavaScript-executing web browser? We are not talking about Windows or macOS here, where I could imagine anything, but about well-reviewed open source platforms with package repositories. Of course, when someone installs random untrusted software from untrusted sources it may be good to watch for a while what unexpected requests it sends out, but otherwise?

True, the cases where FTL fails or needs to be stopped while Internet/DNS is required to repair/maintain things are rare, but the issue could be bigger when a hostname-based VPN is used for the SSH connection, which then cannot (re)connect, or similar. Given that there is (usually) no ad/tracker connection required on Linux (servers), and that it adds additional overhead/load and rare but possible issues when debugging or repairing a failing Pi-hole, I always recommend keeping the host system pointed at the upstream DNS directly. And it is good that the Pi-hole installer no longer touches this either.

But this is going off topic of the actual issue/idea, sorry :sweat_smile:.

Awesome that you forged this feature for dnsmasq already. I'll give it a try tonight.

You could also check out this branch inside the container on startup by doing something like:

Create a file on your host named 01-checkout-branch.sh with the following contents:

#!/usr/bin/with-contenv bash
set -e
custom_ftl_branch="new/dnsmasq_server_hostnames"
s6-echo "Switching FTL to ${custom_ftl_branch}"
yes | pihole checkout ftl "${custom_ftl_branch}"

And then mount it, i.e.:

volumes:
 - './01-checkout-branch.sh:/etc/cont-init.d/01-checkout-branch.sh:ro'

(I may or may not look at adding this as a standard file for the nightly, which could respond to ENV vars... maybe. Not sure how much value it adds)

The feature will be tracked here:

https://github.com/pi-hole/dnsmasq/pull/9

Any comments for improving the text before I submit it to dnsmasq? (Always keep in mind English isn't my first language.)