Why use extra_hosts and DEFAULT_HOST with Docker jwilder/nginx-proxy container?

I am using the pihole Docker image along with jwilder/nginx-proxy on my personal websever.

As per pihole's README, I was able to get this configuration working by setting nginx-proxy's DEFAULT_HOST environment variable to pihole.<my-domain>. Further along in my project, I realized two things:

  1. When accessing my webserver, I don't want <subdomain-that-doesnt-exist>.<my-domain> to go to pihole.<my-domain>
  2. I don't want pihole.<my-domain> to be publicly accessible

What I would like to do is:

  1. Remove DEFAULT_HOST=pihole.<my-domain> from the nginx-proxy container
  2. Add NETWORK_ACCESS=internal to my pihole container (restricts container to internal network only)
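
In docker-compose terms, that change might look something like this (a sketch only; the service names, images, and ports here are my assumptions, not the project's exact files):

```yaml
# Hypothetical fragment: service and image names are assumptions.
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    # environment:
    #   DEFAULT_HOST: pihole.<my-domain>   # removed: unknown hostnames would no longer fall through to pihole

  pihole:
    image: pihole/pihole
    environment:
      VIRTUAL_HOST: pihole.<my-domain>
      NETWORK_ACCESS: internal   # nginx-proxy option: allow only internal (RFC 1918) client addresses
```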

So what I need to know is: why is DEFAULT_HOST needed, and can I remove it?

As a side question, what is the purpose of populating the extra_hosts list in the pihole container? What issues will I run into if I fail to update this list while creating new sub-domains via other containers?

DEFAULT_HOST is part of jwilder's container; it tells NGINX which host to route unknown hostnames to. It doesn't necessarily redirect the browser to your pihole hostname when it does this, though, at least as far as I recall. I switched over to Traefik a while ago.

As a side question, what is the purpose of populating the extra_hosts list in the pihole container? What issues will I run into if I fail to update this list while creating new sub-domains via other containers?

If you have secondary LAN computers you want listed under certain conditions (for example, you aren't using pihole's DHCP), this is one way to populate pi-hole's brain with the hostnames of the other computers on your network. You can also use the local.list file.
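
To make that concrete: extra_hosts entries become lines in the pihole container's /etc/hosts, which dnsmasq then answers for. A sketch, with made-up hostnames and addresses:

```yaml
# Sketch only: the hostnames and IPs below are placeholders.
services:
  pihole:
    extra_hosts:
      - "desktop.example.lan:192.168.1.100"
      - "laptop.example.lan:192.168.1.101"
```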

DEFAULT_HOST is part of jwilder's container; it tells NGINX which host to route unknown hostnames to. It doesn't necessarily redirect the browser to your pihole hostname when it does this, though, at least as far as I recall. I switched over to Traefik a while ago.

So in the docker-pi-hole example compose file (line 10 of commit e5ad7ae) https://github.com/pi-hole/docker-pi-hole/blob/master/docker-compose-jwilder-proxy.yml

You are saying that

DEFAULT_HOST: pihole.yourDomain.lan

is unnecessary for pi-hole to function correctly?

If you have secondary LAN computers you want listed under certain conditions (for example, you aren't using pihole's DHCP), this is one way to populate pi-hole's brain with the hostnames of the other computers on your network. You can also use the local.list file.

I take it that this just avoids a round-trip over the internet to find the A record that sends *.<my-domain> back to my local network?

Sorry, I forgot to mention that it relates to a non-default blocking mode: the web block mode, where Pi-hole shows 'Blocked by Pi-hole' for ad domains when you visit them in a browser. The default block mode is now NULL blocking, where nothing is returned, since it performs better. So DEFAULT_HOST is not required unless you want to switch the config setting back to web mode.
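
For reference, the blocking mode is a single FTL setting; a sketch of /etc/pihole/pihole-FTL.conf as I recall it (check the docs for your Pi-hole version):

```
BLOCKINGMODE=NULL   # default: answer blocked domains with the unspecified address (0.0.0.0 / ::)
# BLOCKINGMODE=IP   # "web" mode: answer with Pi-hole's own IP so its block page can be served
```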


Splendid! Thank you!

Yes, it is a more direct route, but hairpin NAT routing can take care of that if you're exposing all of your LAN DNS to the internet. However, your internal 192.168.X.Y / 10.0.0.X addresses shouldn't need DNS records defined on the internet. LAN records are typically internal-only, both for privacy and because they are often not valid top-level domains; mine is diginc.lan, which isn't registered anywhere on the internet as a domain.

Another topic / keyword to read up on for this is 'split DNS' (or 'split-brain DNS'), if you're interested in securely using a real domain name for internal hostnames while maintaining privacy.

Thanks for the info. In my case, the records are valid, publicly accessible domains. Assuming <my-domain> = example.com, then:

(Google Domains) *.example.com -> example.com -> (Asus DDNS) my router -> (port forward 80,443) -> my webserver

In my webserver, jwilder's proxy splits those sub-domains back out to the appropriate containers. Therefore, once I use NETWORK_ACCESS=internal on pihole's container, the result will be:

  • pihole.example.com // works on my home network, fails outside home network
  • publicSite1.example.com // works on home and external networks
  • publicSite2.example.com // works on home and external networks

And from your comment, I can avoid going out to Asus's DNS to resolve *.example.com while on my home network if I record all of the sub-domains of example.com in extra_hosts of pihole's container.

Does this sound sane to you? At least for low-budget personal projects, while I haven't yet committed to using a hosting service like AWS?
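
Concretely, I imagine the extra_hosts list would map each sub-domain to the webserver's LAN address; a sketch, where the IP is a placeholder:

```yaml
# Sketch only: 192.168.1.50 stands in for my webserver's LAN address.
services:
  pihole:
    extra_hosts:
      - "pihole.example.com:192.168.1.50"
      - "publicSite1.example.com:192.168.1.50"
      - "publicSite2.example.com:192.168.1.50"
```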

Makes sense. extra_hosts is the original way of doing custom LAN-only hostnames; the downside is that it requires blasting / recreating the docker container every time you need to update the hostnames. I migrated to using a dnsmasq config file in /etc/dnsmasq.d (volume-mapped in, of course); with this, restarting the container picks up changes.

My internal wildcard is set up by the first line of this file:

$ cat etc-dnsmasqd/02-diginclan.conf
address=/diginc.lan/192.168.1.55          # wildcard: diginc.lan and all its subdomains
address=/cloud.example-external-domain-hosted-at-home.com/192.168.1.55
address=/desktop.diginc.lan/192.168.1.100 # more specific entry overrides the wildcard

The external domain I have is port-forwarded anyway. Routing directly to the same destination as the port forward avoids the hairpin NAT, which didn't used to work for me but does now.
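
The volume mapping might look roughly like this in compose (the host path here is an assumption):

```yaml
services:
  pihole:
    volumes:
      - ./etc-dnsmasqd/:/etc/dnsmasq.d/
```

After editing a file under etc-dnsmasqd/, a plain `docker restart pihole` picks up the change; no container recreation needed.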
