Conditional Forwarding stops working when I reboot host or restart container

Thank you. I did it that way as well. It gave me URLs as output, which are below. Also, instead of using my browser to access some local domains to do the queries, I used "dig @192.168.1.30 somelocal.domain.lan", where the IP is the Pi-hole in question, because I think my browser was pre-querying tons of domains and flooding the logs.

BEFORE/BROKEN:
pihole.log: https://tricorder.pi-hole.net/xv1ufkgtni
pihole-FTL.log: https://tricorder.pi-hole.net/1oygsywv2l

AFTER/WORKING:
pihole.log: https://tricorder.pi-hole.net/rgd9qc1s3r
pihole-FTL.log: https://tricorder.pi-hole.net/qa47sthtwp

The BEFORE/BROKEN log doesn't show any NXDOMAIN responses other than two lines for bad domains that should always return NXDOMAIN.

Run that tail -n 100 /var/log/pihole-FTL.log command without piping it to pihole and look at what you are sending us, so you can tell us what we should be looking for.
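Something like this, assuming the container is named pihole (substitute whatever docker ps shows for yours):

    # print the last 100 lines of the FTL log from inside the running container
    docker exec -it pihole tail -n 100 /var/log/pihole-FTL.log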

Edit: These are the only two NXDOMAIN queries in the BEFORE log:

Feb 24 07:17:19 dnsmasq[521]: query[AAAA] connectivity-check.ubuntu.com from 192.168.1.31
Feb 24 07:17:19 dnsmasq[521]: cached connectivity-check.ubuntu.com is NODATA-IPv6
Feb 24 07:17:19 dnsmasq[521]: query[AAAA] connectivity-check.ubuntu.com.REDACTED.lan from 192.168.1.31
Feb 24 07:17:19 dnsmasq[521]: cached connectivity-check.ubuntu.com.REDACTED.lan is NXDOMAIN
Feb 24 07:17:19 dnsmasq[521]: query[PTR] 31.1.168.192.in-addr.arpa from 127.0.0.1
Feb 24 07:17:19 dnsmasq[521]: forwarded 31.1.168.192.in-addr.arpa to 172.20.0.3
Feb 24 07:17:19 dnsmasq[521]: query[AAAA] connectivity-check.ubuntu.com from 192.168.1.30
Feb 24 07:17:19 dnsmasq[521]: cached connectivity-check.ubuntu.com is NODATA-IPv6
Feb 24 07:17:19 dnsmasq[521]: query[AAAA] connectivity-check.ubuntu.com.REDACTED.lan from 192.168.1.30
Feb 24 07:17:19 dnsmasq[521]: cached connectivity-check.ubuntu.com.REDACTED.lan is NXDOMAIN

I've redacted the b********a.lan domain name.

But note the PTR went to 172.20.0.3:

Feb 24 07:17:19 dnsmasq[521]: forwarded 31.1.168.192.in-addr.arpa to 172.20.0.3

EDIT2: I see the .lan responses are still cached, so we haven't yet found the original query for those lookups.

cached connectivity-check.ubuntu.com.REDACTED.lan is NXDOMAIN


Sorry, should have caught this at the get-go.

I'm pretty sure your environment entries are wrong.

    environment:
      ServerIP: x.x.x.x
      DNS1: 1.1.1.1
      DNS2: 1.0.0.1
      VIRTUAL_HOST: pi.hole
      DNSMASQ_LISTENING: all

You're dropping the literal strings in as the environment variables, not variables with actual values.

    environment:
      - "TZ=America/New_York"
      - "PROXY_LOCATION=pihole"
      - "VIRTUAL_PORT=80"
      - "PIHOLE_DNS_=172.20.0.3#5053;172.20.0.3#5053"
      - "WEBPASSWORD=***************"
      - "ServerIP=192.168.1.30"

Try entering the running Pi-hole container and doing something like echo $DNSSEC, or even export, to see whether the variables are really being populated correctly. Do this in a fresh container that has just been spun up.
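For example, assuming the container is named pihole (substitute your container name):

    # check one variable, then dump the container's whole exported environment
    docker exec -it pihole sh -c 'echo "DNSSEC=$DNSSEC"; export'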

Ok, thanks, I fixed them. They look like this now:

    environment:
      TZ: America/New_York
      PROXY_LOCATION: pihole
      VIRTUAL_PORT: 80
      PIHOLE_DNS_: 172.20.0.3#5053;172.20.0.3#5053
      WEBPASSWORD: **************
      ServerIP: 192.168.1.30
      DNS_BOGUS_PRIV: 'TRUE'
      DNS_FQDN_REQUIRED: 'TRUE'
      DNSSEC: 'TRUE'
      REV_SERVER: 'TRUE'
      REV_SERVER_TARGET: 192.168.1.1
      REV_SERVER_DOMAIN: b*******a.lan
      REV_SERVER_CIDR: 192.168.1.0/24
      TEMPERATUREUNIT: f
      WEBUIBOXEDLAYOUT: boxed

That did not fix the issue, though. I went into a newly spun-up container and checked all of the environment variables shown above, and they were all good; I got the correct value for each of them.

On another note, I forgot to include the actual results I got in the terminal for the digs I did after I spun up a new container with broken Conditional Forwarding. I queried three local domains and all of them returned results like this:

; <<>> DiG 9.10.6 <<>> @192.168.1.30 plex.b******a.lan
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 29631
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;plex.b*******a.lan. IN A

;; AUTHORITY SECTION:
. 1800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2021022400 1800 900 604800 86400

;; Query time: 97 msec
;; SERVER: 192.168.1.30#53(192.168.1.30)
;; WHEN: Wed Feb 24 07:16:38 EST 2021
;; MSG SIZE rcvd: 122

I will try again to capture the logs... I'm not sure what the best way is to do it. Can I just send the whole damn log? On a new container it shouldn't be that big.

Edit: I DM'd you the logs since I had to look at them first to find the queries.

Edit 2: So I found that in the BEFORE/BROKEN log the queries are going to 172.20.0.3 (the cloudflared container), which is not correct, while in the AFTER/WORKING log they are going to 192.168.1.1 (my router), which is correct. So I guess that's a clue; the question is why. The env variables are supplied in the docker-compose file, they are shown in the UI, and the variables themselves are populated with the correct info right after a fresh container spin-up.
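One thing I can check next (just a guess on my part, assuming REV_SERVER ends up as ordinary dnsmasq rev-server/server= directives) is whether those lines are actually present in the broken container's generated config:

    # look for the conditional-forwarding directives in the generated dnsmasq config
    docker exec -it pihole grep -rE "rev-server|server=/" /etc/dnsmasq.d/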

Post the full output of a fresh docker container start-up. Don't daemonize the container process, so no docker run -d or docker-compose up -d.
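In other words, from the directory holding your compose file:

    # run in the foreground so the full start-up log prints to the terminal
    docker-compose up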

Ok, here ya go. Docker output of a new container via docker-compose:

Okay, and then run

    docker-compose exec <Pi-hole Container Name> export

and

    docker-compose exec <Pi-hole Container Name> cat /etc/pihole/setupVars.conf

    docker-compose exec <Pi-hole Container Name> cat /etc/dnsmasq.d/01-pihole.conf
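If the REV_SERVER settings are actually being applied, you'd expect setupVars.conf to contain roughly these lines (values taken from your compose file above; the exact keys and casing can vary by version, so treat this as a rough guide):

    REV_SERVER=true
    REV_SERVER_CIDR=192.168.1.0/24
    REV_SERVER_TARGET=192.168.1.1
    REV_SERVER_DOMAIN=b*******a.lan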

Why exactly are you doing that?

And I don't know what those are for?

I can't get this to work. I get:

Can't find a suitable configuration file in this directory or any parent. Are you in the right directory? Supported filenames: docker-compose.yml, docker-compose.yaml

Can you explain how to do this?
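My untested guess is that I have to run it from the folder that holds docker-compose.yml, or point at the file explicitly with -f, something like this (the path and service name here are just placeholders):

    # either cd into the directory containing docker-compose.yml first, or:
    docker-compose -f /path/to/docker-compose.yml exec pihole export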

I did that because, on container spin-up, one Google upstream DNS server would be checked in the UI in addition to the Custom 1 DNS (for cloudflared). Adding it a second time prevents the Google one from getting checked. I can undo it if necessary.

Yeah, I don't know what those are. I adapted my docker-compose from here:

Should I remove them? Would they even be used with Pi-hole?

How are you starting the docker-compose stack?

That's from 2019.

Two options: Use our official template or ask the creator of that template for help.
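For reference, the official template boils down to something along these lines (a minimal sketch from memory, not a copy of the template; check the repo for the current version before using it):

    version: "3"
    services:
      pihole:
        container_name: pihole
        image: pihole/pihole:latest
        ports:
          - "53:53/tcp"
          - "53:53/udp"
          - "80:80/tcp"
        environment:
          TZ: America/New_York
        volumes:
          - ./etc-pihole:/etc/pihole
          - ./etc-dnsmasq.d:/etc/dnsmasq.d
        restart: unless-stopped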

I typically use Portainer, but I can do it in the terminal just as easily.

Oh, Portainer... You should have said that from the start.

Portainer injects its own environment variables and changes a lot of things.

I still have the same issue when spinning up from the command line, so I don't think Portainer has anything to do with it. I will double-check, though.

Just tried a fresh spin-up in the terminal and got the same result, so it's not something with Portainer. I also tried removing PROXY_LOCATION and VIRTUAL_PORT, and that had no effect.

Do you want the contents of setupVars.conf and 01-pihole.conf? I can get those to you.

Do those files exist in the volume mounts before you start the container?
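For example, if your compose file maps ./etc-pihole and ./etc-dnsmasq.d (adjust to your actual host paths):

    # check on the host, before starting the container
    ls -l ./etc-pihole/setupVars.conf ./etc-dnsmasq.d/01-pihole.conf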

3 posts were split to a new topic: 8.8.4.4 being set in docker if only one server set in PIHOLE_DNS_ env variable

No. I've been deleting them before spinning up a new container.

Any thoughts as to what may be going on here? At this point, I'm contemplating just running it on bare metal if we can't figure it out.

I'm having the exact same issue as interconnect, though I'm a less technical user and I'm on the newest version. I noticed Pi-hole was showing the wrong hostname (it showed my phone as active while I was away; a different IP suddenly got the same hostname), so I tried Flush network table once, saw it was still wrong, then did it one more time, and also did Restart DNS resolver and restarted the Pi. After that, Pi-hole completely stopped fetching hostnames no matter what I do. :frowning:

I've tried everything: flushing, unchecking and re-checking the Conditional Forwarding option, even resetting Pi-hole with pihole -r, but nothing seems to work; it's still showing IPs without hostnames for some reason. Even though I'm having the same issue, I don't have any weird configuration at all. The REV_SERVER variables are all enabled in the conf file. I did notice I had the flag CONDITIONAL_FORWARDING=false before the reset, though.
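For what it's worth, this is how I checked those flags (straight from setupVars.conf on the Pi; prefix it with docker exec if you run the container version):

    grep -E "REV_SERVER|CONDITIONAL_FORWARDING" /etc/pihole/setupVars.conf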