[Solved] Not blocking on UDP outside of pod

Note: judging by my dig results, this seems to be a Kubernetes routing issue. What is interesting to me is that "something" on the network is still responding, and responding correctly at that.

Expected Behaviour:

Block doubleclick.com from outside the Kubernetes pod as well.

OS: Debian 11
HW: Kubernetes in Alpine Linux in ESXi on Intel NUC.
I expect dig doubleclick.com @192.168.7.9 to return 0.0.0.0. Interestingly, dig doubleclick.com +tcp @192.168.7.9 does return 0.0.0.0.
Inside the pod, both return 0.0.0.0.
The image being used is pihole/pihole.

Actual Behaviour:

+ kubectl exec -it deployment/pihole -- dig doubleclick.com @192.168.7.9

; <<>> DiG 9.16.27-Debian <<>> doubleclick.com @192.168.7.9
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22463
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;doubleclick.com.		IN	A

;; ANSWER SECTION:
doubleclick.com.	2	IN	A	0.0.0.0

;; Query time: 0 msec
;; SERVER: 192.168.7.9#53(192.168.7.9)
;; WHEN: Mon Aug 08 10:02:02 PDT 2022
;; MSG SIZE  rcvd: 60

+ kubectl exec -it deployment/pihole -- dig doubleclick.com +tcp @192.168.7.9

; <<>> DiG 9.16.27-Debian <<>> doubleclick.com +tcp @192.168.7.9
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17769
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;doubleclick.com.		IN	A

;; ANSWER SECTION:
doubleclick.com.	2	IN	A	0.0.0.0

;; Query time: 9 msec
;; SERVER: 192.168.7.9#53(192.168.7.9)
;; WHEN: Mon Aug 08 10:02:02 PDT 2022
;; MSG SIZE  rcvd: 60

+ dig doubleclick.com @192.168.7.9

; <<>> DiG 9.18.5 <<>> doubleclick.com @192.168.7.9
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19712
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;doubleclick.com.		IN	A

;; ANSWER SECTION:
doubleclick.com.	300	IN	A	142.251.215.238

;; Query time: 61 msec
;; SERVER: 192.168.7.9#53(192.168.7.9) (UDP)
;; WHEN: Mon Aug 08 10:02:02 PDT 2022
;; MSG SIZE  rcvd: 60

+ dig doubleclick.com +tcp @192.168.7.9

; <<>> DiG 9.18.5 <<>> doubleclick.com +tcp @192.168.7.9
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32759
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;doubleclick.com.		IN	A

;; ANSWER SECTION:
doubleclick.com.	2	IN	A	0.0.0.0

;; Query time: 9 msec
;; SERVER: 192.168.7.9#53(192.168.7.9) (TCP)
;; WHEN: Mon Aug 08 10:02:02 PDT 2022
;; MSG SIZE  rcvd: 60

Kubernetes Service definitions:

apiVersion: v1
kind: Service
metadata:
  name: pihole-web
  annotations:
    metallb.universe.tf/allow-shared-ip: "pihole"
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.7.9
  externalTrafficPolicy: Local
  selector:
    app: pihole
  ports:
    - name: web
      port: 80
      targetPort: 80
      protocol: TCP
---

apiVersion: v1
kind: Service
metadata:
  name: pihole-dns-tcp
  annotations:
    metallb.universe.tf/allow-shared-ip: "pihole"
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.7.9
  externalTrafficPolicy: Local
  selector:
    app: pihole
  ports:
    - name: dns-tcp
      port: 53
      targetPort: 53
      protocol: TCP

---

apiVersion: v1
kind: Service
metadata:
  name: pihole-dns-udp
  annotations:
    metallb.universe.tf/allow-shared-ip: "pihole"
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.7.9
  externalTrafficPolicy: Local
  selector:
    app: pihole
  ports:
    - name: dns-udp
      port: 53
      targetPort: 53
      protocol: UDP

In the deployment:

        ports:
        - containerPort: 53
          protocol: TCP
        - containerPort: 53
          protocol: UDP
        - containerPort: 80
          protocol: TCP

Both Services seem to have the same endpoint?

➜  marco-polo git:(master) ✗ kubectl describe service/pihole-dns-udp service/pihole-dns-tcp
Name:                     pihole-dns-udp
Namespace:                default
Labels:                   <none>
Annotations:              metallb.universe.tf/allow-shared-ip: pihole
Selector:                 app=pihole
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.89.193
IPs:                      10.43.89.193
IP:                       192.168.7.9
LoadBalancer Ingress:     192.168.7.9
Port:                     dns-udp  53/UDP
TargetPort:               53/UDP
NodePort:                 dns-udp  32311/UDP
Endpoints:                10.42.1.46:53
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     31591
Events:
  Type    Reason        Age                  From             Message
  ----    ------        ----                 ----             -------
  Normal  nodeAssigned  35s (x421 over 16h)  metallb-speaker  announcing from node "esxi-worker" with protocol "bgp"


Name:                     pihole-dns-tcp
Namespace:                default
Labels:                   <none>
Annotations:              metallb.universe.tf/allow-shared-ip: pihole
Selector:                 app=pihole
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.194.51
IPs:                      10.43.194.51
IP:                       192.168.7.9
LoadBalancer Ingress:     192.168.7.9
Port:                     dns-tcp  53/TCP
TargetPort:               53/TCP
NodePort:                 dns-tcp  31766/TCP
Endpoints:                10.42.1.46:53
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     32224
Events:
  Type    Reason        Age                  From             Message
  ----    ------        ----                 ----             -------
  Normal  nodeAssigned  35s (x421 over 16h)  metallb-speaker  announcing from node "esxi-worker" with protocol "bgp"

Debug Token:

https://tricorder.pi-hole.net/LBth1MC6/

note: I think this is likely related to MetalLB BGP mode and will probably open an issue with them as well, but I wanted to see if anyone here had some insight or ideas.

edit: also asked in the MetalLB Slack.

Unrelated to your issue, I've noted that your container did not configure FTLCONF_REPLY_ADDR4, which would be needed for Pi-hole's web UI to be accessible via http://pi.hole.
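For illustration, a minimal sketch of how that could be set in the deployment's container spec (assuming 192.168.7.9, the shared LoadBalancer IP, is the address clients use to reach Pi-hole):

        env:
        - name: FTLCONF_REPLY_ADDR4    # address Pi-hole replies with for pi.hole (assumed here to be the shared LB IP)
          value: "192.168.7.9"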

There is neither a guarantee nor a requirement that DNS requests issued from Pi-hole's host machine (container or bare metal, regardless) would use Pi-hole for DNS at all. So while your kubectl exec dig results are interesting, they are irrelevant as far as DNS services for your network are concerned.

The expected behaviour would be that Pi-hole does not forward received DNS requests for domains blocked by its configuration.

The relevant part is whether a regular client would see its requests blocked as expected.

Regular clients would commonly use DNS servers as provided by your router, while servers tend to be manually configured more often.

In that regard, it isn't quite clear whether that unblocked dig above was run from an actual client or from the Kubernetes server hosting Pi-hole.

Your debug log indicates that your Pi-hole is configured to treat all clients indiscriminately, so that output would indeed suggest that the DNS service at 192.168.7.9 did not block anything.

But as 192.168.7.9 is associated with your Kubernetes host (your Pi-hole container is at (Kubernetes-internal?) 10.42.1.46), did that request register in Pi-hole's Query Log at all?
If it did, the logs at /var/log/pihole/pihole.log* should have the details how that request was handled.

If not, that would suggest that your Kubernetes configuration is not routing traffic as intended.
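For example, to see whether and how the query was handled (just a sketch; adjust the deployment name if needed):

kubectl exec deployment/pihole -- tail -n 50 /var/log/pihole/pihole.log          # most recent queries
kubectl exec deployment/pihole -- grep doubleclick /var/log/pihole/pihole.log    # look for the test domain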

It was run from a MacBook on the network, outside of any host in the cluster.

Yeah, so that's interesting: the TCP requests show up, but the UDP ones do not (using the web UI to tail pihole.log). If I run the query from inside the pihole container (kubectl exec), it shows up in the logs over both TCP and UDP. I am now very curious what is responding to the UDP DNS requests from outside the pod...

For context, there is no other pod running that could be handling the requests with a different block list:

➜  ~ kubectl get po -A | grep pihole
default                pihole-f7759d56d-pd6g9                                   1/1     Running     0             20h
➜  ~ kubectl get services -A | grep pihole
default                pihole-dns-tcp                             LoadBalancer   10.43.194.51    192.168.7.9     53:31766/TCP                   45h
default                pihole-dns-udp                             LoadBalancer   10.43.89.193    192.168.7.9     53:32311/UDP                   45h
default                pihole-web                                 LoadBalancer   10.43.217.72    192.168.7.9     80:30713/TCP                   45h
➜  ~ kubectl get deployments -A | grep pihole
default                pihole                        1/1     1            1           45h

That would be a strong indication that those requests do not reach your Pi-hole inside its container.
As you are already using Permit all origins (as implied by DNSMASQ_LISTENING=all, except-interface=nonexisting), this seems to be a networking rather than a Pi-hole configuration issue.

You'd have to look for DNS resolvers supplied by Docker and your host machine.
The latter may be your OS's local (stub) resolver.
Your debug log suggests that Kubernetes/Docker is using 10.43.0.10 internally for DNS.
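A quick way to see which resolvers are actually in play (a sketch; the first command runs on the node, the second against the pod):

cat /etc/resolv.conf                                      # resolver the node itself uses
kubectl exec deployment/pihole -- cat /etc/resolv.conf    # resolver inside the Pi-hole pod (likely the cluster DNS, 10.43.0.10)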

You could also try the queries below to see whether pihole-FTL is the one answering:

pi@ph5b:~ $ dig +short +notcp @10.0.0.4 version.bind chaos txt
"dnsmasq-pi-hole-2.87test8"
pi@ph5b:~ $ dig +short +tcp @10.0.0.4 version.bind chaos txt
"dnsmasq-pi-hole-2.87test8"

Or check the configured upstream DNS servers, which might give a hint:

pi@ph5b:~ $ dig +short +notcp @10.0.0.4 servers.bind chaos txt
"127.0.0.1#5335 28 0" "10.0.0.2#53 5 0"
pi@ph5b:~ $ dig +short +tcp @10.0.0.4 servers.bind chaos txt
"127.0.0.1#5335 28 0" "10.0.0.2#53 5 0"

Oh, this is interesting; I did not know about that. I still need to look further into the Kubernetes side, but this was a nice black-box check.

➜  ~ dig +short +notcp @192.168.7.9 version.bind chaos txt
➜  ~ dig +short +tcp @192.168.7.9 version.bind chaos txt
"dnsmasq-pi-hole-2.87test8"
➜  ~ dig +short +notcp @192.168.50.1 version.bind chaos txt
➜  ~ dig +short +tcp @192.168.50.1 version.bind chaos txt
;; communications error to 192.168.50.1#53: host unreachable

I'm starting to wonder if my router (Peplink Balance 20, not 20x) is doing something funky like intercepting all DNS over UDP. It doesn't seem to support TCP (see the 192.168.50.1 results above).
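One way to confirm whether the UDP queries even reach the Kubernetes worker at all (rough sketch, assuming tcpdump is available on the node):

sudo tcpdump -ni any udp port 53    # watch whether the UDP query from the MacBook ever arrives
sudo tcpdump -ni any tcp port 53    # compare with a TCP query, which is known to get through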

➜  ~ dig +notcp @192.168.7.9 version.bind chaos txt

; <<>> DiG 9.18.5 <<>> +notcp @192.168.7.9 version.bind chaos txt
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 24202
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;version.bind.			CH	TXT

;; Query time: 70 msec
;; SERVER: 192.168.7.9#53(192.168.7.9) (UDP)
;; WHEN: Thu Aug 11 14:45:00 PDT 2022
;; MSG SIZE  rcvd: 41

➜  ~ dig +notcp @192.168.50.1 version.bind chaos txt

; <<>> DiG 9.18.5 <<>> +notcp @192.168.50.1 version.bind chaos txt
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 12596
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;version.bind.			CH	TXT

;; Query time: 83 msec
;; SERVER: 192.168.50.1#53(192.168.50.1) (UDP)
;; WHEN: Thu Aug 11 14:45:26 PDT 2022
;; MSG SIZE  rcvd: 41

edit: it does look like the router, wtf. I'll have to try a different upstream later and see whether the reply reflects the upstream or the router.

➜  ~ dig +notcp @192.168.50.1 version.bind

; <<>> DiG 9.18.5 <<>> +notcp @192.168.50.1 version.bind
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 21238
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;version.bind.			IN	A

;; AUTHORITY SECTION:
.			85987	IN	SOA	a.root-servers.net. nstld.verisign-grs.com. 2022081102 1800 900 604800 86400

;; Query time: 52 msec
;; SERVER: 192.168.50.1#53(192.168.50.1) (UDP)
;; WHEN: Thu Aug 11 14:49:00 PDT 2022
;; MSG SIZE  rcvd: 116

➜  ~ dig +notcp @192.168.7.9 version.bind

; <<>> DiG 9.18.5 <<>> +notcp @192.168.7.9 version.bind
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 45095
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;version.bind.			IN	A

;; AUTHORITY SECTION:
.			85987	IN	SOA	a.root-servers.net. nstld.verisign-grs.com. 2022081102 1800 900 604800 86400

;; Query time: 7 msec
;; SERVER: 192.168.7.9#53(192.168.7.9) (UDP)
;; WHEN: Thu Aug 11 14:49:21 PDT 2022
;; MSG SIZE  rcvd: 116

➜  ~ dig +tcp @192.168.7.9 version.bind

; <<>> DiG 9.18.5 <<>> +tcp @192.168.7.9 version.bind
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54725
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;version.bind.			IN	A

;; Query time: 19 msec
;; SERVER: 192.168.7.9#53(192.168.7.9) (TCP)
;; WHEN: Thu Aug 11 14:49:38 PDT 2022
;; MSG SIZE  rcvd: 41

WOO, thank you DeHakkelaar, that led me further into looking at the router. Since it's a load-balancing router (3 WANs -> LAN), it has a feature called DNS Proxying, where DNS requests are sent to all three WANs and whichever responds first is cached. Looks like I'll lose that feature, but that's fine. For some reason, having it enabled seems to capture ALL DNS UDP traffic instead of just the queries addressed to the router...
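A generic way to test for this kind of transparent UDP DNS interception (just a sketch; 192.0.2.1 is a documentation-range address that should not be running a resolver on the network):

dig +notcp @192.0.2.1 example.com    # an answer here means something on the path intercepted the UDP query
dig +tcp @192.0.2.1 example.com      # should fail or time out, since nothing actually listens there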

➜  ~ dig +tcp @192.168.7.9 version.bind chaos txt

; <<>> DiG 9.18.5 <<>> +tcp @192.168.7.9 version.bind chaos txt
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31212
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;version.bind.			CH	TXT

;; ANSWER SECTION:
version.bind.		0	CH	TXT	"dnsmasq-pi-hole-2.87test8"

;; Query time: 9 msec
;; SERVER: 192.168.7.9#53(192.168.7.9) (TCP)
;; WHEN: Thu Aug 11 15:58:27 PDT 2022
;; MSG SIZE  rcvd: 79

➜  ~ dig +notcp @192.168.7.9 version.bind chaos txt

; <<>> DiG 9.18.5 <<>> +notcp @192.168.7.9 version.bind chaos txt
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16834
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;version.bind.			CH	TXT

;; ANSWER SECTION:
version.bind.		0	CH	TXT	"dnsmasq-pi-hole-2.87test8"

;; Query time: 7 msec
;; SERVER: 192.168.7.9#53(192.168.7.9) (UDP)
;; WHEN: Thu Aug 11 15:58:28 PDT 2022
;; MSG SIZE  rcvd: 79

and the original query:

➜  ~ dig doubleclick.com @192.168.7.9

; <<>> DiG 9.18.5 <<>> doubleclick.com @192.168.7.9
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8170
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;doubleclick.com.		IN	A

;; ANSWER SECTION:
doubleclick.com.	2	IN	A	0.0.0.0

;; Query time: 11 msec
;; SERVER: 192.168.7.9#53(192.168.7.9) (UDP)
;; WHEN: Thu Aug 11 16:02:00 PDT 2022
;; MSG SIZE  rcvd: 60
