dnsmasq query loop with the Windows Store domain displaycatalog.mp.microsoft.com

Versions

  • Pi-hole: v5.3.1 (Latest: v5.3.1)
  • AdminLTE: v5.5 (Latest: v5.5)
  • FTL: v5.8 (Latest: v5.8)

Platform

  • OS and version: Debian GNU/Linux 10 (buster)
  • Platform: Raspberry Pi 4 (8 GB) with Docker

Expected behavior

I don't expect permitted domains to be flooded with repeated hits.

Actual behavior / bug

After installing the latest version of Pi-hole on my Raspberry Pi 4 (8 GB) with Docker, I see odd behavior with permitted domains.

I can see multiple queries per second for the domain displaycatalog.mp.microsoft.com.
Initially the query was issued by my computer at 192.168.0.23 (I don't know why, since I haven't opened the Windows Store), and after that it looks like a loop has started...

Steps to reproduce

Steps to reproduce the behavior:

  1. Go to the Dashboard
  2. Scroll down to Top Permitted Domains
  3. See error

Screenshots

(screenshot: the Top Permitted Domains panel showing a very high hit count for displaycatalog.mp.microsoft.com)

Additional context

Alongside Pi-hole, I have my own DNS-over-HTTPS Docker container, which I have referenced in Pi-hole as its upstream.

This is the script that I launch with supervisorctl:

$ cat pihole.sh
#!/bin/bash

# https://github.com/pi-hole/docker-pi-hole/blob/master/README.md

_term() {
  echo "Caught SIGTERM signal!"
  kill -TERM "$child"
}

trap _term SIGTERM

docker stop pihole || true
docker rm -f pihole || true
docker pull pihole/pihole:latest || true

eth0_ip=$(ip address show dev eth0 | grep -E -o "inet [0-9.]+" | grep -E -o "[0-9.]+")
echo "$eth0_ip"
docker run --rm \
    --name pihole \
    -e INTERFACE="eth0" \
    --hostname=piholesrv \
    -e TZ="Europe/Paris" \
    -e ServerIP="${eth0_ip}" \
    --shm-size=256m \
    -e WEBPASSWORD="Ch4ngeMeP1ease!" \
    -v "/opt/piholesrv/pihole/pihole/:/etc/pihole/" \
    -v "/opt/piholesrv/pihole/dnsmasq.d/:/etc/dnsmasq.d/" \
    -v "/opt/piholesrv/pihole/log/:/var/log/" \
    -p 127.0.0.1:8080:80 \
    -p 53:53/tcp \
    -p 53:53/udp \
    --ip 172.17.0.3 \
    --cap-add NET_ADMIN \
    --cap-add=SYS_NICE \
    --dns=127.0.0.1 --dns=1.1.1.1 \
    pihole/pihole:latest &

child=$!
wait "$child"

docker stop pihole

I have -p 127.0.0.1:8080:80 because I'm using Caddy as a reverse proxy in front of Pi-hole.
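
For reference, the corresponding Caddyfile entry looks roughly like this (a sketch in Caddy v2 syntax; the hostname is a placeholder, not my real one):

# Hypothetical Caddyfile entry; pihole.example.com is a placeholder
pihole.example.com {
    reverse_proxy 127.0.0.1:8080
}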

Moreover, in the Pi-hole logs I can see these entries repeated many times:

Apr 18 14:00:00 dnsmasq[513]: query[A] displaycatalog.mp.microsoft.com from 172.17.0.1
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog.mp.microsoft.com is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog-rp.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog-rp-europe.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached consumerrp-displaycatalog-aks2eap-europe.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog-europeeap.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached db5eap.displaycatalog.md.mp.microsoft.com.akadns.net is 52.155.217.156
Apr 18 14:00:00 dnsmasq[513]: query[A] displaycatalog.mp.microsoft.com from 172.17.0.1
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog.mp.microsoft.com is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog-rp.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog-rp-europe.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached consumerrp-displaycatalog-aks2eap-europe.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog-europeeap.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached db5eap.displaycatalog.md.mp.microsoft.com.akadns.net is 52.155.217.156
Apr 18 14:00:00 dnsmasq[513]: query[A] displaycatalog.mp.microsoft.com from 172.17.0.1
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog.mp.microsoft.com is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog-rp.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog-rp-europe.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached consumerrp-displaycatalog-aks2eap-europe.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog-europeeap.md.mp.microsoft.com.akadns.net is <CNAME>
Apr 18 14:00:00 dnsmasq[513]: cached db5eap.displaycatalog.md.mp.microsoft.com.akadns.net is 52.155.217.156
Apr 18 14:00:00 dnsmasq[513]: query[A] displaycatalog.mp.microsoft.com from 172.17.0.1
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog.mp.microsoft.com is (null)
Apr 18 14:00:00 dnsmasq[513]: config error is REFUSED
Apr 18 14:00:00 dnsmasq[513]: query[A] displaycatalog.mp.microsoft.com from 172.17.0.1
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog.mp.microsoft.com is (null)
Apr 18 14:00:00 dnsmasq[513]: config error is REFUSED
Apr 18 14:00:00 dnsmasq[513]: query[A] displaycatalog.mp.microsoft.com from 172.17.0.1
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog.mp.microsoft.com is (null)
Apr 18 14:00:00 dnsmasq[513]: config error is REFUSED
Apr 18 14:00:00 dnsmasq[513]: query[A] displaycatalog.mp.microsoft.com from 172.17.0.1
Apr 18 14:00:00 dnsmasq[513]: cached displaycatalog.mp.microsoft.com is (null)
Apr 18 14:00:00 dnsmasq[513]: config error is REFUSED

I think something is wrong somewhere...

What am I doing wrong?

Thank you very much for your help.

Best regards,

In order to close a DNS loop, DNS queries that Pi-hole forwards to one of its upstreams must somehow be returned to Pi-hole instead of being answered.

This seems unlikely in your case, as your log excerpts show that an answer has been received at some stage, and yet the query is repeated. This would suggest that the requester keeps repeating its query (maybe because it never receives an answer?).

Nevertheless, is your DoH server Pi-hole's only upstream?
What is your DoH upstream using as DNS server?
(And providing 172.17.0.2 just once in Pi-hole should suffice.)
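
If you want to double-check from the Pi itself, something along these lines should work (a sketch: the container name and the v5 setupVars.conf layout are taken from your script, and the port is the one you publish for cloudflared):

# Show the upstreams Pi-hole is actually configured with
docker exec pihole grep PIHOLE_DNS /etc/pihole/setupVars.conf

# Query the DoH forwarder directly, bypassing Pi-hole
dig @127.0.0.1 -p 5053 displaycatalog.mp.microsoft.com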

Seeing that you are mapping ports and you didn't declare or mention a specific network, I assume you are running all your containers with Docker's default bridge network?

In that case, what machine is 192.168.0.23?
I'm a bit puzzled by 192.168.0.23 registering in your Pi-hole's Query Log. In bridge mode, I'd expect all incoming DNS traffic to be NATted by Docker and thus to be originating from Docker's internal gateway address, i.e. 172.17.0.1 in your case.
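
One way to verify that (a sketch, assuming Docker's default chain and bridge names, which your setup appears to use):

# List Docker's NAT rules for the published ports (requires root)
sudo iptables -t nat -L DOCKER -n -v

# Watch the source addresses of the DNS packets that actually reach the bridge
sudo tcpdump -ni docker0 udp port 53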

Hello Bucking_Horn, thank you very much for the update.

First of all, here is an overview of my "network":
My personal workstation -> My internet box -> My Raspberry Pi 4

My personal workstation has the IP 192.168.0.23 (it's the internet box that assigns this IP via DHCP).
My internet box has two DNS servers defined: 192.168.0.245 (the static IP of the Raspberry Pi 4) as primary DNS and 1.1.1.1 as secondary DNS.
My Raspberry Pi 4 has a static IP: 192.168.0.245.
And inside the Raspberry Pi:

pi@piholesrv:~ $ cat /etc/network/interfaces.d/eth0
auto eth0
allow-hotplug eth0
iface eth0 inet static
        address 192.168.0.245
        netmask 255.255.255.0
        gateway 192.168.0.1
        dns-nameservers 127.0.0.1 1.1.1.1

pi@piholesrv:~ $ cat /etc/resolv.conf
# Generated by resolvconf
nameserver 127.0.0.1

In resolv.conf I can't see the second DNS entry; I have already opened a ticket about that too.

On the Raspberry Pi 4, I have 3 containers running, as follows:

  1. caddy

  2. pihole/pihole:latest

  3. crazymax/cloudflared:latest (DoH)

Caddy is used as a reverse proxy (to reach Pi-hole) and has the following Docker configuration:

#!/bin/bash

_term() {
  echo "Caught SIGTERM signal!"
  kill -TERM "$child"
}

trap _term SIGTERM

docker stop reverse-proxy || true
docker rm -f reverse-proxy || true
docker pull caddy || true

docker run --rm --name reverse-proxy \
    -v /opt/services/reverse_proxy/data:/data \
    -v /opt/services/reverse_proxy/config/Caddyfile:/etc/caddy/Caddyfile \
    --net=host \
    caddy &

child=$!
wait "$child"

docker stop reverse-proxy

Pi-hole has the following configuration:

#!/bin/bash

# https://github.com/pi-hole/docker-pi-hole/blob/master/README.md

_term() {
  echo "Caught SIGTERM signal!"
  kill -TERM "$child"
}

trap _term SIGTERM

docker stop pihole || true
docker rm -f pihole || true
docker pull pihole/pihole:latest || true

eth0_ip=$(ip address show dev eth0 | grep -E -o "inet [0-9.]+" | grep -E -o "[0-9.]+")
echo "$eth0_ip"
docker run --rm \
    --name pihole \
    -e INTERFACE="eth0" \
    --hostname=piholesrv \
    -e TZ="Europe/Paris" \
    -e ServerIP="${eth0_ip}" \
    --shm-size=256m \
    -e WEBPASSWORD="Ch4ngeMeP1ease!" \
    -v "/opt/piholesrv/pihole/pihole/:/etc/pihole/" \
    -v "/opt/piholesrv/pihole/dnsmasq.d/:/etc/dnsmasq.d/" \
    -v "/opt/piholesrv/pihole/log/:/var/log/" \
    -p 127.0.0.1:8080:80 \
    -p 53:53/tcp \
    -p 53:53/udp \
    --ip 172.17.0.3 \
    --cap-add NET_ADMIN \
    --cap-add=SYS_NICE \
    --dns=127.0.0.1 --dns=1.1.1.1 \
    pihole/pihole:latest &

child=$!
wait "$child"

docker stop pihole

And cloudflared has the following configuration:

#!/bin/bash

_term() {
  echo "Caught SIGTERM signal!"
  kill -TERM "$child"
}

trap _term SIGTERM

docker stop dns_over_https || true
docker rm -f dns_over_https || true

# nslookup -port=5053 google.fr 127.0.0.1
docker run --name dns_over_https --rm \
  -e TZ="Europe/Paris" \
  -e TUNNEL_DNS_UPSTREAM="https://cloudflare-dns.com/dns-query,https://cloudflare-dns.com/dns-query" \
  --ip 172.17.0.2 \
  -p 127.0.0.1:5053:5053/udp \
  crazymax/cloudflared:latest &

child=$!
wait "$child"

docker stop dns_over_https

For DoH, I'm using https://cloudflare-dns.com/dns-query, and you are right, I should only provide 172.17.0.2 once in Pi-hole.

About 192.168.0.23: it's my personal workstation. In Pi-hole I can see all the IPs on my network when the queries are not yet "in the cache"; after that, all requests marked "OK (cached)" come from the client 172.17.0.1, known as docker0, the bridge.
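
(As an aside: if I wanted Pi-hole to see the real client IPs instead of 172.17.0.1, I believe host networking would be one way; this is a rough sketch derived from my current script, not something I have tested. Note that the web UI would then bind port 80 on the host, which would clash with Caddy.)

# Hypothetical variant of my pihole run command using host networking,
# so client IPs are preserved (all -p and --ip options dropped; untested)
docker run --rm --name pihole \
    --net=host \
    -e TZ="Europe/Paris" \
    -e WEBPASSWORD="Ch4ngeMeP1ease!" \
    -v "/opt/piholesrv/pihole/pihole/:/etc/pihole/" \
    -v "/opt/piholesrv/pihole/dnsmasq.d/:/etc/dnsmasq.d/" \
    pihole/pihole:latest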

Here is the Docker network view:

pi@piholesrv:~ $ sudo docker network list
NETWORK ID     NAME       DRIVER    SCOPE
752fdba7f2f1   bridge     bridge    local
9c2b2877b542   host       host      local
293d04c2c6ba   influxdb   bridge    local
fc176e6cb052   none       null      local
pi@piholesrv:~ $ sudo docker inspect 752fdba7f2f1
[
    {
        "Name": "bridge",
        "Id": "752fdba7f2f14632fb4c65acda8cbaa5765e72e9dc0a1a5bbd983f99f30350e1",
        "Created": "2021-04-18T12:48:05.676991432+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0af93eed0f45937a3c777b35930bfa67ce5642dc2c0ed6263cb8af292807579f": {
                "Name": "pihole",
                "EndpointID": "481de078527e9996f18e9dae2f2f8b5b42ff9fb55f9f1f98ac892b2c599627fb",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "dd71ef2f08456b518c99bd2d6847abf9a641207edd70bbcc68f27b4f6fa50d3c": {
                "Name": "dns_over_https",
                "EndpointID": "67ac4ad172089f11afc9fed745d7836d2b2dd9fbc0c9755f19a3e15a84dc36fa",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Moreover, on my Pi I have this kind of firewall script (should I remove all of this?):

#!/bin/bash

ufw --force reset
ufw logging high
ufw default deny

# DNS
ufw allow 53/udp
ufw allow 53/tcp

# reverse proxy
ufw allow 80/tcp
ufw allow 443/tcp

# SSH
ufw limit 22/tcp
# SSH backup
ufw limit 2222/tcp
# WireGuard VPN (51820/udp is WireGuard's default port)
ufw allow 51820/udp

ufw --force enable

Any news about my last comment? I'm still having the issue...

That's a lot of information, but I'm not sure whether my questions have yet been answered.

If you used only your DoH server as Pi-hole's upstream, and none of Pi-hole's upstreams used Pi-hole as theirs, directly or indirectly, then that would preclude a DNS loop.

As mentioned, your observation could also be explained by a client repeating DNS requests in quick succession. However, you'd often see such a client repeating its requests because Pi-hole is blocking them.

As the DNS request samples from your logs are shown to have been resolved, it could also mean that you have hit Pi-hole's rate limit, and the domain that triggered it would be purely coincidental.

With Docker NATting DNS requests to Pi-hole, its internal gateway appears as Pi-hole's only client, and the default limit of 1000 requests in 60 seconds per client may be too restrictive for you.
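
If rate limiting is what's happening, FTL should note it in its own log. Something like this might confirm it (the path follows your volume mapping; the exact wording of the log message is an assumption on my part):

# Look for rate-limiting notices in FTL's log on the host
grep -i "rate-limit" /opt/piholesrv/pihole/log/pihole-FTL.log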

In that case, try whether adapting or disabling Pi-hole's rate limit in pihole-FTL.conf fixes it for you.
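
For example (a sketch; inside the container the file is /etc/pihole/pihole-FTL.conf, which your script maps to /opt/piholesrv/pihole/pihole/ on the host):

# /etc/pihole/pihole-FTL.conf
# The default is RATE_LIMIT=1000/60 (1000 queries per 60 seconds per client).
# Raise the limit, e.g.:
RATE_LIMIT=10000/60
# or disable rate limiting entirely:
# RATE_LIMIT=0/0

A DNS restart afterwards (pihole restartdns inside the container, or simply restarting the container) should apply the change.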

To run the test, I erased the Pi-hole database for a fresh start, and I hit the domain "displaycatalog.mp.microsoft.com" only once from my personal workstation (192.168.0.23).

So I don't have any client repeating these requests, and Pi-hole is not blocking the request, because as you can see in my screenshots, the status is OK (cached).

If you used only your DoH server as Pi-hole's upstream, and none of Pi-hole's upstreams used Pi-hole as theirs, directly or indirectly, then that would preclude a DNS loop.

In Pi-hole I have only defined this:

(screenshot: the Upstream DNS Servers setting with the single custom entry 172.17.0.2#5053)

And behind the IP 172.17.0.2#5053 I have my container running cloudflared:

#!/bin/bash

_term() {
  echo "Caught SIGTERM signal!"
  kill -TERM "$child"
}

trap _term SIGTERM

docker stop dns_over_https || true
docker rm -f dns_over_https || true

# nslookup -port=5053 google.fr 127.0.0.1
docker run --name dns_over_https --rm \
  -e TZ="Europe/Paris" \
  -e TUNNEL_DNS_UPSTREAM="https://cloudflare-dns.com/dns-query,https://cloudflare-dns.com/dns-query" \
  --ip 172.17.0.2 \
  -p 127.0.0.1:5053:5053/udp \
  crazymax/cloudflared:latest &

child=$!
wait "$child"

docker stop dns_over_https

I agree about the rate limit, BUT if the rate limit is reached, it's because something is generating all those requests...

After using tshark to view what is going on on eth0 and on docker0 (the bridge), I only see the requests for displaycatalog.mp.microsoft.com on the docker0 interface. On eth0 I see only one hit, and then all the queries are on docker0.
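
For reference, these are roughly the captures I ran (the display filter syntax assumes a reasonably recent tshark):

# DNS traffic on the Docker bridge, filtered to the suspect domain
sudo tshark -i docker0 -f "udp port 53" -Y 'dns.qry.name contains "displaycatalog"'

# The same capture on the LAN-facing interface, for comparison
sudo tshark -i eth0 -f "udp port 53" -Y 'dns.qry.name contains "displaycatalog"'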

Yes, the fact that an answer is available is what made me doubt a DNS loop is involved, and it also prompted my reasoning about rate limits.

As mentioned before, you could somewhat expect to see a client requesting resolution of a blocked domain repeatedly before giving up.

There could have been a significant number of such requests for blocked domains immediately before the rate limit kicked in, and displaycatalog.mp.microsoft.com may just have happened to be involved afterwards.

In any case, it is your clients that issue those requests.
You should try to locate that client and the software that's causing them.

Were it a single client issuing high volumes of requests for a blocked domain, you should try to stop it from doing so.
If you can't configure or influence the client's behaviour explicitly, you could decide to unblock the associated domains in Pi-hole (potentially compromising all clients, beyond the one responsible for the high query count).

If it's not a blocked domain that encourages the repeated requests, you could consider having that client bypass Pi-hole completely, or stop using the device or software altogether.

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.