PiHole + Unbound issues

Running Pi-hole v6 (2025.04.0) with Unbound (klutchell/unbound:v1.22.0) and seeing some weird issues. All of the issues noted below go away if I switch to my router or Cloudflare as the upstream resolver. Edit: this is the first time I am using Unbound, so I cannot compare whether any of this was different in v5.

  1. Multiple times a day my browser shows an error saying it is unable to reach DNS. On refresh it works. I have tried disabling/enabling the secure DNS option in browsers and it has no impact. This happens on multiple devices and browsers.
  2. Browsing sometimes feels slower compared to using my router or Cloudflare as the upstream resolver.
  3. I had an issue where a specific DNS query returned no data from Unbound; after I restarted it, the query returned results. I assume this is cache related? The default for cache-max-ttl is 1 day, so I assume it would have fixed itself within a day?
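For point 3, one way to recover from a bad cached answer without restarting the container is Unbound's remote control interface. This is a sketch, assuming the klutchell image picks up drop-in files from custom.conf.d and ships the unbound-control binary; both assumptions should be verified against the image:

```
# remote-control.conf — hypothetical drop-in enabling unbound-control
# on loopback only, without certificates.
remote-control:
    control-enable: yes
    control-interface: 127.0.0.1
    # Skip certificate setup for loopback-only control
    control-use-cert: no
```

With that in place, something like `docker exec unbound unbound-control flush example.com` (or `flush_zone` for a whole subtree) should clear a stale entry instead of requiring a full restart.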

I am using the Unbound config from the unbound page of the Pi-hole documentation. Pi-hole talks to Unbound over Docker networking on the same host, and I also add the IP address of a second Unbound instance running on another host.

pihole:
    container_name: pihole
    domainname: hole
    hostname: pi
    image: ghcr.io/pi-hole/pihole:2025.04.0
    networks:
      default:
        ipv4_address: $PIHOLE_IP
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 9080:80
    volumes:
      - ${BASE_DOCKER_DATA_PATH}/pihole/conf:/etc/pihole
    environment:
      TZ: $TZ
      FTLCONF_webserver_api_password: $PIHOLE_PWD
      #FTLCONF_dns_upstreams: unbound;$PIHOLE2_IP#5353
      #FTLCONF_dns_upstreams: 1.1.1.1;1.0.0.1
      FTLCONF_dns_upstreams: 192.168.1.1
      FTLCONF_dns_dnssec: 'true'
      FTLCONF_dns_domainNeeded: 'True'
      FTLCONF_dns_bogusPriv: 'True'
      FTLCONF_dns_listeningMode: 'all'
      FTLCONF_ntp_ipv4_active: 'False'
      FTLCONF_ntp_ipv6_active: 'False'
      # https://pi-hole.net/blog/2025/02/21/v6-post-release-fixes-and-findings/#page-content
      #  no-0x20-encode to help with intermittent issue with cloudflared
      # https://discourse.pi-hole.net/t/dnsmasq-warn-reducing-dns-packet-size/51803/30
      FTLCONF_misc_dnsmasq_lines: 'no-0x20-encode;address=/xxx.yyyy/192.168.1.86;rebind-domain-ok=/plex.direct/'
      FTLCONF_database_maxDBdays: 7
    restart: unless-stopped
unbound:
    container_name: unbound
    image: klutchell/unbound:v1.22.0
    ports:
      - 5353:53/udp
      - 5353:53/tcp
    healthcheck:
      # Use the drill wrapper binary to reduce the exit codes to 0 or 1 for healthchecks
      test: ['CMD', 'drill-hc', '@127.0.0.1', 'dnssec.works']
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 30s
    volumes:
      - ../../data/unbound:/etc/unbound/custom.conf.d
    restart: unless-stopped
    # https://github.com/MatthewVance/unbound-docker-rpi/issues/4#issuecomment-1341653963
    cap_add:
      - NET_ADMIN
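One detail worth noting about the commented-out upstream line (this reading of the setup is an assumption on my part): when Pi-hole reaches Unbound by service name over the shared Docker network, it connects to the container's internal port 53, so no `#port` suffix is needed there; the published `5353` mapping only matters for reaching this Unbound from outside the host, e.g. from the second Pi-hole. Re-enabled, the line would look like:

```yaml
      # "unbound" resolves via Docker's embedded DNS to the local container
      # (internal port 53); the second host is reached via its published port.
      FTLCONF_dns_upstreams: unbound;$PIHOLE2_IP#5353
```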

My Unbound config is:

server:
    # If no logfile is specified, syslog is used
    # logfile: "/var/log/unbound/unbound.log"
    log-time-ascii: yes
    verbosity: 1

    interface: 127.0.0.1
    port: 53
    do-ip4: yes
    do-udp: yes
    do-tcp: yes

    # May be set to no if you don't have IPv6 connectivity
    do-ip6: yes

    # You want to leave this as no unless you have *native* IPv6. With 6to4 and
    # Teredo tunnels your web browser should favor IPv4 for the same reasons
    prefer-ip6: no

    # Use this only when you downloaded the list of primary root servers!
    # If you use the default dns-root-data package, unbound will find it automatically
    #root-hints: "/var/lib/unbound/root.hints"

    # Trust glue only if it is within the server's authority
    harden-glue: yes

    # Require DNSSEC data for trust-anchored zones, if such data is absent, the zone becomes BOGUS
    harden-dnssec-stripped: yes

    # Don't use capitalization randomization as it is known to sometimes cause DNSSEC issues
    # see https://discourse.pi-hole.net/t/unbound-stubby-or-dnscrypt-proxy/9378 for further details
    use-caps-for-id: no

    # Reduce EDNS reassembly buffer size.
    # IP fragmentation is unreliable on the Internet today, and can cause
    # transmission failures when large DNS messages are sent via UDP. Even
    # when fragmentation does work, it may not be secure; it is theoretically
    # possible to spoof parts of a fragmented DNS message, without easy
    # detection at the receiving end. Recently, there was an excellent study
    # >>> Defragmenting DNS - Determining the optimal maximum UDP response size for DNS <<<
    # by Axel Koolhaas, and Tjeerd Slokker (https://indico.dns-oarc.net/event/36/contributions/776/)
    # in collaboration with NLnet Labs explored DNS using real world data from
    # the RIPE Atlas probes, and the researchers suggested different values for
    # IPv4 and IPv6 and in different scenarios. They advise that servers should
    # be configured to limit DNS messages sent over UDP to a size that will not
    # trigger fragmentation on typical network links. DNS servers can switch
    # from UDP to TCP when a DNS response is too big to fit in this limited
    # buffer size. This value has also been suggested in DNS Flag Day 2020.
    edns-buffer-size: 1232

    # Perform prefetching of close to expired message cache entries
    # This only applies to domains that have been frequently queried
    prefetch: yes

    # One thread should be sufficient; this can be increased on beefy machines.
    # For most users running on small networks or on a single machine, it is
    # unnecessary to seek performance gains by raising num-threads above 1.
    num-threads: 1

    # Ensure kernel buffer is large enough to not lose messages in traffic spikes
    so-rcvbuf: 1m

    # Ensure privacy of local IP ranges
    private-address: 192.168.0.0/16
    private-address: 169.254.0.0/16
    private-address: 172.16.0.0/12
    private-address: 10.0.0.0/8
    private-address: fd00::/8
    private-address: fe80::/10

    private-domain: plex.direct
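Regarding point 2: a recursive resolver with a cold cache will always be slower on first lookup than forwarding to a router or Cloudflare, which answer from large warm caches. One standard mitigation (a sketch; these are documented Unbound options, added under the server: clause, with values to tune to taste) is to serve expired records while refreshing them in the background:

```
    # Answer from cache even after the TTL has expired, refreshing in the
    # background; serve-expired-ttl bounds how long stale data may be used.
    serve-expired: yes
    serve-expired-ttl: 86400
    # Wait at most this many milliseconds for a fresh answer before
    # falling back to the expired cached record.
    serve-expired-client-timeout: 1800
```

Together with the existing prefetch: yes, this keeps frequently used names answered from cache rather than blocking on a full recursion.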

I am not able to determine why I see issue #1 with Unbound but not with any other upstream DNS. Any pointers on what I should look at?
