How to deploy Pi-hole on Kubernetes using Helm

If you're running a Kubernetes cluster and want Pi-hole as your network-wide DNS ad blocker, here's a step-by-step guide using a community Helm chart from [HelmForge](https://github.com/helmforgedev/charts).


Quick Install

Add the repository and install with defaults:


```shell
helm repo add helmforge https://repo.helmforge.dev
helm repo update
helm install pihole helmforge/pihole
```

Or install directly from the OCI registry:


```shell
helm install pihole oci://ghcr.io/helmforgedev/helm/pihole
```

This gives you Pi-hole running with:

  • Persistent storage for `/etc/pihole` (1Gi)

  • DNS service on port 53 via LoadBalancer

  • Google DNS (8.8.8.8 / 8.8.4.4) as upstream

  • Web admin UI on port 80
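Once the release is up, it's worth a quick sanity check. The label selector below assumes the chart follows the standard Helm `app.kubernetes.io/instance` labeling convention; adjust if your chart labels differ:

```shell
# Pi-hole pod should reach Running (plus the Unbound/exporter sidecars if enabled)
kubectl get pods -l app.kubernetes.io/instance=pihole

# The DNS LoadBalancer service needs an EXTERNAL-IP before clients can use it
kubectl get svc -l app.kubernetes.io/instance=pihole
```

If the DNS service shows `<pending>` for its external IP, see the note about MetalLB further down.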


Enabling Unbound for recursive DNS

If you want full DNS privacy without relying on Google, Cloudflare, or any upstream resolver, enable the Unbound sidecar. It performs recursive DNS resolution directly from root nameservers.


```yaml
# values.yaml
unbound:
  enabled: true
```

When Unbound is enabled, Pi-hole automatically switches its upstream DNS to 127.0.0.1#5335 (the Unbound sidecar). No manual DNS configuration needed.
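You can verify the recursive path by querying the Unbound sidecar directly from inside the pod. This assumes the deployment is named `pihole` and the image ships `dig`; if not, use whatever DNS client the container has:

```shell
# Ask Unbound (listening on 127.0.0.1:5335 per the chart) to resolve a name
# recursively from the root servers
kubectl exec deploy/pihole -c pihole -- dig @127.0.0.1 -p 5335 example.com +short
```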


Custom DNS records

Manage local DNS entries directly from your Helm values — no need to manually edit Pi-hole's admin UI:


```yaml
# values.yaml
dns:
  customRecords:
    - "192.168.1.10 nas.local"
    - "192.168.1.20 printer.local"
    - "192.168.1.30 homeassistant.local"
  cnameRecords:
    - "cname=media.local,nas.local"
```
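With those values applied, the records resolve through Pi-hole from any machine on the network. Assuming the DNS service is reachable at 192.168.1.53 (the fixed IP used in the production example later in this post):

```shell
dig @192.168.1.53 nas.local +short      # should return 192.168.1.10
dig @192.168.1.53 media.local +short    # CNAME chain: media.local -> nas.local -> 192.168.1.10
```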


Monitoring with Prometheus

Enable the pihole-exporter sidecar for Prometheus metrics and Grafana dashboards:


```yaml
# values.yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s
```

This exposes Pi-hole stats (queries, blocked domains, cache hits) on port 9617.
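To confirm Prometheus has something to scrape before wiring up dashboards, you can hit the exporter endpoint directly. The service name here is an assumption (check `kubectl get svc` for the metrics service the chart actually creates), and the `pihole_` metric prefix follows the commonly used pihole-exporter:

```shell
# Forward the exporter port locally, then inspect the raw metrics
kubectl port-forward svc/pihole 9617:9617 &
curl -s http://localhost:9617/metrics | grep '^pihole_'
```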


Exposing the admin UI with Ingress


```yaml
# values.yaml
ingress:
  enabled: true
  ingressClassName: nginx  # or traefik, or your ingress class
  hosts:
    - host: pihole.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: pihole-tls
      hosts:
        - pihole.example.com
```


Full production example

Here's a complete `values.yaml` for a production setup with Unbound, fixed DNS IP, monitoring, custom records, and ingress:


```yaml
pihole:
  timezone: America/Sao_Paulo
  dnssec: true

admin:
  password: "your-secure-password"

dns:
  customRecords:
    - "192.168.1.10 nas.local"
    - "192.168.1.20 printer.local"

serviceDns:
  type: LoadBalancer
  loadBalancerIP: "192.168.1.53"
  externalTrafficPolicy: Local

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: pihole.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: pihole-tls
      hosts:
        - pihole.example.com

persistence:
  enabled: true
  size: 2Gi

unbound:
  enabled: true

metrics:
  enabled: true
  serviceMonitor:
    enabled: true
```

Install with your custom values:


```shell
helm install pihole helmforge/pihole -f values.yaml
```
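When you change `values.yaml` later, the same file upgrades the release in place rather than requiring a reinstall:

```shell
helm upgrade pihole helmforge/pihole -f values.yaml
```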


Links

This is an open-source community chart (MIT license), not affiliated with the Pi-hole project. Feedback and contributions are welcome!

Nice writeup. A few things worth noting for anyone following this on a production cluster:

The LoadBalancer service type for port 53 requires MetalLB or an equivalent; bare-metal clusters without a load balancer controller will leave the service stuck in Pending indefinitely. If you're on bare metal and don't have MetalLB, `hostNetwork: true` on the Pi-hole pod is the simpler workaround, though it pins Pi-hole to a specific node.
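For reference, a minimal MetalLB layer-2 setup that would satisfy the fixed `loadBalancerIP` from the values above looks like this (pool and advertisement names are examples; requires MetalLB v0.13+ with its CRDs installed):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pihole-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.53/32   # the fixed DNS IP used in the production values
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: pihole-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - pihole-pool
```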

One gotcha specific to Kubernetes: if your cluster's CoreDNS is configured to forward queries upstream, and you point CoreDNS at Pi-hole while Pi-hole forwards to an upstream that CoreDNS also uses, you can get a DNS resolution loop. The safest approach is either pointing Pi-hole's upstream at a resolver entirely outside the cluster, or enabling the Unbound sidecar as shown above. Unbound resolves from the root servers directly, which sidesteps the loop entirely.
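If you do point cluster DNS at Pi-hole, the CoreDNS side is a `forward` directive in the Corefile (edit with `kubectl -n kube-system edit configmap coredns`). A sketch, assuming Pi-hole's DNS service sits at 192.168.1.53:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    # Forward everything else to Pi-hole. Don't also list Pi-hole's own
    # upstream here, or you recreate the loop described above.
    forward . 192.168.1.53
    cache 30
    loop
    reload
}
```

CoreDNS's `loop` plugin will crash the pod with an explicit error if a forwarding loop is detected, which at least makes the failure obvious.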

On upstream resolver choice: if you're not using Unbound and need a reliable upstream, it's worth benchmarking a few options from your cluster's actual network location rather than just picking Google or Cloudflare by default. Resolvers that rank well in global benchmarks don't always perform best from your specific network. publicdns.info has a benchmark tool that tests from your location so you can see actual latency and reliability numbers before hardcoding an upstream into your values.yaml.

One more thing: the default 1Gi PVC for `/etc/pihole` is usually enough, but if you're running multiple blocklists and have a large gravity database, keep an eye on it. Gravity update jobs can temporarily use quite a bit of disk.
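A quick way to keep an eye on that usage from outside the pod (deployment name assumed from the install above):

```shell
# Free space on the persistent volume backing /etc/pihole
kubectl exec deploy/pihole -c pihole -- df -h /etc/pihole
```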