Clustered Pi-hole - I've done it

How to install a Pi-hole cluster

Setup is as follows:

1st PI
IP: 192.168.2.170
NAME: pihole01

2nd PI
IP: 192.168.2.171
NAME: pihole02

Clustered IP:
IP: 192.168.2.172
NAME: pihole

On both nodes:

Everything is done as "root"; if you're not root, prefix the commands with sudo.

Install Raspbian Stretch.
Install additional packages:

apt-get install keepalived
apt-get install libipset3
apt-get install ntp
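
If the package index is stale the installs may fail, so it can be worth refreshing it first; the three installs above can also be combined into a single line (same packages, just a more compact form):

apt-get update
apt-get install -y keepalived libipset3 ntp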

Set up Pi-hole on both nodes.
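
Pi-hole itself is installed the usual way on each node; the official one-line installer is the simplest route (run it on pihole01 and pihole02, assuming you're comfortable piping an installer into bash):

curl -sSL https://install.pi-hole.net | bash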

Set up /etc/hosts on both nodes:

127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
192.168.2.170   pihole01.samba.domain.dom pihole01
192.168.2.171   pihole01.samba.domain.dom pihole02
192.168.2.172   pihole.samba.domain.dom pihole

Set up keepalived and enable it at boot:
systemctl enable keepalived.service

Config on Master:

root@pihole01:/scripts# cat /etc/keepalived/keepalived.conf
global_defs {
        notification_email {
                mail@domain.com                                 # Notification destination address(es)
        }
        notification_email_from mail@domain.com                 # Notification source address
        smtp_server localhost                                   # SMTP server address
        smtp_connect_timeout 30                                 # Timeout for the SMTP server
        router_id pihole01                                      # Unique ID, e.g. the hostname
        script_user root                                        # User the notify scripts run as
        enable_script_security                                  # Enable script security
}

vrrp_instance PIHOLE {
        state MASTER
        interface eth0                                          # Interface to use
        virtual_router_id 51                                    # Virtual router ID
        priority 150                                            # Master priority 150, backup priority 50
        advert_int 5                                            # Interval between VRRP advertisements
        smtp_alert                                              # Enable e-mail notifications

        unicast_src_ip 192.168.2.170                            # Unicast source address
        unicast_peer {
                192.168.2.171                                   # Unicast peer address(es)
        }

        authentication {
                auth_type PASS                                  # Authentication type
                auth_pass XXXXXXXXXX                            # Authentication password
        }

        virtual_ipaddress {
                192.168.2.172/24                                # Virtual failover IP address
        }

#       notify_master ""                                        # Notify script for the MASTER state (uncomment if used)
#       notify_backup ""                                        # Notify script for the BACKUP state (uncomment if used)
#       notify_fault ""                                         # Notify script for the FAULT state (uncomment if used)
}

Config on Slave:

root@pihole02:~# cat /etc/keepalived/keepalived.conf
global_defs {
        notification_email {
                mail@domain.com                                 # Notification destination address(es)
        }
        notification_email_from mail@domain.com                 # Notification source address
        smtp_server localhost                                   # SMTP server address
        smtp_connect_timeout 30                                 # Timeout for the SMTP server
        router_id pihole02                                      # Unique ID, e.g. the hostname
        script_user root                                        # User the notify scripts run as
        enable_script_security                                  # Enable script security
}

vrrp_instance PIHOLE {
        state BACKUP
        interface eth0                                          # Interface to use
        virtual_router_id 51                                    # Virtual router ID
        priority 50                                             # Master priority 150, backup priority 50
        advert_int 5                                            # Interval between VRRP advertisements
        smtp_alert                                              # Enable e-mail notifications

        unicast_src_ip 192.168.2.170                            # Unicast source address
        unicast_peer {
                192.168.2.171                                   # Unicast peer address(es)
        }

        authentication {
                auth_type PASS                                  # Authentication type
                auth_pass XXXXXXXXXX                            # Authentication password
        }

        virtual_ipaddress {
                192.168.2.172/24                                # Virtual failover IP address
        }

#       notify_master ""                                        # Notify script for the MASTER state (uncomment if used)
#       notify_backup ""                                        # Notify script for the BACKUP state (uncomment if used)
#       notify_fault ""                                         # Notify script for the FAULT state (uncomment if used)
}
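
After both config files are in place, (re)start keepalived on both nodes so the configuration is picked up (a small addition on my part, not spelled out in the original steps):

systemctl restart keepalived.service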

Set up sync:

On pihole01, generate an SSH key:

root@pihole01:~# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:0EBtzMNnGEdAIqA19b29ekv/oYeUU/xiWQq6vrdwX94 root@pihole01
The key's randomart image is:
+---[RSA 2048]----+
|  +oo.+*+=o      |
| o . o =O.o      |
|.     o.o+   .   |
|       . o  . o .|
|        S .. + = |
|          ..+ = .|
|          +o.+...|
|         oo+oo.+.|
|        .o++++o E|
+----[SHA256]-----+

Copy the key to pihole02:

root@pihole01:~# ssh-copy-id pihole02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'pihole02 (192.168.2.171)' can't be established.
ECDSA key fingerprint is SHA256:ZR+1egGWI7WFsQzuWVfEf3nHgX4Q8SUDwp4d50aqTSs.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@pihole02's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'pihole02'"
and check to make sure that only the key(s) you wanted were added.
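
A quick way to confirm that the passwordless login works (my own check, not part of the ssh-copy-id output):

ssh pihole02 hostname      # should print "pihole02" without asking for a password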

Create a directory for the sync script:

   mkdir /scripts
   chmod 750 /scripts

Create the sync script (this could be done better, e.g. only syncing on file change):

vi /scripts/sync-pihole.sh

#!/bin/bash
echo "Start at $(date) " >> /var/log/pihole.sync
# Copy the white- and blacklists to pihole02 if they exist
test -e /etc/pihole/whitelist.txt && scp /etc/pihole/whitelist.txt pihole02:/etc/pihole/whitelist.txt
test -e /etc/pihole/blacklist.txt && scp /etc/pihole/blacklist.txt pihole02:/etc/pihole/blacklist.txt
# Rebuild gravity on pihole02 so the copied lists take effect
ssh pihole02 pihole -g >> /var/log/pihole.sync
echo "Stop at $(date) " >> /var/log/pihole.sync

Either:
call this script manually after editing the white-/blacklist on pihole01
Or:
create a cron entry on pihole01:
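
A possible crontab entry for root on pihole01 (the hourly schedule is only an example; pick whatever interval suits you):

# added via "crontab -e" as root
0 * * * * /scripts/sync-pihole.sh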

Test the setup (on pihole01):

systemctl stop keepalived

The clustered IP should switch to pihole02.
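
One way to verify the failover (my own check, not part of the original write-up; dig comes from the dnsutils package):

ip addr show eth0 | grep 192.168.2.172    # on pihole02: the VIP should now be listed
dig @192.168.2.172 google.com             # from any client: DNS should still be answered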

This setup works!
What is missing from my point of view:
clustered DHCP (works with isc-dhcp, but is not integrated into Pi-hole)
automatic sync between both Pi-holes

Advantages:
Two Raspberry Pis are cheap and don't need much power
All clients only need one IP address as DNS server (the clustered IP)

Hope this helps


Nice.

What benefits do you see clustering providing over using two Pi-holes running independently and in parallel?

PiHole 1 at IP1
PiHole 2 at IP2
DNS 1 in router - IP1
DNS 2 in router - IP2

Clients would get both IPs from the router and use whichever they choose. If one Pi goes down, its clients shift to the other.

This setup provides similar redundancy, it appears.

It depends. I am an IT professional, and in the professional IT world redundancy is a big topic.
The setup I've described here is at my home; I set it up because my wife does not like any outages.
I just found one additional issue (not a real one): the "disable Pi-hole for 10 minutes" function is not synchronized in my setup. I like to dig into technical issues, and I had read many complaints about Pi-hole not being cluster ready, so I tried it out, and that's the result.

Here's an interesting cluster setup that was built on 4 Pis using a Docker swarm.

https://www.reddit.com/r/pihole/comments/9b02dn/no_kill_like_overkill_quad_rpi_3b_cluster_for/?sort=old

I'm searching for those things: case, power supply and switch.
Do you have any suggestions?

Do you have any details?

Just what I saw on the Reddit post. You could go there and message the person who built this setup.

https://gathering.tweakers.net/forum/list_message/56207713#56207713
Does that help?

Love the VRRP based solution by the way :wink:

The problem with that solution is that DNS resolution is random between the two, and if one of them goes down, resolution gets slower as well! :wink:

I've been looking to do something like this as well, and it's very well put; however, it seems like more overkill than what is needed.

While I can appreciate your wife's affinity for uptime, clusters usually have performance issues if one node goes down. Since this is DNS, the impact is negligible. In this case it would have been easier just to set up two independent instances, but then the impact on performance is more noticeable.

What I have been mulling over is a k8s cluster with rolling updates and two pods. Then I put HAProxy in front on a single IP address and get automatic load balancing between the two instances.

While this is considered overkill, updates are a snap, there is a minimal performance hit, and I can use the k8s cluster for other projects.

Thank you very much for this great guide! It's inspiring!
I have some questions/doubts about the configuration.
Looking at the keepalived conf files on master and slave, I don't understand why the unicast_src_ip and unicast_peer are the same in both configurations.
In more detail, the configuration on the Master seems to be ok, while on the Slave the unicast_src_ip should be (correct me if I'm wrong) 192.168.2.171 and the unicast_peer 192.168.2.170.
Other than that, the configuration in /etc/hosts seems strange: both IP addresses 192.168.2.170 and 192.168.2.171 share the same FQDN pihole01.samba.domain.dom.
Maybe 192.168.2.171 should be configured with pihole02.samba.domain.dom?

Thank you again!

Alessandro


It has been working for months; I will look into it next week.

fyi

Thank you for these steps.
I needed a solution that keeps my home network's DNS more stable.

Hello

I've tried Pi-hole with keepalived.
I also removed keepalived again due to one fatal flaw:

What happens if the DNS service on your master node stops responding, but the master node itself is still up?

I encountered this problem with a setup like this, which made me remove keepalived and just run two DNS servers on the network,
with DNS2 updating from DNS1 via Gravity Sync.

You can use the track_script function in keepalived for that.

Something like this (the vrrp_script block is defined at the top level of keepalived.conf, and the track_script reference goes inside the vrrp_instance block):

vrrp_script check_pihole {
    script "pidof pihole-FTL"      # exits 0 while pihole-FTL is running
    interval 10
    fall 2
    rise 2
    weight 10
    timeout 15
}

track_script {
    check_pihole
}

There are many, many other ways to check whether the services are running besides pidof; any shell command or shell script that returns 0 if OK and something else if not works fine.
weight adjusts the node's priority relative to the base priority you set; the master's should still end up higher than the slave's.

If you're running keepalived with IPVS, there's a check function in real_server. Unfortunately there's no good way to check UDP, so the easiest approach is an HTTP_GET against port 80 (or 443, if you're using HTTPS for the Pi-hole web GUI).

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 101 
    priority 100 
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }   
    virtual_ipaddress {
        10.13.37.172
    }   
}

# Note: a real_server block belongs inside a virtual_server <VIP> <port> { ... } block
real_server 10.13.37.172 80 {
    weight 10
    HTTP_GET {
        url {
           path /
        }
    }   
}

I added it and it works (the HTTP check).

Thought I'd add to the addition on the old post...

You can actually set the script to force a DNS lookup on any given node. Using dig, I have the following function as part of my check script, running on the same node as Pi-hole:

# FAILED, PASSED and VIP_TEST_DOMAIN are defined elsewhere in the full check script
checkDNS() {

  DNS_TEST=${FAILED}  # Assume the worst

  DIG_EXEC=$(which dig)
  if [[ -x ${DIG_EXEC} ]]; then
    # A refused or timed-out query makes dig exit non-zero
    ${DIG_EXEC} -4 @127.0.0.1 ${VIP_TEST_DOMAIN} > /dev/null
    [[ $? -eq 0 ]] && DNS_TEST=${PASSED}
  else
    echo "dig is not installed for controlled DNS test. Failing."
  fi

  echo "DNS test result: ${DNS_TEST}"

  return ${DNS_TEST}

}

Of course these types of tests do make a mess of any stats you track related to the % of ads blocked.
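
For completeness, here is a minimal sketch of how such a check could be packaged as a standalone script for keepalived's vrrp_script to call; the path, the PASSED/FAILED values and the test domain are my own assumptions, not taken from the post above:

#!/bin/bash
# /scripts/check-pihole-dns.sh - exit 0 if the local resolver answers, non-zero otherwise
PASSED=0
FAILED=1
VIP_TEST_DOMAIN="google.com"

checkDNS() {
  DNS_TEST=${FAILED}                  # assume the worst
  if [[ -x $(which dig) ]]; then
    # a refused or timed-out query makes dig exit non-zero
    dig -4 @127.0.0.1 ${VIP_TEST_DOMAIN} > /dev/null && DNS_TEST=${PASSED}
  fi
  return ${DNS_TEST}
}

checkDNS
exit $?

keepalived would then reference it from a vrrp_script block pointing at /scripts/check-pihole-dns.sh, just like the pidof example further up.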