High availability (HA) for Pi-hole (running two Pi-holes)

Using this script, why does it update gravity afterwards on the remote machine? It basically renders the script useless, as the two Pi-holes won't show matching blocked-domain counts.

@RamSet I was hoping to see your new updated script that uses functions and more sexiness.

Also inquiring about the status of this as a native feature in the application, now that it is 2020.

I'm on mobile right now. Will post the link to it in a few hours.

Here we go:

This one uses configuration files copied from the original (monitored) Pi-hole instance, located in /etc/dnsmasq.d/.
The files (according to the script) need to be stored in /home/pi/stuff/ (the path can be changed in the pass function).

Take note of the file names and adjust the steps where they are deleted accordingly.

Do not delete 01-pihole.conf from the backup Pi-hole as that will render the instance unusable.

My systems are heavily integrated with Pushover notifications, so this code sends those too (via an external command).

The pushover lines can be removed without any impact on the overall functionality of the "monitor".
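
For context, a minimal sketch of what such a monitor could look like (hypothetical: the IP and file layout are placeholders, and the real script linked above also handles the Pushover notifications):

#!/bin/bash
# Runs on the backup Pi-hole. Assumes the primary's dnsmasq configs were
# copied into /home/pi/stuff/ as described above.
PRIMARY=192.168.1.1    # placeholder IP of the monitored instance

if ! ping -c 3 -W 2 "$PRIMARY" > /dev/null 2>&1; then
    sudo cp /home/pi/stuff/*.conf /etc/dnsmasq.d/    # leave 01-pihole.conf alone
    sudo pihole restartdns
    # Pushover notification via external command would go here (optional)
fi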

This might work for some people...
UPDATED: Confirmed this works (implemented on my systems)

LSYNCD

I haven't tested it on Pi-hole yet, but I use it at work to keep an HA pair of servers in sync.
You have the option to select/exclude the dirs/files you want synced.

Both Servers

sudo mkdir /etc/lsyncd
sudo vi /etc/lsyncd/lsyncd.conf.lua
sudo mkdir /var/log/lsyncd
sudo touch /var/log/lsyncd/lsyncd.status
sudo touch /var/log/lsyncd/lsyncd.log

=

On the Primary PiHole (IP=192.168.1.1)

sudo vi /etc/lsyncd/lsyncd.conf.lua

settings {
  logfile = "/var/log/lsyncd/lsyncd.log",
  statusFile = "/var/log/lsyncd/lsyncd.status",
  statusInterval = 20
}
 
sync {
  default.rsyncssh,
  delete = false,
  source = "/etc/pihole/",
  host = "192.168.1.2",
  targetdir = "/etc/pihole/",
  excludeFrom = "/etc/lsyncd/lsyncd.exclude",
  delay = 20,
  rsync = {
    archive = true,
    owner = true,
    perms = true,
    group = true,
    compress = false,
    whole_file = true,
  }
}

=
On the Secondary PiHole (IP = 192.168.1.2)

sudo vi /etc/lsyncd/lsyncd.conf.lua

settings {
  logfile = "/var/log/lsyncd/lsyncd.log",
  statusFile = "/var/log/lsyncd/lsyncd.status",
  statusInterval = 20
}
 
sync {
  default.rsyncssh,
  delete = false,
  source = "/etc/pihole/",
  host = "1912.168.1.1",
  targetdir = "/etc/pihole/",
  excludeFrom = "/etc/lsyncd/lsyncd.exclude",
  delay = 20,
  rsync = {
    archive = true,
    owner = true,
    perms = true,
    group = true,
    compress = false,
    whole_file = true,
  }
}

=
On both servers...

cat /etc/lsyncd/lsyncd.exclude
setupVars.conf
local.list
localbranches
localversions
dhcp.leases
install.log
*.db
*.db-journal
.gravity*
.list*
whitelist/.git*

=
Usage

sudo systemctl status lsyncd -l
sudo systemctl force-reload lsyncd
sudo systemctl start lsyncd
sudo systemctl restart lsyncd
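
If lsyncd should also come up after a reboot, enabling the unit is probably wanted too (assuming the package installed a systemd service):

sudo systemctl enable lsyncd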

If you are using any crontab entries to update adlists/whitelists/etc., be aware this might conflict with the sync.

What about using Docker? You could put together a Dockerfile that replaces the list files with what you have in your volume. Any time you update a file in the volume, just roll the containers and they'll pick up the changes.
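
As a rough sketch of that idea (pihole/pihole is the official image; ports, environment variables, and whether two FTL instances can safely write to one shared SQLite database are left open here):

# Hypothetical: two containers sharing one named volume for /etc/pihole
docker volume create pihole_etc
docker run -d --name pihole1 -v pihole_etc:/etc/pihole pihole/pihole
docker run -d --name pihole2 -v pihole_etc:/etc/pihole pihole/pihole

# After updating a list file in the volume, roll the containers:
docker restart pihole1 pihole2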

Still trying to get a load balancer (LB) to work. (Internet consists of 1 bar of 4G LTE where I live... not even DSL...)

Have you made any progress here? I was wondering if two Pi-hole containers could just share the same volume...

You can easily sync the /etc/pihole/gravity.db database between your devices, possibly even by mounting it from a network share. This database is a single source of truth containing blocked domains, adlists, configured clients, groups and black-/whitelists.
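
For example, a minimal one-way push from the primary could look like this (the remote host, the pi account, and passwordless sudo on the remote side are assumptions):

rsync -a --rsync-path="sudo rsync" /etc/pihole/gravity.db pi@192.168.1.2:/etc/pihole/
ssh pi@192.168.1.2 'sudo pihole restartdns reload-lists'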

Statistics synchronization is still something that is unlikely to come, as it would require a radical change to how FTL processes data: FTL stores everything in memory and precomputes statistics to be able to reply to your requests within milliseconds. If you used Pi-hole in the pre-FTL times (three years ago), you know the speedup from the introduction of FTL was several orders of magnitude, and it is what made using Pi-hole on low-end hardware really enjoyable.

Hi,
After the 5.0 release, where settings include groups and per-client blocking, a single shared configuration for networks using two Pi-holes is needed more than ever.
Please consider adding it.

Thanks in advance
Best Regards,

As I said, syncing this one file is sufficient. You should run pihole restartdns reload-lists after the synchronization, and all your groups and per-client settings will be in sync.

The issue is: how? Some users will want to sync it through a third party (some will use an internal NAS, some maybe even an external party like Dropbox). Others may want Pi-hole to maintain/operate a file-sync mechanism itself.

I'm aware that there are many votes on this. However, given not only the limited time of the developers but also that it still seems rather unclear what the technical realization should look like, I'm afraid this may still take a bit of time. If anyone can come up with a good solution and opens a PR against our publicly available code base, it will surely speed up the discussion around this.

Have you checked out @fireheadman's idea above? I'd rather not add this to our automated installer; however, we can surely work on a guide, just like Redirecting... which is used by many users to set up custom deployments of their Pi-holes.

I'm still using the old Reddit script mentioned here earlier.
Do I need to adjust the list of files after upgrading to 5.0?
Currently I have this:
FILES=(black.list blacklist.txt regex.list whitelist.txt lan.list) #list of files you want to sync

Hi DL6ER,
So your advice is to sync only:
/etc/pihole/gravity.db
using any method (for example lsyncd) and run:
pihole restartdns reload-lists
after the sync.
Is this enough to keep the secondary Pi-hole matching the main one?

For the automatic sync, I suggest some sort of API between the Pi-holes, so that when a change is made on one of them it is sent to the others. I understand it's not trivial.

Best Regards,

I sync gravity.db using a couple of crude bash scripts that I either run manually, when I am making a lot of changes, or via a cron job once a week after the weekly gravity update.

https://raw.githubusercontent.com/jrschat/PublicStuff/master/pds.sh

https://raw.githubusercontent.com/jrschat/PublicStuff/master/reload-list.sh
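
For reference, a weekly schedule for the first script might look like this in /etc/cron.d (the exact time and install path are assumptions; Pi-hole's own gravity update runs early on Sunday mornings, so this fires after it):

# m h dom mon dow user  command
0   6 *   *   7   root  /usr/local/bin/pds.sh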

Great idea! It would be good to have a consolidated configuration example, as well as some additions, like how to sync extra files/folders (/etc/dnsmasq.d/) and how to run post-sync commands like pihole restartdns.

Hi everyone...

Happy to collaborate on this. It's something I've been trying to set up lately, since my primary RasPi died and my wife can't use the internet until I fix it. :slight_smile:

I think the DNS is not the biggest single point of failure but the DHCP (for those that use it in combination with DNS filtering).

We fully understand that a second DNS is not a backup but becomes active for some clients, so both Raspberries should have it active and responding to queries. We will not have combined reports... Gravity lists and updates are also not really important, and it is very easy to have the same settings on both.

With 5.0 all the whitelists/blocklists are already in gravity.db, so it should be easy to copy it using the already-developed scripts, making sure we run the corresponding pihole command on the destination Pi and not on the source one (yes, that happened to me).

So, we need to add some files from the /etc/dnsmasq.d folder for the DHCP server, especially 99-second-DNS.conf and 04-pihole-static-dhcp.conf.

What I'm unsure about is whether we can activate the DHCP server on the backup RasPi once we detect the primary is down... A rough sketch of that is below.
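
One way it could work, as a very rough sketch: the network values are placeholders, and the pihole -a enabledhcp helper (and its argument order) is an assumption taken from the web admin scripts, so verify it before relying on this.

#!/bin/bash
# Hypothetical check on the backup Pi: enable the local DHCP server
# when the primary stops answering pings.
PRIMARY=192.168.1.1    # placeholder IP of the primary Pi-hole

if ! ping -c 3 -W 2 "$PRIMARY" > /dev/null 2>&1; then
    # arguments: range start, range end, router, lease time (hours), domain -- all placeholders
    sudo pihole -a enabledhcp "192.168.1.100" "192.168.1.200" "192.168.1.254" "24" "lan"
fi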

Thanks in advance
Miguel

Below is my first attempt at using lsyncd to synchronize some files between 2 Pi-hole servers.
Thanks to @fireheadman for the inspiration :wink:

Notes:

  • I'm using one-way sync, primary --> secondary server
  • A separate 'rsync' account is used on the second/backup server, with passwordless login and the ability to sudo (not covered here, as it is already described in multiple sources on the Internet)
  • Reload after a successful sync is yet to be implemented; 'pihole restartdns reload-lists' is recommended
  • The latest version of lsyncd (2.2.3) is used, installed from sources
  • With lsyncd installed from sources, the necessary system scripts will be missing; there are a few workarounds (see the sketch after this list):
    • install the [older] lsyncd using apt, then remove it, but do not purge it; then install from sources or [re-]run sudo make install
    • download the scripts from an external source and install them manually
  • Inclusion via the 'filter' directive is available since lsyncd version 2.2.3
  • lsyncd help and examples are available here
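
For the first workaround above, the commands might look roughly like this (repository URL as published on GitHub; lsyncd builds with CMake):

sudo apt install lsyncd        # brings the service scripts along
sudo apt remove lsyncd         # removes the binary but keeps (does not purge) the scripts
git clone https://github.com/lsyncd/lsyncd.git
cd lsyncd
cmake . && make
sudo make install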

Primary Pi-hole configuration:

sudo mkdir /etc/lsyncd
sudo mkdir /var/log/lsyncd
sudo touch /var/log/lsyncd/lsyncd.status
sudo touch /var/log/lsyncd/lsyncd.log

sudo nano /etc/lsyncd/lsyncd.conf.lua

settings {
  logfile = "/var/log/lsyncd/lsyncd.log",
  statusFile = "/var/log/lsyncd/lsyncd.status",
  statusInterval = 20
}
 
sync {
  default.rsyncssh,
  delete = false,
  source = "/etc/pihole/",
  host = "rsync@192.168.1.6",
  targetdir = "/etc/pihole/",
  delay = 20,
  filter = {
    '+ gravity.db',
    '+ custom.list',
    '- **'
  },
  rsync = {
    archive = true,
    whole_file = true,
    rsync_path = "sudo rsync",
    _extra = { "--omit-dir-times" }
  }
}

sync {
  default.rsyncssh,
  delete = false,
  source = "/etc/dnsmasq.d/",
  host = "rsync@192.168.1.6",
  targetdir = "/etc/dnsmasq.d/",
  filter = { '- 01-pihole.conf'  },
  delay = 20,
  rsync = {
    archive = true,
    whole_file = true,
    rsync_path = "sudo rsync", 
    _extra = { "--omit-dir-times" }
  }
}

Additional directories could be added to the sync as needed. For example, I recently added the folder containing my stubby configuration file.
On the secondary/backup Pi-hole, make sure rsync is installed.
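
On a Debian-based system that is just:

sudo apt install rsync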

ToDo:
Execute 'pihole restartdns reload-lists' over ssh on the remote server after successful sync.
This example could be used.
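
Until that is automated, it can be run manually from the primary (reusing the 'rsync' account from the notes above):

ssh rsync@192.168.1.6 'sudo pihole restartdns reload-lists'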

I would propose the option to use rqlite or dqlite as the database.
This would probably be the most robust solution for a distributed Pi-hole, allowing for many instances with little to no extra configuration.

Do you have any tutorial on how to set this up in combination with Pi-hole? I'm interested!