My take on syncing 2 Pi-holes - pihole-gemini two-way Pi-hole lists sync

Depending on your router and situation, you can make sure that all your devices go through the Pi-hole(s), while anything using a custom DNS is blocked.

It's more advanced and requires special attention to your particular setup. Your best bet is to look around and see what settings other people are using. I use iptables to control this at the router level, and both DNS rules are in strict order: Pi-hole #1 is the main DNS; if it fails, it switches to Pi-hole #2. I don't use the DHCP features of the Pi-holes, so this setup is easier to maintain.
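As a rough sketch of the kind of rules I mean (the interface name and addresses are placeholders, adjust them to your network):

# Hypothetical LAN on eth0, Pi-holes at 192.168.1.2 and 192.168.1.3.
# DNS queries already addressed to a Pi-hole pass through untouched...
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -d 192.168.1.2 -j RETURN
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -d 192.168.1.3 -j RETURN
# ...anything else on port 53 gets rewritten to Pi-hole #1.
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j DNAT --to-destination 192.168.1.2
# Repeat the three rules with -p tcp to cover DNS over TCP.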

Hey guys,

Love the work. I wanted to update everyone on this, and maybe the creator as well. I was having the same problem as @Kroontje with respect to double IPs during the compare stage. It has to do with something that was changed in the Buster version of Raspbian. Either way, the script needs a slight modification to make it work.

If you are running Buster, you need to change line 171 from this

RSYNC_COMMAND=$(rsync --rsync-path='/usr/bin/sudo /usr/bin/rsync' -aiu -e "ssh -l $HAUSER@$REMOTEPI -p$SSHPORT" $PIHOLEDIR/$FILE $HAUSER@$REMOTEPI:$PIHOLEDIR)

to this

RSYNC_COMMAND=$(rsync --rsync-path='/usr/bin/sudo /usr/bin/rsync' -aiu -e "ssh -l $HAUSER -p$SSHPORT" $PIHOLEDIR/$FILE $HAUSER@$REMOTEPI:$PIHOLEDIR)

This removes the first @$REMOTEPI in the line and allows the script to run without double IP issues. Again, this applies only to the Buster variant. I am running the full version on a 3B+ syncing with a Zero W on Stretch. I do not know if it works with the Lite version, but give it a shot. It may be best for OP to make two scripts, one for Buster and one for everything else. Hope this helps anyone who runs into this issue. Thanks again for putting in the leg work.
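To make the change concrete, here is how the -e string expands with illustrative values (HAUSER=pi, REMOTEPI=192.168.1.3, SSHPORT=22):

# Old: the ssh login name itself contains user@host,
# on top of the pi@192.168.1.3 already in rsync's destination.
ssh -l pi@192.168.1.3 -p22
# New: -l carries only the user; the destination names the host once.
ssh -l pi -p22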


@Stefan_Coble

Thanks! When I'm back from holiday I'll have a go with Buster Lite on my Pi 3B+ and report back.

Hello Stefan, many thanks; this now works well with the outlined modification to line 171 of the pihole-gemini script! I tested this with an RPi 3B+ and a new RPi 4 on Buster Lite. Very cool, thanks!


I've got this working (Raspbian Stretch and Debian Stretch), but after installing the pihole-gemini script I installed keepalived to run the two Pi-hole servers in HA (active/standby). Keepalived creates a virtual IP and assigns it to the active (master) server. When the master server goes down, keepalived fails over to the backup server (standby) and moves the virtual IP to it.
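For reference, a minimal keepalived sketch of that setup (the interface, router id, priority, and virtual IP are placeholders; the standby node uses state BACKUP and a lower priority):

# /etc/keepalived/keepalived.conf on the master (placeholder values)
vrrp_instance PIHOLE {
    state MASTER
    interface eth0
    virtual_router_id 53
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.10/24   # the shared virtual IP clients use for DNS
    }
}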

Except pihole-gemini now does not detect my IP address anymore.
Getting the following error:

pihole-gemini v0.0.2.2a was unable to determine the local ip address.
This is a fatal error. Script can not continue. Please check your
configuration for the script, and make sure you are connected to
your router and have an ip address assigned.

Anyone know how to fix this?

Fixed it with the following changes:

Removed lines 90 and 91:

LOCALGATEWAY=$(ip -4 route | awk '{print $3}' | head -n 1)
LOCALINTERFACE=$(ip -4 route | awk '{print $3}' | tail -n 1)

And changed line 92 from:

LOCALIP=$(ip -4 -o addr show $LOCALINTERFACE | awk '{print $4}' | cut -d "/" -f 1)

To:

LOCALIP=$(ip -4 -o addr show | awk '{print $4}' | cut -d "/" -f 1 | sed -n 2p)

Before my change the script picked up the wrong IP (the virtual keepalived IP) instead of the normal one. The sed -n 2p takes the second address that ip reports, which skips 127.0.0.1 on lo and lands on the interface's primary address rather than the VIP.
After the changes it works again!

Thank you for sharing the script!

I'm getting an error from rsync. The error says:

rsync error: error in rsync protocol data stream (code 12) at io.c(235) [sender=3.1.3]

Does anyone know a solution to this error, or can you point me in the right direction?

Many thanks!

I am not 100% sure this is the correct answer, but it should solve your issue, as I ran into the same issue and this worked for me.


Yes!
This worked for me as well; before, it didn't work automatically.
Using Ubuntu 18.10 for one Pi-hole and Raspbian for the other.

Awesome post, GeorgeT! I got this working after I ran into the duplicate IPs, which is resolved with Stefan_Coble's post; great thread, thanks to all. I am getting a strange issue now: in the web GUI, when I add a domain to the whitelist from the query log or the top blocked domains listing, the "adding domain to whitelist" process hangs. However, after closing the browser, the domain I clicked to add is listed and is synced to my second host running Pi-hole. Should I just add whitelist and blacklist domains through the CLI from here on out? Has anybody else experienced this? Thank you.

Hi all; kind of embarrassed, but I figured it out. I needed to set my account to NOPASSWD using visudo, so sudo would not prompt for a password whenever a change to the whitelist, etc. triggered the gemini script.
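For anyone else who hits this, the sudoers entry looks something like the following (added via visudo; 'pi' is a placeholder for whatever account the script runs as):

# Let the sync account use sudo without a password prompt:
pi ALL=(ALL) NOPASSWD: ALL
# ...or, tighter, limit it to the command the script actually needs:
# pi ALL=(ALL) NOPASSWD: /usr/bin/rsync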

Hi all,

I just set up Pi-hole on my Pi 3B+ and as an LXC container on Ubuntu on my QNAP NAS. Are there any extra steps to take when syncing custom.list? From what I understand, the Local DNS page updates the entries in custom.list, so I want to sync them between the two. Is it as simple as adding 'custom.list' to the "Files to Sync" section of the gemini script? Do I have to add it on both of the Pi-holes?
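In case a concrete picture helps, this is the kind of edit I mean (a hypothetical excerpt; the exact variable names in your copy of pihole-gemini may differ, and the script runs on whichever side changes, so the edit belongs on both Pi-holes):

# "Files to Sync" section of pihole-gemini (hypothetical excerpt)
FILES=(
    whitelist.txt
    blacklist.txt
    regex.list
    adlists.list
    custom.list    # added so Local DNS entries sync too
)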

Is this affected at all by updating to Pi-hole v5 (other than editing gravity.sh again)?

This is a pretty healthy implementation for performing a sync between a primary and one or more secondary (or tertiary) Pi-hole instances on v5.0.


I use this one now.


A better fix that works in all cases: list all IPs (including IPv6) and keep only the two you are expecting (note grep -E, since plain grep treats | as a literal character):
LOCALIP=$(ip -o addr | awk '{print $4}' | cut -d "/" -f 1 | grep -E "$PIHOLE1|$PIHOLE2")

Why don't you just share the storage?

A while back I was doing this with Pi-hole on Docker: I just mounted the whole /etc directory into the containers. The data was identical on both, yet the load was still shared between them.

This could be done easily by mounting the appropriate /etc subdirectories on an NFS share; /etc/dnsmasq.d and /etc/pihole, I believe. Then match user/group IDs. If they were already created with different IDs, you could just uninstall Pi-hole, create the user/group manually with the IDs from the other instance, and reinstall Pi-hole; maybe double-check for lingering files before reinstalling. I probably would just go with it instead of recreating the user/group.
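As a sketch, the mounts would look something like this in /etc/fstab on each Pi-hole host (the NAS address and export paths are made up):

# Hypothetical NFS server at 192.168.1.50 exporting a shared Pi-hole config
192.168.1.50:/export/pihole/etc-pihole     /etc/pihole     nfs  defaults  0  0
192.168.1.50:/export/pihole/etc-dnsmasq.d  /etc/dnsmasq.d  nfs  defaults  0  0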

I do this for a ton of servers, some of which are very picky with regard to permissions (httpd, nginx, Nextcloud and their PHP processes, Funkwhale, or containers with special permissions such as Postgres'… you name it), and it works quite alright. No need for syncing hosts.

I might just do it now and report back. :grin: (I had stopped using Pi-hole and came back just now.)

I'd advise against that.

For one, Pi-hole wasn't designed to work with a shared database.
It works on the assumption that it is the only process accessing and writing to the database.

But more importantly, SQLite3 (Pi-hole's database backend) is not designed for write concurrency at all: only one process is ever allowed to write to the database.
In busy environments, write requests from multiple origins may cause write attempts to starve.

What's more, SQLite3's write locking is based on locking database files, and as such depends on the file locking routines of the underlying OS.
Unfortunately, OS-level file locking is known to be somewhat buggy across all OSs when it comes to network shares, which means you are bound to provoke database corruption when storing SQLite3 database files on NFS shares.
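You can see the single-writer behaviour with nothing more than the sqlite3 command line (illustrative; /tmp/demo.db is a throwaway database):

# terminal 1: open a write transaction and keep the session open
sqlite3 /tmp/demo.db
sqlite> CREATE TABLE t(x);
sqlite> BEGIN IMMEDIATE;   -- takes the write lock and holds it

# terminal 2: a second writer is refused while that lock is held
sqlite3 /tmp/demo.db
sqlite> INSERT INTO t VALUES(1);
Error: database is locked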

SQLite3's documentation for hosting database files on NFS shares reads as follows:

Your best defense is to not use SQLite for files on a network filesystem.

[ I just finished, it worked again! ]

That occurred to me just now when I was inspecting the files, but I'll still keep it, because it's not like it's mission-critical information. I think these are workable caveats. For instance, whenever lists are modified, the config can be exported, so if things get corrupted you just restore it and it downloads its data back in a few seconds. Other settings appear to be in .conf files, so they should be safe.

And, depending on what the goals are, you could play with some settings; for instance, set one server to mount NFS with sync on while the others use buffers, or mount read-only. I just needed to distribute the load onto some very low-power i586 thin clients. They're so old that they had some ancient IDE SSDs, so NFS was actually a huge upgrade… plus some memory modules I scavenged from an iMac; they can't even use all of it, nor anywhere near the full module speed, but if you add enough of them, DNS resolution and filtering becomes insanely fast! :grin:

I didn't even have to sync user IDs!

In this case I would say don't lose sight of the reason for running redundant Pi-holes in the first place. If you're in a position where you're having to work around unpredictable database corruption caused by violating the database's locking guidelines, I'd consider that a bad position compared to having a single stable instance and planning occasional unavailability for admin.

Presumably, though, any local speed gain just tends back towards the default near-instant FTL performance you get anyway from a bog-standard install, e.g. on a Pi, following the standard guide.
