Pi-hole + Unbound in Proxmox supporting multiple VLANs

I had a spare Protectli FW4B laying around and have been testing Proxmox as a replacement for the Raspberry Pis where I run apps like Pi-hole in Docker containers. What is cool is that I have 4 ports: one for management and two of the other three set up as a LAGG on my switch, with the VLANs assigned to the LAGG. This means a Linux container can have a network adapter on each VLAN, with one Pi-hole instance set up to listen on all origins. No VLAN-to-VLAN routing needed.

To start I had to create the basic network configuration for the LAGG. The physical network ports are:

  • enp1s0
  • enp2s0
  • enp3s0
  • enp4s0

I bonded enp2s0 and enp3s0 with LACP (802.3ad) and configured my network switch to aggregate those ports with a profile of "All" so that all tagged and untagged traffic is presented on them. I then created Bridge1, the bridge that will be used to connect the container networks to the LAGG:

[Screenshot: Proxmox network configuration showing the bond and bridge]
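
For anyone who prefers text to screenshots, here is roughly what that ends up looking like in /etc/network/interfaces on the Proxmox host. Treat it as a sketch of my layout rather than a copy-paste config: the management address, the VLAN range, and the assumption that "Bridge1" is vmbr1 are placeholders you would adjust for your own network.

```
auto lo
iface lo inet loopback

iface enp1s0 inet manual
iface enp2s0 inet manual
iface enp3s0 inet manual
iface enp4s0 inet manual

# LACP bond over the two LAGG ports
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0 enp3s0
        bond-miimon 100
        bond-mode 802.3ad

# Management bridge on the dedicated port (example address)
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

# VLAN-aware bridge on top of the bond for the containers
auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```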

I then created my first container with a network interface tagged for the "Server" VLAN and a static IP address. I used the Debian 11 standard container image, booted it, and pinged the gateway for the Server VLAN to make sure I had connectivity. Then I installed Unbound in this container and verified that an NS lookup against the new container's IP address worked.
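
If it helps anyone, the Unbound side can be as small as a drop-in config like this (the file name under /etc/unbound/unbound.conf.d/ doesn't matter). It's a minimal sketch: the listen address is my Unbound container's IP, and the access-control range is an example you would tighten to your own subnets.

```
# /etc/unbound/unbound.conf.d/recursive.conf
server:
    interface: 192.168.30.221
    port: 53
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    # allow queries from the local VLANs only (adjust to your ranges)
    access-control: 192.168.0.0/16 allow
    # basic hardening for a recursive resolver
    harden-glue: yes
    harden-dnssec-stripped: yes
    prefetch: yes
```

A quick `nslookup pi-hole.net 192.168.30.221` from another host confirms recursion is working.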

The next container I created was for Pi-hole. On this container I created a network adapter per VLAN, each with a static address and VLAN tag. Only one gateway was configured, on the "Server" VLAN, to keep things simple.

[Screenshot: the Pi-hole container's network adapters, one per VLAN]
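
The same thing can be done from the Proxmox host with pct instead of clicking through the GUI. This is a sketch under my assumptions: container ID 101 is just an example, vmbr1 is the VLAN-aware bridge from earlier, and the VLAN IDs and subnets (10/20/30) are mine.

```
# One NIC per VLAN, all on the VLAN-aware bridge; only net0
# (the "Server" VLAN) gets a gateway
pct set 101 \
  -net0 name=eth0,bridge=vmbr1,tag=30,ip=192.168.30.222/24,gw=192.168.30.1 \
  -net1 name=eth1,bridge=vmbr1,tag=10,ip=192.168.10.222/24 \
  -net2 name=eth2,bridge=vmbr1,tag=20,ip=192.168.20.222/24
```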

I used the same Debian base image for the container and installed Pi-hole with the install script. I then pointed Pi-hole's upstream DNS at the address of the Unbound container and set it to listen on all origins. Each network interface in the Pi-hole container ends in .222 (192.168.x.222), so the DNS address is consistent across all VLANs.
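
For reference, on Pi-hole v5 those two settings boil down to a couple of lines in /etc/pihole/setupVars.conf. The upstream address is my Unbound container; if you are on a newer Pi-hole release the file layout may differ, so treat this as a sketch.

```
# /etc/pihole/setupVars.conf (relevant lines only)
# Upstream: the Unbound container (ip#port)
PIHOLE_DNS_1=192.168.30.221#53
# "Permit all origins" so every VLAN-facing interface answers queries
DNSMASQ_LISTENING=all
```

A `pihole restartdns` picks up the change.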

I then did a quick nslookup on each interface and they all worked. I set the static hosts' DNS servers to their local VLAN's Pi-hole address, and configured the DHCP server on each VLAN to hand out that VLAN's Pi-hole address as the DNS server.
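
A one-liner makes the per-VLAN check quick. The .222 addresses below are from my subnets; swap in yours.

```
for dns in 192.168.10.222 192.168.20.222 192.168.30.222; do
  nslookup pi-hole.net "$dns"
done
```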

It's been running for a week now with no issues. DNS traffic is now local to each VLAN, and I could not be happier that this works. I still haven't given up on Docker and other Pi projects, but this is definitely the coolest thing I have done since my Pi-hole and DNSCrypt Proxy Docker container. I do have plans to set up a third container with just DNSCrypt Proxy configured for DoH and do some performance comparisons. With this model I can easily spin up and test that configuration.

One more thing: the Pi-hole container only seems to use 50 to 60 MB of RAM, and the Unbound container uses about 40 MB.


Just wanted to give an update on the project. I did stand up DNSCrypt Proxy as another container on Proxmox. I configured Pi-hole with Custom 1 pointing to Unbound and Custom 2 pointing to DNSCrypt Proxy. After a week it seems to prefer DNSCrypt Proxy. As I understand it, dnsmasq tracks the response time of the upstream DNS servers and uses the one with the lowest latency, and that matches what I am observing.

[Screenshot: Pi-hole's upstream server query distribution]

192.168.30.221 is Unbound
192.168.30.224 is DNSCrypt Proxy in its DoH configuration
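
In setupVars.conf terms (again assuming Pi-hole v5, and that both containers listen on plain port 53; append a different #port if yours don't), the Custom 1 / Custom 2 setup is just a second upstream line:

```
# Custom 1: Unbound
PIHOLE_DNS_1=192.168.30.221#53
# Custom 2: DNSCrypt Proxy (DoH)
PIHOLE_DNS_2=192.168.30.224#53
```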

Well, after about 3 months running the dual configuration of Unbound and DNSCrypt Proxy behind Pi-hole, I have made quite a number of configuration changes. First off, as much as I like Unbound, some strange things happen if you are on AT&T internet behind one of their residential gateways (RG). The RG has limited connection-tracking resources, which seem to go into a half-connected state when Unbound reaches out to the authoritative servers; those connections all have to time out before you have connectivity again. With that in mind, I have decided to stick with DNSCrypt Proxy pointed at Quad9 and Cloudflare DoH as my primary upstream servers.
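
For anyone who wants to replicate the DoH side, the relevant bits of dnscrypt-proxy.toml look roughly like this. The listen address is my DNSCrypt Proxy container, and the server names are examples only; the exact names depend on which resolver source lists your install has enabled, so check your own lists.

```
# /etc/dnscrypt-proxy/dnscrypt-proxy.toml (relevant lines only)
listen_addresses = ['192.168.30.224:53']
# DoH only, no plain DNSCrypt upstreams
doh_servers = true
dnscrypt_servers = false
require_dnssec = true
# Example names; verify against the resolver lists on your install
server_names = ['cloudflare', 'quad9-doh-ip4-port443-nofilter-pri']
```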

I have moved and cloned my Pi-hole Proxmox containers to the production environment, with a Pi-hole running on each Proxmox cluster node. Full redundancy, but I need to keep the Pi-holes in sync manually when I make config changes, which is rare now that things have settled down. Just a word of advice: if you are running multiple network interfaces on a Proxmox container, there is a bug that writes an invalid network configuration file to the container; removed interfaces are not deleted from it properly. I discovered this when Pi-hole would not start on reboot, but about 30 minutes later it was running. It turned out the stale config forced a wait for IPv6 configuration, which blocked the web server and Pi-hole from starting. I edited the container's network configuration file manually to reference only the active interfaces, and that resolved the issue.
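
If you hit the same thing, the fix is just trimming the container's /etc/network/interfaces down to the interfaces that actually exist. A cleaned-up version for my three NICs looks like this (subnets are mine; the important part is that there are no leftover stanzas, especially IPv6 ones, for interfaces you have removed):

```
# /etc/network/interfaces inside the Pi-hole container
auto eth0
iface eth0 inet static
        address 192.168.30.222/24
        gateway 192.168.30.1

auto eth1
iface eth1 inet static
        address 192.168.10.222/24

auto eth2
iface eth2 inet static
        address 192.168.20.222/24
```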

I hope this thread helps folks who want to run virtual Pi-holes across multiple VLANs in Proxmox.