The issue I am facing:
In all of my wisdom, after upgrading Pi-hole to v6 (installed to-the-metal), I confirmed the removal of lighttpd despite having configured it for other self-hosted websites. I wanted to turn this mistake into an opportunity by exploring a different reverse proxy (e.g. NGINX or Caddy) to serve Pi-hole at the https://pi.hole URL. Is there any documentation for this? I assume I'd have to sudo systemctl disable pihole-FTL for this?
Note: I've backed up my custom lighttpd configuration files, so I should be able to recover the rest of the self-hosted services I had.
Details about my system:
Hosted on a 2012 Mac Mini
Ubuntu 24.04.2 LTS
Installed Pi-hole on-the-metal
Both disabled and uninstalled lighttpd (it was giving me problems anyway)
NGINX and Caddy installed, but not running
What I have changed since installing Pi-hole:
Upgraded to v6, and accepted the removal of lighttpd.
Great question! The goal of the reverse proxy is simply to let devices connected to my local network resolve URLs to a specific IP address and port combo, e.g. make git.example.local resolve to a Gitea Docker container served at 192.168.0.50:3000. Similar story for pi.hole, which presumably serves 192.168.0.50:53 (if I'm reading the config TOML file correctly). I don't intend to expose any of these services to the internet; I have a VPN server (separate hardware) for that instead.
Not pi-hole.example.local specifically, but I'm serving multiple websites via Docker containers from the same server as Pi-hole (while Pi-hole itself is installed directly). I used lighttpd to have git.example.local serve a Gitea Docker container hosted at 192.168.0.50:3000. With lighttpd disabled and uninstalled right now, if I try to access git.example.local, it brings up a 404 page rendered by Pi-hole. I'd like to restore that, just with a lighttpd alternative.
ETA: To be clear, when I had Pi-hole v5 installed, I left the lighttpd configuration for serving it from https://pi.hole untouched. I don't mind leaving that hostname as-is for v6; I'm just looking for a way to serve other websites, which I'm assuming pihole-FTL isn't designed to do.
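To illustrate what I have in mind, here's a minimal sketch of a host-level Caddyfile serving one of those other sites (the hostname and backend address are just my running example, and Pi-hole's local DNS would still need to point git.example.local at this machine):

# Minimal host-level Caddyfile sketch (example values only)
git.example.local {
    # A public CA won't issue a certificate for a .local name,
    # so fall back to Caddy's built-in internal CA
    tls internal
    # Forward requests to the Gitea container listening on this host
    reverse_proxy 192.168.0.50:3000
}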
Instead of using a proxy, maybe consider using virtual machines, or Docker with a custom MACVLAN or IPVLAN Docker network. That way you can assign each service its own IP address (and custom domain name) on your LAN. No proxy needed. Make sure the custom Docker network matches your LAN subnet if you decide to go that route.
Using virtual machines or containers can make it really easy to add new stuff to your home server. Just decide what you want to add, choose the IP address, and spin up the container.
Both have their pros and cons, but I'd probably recommend Docker for home lab stuff. There's a bit of a learning curve, however. If you are interested, Unraid and TrueNAS are common OS choices (and Portainer is a popular management UI), but you can even run it on your Ubuntu install if you'd like.
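If it helps to picture it, creating a macvlan network from the Docker CLI looks roughly like this (interface name, subnet, and addresses are placeholders; substitute your own LAN values):

# Sketch: macvlan network bound to the host's LAN interface (placeholder values)
docker network create -d macvlan \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.1 \
  --ip-range=192.168.0.100/30 \
  -o parent=eth0 \
  lan_macvlan

# Attach a container with its own LAN IP
docker run -d --name web --network lan_macvlan --ip 192.168.0.101 nginx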
Interesting, I didn't know about Docker's MACVLAN feature. I'm playing around with it now and will hopefully post back if I make progress, along with notes in case anyone else is trying to achieve the same thing. It's promising so far.
In case anyone else is looking for how to set up a Docker Compose file to work alongside an on-metal Pi-hole installation, I've had some luck using the MACVLAN network settings. I mentioned a Git repository as an example, so here's my docker-compose.yml file for Forgejo (a Gitea fork). Note that I basically copy-pasted this into a new stack in Portainer to spin up this site, hence the lack of a version field on the first line:
networks:
  forgejo_default:
    name: forgejo_default
    driver: macvlan
    driver_opts:
      parent: enp3s0f0 # For most computers, this would be eth0, but I guess Macs are special
      macvlan_mode: bridge
    ipam:
      config:
        - subnet: 192.168.0.0/24
          ip_range: 192.168.0.100/30 # Gives IP range of 192.168.0.100 - 103, if memory serves correct
          gateway: 192.168.0.100

services:
  app:
    image: codeberg.org/forgejo/forgejo:10
    environment:
      - USER_UID=???? # I made a specific user for Forgejo container, but not revealing it here
      - USER_GID=????
    restart: always
    networks:
      forgejo_default:
        ipv4_address: 192.168.0.102
    volumes:
      - /media/www/forgejo/data/:/data
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - /home/git/.ssh/:/data/git/.ssh
    ports:
      - '3000:3000'
      - '127.0.0.1:2222:22'

  caddy:
    image: caddy:2
    restart: unless-stopped
    depends_on:
      - app
    ports:
      - '80:80'
      - '443:443'
    environment:
      - USER_UID=????
      - USER_GID=????
    volumes:
      - /media/www/forgejo/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /media/www/forgejo/caddy/data:/data
      - /media/www/forgejo/caddy/config:/config
    networks:
      forgejo_default:
        ipv4_address: 192.168.0.101
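The Caddyfile mounted at /media/www/forgejo/caddy/Caddyfile can be as small as something like this (a sketch using my example hostname; it serves plain HTTP for now, for the certificate reasons explained below):

# Sketch of the mounted Caddyfile (example hostname)
# The http:// prefix disables automatic HTTPS, since a public
# certificate can't be issued for a .local name
http://git.example.local {
    # 192.168.0.102 is the app service's fixed address from the compose file above
    reverse_proxy 192.168.0.102:3000
}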
For Forgejo specifically, I also chose to expose the Forgejo container's IP address (192.168.0.102) and update Forgejo's config file, app.ini, to use that address in the SSH clone URL. This lets me clone Forgejo repos via SSH. Cloning over HTTPS didn't work, but then again, that was expected given Caddy won't obtain a Let's Encrypt certificate when the URL ends in *.local (or is localhost). In the near future, I'll probably look into setting up an internal ACME server and pointing the Caddyfile at it for certificates, but that's a question for another forum, and I'm getting off-topic.
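For reference, that app.ini change boils down to something like this (a sketch with the example values from above; SSH_DOMAIN and SSH_PORT are the standard Gitea/Forgejo settings under the [server] section):

; Relevant app.ini snippet (example values)
[server]
; Address advertised in SSH clone URLs
SSH_DOMAIN = 192.168.0.102
; The container answers SSH directly on its own LAN IP, so port 22 is fine
SSH_PORT = 22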
Hopefully this helps anyone who happens to be in the same boat as I am.