PROXY protocol support for FTLDNS

Currently, when proxying requests to FTLDNS, all of the requests get logged as coming from the proxy. This makes auditing the query logs more difficult because all of the information about the source of the request is lost.

My current workaround is to log proxy connections and then update the FTLDNS database, replacing localhost with the appropriate IP (matched by timestamps). This is not perfect, because it assumes that only one client is connected at a time.
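For reference, this is roughly what that workaround looks like (a rough sketch only: the database path and the queries table are from a default Pi-hole install, but the client IP and the timestamp window are placeholder values that would come from the proxy's access log):

# rewrite queries logged as coming from the proxy back to the real client, matched by a time window
sqlite3 /etc/pihole/pihole-FTL.db "UPDATE queries SET client = '192.168.1.50'
  WHERE client = '127.0.0.1' AND timestamp BETWEEN 1560000000 AND 1560000060;"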

If FTLDNS supported the PROXY protocol, this information could be preserved. For example, an nginx proxy that handles TLS-encrypted DNS requests could be used to create a private DNS-over-TLS server.

This is very attractive because it would enable the use of a secure private DNS server on Android Pie phones without the need to install and run a VPN server and client.

Here is the specification for the PROXY protocol.

https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt
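For reference, version 1 of the protocol simply prepends one human-readable line with the original connection details to the TCP stream, before the actual payload; something along these lines (addresses and ports are made-up example values):

PROXY TCP4 192.168.1.50 192.168.1.2 56324 853\r\n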

Here is also a sample nginx conf that could be used for testing:

stream {
    upstream pihole {
        server 127.0.0.1:53;
    }

    server {
        listen 853 ssl;
        proxy_pass pihole;

        proxy_protocol on;

        ssl_certificate         ssl.crt;
        ssl_certificate_key     ssl.key;
        ssl_dhparam             ssl-dhparams.pem;

        ssl_protocols        TLSv1.2;
        ssl_ciphers          HIGH:!aNULL:!MD5;

        ssl_handshake_timeout    10s;
        ssl_session_cache        shared:SSL:20m;
        ssl_session_timeout      4h;
    }
}
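Once that is running, the listener can be sanity-checked from a client machine, e.g. (dns.example.com is a placeholder for whatever name the certificate was issued for):

# check that the TLS handshake on port 853 succeeds
openssl s_client -connect dns.example.com:853 </dev/null

# send an actual query over DNS-over-TLS (kdig ships with knot-dnsutils)
kdig @dns.example.com +tls example.com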

I tried to set up IP transparency, but I think that setup requires the proxy and the DNS server (pihole) to be on physically different machines.

Has anyone been able to set up a proxy that is transparent to pihole for DNS-over-TLS requests on the same server?

It requires some doing, but it can be done.

First things first, the PROXY protocol has to be supported by the backend you're connecting to for it to work. The proxy prepends a small header with the original connection details to the stream, and since dnsmasq/FTL doesn't parse that header, it won't work for DNS requests to the pihole.

A transparent proxy is what you want; Linux kernel support makes this rather trivial, though the application will have to support it. If it doesn't, then a little hoop jumping will be needed to achieve the same thing.

As previously recommended, you can set up a transparent proxy following the steps outlined here:
https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/
I followed this guide using docker containers, though there are a couple of caveats:

  • You don't need to run nginx worker processes as root.

  • iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE should be:
    iptables -t nat -A POSTROUTING -s {container network subnet here} -o eth0 -j MASQUERADE

Here's a rough sketch of setting it up:

./nginx/nginx.conf
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
    # I'm assuming you're comfortable reverse proxying the web UI
}

stream {
    # workaround for nginx shutting down if the upstream isn't resolvable at runtime:
    # using a variable makes nginx resolve "pihole" per request via the resolver below
    map $remote_addr $dns_upstream {
        default "pihole:53";
    }

    server {
        resolver 127.0.0.11 valid=30s;  # Docker's embedded DNS
        listen 53;
        listen 53 udp;
        # spoof the original client address on the upstream connection
        # (this is what needs the ip rule / iptables setup in the dockerfile below)
        proxy_bind $remote_addr transparent;
        proxy_responses 1;
        proxy_timeout 1s;
        proxy_pass $dns_upstream;
    }
}
./nginx/nginx.dockerfile
FROM arm32v7/nginx:stable as dev
RUN apt-get update &&\
    apt-get install -qq iproute2 iptables &&\
    rm -rf /var/cache/apt/archives /var/lib/apt/lists/*
CMD SUBNET=$(ip route | grep -oP "default via \K(\d*\.){3}") &&\
    MASK=$(ip route | grep -oP "${SUBNET}0/\K(\d*)") &&\
    ip rule add fwmark 1 lookup 100 &&\
    ip route add local 0.0.0.0/0 dev lo table 100 &&\
    iptables -t mangle -A PREROUTING -p tcp -s "${SUBNET}0/${MASK}" --sport 80 -j MARK --set-xmark 0x1/0xffffffff &&\
    iptables -t mangle -A PREROUTING -p udp -s "${SUBNET}0/${MASK}" --sport 53 -j MARK --set-xmark 0x1/0xffffffff &&\
    iptables -t mangle -A PREROUTING -p tcp -s "${SUBNET}0/${MASK}" --sport 53 -j MARK --set-xmark 0x1/0xffffffff &&\
    iptables -t nat -A POSTROUTING -s "${SUBNET}0/${MASK}" -o eth0 -j MASQUERADE &&\
    nginx -g 'daemon off;'
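On the 172.16.0.0/24 network defined in docker-compose.yml below, that CMD evaluates to roughly the following (annotated for clarity; eth0 is simply whatever interface Docker gives the container):

ip rule add fwmark 1 lookup 100                  # packets marked "1" are looked up in table 100 ...
ip route add local 0.0.0.0/0 dev lo table 100    # ... which delivers them to the local stack, where nginx's transparent sockets pick them up
iptables -t mangle -A PREROUTING -p tcp -s 172.16.0.0/24 --sport 80 -j MARK --set-xmark 0x1/0xffffffff   # mark replies from the web UI
iptables -t mangle -A PREROUTING -p udp -s 172.16.0.0/24 --sport 53 -j MARK --set-xmark 0x1/0xffffffff   # mark DNS replies (UDP)
iptables -t mangle -A PREROUTING -p tcp -s 172.16.0.0/24 --sport 53 -j MARK --set-xmark 0x1/0xffffffff   # mark DNS replies (TCP)
iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -o eth0 -j MASQUERADE                                    # masquerade container traffic leaving via eth0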

./pihole/pihole.dockerfile
FROM pihole/pihole:4.3.2-1_armhf as dev
RUN sed -i '1 a \
SUBNET=$(ip route | grep -oP "default via \\K(\\d*\\.){3}")\n\
ip route del default\n\
ip route add default via "${SUBNET}2"\n\
/opt/dnscrypt-proxy -service start\n\
' /etc/cont-init.d/20-start.sh
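The sed call just prepends these lines to the Pi-hole container's start script, so that its default route points at the tproxy container (172.16.0.2 in the compose file below) instead of Docker's gateway; that is what sends the DNS replies back through nginx. The dnscrypt-proxy line is unrelated to the transparent proxying and can be dropped if you don't use it:

SUBNET=$(ip route | grep -oP "default via \K(\d*\.){3}")   # e.g. "172.16.0."
ip route del default
ip route add default via "${SUBNET}2"                      # i.e. 172.16.0.2, the tproxy container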
./docker-compose.yml
version: '3.7'

services:

  tproxy:
    build:
      context: ./nginx
      dockerfile: nginx.dockerfile
    cap_add:
    - NET_ADMIN
    container_name: tproxy
    image: tproxy:latest
    networks:
      lan:
        ipv4_address: 172.16.0.2
    ports:
    - mode: host
      protocol: udp
      published: 53
      target: 53
    - mode: host
      protocol: tcp
      published: 53
      target: 53
    - mode: host
      protocol: tcp
      published: 80
      target: 80
    restart: unless-stopped
    volumes:
    - ./nginx/nginx.conf:/etc/nginx/nginx.conf:cached

  pihole:
    build:
      context: ./pihole
      dockerfile: pihole.dockerfile
    image: proxied_pihole:latest
    cap_add:
    - NET_ADMIN
    container_name: pihole
    depends_on:
    - tproxy
    environment:
      DNSMASQ_LISTENING: all
    networks:
      - lan
    restart: unless-stopped

networks:
  lan:
    ipam:
      driver: default
      config:
        - subnet: 172.16.0.0/24

You bake iptables and iproute2 into the nginx image and then exec the desired commands at container startup. I grep the subnet/mask at runtime rather than hard-code them because they depend on how you define the network in docker-compose.yml.

Note: I'm using armv7 images here; you may need to change that.

  • docker-compose up -d brings it all up
  • docker-compose down tears it down
  • docker logs <container id/name> brings up the associated logs
  • docker container ls -a lists the containers and their statuses

Did you ever find a solution for a non-Docker environment?

Found this: NGINX with DNS over TLS = only localhost - #21 by specterzz
But I couldn't get it to run; I'm getting "upstream timed out"...
Has anyone made any progress?