Hyperlocal: Is it meaningful to hold a local copy of the root zone?


#1

Hello,

I just saw the following at Heise:

Would it be possible, and would it make sense at all, to set this up on the Pi?


English translation by @DL6ER: This article describes downloading and maintaining a local copy of the DNS root zone. The question is whether it is possible and/or useful to do this on a Raspberry Pi.


#2

Sorry, I don’t speak German…

In order to do this you need to implement the unbound solution proposed by the Pi-hole developers.

Duplicating the root zone (zone transfer) has been partially discussed here

I’m running the latest version of Raspbian (October 2018). It isn’t possible with the unbound version included in this distro; you need at least unbound version 1.7 (mentioned in the article on Discourse).
It is, however, possible to compile the latest version of unbound (1.8.1) and use the feature. This requires a lot of work, since the original Raspbian package (version 1.6.x) doesn’t appear to use chroot, and some additional software (not included in the unbound package) is needed.
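Before compiling, you can check whether the installed version is already new enough. A minimal sketch of such a check; `version_ge` is a hypothetical helper, and the `installed` value would in practice come from the output of `unbound -V`:

```shell
#!/bin/sh
# Sketch: decide whether the installed unbound supports auth-zone,
# which (per the above) needs at least version 1.7.

version_ge() {
    # true if $1 >= $2, using GNU coreutils' version sort
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

required="1.7.0"
# On a real system, take this from: unbound -V
installed="1.6.0"

if version_ge "$installed" "$required"; then
    echo "auth-zone supported"
else
    echo "unbound $installed is too old, compile or upgrade"
fi
```

With Raspbian’s 1.6.x package this prints the “too old” branch, which is why the compile script below is needed.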

The unbound config I’m using looks like this (everything ends up in /etc/unbound):

server:
    logfile: /unbound.log
    verbosity: 1
    interface: 127.x.x.x@55xx
    interface: xxxx:xxxx:xxxx:xxxx::xxxx@55xx
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    do-ip6: yes
    root-hints: "/root.hints"
    harden-glue: yes
    harden-dnssec-stripped: yes
    use-caps-for-id: no
    cache-min-ttl: 3600
    cache-max-ttl: 86400
    prefetch: yes
    num-threads: 1
    so-rcvbuf: 1m
    edns-buffer-size: 1472
    private-address: 192.168.0.0/16
    private-address: 169.254.0.0/16
    private-address: 172.16.0.0/12
    private-address: 10.0.0.0/8
    private-address: fd00::/8
    private-address: fe80::/10

For some reason, Discourse doesn’t show the entire config file, so the config continues here (it is a single config file)…

remote-control:
    control-enable: yes

auth-zone:
    name: "."
    master: i.root-servers.net
    master: f.root-servers.net
    master: j.root-servers.net
    master: k.root-servers.net
    fallback-enabled: yes
    for-downstream: no
    for-upstream: yes
    zonefile: "/root.zone"

I’ve copied the original unbound.conf from the Raspbian distribution, which simply includes everything in /etc/unbound/unbound.conf.d. I’ve also copied /etc/unbound/unbound.conf.d/qname-minimisation.conf and /etc/unbound/unbound.conf.d/root-auto-trust-anchor-file.conf from the Raspbian distribution into that directory, and placed my unbound.conf (see above) there as well.

To compile unbound 1.8.1 (on Raspbian), I’m using the following script:

#!/bin/bash

# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
  echo "This script must be run as root" 1>&2
  exit 1
fi

# install dnsutils
sudo apt-get -y install dnsutils
# install drill
# usage drill txt qnamemintest.internet.nl
# result HOORAY - QNAME minimisation is enabled on your resolver
sudo apt-get -y install ldnsutils

sudo apt-get -y install libssl-dev
sudo apt-get -y install libexpat1-dev

sudo groupadd -g 991 unbound
sudo useradd -c "unbound-1.8.1" -d /var/lib/unbound -u 991 -g unbound -s /bin/false unbound

file=unbound-1.8.1
mkdir -p unbound
cd unbound
wget https://nlnetlabs.nl/downloads/unbound/$file.tar.gz

For some reason, Discourse doesn’t show the entire script, so the script continues here (it is a single script)…

tar xzf $file.tar.gz 
cd $file

sudo ./configure --prefix=/usr --sysconfdir=/etc --disable-static --with-pidfile=/run/unbound.pid
sudo make
sudo make install

I haven’t figured out how to compile unbound with systemd support, so I created my own /lib/systemd/system/unbound.service with this content:

[Unit]
Description=Unbound DNS server
Documentation=man:unbound(8)
Requires=network.target
Wants=nss-lookup.target
Before=nss-lookup.target
After=network.target

[Service]
ExecStartPre=/usr/sbin/unbound-anchor -a /etc/unbound/root.key -v
ExecStart=/usr/sbin/unbound -d -v
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=360

[Install]
WantedBy=multi-user.target

Enable the unbound systemd service using:

sudo systemctl daemon-reload
sudo systemctl enable unbound.service

Hope this helps. I also wonder if this is beneficial for privacy…


#3

Thanks for the extensive reply!
Do you think it’s not really worth the effort to do this? My first thought was something like that, too. Perhaps I’ll try it anyway when I’m really extremely bored :slight_smile:


#4

Not entirely sure it is worth the effort, but I did it anyway…

@DL6ER
Is this beneficial for privacy?
How can I verify that unbound’s DNS requests to the root entries are using the local copy?
How often do you need to update the copy? It currently does this only when unbound starts, unless forced by sudo /usr/sbin/unbound-control auth_zone_transfer "."


#5

Compiling Unbound for Raspbian was also not possible for me. Luckily, we now have Unbound 1.8.1 available without any hassle.

https://discourse.pi-hole.net/t/unbound-will-not-start/14327/9?u=msatter

It is already available in Buster and Sid.


#6

I have it up and running now; it seems to be quite quick and can perhaps give a little bit of privacy…

edit: it only seemed like it would be an effort, btw. I followed this guide:

https://docs.pi-hole.net/guides/unbound/

and it took about 5 minutes…


#7

No. Full-stop.
With qname minimization (enabled by default), the root servers will only be queried for the TLD, i.e., the request will only be for .de or .com. There is no way this would leak anything really. In addition, you’ll still have to contact the name servers one level further down, so anyone “watching” you will still know if you visit a suspicious, e.g., Russian webpage.
TL;DR: No privacy is gained by having a local copy.

What it will be beneficial for is that it will provide a local copy for the top level. This will speed up the first query for an individual TLD after unbound has been restarted. After the first query, e.g., to something.de, your local unbound will hold the corresponding reply in its cache and will also take care of always refreshing the information before it expires, so there should be no benefit at all once you have queried domains under all relevant TLDs at least once.

I looked at the article @Troy_McClure posted above and they mix up a few things.
To summarize: as unbound uses an internal cache and qname minimization, and does DNSSEC validation by default, the vast majority of the benefits mentioned in the article are void.
Furthermore, as Pi-hole is cascaded in front, requests for your local domain should not even be forwarded to unbound and can never propagate through to the root servers (this is one thing they mention concerning privacy).

I cannot give any reliable figures, but I doubt that the copy needs to be updated more often than maybe once a month. Maybe once every three months or even once a year is sufficient. The DNS servers referenced in this data are all global players, each responsible for an entire TLD such as .com, and I don’t think that the IP addresses of these servers change very often.
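For the update question from post #4: assuming remote-control is enabled (control-enable: yes, as in the config in post #2), a cron entry could force the transfer on a schedule. The file path and the monthly schedule below are just an example, not an official recommendation:

```
# /etc/cron.d/unbound-root-zone (assumed location):
# force a root zone transfer on the 1st of each month at 04:00
0 4 1 * * root /usr/sbin/unbound-control auth_zone_transfer "."
```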


#8

I will rename the header and move this into General so everyone can have a look at this discussion.


#9

Will it be more beneficial for privacy with this option disabled, and would that have a big (or any) performance impact?
We’re not the Pentagon here, but why not use such a solution…


#10

This is actually rather difficult to answer. From my experience, I’d say that all issues with qname minimization have been fixed and that there is no obvious benefit in disabling it. Disabling it would clearly not increase privacy (rather decrease it!).


#11

OK, if qname minimization is fine, what else voids the benefits mentioned?


#12

One of their points, maybe their major point, is that a local copy would eliminate the first query to the root servers. This argument is nullified by unbound's cache: once the first query has been made, unbound already stores a local copy and would never query the root zone again for that TLD.

Don’t misunderstand me - I’m mainly arguing that unbound is already so efficient and well implemented that most of the problems they address and that could be solved by a local copy don’t even exist in the first place - there is no room for improvement if your existing server already does most of it.
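The cache argument above can be illustrated with a small sketch. This is purely illustrative shell, not how unbound stores its cache internally; the names `cache` and `resolve` are made up for the example:

```shell
#!/bin/sh
# Toy model: once a TLD's delegation is cached, no further query to
# the root servers is needed for any domain under that TLD.

cache=""  # space-separated list of TLDs we have "asked the root" about

resolve() {
    tld="${1##*.}"   # crude TLD extraction, e.g. something.de -> de
    case " $cache " in
        *" $tld "*) echo "$1: TLD $tld cached, root servers not contacted" ;;
        *) cache="$cache $tld"
           echo "$1: TLD $tld unknown, one query to the root servers" ;;
    esac
}

resolve something.de   # first .de query: goes to the root once
resolve other.de       # second .de query: served from cache
resolve example.com    # new TLD: one more root query, then cached
```

Only the very first lookup per TLD ever touches the root, which is exactly the benefit a local root zone copy would otherwise provide.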


#13

If you want privacy, a VPN or other ways of obscuring the connection work. You can use the DNS of the VPN provider or use DNS resolution through the VPN connection.

I have not yet been able to stop Unbound from putting out a load of contacts to the root servers on startup.

I really love Unbound working together with Pi-hole.


#14

I was not happy running qname queries through the VPN: it was unusable with any DNSSEC domains and I had a lot of slow connections.

With 1.8.1 the anchor (for DNSSEC) was also not generated.
I think the root servers don’t like VPN connections, so I put the whole a-to-m.root-servers.net set outside the VPN and it all worked again.

I still have to test the response times and run a longer test, but the “flood” at Unbound’s start-up is now back to normal and I no longer get almost 100 connections going out on startup.

Is anyone else running Unbound through a VPN, and do you have the same problem?

You can test it quickly by requesting the anchor file:

unbound-anchor -v -a /var/lib/unbound/root.key

You should get back:

/var/lib/unbound/root.key has content
success: the anchor is ok

If the VPN is causing problems, you will get a failure message instead.


#16

You are correct, and I will change that in the original posting. I was a bit sleepy at the time of writing.

Waking up this morning, I thought about what was wrong and then I thought of the MTU, which is different between VPN and normal traffic. I changed some settings to no avail.

Then I switched on Wireshark to see what was going wrong and, comparing VPN and normal traffic, I noticed that the returning UDP packets were cut off over the VPN when they were larger than 548 bytes.

So I have to see if that can be solved in Unbound.

Tried to no avail:
edns-buffer-size: 512
max-udp-size: 512

And also MTU 512 on the L2TP/IPsec connection.

It is getting even stranger. So I set it all up again as it was working before: through the VPN, with the root servers over the normal connection.

While doing some surfing, a packet then came over the VPN with a size of 888 bytes, so there is something special about the UDP packets coming from the root servers.
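The 548-byte cutoff is suggestive: 576 bytes is the minimum IPv4 datagram size every host must accept (RFC 791), and subtracting the IP and UDP headers leaves exactly 548 bytes of payload. So the VPN link may simply be clamping UDP to that classic minimum (an assumption from the numbers, not confirmed):

```shell
# 548 = minimum IPv4 datagram size minus IP and UDP header overhead
ip_min=576   # minimum reassembly size every IPv4 host must handle (RFC 791)
ip_hdr=20    # IPv4 header without options
udp_hdr=8    # UDP header
echo "$((ip_min - ip_hdr - udp_hdr))"   # prints 548
```

That would also explain why edns-buffer-size: 512 didn’t help: the cutoff sits above 512 but below typical DNS reply sizes.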


#17

I have not solved it yet, but I have worked around it. The requests do not get answered well over the VPN, which is IPv4, so I also allowed requests over IPv6, and this works very well.

Only the root-server requests go over IPv6, because I still have IPv6 disabled in the Unbound config file.

Now I have to find out why it does not work over IPv4. I checked the normal fallback to TCP for when DNS packets are not good.

Update: step by step I am getting nearer to a solution.

The anchor is read from data.iana.org, which refers to the icann.org site. The problem was that the iana.org DNS reply was too big, so I could not get to the correct IP.

I have now switched from UDP to TCP for the upstream DNS servers, and now I don’t need IPv6 any more to reach them. The unbound docs state that TCP is used to reach them if you are using “tunnels”, and a VPN is a tunnel.

My config section is now:

do-ip6: no
do-ip4: yes
do-udp: yes
do-tcp: yes
tcp-upstream: yes

The “do” part is for your clients, which in this case is Pi-hole. The line tcp-upstream: yes tells Unbound to use only TCP to send and receive upstream DNS information.

There is also a udp-upstream-without-downstream option, but that would break stateful inspection of packets and make you vulnerable to attacks, so it is not advisable.


#18

Just a small hint: this will cause a notable increase in the amount of data that is sent around and will likely increase delays in transit. Still, it will probably not be too bad in the end, as your Pi-hole can utilize the cache to a sufficient extent.


#19

Oops, I switched off the cache of Pi-hole two weeks ago, as Unbound is doing that as well. :wink: Otherwise there would be double caching happening.

The tablet I use runs NetGuard, which likes to cache requests for three days, which is not good; Unbound gave control back to the original TTL. Now the minimal TTL is 300 seconds.

I have not had any slowdowns yet, and the answering time is not really higher or lower. Not receiving a good reply is a bigger problem, and that is avoided now.

I am still not happy and my brain is still crunching on it.

I thought that DNS was easy but there are still a lot of pitfalls.


#20

Well, it very much depends on your particular setup. Anyway, I just wanted to point this out in case other users find this at some point in the future and would blindly follow any configs they find online.

There is no harm in double caching things. The TTL should ensure that the data is kept only as long as it is meaningful.


#22

Caching is something that one has to wrap their mind around. You have a cascade of events that determines the total efficiency.

The TTL sets the rhythm for the cache: when it expires on the client, it should also expire in Pi-hole and Unbound. So if you are the only user of a DNS server, the upstream cache is largely redundant due to this.
If there is more than one user, the cache becomes more efficient.

Unbound can do some tricks to get cache hits even when the TTL has expired.
Often-used domains can be prefetched when their TTL is almost up. It can also serve outdated info with a TTL of zero while fetching current info at the same time. This is not caching, but the effect is the same.
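The two tricks described above correspond to unbound options; a sketch of the relevant server: lines (option names from unbound.conf(5), to be merged into the existing config):

```
server:
    # resolve frequently used entries before their TTL runs out
    prefetch: yes
    # answer with expired data (TTL 0) while fetching fresh data upstream
    serve-expired: yes
```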

I am still reading the manual for Unbound; it can do many different things that most of us will never use.