DNS for archive.is

dig archive.is is not returning a usable address:

;; ANSWER SECTION:
archive.is.		70	IN	A	0.0.0.0

If I run dig @1.1.1.1 archive.is, I get an IP address about a quarter of the time, which I don't understand. I currently have Quad9 configured in my DNS settings. If I change it to Cloudflare, I still don't get an IP address, and the log says Blocked (external, NULL). From what I can find, that means the domain is blocked by the upstream server, but if that is the case, why does it work about a quarter of the time when I query @1.1.1.1 directly?

All this to ask... how do I change my settings so that I can get an IP address for archive.is?

I've tried adding archive.is to my whitelist, and that didn't change anything.

Your upstream DNS server is blocking this domain. You will need to change upstream DNS servers if this continues to be blocked by the one you are using. Here are my results:

root@nanopi:~# dig +short archive.is @8.8.8.8
94.16.117.236

root@nanopi:~# dig +short archive.is @1.1.1.1
0.0.0.0

root@nanopi:~# dig +short archive.is @9.9.9.9
0.0.0.0

If Pi-hole is not blocking the domain, then adding it to the whitelist will have no effect. What is the output of the following command from the Pi terminal:

pihole -q archive.is

Thank you for the help

# pihole -q archive.is
 Match found in exact whitelist
   archive.is

Can you help me understand why 1.1.1.1 returns an IP address about 1/4th of the time?

I cannot. This would be a question for Cloudflare. Note that Google DNS does not appear to block this, so perhaps that is your best choice. Alternatively, a local instance of unbound would definitely not block any domains, and running this puts you in control of your own DNS resolver.

https://docs.pi-hole.net/guides/unbound/
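
If it helps, the setup boils down to installing unbound, dropping in the config from that guide, and pointing Pi-hole's custom upstream at 127.0.0.1#5335 (5335 is the listening port the guide uses; adjust if yours differs):

sudo apt install unbound
# After the guide's config is in place, confirm the local recursive
# resolver answers on its own, with no third-party upstream involved:
dig +short archive.is @127.0.0.1 -p 5335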


Wow... that was drop dead easy!

The weirdness with Cloudflare returning an IP only occasionally really played havoc with my debugging. I would change things that I was "positive" should not affect anything, then "test", and the results would seem to have changed. I finally ended up running about 8 dig commands as a single "test".
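
For anyone else chasing the same thing, a small loop makes the intermittent answers obvious at a glance (the count of 20 queries and the resolver are just what I happened to be testing with):

# Query the same resolver repeatedly and tally the distinct answers,
# so an occasional real IP among the 0.0.0.0 responses stands out.
for i in $(seq 1 20); do
    dig +short archive.is @1.1.1.1
done | sort | uniq -c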

Anyhow, I wrote a silly script to pull down root.hints and keep a sequence of backups:

#!/bin/bash
# Fetch the latest root hints and keep a short rotation of backups.

# Download first; bail out if the download fails so we don't rotate
# the existing backups for nothing.
wget -O root.hints https://www.internic.net/domain/named.root || exit 1

sudo /bin/bash << 'EOF'
# Rotate three generations of backups (.00 newest, .02 oldest).
[[ -r /var/lib/unbound/root.hints.02 ]] && rm -f /var/lib/unbound/root.hints.02
[[ -r /var/lib/unbound/root.hints.01 ]] && mv /var/lib/unbound/root.hints.01 /var/lib/unbound/root.hints.02
[[ -r /var/lib/unbound/root.hints.00 ]] && mv /var/lib/unbound/root.hints.00 /var/lib/unbound/root.hints.01
[[ -r /var/lib/unbound/root.hints ]] && mv /var/lib/unbound/root.hints /var/lib/unbound/root.hints.00
mv root.hints /var/lib/unbound/
EOF
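
One thing I had to remind myself of (assuming unbound runs as the usual service and listens on 5335 as in the guide): unbound only reads root.hints when it starts, so after the script runs I restart it and do a quick check:

sudo service unbound restart
dig +short archive.is @127.0.0.1 -p 5335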

Thank you again for the help and to everyone who has contributed to pi-hole!

No need to keep root.hints backups. Once the hints expire they are useless and the root hints are always available.


I was worried about a catch-22 type situation: how do I use DNS to fetch root.hints if I don't have working DNS? But I guess I could temporarily reconfigure Pi-hole to use Google, Quad9, etc., fetch it, and then switch it back.

Root hints don't just change out from under you; new addresses are rotated in. And they rarely change at all; when they do, the new addresses are published well beforehand to prime the caches.

I'm all for having backups but in this case it's really not necessary.
