Local DNS for one domain, as a backup if the internet goes down

Summary

Is there a way to resolve domains via the upstream (cloud) DNS first, and only check a (permanent) local cache or local storage if the upstream DNS server is unreachable?

Context

I am using Pi-hole as my DNS server, and I am looking for a solution to a problem with resolving a particular domain.

I have my own domain under Cloudflare DNS. This domain is used for my home network, and for Dynamic DNS so that I can access it from outside the network. It looks like this (about 20-30 different subdomains):

  • external-subdomain.domain.com may point to my external IP, which is kept updated to stay in sync.
  • internal-subdomain.domain.com may point to a static internal IP.

Usually Pi-hole forwards to Cloudflare DNS and everything resolves fine.

Issue

Today, the internet went down. Pi-hole lost access to the internet, and with it access to the upstream DNS server. Even though many of my internal-subdomain.domain.com entries point to internal IPs, all of these links broke, because the names couldn't be resolved while Pi-hole had no internet access.

I am looking for a way to solve this problem in the future.

Potential Solutions

I have been thinking about ways to resolve this, but I am unsure which would be best, or whether I am missing some ideas. Any input is appreciated.

Manually setting local DNS

The way I worked around this today was by creating Local DNS records for all the internal-subdomain.domain.com entries, which fixed it. This works, but I would like something dynamic, as I may add new subdomains or change existing ones, and I do not want to have to enter them in both Cloudflare and Pi-hole. If I only update one of them, they may go out of sync and create issues.

Local DNS, but as a backup

One option would be to only look at the Local DNS records if the upstream DNS is unavailable. I am not sure if this is a configuration that exists, or even makes sense. It also doesn't solve the problem of having to keep both systems in sync manually. However, at least it would only be a "situation" when the internet goes down.

Using DNS caching

Ideally, I would love for Pi-hole to use the normal upstream DNS and resolve the IP as usual, store the result in a local cache, and use that cache any time the internet is down. Ideally, some sort of "permanent" cache.

Caching sort of does this, but in the wrong order. What I mean is that if a cache entry exists, Pi-hole hits the local cache first (rather than the upstream DNS first, with local as a backup) until the entry expires. This may lead to wrong resolution if I have updated the IPs on Cloudflare and the TTL is long. On the other hand, if I reduce the TTL, I might find myself in a situation where I cannot resolve the name because the cache entry is gone.

Additionally, this only works if I have accessed that domain before.

Local DNS, but automatically sync

Another option would be a system that syncs the DNS records from Cloudflare to Pi-hole every day. Cloudflare has an API, so downloading them would be straightforward. However, I am not sure whether Pi-hole has a way to dynamically add, remove and/or update Local DNS records via an API, the CLI, or by writing to a file. Is there any way I could achieve this?

Other?

Are there any other or simpler solutions to solve this problem?

Thanks for your help and time in advance!

Automatically syncing the records probably makes the most sense here. Local DNS records are saved in /etc/pihole/custom.list. You can just write your own records in there; it's the same syntax as the hosts file (ip hostname).
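
For illustration, hypothetical entries in that file could look like this (the addresses and host names are placeholders):

    # /etc/pihole/custom.list -- one "ip hostname" pair per line, hosts syntax
    192.168.1.20 internal-subdomain.domain.com
    192.168.1.21 another-internal.domain.com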


Just to make sure I understand this correctly:
Cloudflare's public DNS server is handing out private range IPv4 addresses for certain subdomains that you've registered with Cloudflare?

If that's true, I'd look for a way to eliminate that dependency.
That is only going to be viable for those subdomains which are meant for strictly internal usage only, i.e. such a subdomain would point to an internal, private range IPv4 address exclusively, never to a public one.

All of those internal only subdomains could then be maintained and resolved exclusively in Pi-hole, cutting Cloudflare's DNS out of the picture completely. That would also mean you wouldn't have to expose your network internals to Cloudflare.

This would equate to your Manually setting local DNS heading, without having to worry about syncing your internals to Cloudflare or vice versa.

Those of your domains that you expect to resolve to your gateway's public IP address from outside of your network (e.g. when resolving from work or mobile connections - the DynDNS part) would of course need to stay with a public DNS server.


I'll briefly go over your other options

As for Local DNS as a backup: no, not by standard DNS means, and certainly not with dnsmasq's split-DNS support limitations.


As for using DNS caching: that is already what DNS is doing.

A "permanent" cache is not technically possible, as the TTL field value is a number of seconds represented by a 32-bit integer (which still tops out at roughly 136 years). Note that many domain registrars will impose a much lower maximum TTL, and DNS resolvers might do so as well.

You basically seem to want Pi-hole to serve stale DNS records from its cache when a record's regular TTL has expired, but its DNS renewal has failed.

That scenario is covered by RFC 8767, but as far as I'm aware, dnsmasq does not support that feature.

If you were really intent to go down that path, you could experiment with setting host-record specific TTLs for your own internal subdomains in dnsmasq custom configurations.
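
A minimal sketch of what such a custom config could look like (the file name, host name, address and TTL are placeholders; dnsmasq's host-record option accepts an optional trailing TTL in seconds, and depending on your Pi-hole version you may need to enable loading of extra dnsmasq config files):

    # /etc/dnsmasq.d/99-internal-ttl.conf
    # host-record=<name>,<IPv4 address>[,<TTL in seconds>]
    host-record=internal-subdomain.domain.com,192.168.1.20,86400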

Probably an easier approach would be to use a local unbound as Pi-hole's upstream resolver and make yourself familiar with its RFC 8767 related configuration options (e.g. serve-expired-ttl). Note that those would apply to all cached DNS records, not only your internal subdomains.
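
A rough sketch of the relevant unbound options (the values are examples only; check the unbound.conf man page for the exact semantics in your version):

    # e.g. /etc/unbound/unbound.conf.d/pi-hole.conf (excerpt)
    server:
        # keep answering from cache after a record's TTL has expired (RFC 8767)
        serve-expired: yes
        # serve an expired record for at most one day past expiry (0 = no limit)
        serve-expired-ttl: 86400
        # try a normal upstream lookup first and only fall back to the stale
        # answer if no fresh reply arrives within ~1.8 seconds
        serve-expired-client-timeout: 1800

With serve-expired-client-timeout set, unbound attempts the upstream lookup first and only serves the expired answer when that fails or times out, which is close to the "upstream first, cache as backup" order you are after.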


Still, as I suggested, my personal preference would be to move Cloudflare out of the picture wherever that's possible, and have Pi-hole as the sole source of truth for your network's internals, rather than your current state of making internal name resolution dependent on an external 3rd party service (that you have to share and expose your internals to, to begin with).

Daxtorim, Bucking_Horn: thanks a lot to both of you for the input! Very valuable. Replying below.


Thanks. I've just coded that solution and it seems to work fine. I will share the code soon in case it's helpful for others. However, I will wait for other answers, as I really want to explore the problem.


Yes, that's correct.

That dependency is needed and intentional for several reasons. Among them:

  1. Cloudflare does not only resolve to internal IPs, but also to external ones, including Dynamic DNS over my own IP for access from outside. Since my IP changes, I cannot rely on Pi-hole as the provider for this.

  2. Not every device can or will rely on Pi-hole as its DNS server. For example, Chromecasts are always preset with Google DNS and this cannot be changed. I have some integrations that require these names to be resolved by the Chromecast too, so Cloudflare still serves a purpose.

  3. There are other record types besides A records which are useful and required by external services. Since my IP is not fixed, I wouldn't be able to rely on Pi-hole for this.

Indeed, the local IPs are more of a hack and a secondary use case, but one that I would like to fix nonetheless.

This is interesting. I am not familiar with unbound, but I will have a look and familiarise myself with it. Based on what I understand from the serve-expired-ttl description, if it only serves expired records for that TTL when it cannot retrieve an updated one, this would work. It's similar to the idea of using the DNS cache as a backup. Nonetheless, if it cannot be set per domain, it might create more potential problems than it really solves.


This also made me think of another potential solution: having two Pi-holes. One would act as the main DNS server with two upstream DNS servers: the one I currently use, and the second Pi-hole. My assumption here is that Pi-hole would normally use the primary upstream for queries. As long as there's internet, this would end up reaching the Cloudflare DNS for my domain. If that DNS server is not available (internet is down), it would use the other Pi-hole, which would have the local domains manually configured, solving the issue. However, that still means having to maintain those entries; and in that case, it's much simpler to just update the Local DNS on the main Pi-hole and keep it in sync.


All in all, for now, I think the idea of simply keeping Pi-hole and Cloudflare in sync seems like the lowest-hanging fruit.

This is just to clarify my suggestion, not to criticise your choice.
I cannot know all of the intricacies and use cases in your network, so my advice has to be somewhat generic. Whatever is understood by and working for you is obviously your optimal solution.
:wink:

Your examples seem to ignore my precondition.
I am not suggesting to get rid of public resolution. DynDNS is a valid use case.
I am proposing to move exclusively those subdomains under Pi-hole's control that need internal, private address range resolutions only.

This makes your points 1 and 3 not applicable.

Point 2 is valid, but it should be addressed by configuring your router to redirect your network's DNS traffic to Pi-hole. If that's not possible, have your router block outbound DNS for any device but Pi-hole, to much the same effect.
Even if your router supports neither, my preference would be to cage any misbehaving fixed-DNS devices behind a dedicated extra AP or firewall of their own. To me, it seems just wrong to rely on an external service for things that are internal only.
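
As a rough illustration of the redirect idea, assuming a Linux-based router with iptables, a LAN interface called br-lan and Pi-hole at 192.168.1.2 (all placeholders; consumer router UIs expose this differently, if at all):

    # rewrite outbound plain-DNS queries from LAN clients (except Pi-hole itself)
    # so they land on Pi-hole instead of a hard-coded public resolver
    iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.2 -p udp --dport 53 \
        -j DNAT --to-destination 192.168.1.2:53
    iptables -t nat -A PREROUTING -i br-lan ! -s 192.168.1.2 -p tcp --dport 53 \
        -j DNAT --to-destination 192.168.1.2:53

Note that this only catches plain DNS on port 53; devices talking DNS over HTTPS or TLS would slip past it.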


If that means duplicating private and public IP address resolutions alike, note that when the public IPs are used, traffic may flow more slowly than it could, as two devices on your network may then communicate via your router instead of talking to each other directly. And in case of an actual Internet outage, clients may try the public IP addresses in vain before falling back to the private IP.
In addition, you may also run into issues with certain reverse lookups for private IP addresses (see e.g. Answer PTR Queries)

If the public IPs are not necessary for other reasons, you should try to eliminate them. If you can use only private IPs, that would again suggest you could put those entries under Pi-hole's control.

Hey, yes, absolutely, no issues, and thanks for the suggestions. I understand where you're coming from and why you're saying that it doesn't make sense for an external provider to handle internal IP mappings. I agree to a large extent. Nonetheless, this works fine for me in most scenarios, and I have reasons as shared before (e.g. point 1 above).

Additionally, the external IPs and internal IPs are different concerns for different purposes. Some are for access from outside my network to services that I've exposed or put behind a reverse proxy, while the local domains are purposely only for local consumption. I understand that I could decouple these, but as mentioned before, there are reasons (not relevant to the question) for making these choices.

All in all, I would like to focus on solutions that do not involve changing the main provider from Cloudflare to other sources.

Again, I am not trying to change my setup, but that concern does not apply here. When on the same network, I use the subdomains pointing to the local IPs, so the only external request is the DNS lookup that resolves the subdomains to local IPs; everything else stays local.

This is the only part I am trying to solve: how to get the names to resolve when the internet is not available, and only for the router/devices using Pi-hole as their DNS server, without having to maintain both Cloudflare and Pi-hole for new subdomains.

So far, the solution of keeping Cloudflare and Pi-hole's Local DNS in sync works. Pi-hole's Local DNS resolves all the subdomains of my domain.com. The script downloads the A records from my Cloudflare setup, so the list stays as current as the schedule I run the script on. For the devices that do not or cannot rely on Pi-hole, Cloudflare will still be the ultimate resolver for whichever DNS server they use.
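
A minimal sketch of what such a sync script could look like (not my exact code; the environment variable names and the reload command are assumptions to adapt to your own setup):

    #!/usr/bin/env python3
    # Sketch only: pull the zone's A records from the Cloudflare API and rewrite
    # Pi-hole's custom.list (hosts syntax: "ip hostname"). Assumes a Cloudflare
    # API token with DNS:Read permission and the zone ID are passed via the
    # CF_API_TOKEN / CF_ZONE_ID environment variables (placeholder names), and
    # that the script runs as root (e.g. from root's crontab).
    import json
    import os
    import subprocess
    import urllib.request

    API = "https://api.cloudflare.com/client/v4"
    TOKEN = os.environ["CF_API_TOKEN"]
    ZONE_ID = os.environ["CF_ZONE_ID"]
    CUSTOM_LIST = "/etc/pihole/custom.list"

    def fetch_a_records():
        """Return (name, ip) pairs for the zone's A records.
        Pagination is skipped for brevity; per_page=100 covers a few dozen names."""
        url = f"{API}/zones/{ZONE_ID}/dns_records?type=A&per_page=100"
        req = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        return [(rec["name"], rec["content"]) for rec in data["result"]]

    def write_custom_list(records):
        """Overwrite custom.list with one 'ip hostname' line per record."""
        lines = [f"{ip} {name}" for name, ip in sorted(records)]
        with open(CUSTOM_LIST, "w") as f:
            f.write("\n".join(lines) + "\n")

    if __name__ == "__main__":
        write_custom_list(fetch_a_records())
        # ask Pi-hole to pick up the new records (command as of Pi-hole v5;
        # check your version's documentation)
        subprocess.run(["pihole", "restartdns", "reload"], check=False)

Scheduled via cron, Cloudflare remains the single source of truth and Pi-hole's Local DNS follows it automatically.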
