I've been using Pi-hole to block Facebook and Twitter on my home Wi-Fi. I only want this block to operate between 10pm and 6am, when we should all be sleeping.
I set this up with a cron job, and `pihole status` reports enabled after 10pm as expected, so the cron job appears to be working.
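For reference, the cron entries look roughly like this (a sketch; the exact times are mine and whether `pihole` is on cron's PATH may vary by setup):

```shell
# Enable Pi-hole blocking at 22:00 so Facebook/Twitter are blocked overnight...
0 22 * * * pihole enable
# ...and lift the block again at 06:00.
0 6 * * * pihole disable
```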
However, the network doesn't actually behave as if Pi-hole is enabled: browsers can still reach Facebook and Twitter, and, for instance, `dig facebook.com` returns

facebook.com. 61 IN A 157.240.1.35

i.e. the DNS query still resolves to the real server.
I do see things behaving as expected if I manually toggle Enabled / Disabled in the Pi-hole dashboard.
So, a question: should running `pihole enable` on the command line and using the toggle on the dashboard theoretically do exactly the same thing? Why might they not?
I've just tried running `pihole enable` / `pihole disable` manually at the CLI, and everything works as expected (both the browser and dig). So I guess it is a cron issue after all.
Wouldn't client DNS caching come into play here? Disabling Pi-hole after a client has already resolved a domain means the client might not ask again for some time; it doesn't need to.
This strongly depends on your client's setup. Chrome, for example, has an internal DNS cache. Firefox, however, didn't have such a cache when I last checked. It also depends on the operating system: many Linux systems ship with a locally running caching DNS server, such as dnsmasq, which does nothing but caching...
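One way to check for a caching layer on your own machine (assuming `dig` is installed; the record values below are purely illustrative) is to query the same name twice and watch the TTL. A TTL that counts down between queries, instead of resetting, usually means a cache is answering:

```shell
# First query: answered with whatever TTL the responding resolver holds.
dig +noall +answer facebook.com
#   facebook.com.  300  IN  A  157.240.1.35    (illustrative output)

# Second query a few seconds later: if a local cache (e.g. dnsmasq or
# systemd-resolved) sits in the path, the TTL will have counted down
# (say, 294 instead of 300) rather than starting fresh.
dig +noall +answer facebook.com
```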
Which user is executing this cron rule? It should be root.
Can confirm that, on my Mac at least, Chrome and Firefox behave as @DL6ER says. This is why I was testing with dig, which seems to issue a fresh DNS query each time.
Aha! I had been adding the cron rule with `sudo crontab -e` while logged in as pi on the Raspberry Pi running Pi-hole; I assumed the sudo meant I was editing root's crontab, but I now think I was wrong about that.
Editing `/etc/crontab` directly and specifying root as the user for the cron rules seems to have done the trick. Hoorah! Thanks, @DL6ER
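For anyone else who hits this: the system-wide `/etc/crontab` takes an extra user field between the time spec and the command, which per-user crontabs edited with `crontab -e` do not have. My entries now look roughly like this (full path to `pihole` assumed; adjust to your install):

```shell
# /etc/crontab format: minute hour dom month dow  USER  command
0 22 * * *   root   /usr/local/bin/pihole enable
0 6  * * *   root   /usr/local/bin/pihole disable
```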