Saving query logs for long-term analysis

I'd like to save my query logs so that I can do more than daily analysis on them. I'm interested in daily, weekly, monthly, and yearly activity.

Has anyone else done this?

It seems like the API or FTL project might provide what I want through getallqueries and save me the trouble of parsing the dnsmasq logs myself. However, I'm concerned that these tools might not offer the paging that I want (maybe I'd need to contribute that?).

Are /var/log/pihole.log and similar (.1, .2.gz, etc.) the only sources for that information? I might be better off scheduling a cron job that rsyncs pihole.log.1 at like 12:05 nightly to a remote server.
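
Something like this nightly cron entry is roughly what I have in mind (the remote host and destination path are just placeholders):

# /etc/cron.d entry: push yesterday's rotated log off-box shortly after the nightly rotation
5 0 * * * root rsync -a /var/log/pihole.log.1 user@remote-server:/path/to/backups/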

Thoughts?


Ah ha! I just spotted getallqueries-time that probably handles the paging that I want.

However, I'm not ready to run FTL if it's not production-ready. Then again, FTL seems to just handle log parsing, so maybe it is what I want after all?

Playing around with it a little more, it seems that getallqueries-time can still only do today's log. It does not parse beyond that.

Maybe I could put the /var/log on an NFS share and change logrotate so that pihole never cycles out?

Yes, that should do exactly what you want. FTL should be able to keep as much in its internal memory as you'd like to see. Note, however, that the currently shown statistics and everything else are engineered around having the log flushed once a day, so it might be important to make some adjustments depending on your needs.

If you find a clever implementation that does not interfere with how our other users expect it to work, you can always set up pull requests against all of our repositories and we will happily discuss everything with you. We are also here if you need assistance in understanding parts of the code.

I decided to configure logrotate with the olddir set to an NFS mount to my NAS. Moreover, I'll use the dateext directive to configure logrotate to save the log backups with a date instead of a number indicating how many days ago the log was rotated out.

My steps, written as I go, with the intent to turn this into a HOWTO. I performed this while SSH'd into my Raspberry Pi B+ running Pihole v2.12.1 on Raspbian Jessie.

Before doing anything here, ensure that both Pihole and your apt packages are up to date:

pihole -up
sudo apt-get update && sudo apt-get upgrade

Preparing the NFS backup directory

Verifying NFS accessibility

I created the NFS share on my NAS, which will be addressed for the purposes of this write-up as mynas.local:/dns-backup. You will want to use a static IP address for the remote server instead of a hostname if your Pihole is doing DHCP. Mine is not.

sudo mkdir -p /mnt/dns-backup
sudo mount mynas.local:/dns-backup /mnt/dns-backup -o nolock
dd if=/dev/zero of=/mnt/dns-backup/zeros count=2 bs=1M
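
A quick look should show the 2 MB file you just wrote:

ls -lh /mnt/dns-backup/zeros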

If you didn't get an error from the mount and the zeros file is there, you're good to go. Unmount it:

sudo umount /mnt/dns-backup

Mounting NFS

I decided to use autofs instead of putting the mount entry in /etc/fstab because the latter can delay system startup, or break the backups entirely, if for some reason the share cannot be mounted. I obviously want my Pihole to come back up as quickly as possible after going down for any reason! This Unix & Linux Stack Exchange question swayed me.

sudo apt-get install autofs

You'll need to create /etc/auto.master.d, which is configured to load automatically but was not automatically created for me.

sudo mkdir -p /etc/auto.master.d

Next, create your autofs master map entry and the map file containing the mount point and the mount target:

echo -e "/mnt\t\t/etc/auto.master.d/mnt.map" | sudo tee /etc/auto.master.d/mnt.autofs
echo -e "dns-backup\t\t-fstype=nfs,nolock,soft,noexec,nosuid\t\tmynas.local:/dns-backup" | sudo tee /etc/auto.master.d/mnt.map

You may need to adjust those NFS options to meet your own needs.
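
For reference, the two files should end up looking like this (the gaps between columns are just tabs; any whitespace will do):

# /etc/auto.master.d/mnt.autofs
/mnt		/etc/auto.master.d/mnt.map

# /etc/auto.master.d/mnt.map
dns-backup		-fstype=nfs,nolock,soft,noexec,nosuid		mynas.local:/dns-backup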

Start autofs with

sudo service autofs start

then verify that you see the zeros file you created earlier when you do

ls /mnt/dns-backup

If you see the zeros file, then your mount is correctly configured! If not, tail /var/log/syslog to see any errors. It took me a good hour to find just the right syntax for the map file. I saved you an hour :wink:

Ensure that it sticks

This is a great time to reboot your Pihole server to ensure that the mounts you've created will come back up on restart.

sudo reboot

Give it a few seconds and reconnect.

ls /mnt/dns-backup

If you see the zeros file, then your mount is fully prepared and ready to receive data.

Adjusting logrotate configuration

Why Pihole puts its logrotate configuration into /etc/pihole/logrotate instead of /etc/logrotate.d/pihole is beyond me, but we'll go with it for now.

Make /etc/pihole/logrotate look like this:

/var/log/pihole.log {
	su root root
	daily
	copytruncate
	compress
	delaycompress
	notifempty
	nomail
	olddir /mnt/dns-backup
	missingok
	dateext
	dateformat -%Y%m%d
}

logrotate is executed daily by cron, so you'll basically have to wait until midnight to see if this is working correctly. You could run logrotate manually with sudo /usr/sbin/logrotate -s /tmp/statefile /etc/logrotate.conf, but you'd have to run it again in 24 hours or manually change the timestamps in the state file to test it! I'm lazy and I'm not doing that.
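
If you're less lazy than I am and want a preview without touching anything, a dry run against just the Pi-hole config is safe; the -d flag only prints what would happen and rotates nothing:

sudo /usr/sbin/logrotate -d /etc/pihole/logrotate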

With any luck, come midnight the day after you've set this up, you'll have a dated log file in your NFS share! After a couple of days, you'll have several, all but one of which will be compressed. Note that I removed the rotate directive because I want to store everything forever and I have space warnings set up for my NAS. If you have a lot of DNS traffic, be mindful of what you're doing and maybe consider storing only a year's worth of data.


I went to check my configuration after a few days of forgetting to do so. I found that my logs are no longer rotating, much to my dismay.

I dug into how pihole executes logrotate, and found that pihole installs a cronjob that runs pihole flush nightly. This flush command is what runs logrotate.
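
(On my install the nightly entry lives in /etc/cron.d/pihole, so you can see it with something like:)

grep flush /etc/cron.d/pihole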

When I ran it manually with -d to show debugging information, I saw this:

pi@pihole:~ $ sudo /usr/sbin/logrotate -d /etc/pihole/logrotate
reading config file /etc/pihole/logrotate
olddir is now /mnt/dns-backup
error: /etc/pihole/logrotate:13 olddir /mnt/dns-backup and log file /var/log/pihole.log are on different devices
removing last 1 log configs

So, it looks like my idea isn't going to work. I'll have to find another way to move data to NFS.

For now, you could change piholeLogFlush.sh to achieve what you want (move the file after logrotate did its magic), but that will not survive Pi-hole updates.
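
Just as a rough illustration (not something we ship, and the paths may differ on your install), a line appended after the logrotate call in /opt/pihole/piholeLogFlush.sh could look something like this, assuming you drop olddir from /etc/pihole/logrotate so the rotation itself stays on the local disk:

# hypothetical: copy the locally rotated, dated logs off to the NFS share
rsync -a /var/log/pihole.log-* /mnt/dns-backup/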

I definitely want to do something that will survive updates! Apparently, there are some logrotate options that can circumvent the "same device" rule. I'm looking into them presently.

It looks like logrotate 3.8.9 introduced the renamecopy directive, which specifically enables olddir to be set to a path on a different device. My Pi is running 3.8.7 so it looks like I'm going to have to figure out how to upgrade it.

I'm running Raspbian jessie, which has logrotate 3.8.7. Fortunately, stretch, the current testing release, has logrotate 3.11.0. This means we can try to install the stretch package on jessie!
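
(To confirm what you're starting from, the package database is the quickest check:)

dpkg -s logrotate | grep '^Version'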

Download it from the logrotate package download page:

wget http://ftp.us.debian.org/debian/pool/main/l/logrotate/logrotate_3.11.0-0.1_armhf.deb
sudo dpkg -i logrotate_3.11.0-0.1_armhf.deb
pi@pihole:~ $ logrotate -v
Segmentation fault

Nope. Looks like we're building from source.

It looks like there are a lot of patches that get applied:

pi@pihole:~ $ apt-get source logrotate
Reading package lists... Done
Building dependency tree
Reading state information... Done
NOTICE: 'logrotate' packaging is maintained in the 'Svn' version control system at:
http://svn.fedorahosted.org/svn/logrotate/
Need to get 80.8 kB of source archives.
Get:1 http://archive.raspbian.org/raspbian/ jessie/main logrotate 3.8.7-1 (dsc) [1,804 B]
Get:2 http://archive.raspbian.org/raspbian/ jessie/main logrotate 3.8.7-1 (tar) [58.9 kB]
Get:3 http://archive.raspbian.org/raspbian/ jessie/main logrotate 3.8.7-1 (diff) [20.1 kB]
Fetched 80.8 kB in 0s (128 kB/s)
gpgv: keyblock resource `/home/pi/.gnupg/trustedkeys.gpg': file open error
gpgv: Signature made Fri 17 Jan 2014 04:46:34 AM EST using RSA key ID 61F9CA53
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./logrotate_3.8.7-1.dsc
dpkg-source: info: extracting logrotate in logrotate-3.8.7
dpkg-source: info: unpacking logrotate_3.8.7.orig.tar.gz
dpkg-source: info: unpacking logrotate_3.8.7-1.debian.tar.xz
dpkg-source: info: applying deb-config-h.patch
dpkg-source: info: applying datehack.patch
dpkg-source: info: applying manpage.patch
dpkg-source: info: applying cpp-crossbuild.patch
dpkg-source: info: applying chown-484762.patch
dpkg-source: info: applying mktime-718332.patch
dpkg-source: info: applying man-su-explanation-729315.patch

It might be easier for me to risk the upgrade to stretch than to mess around trying to rebuild a deb package. Before that, let's try just building logrotate from its upstream source distribution.

First, install the build dependencies. I don't like to fight with ./configure to figure out what all needs to be installed, so I just sledgehammer it with:

sudo apt-get build-dep logrotate

Get the 3.11.0 source, extract, and build it:

wget https://github.com/logrotate/logrotate/releases/download/3.11.0/logrotate-3.11.0.tar.gz
tar xf logrotate-3.11.0.tar.gz
cd logrotate-3.11.0
./configure --with-acl --with-state-file-path=/var/lib/logrotate/status --prefix=/usr
make && sudo make install

The --with-state-file-path part of the ./configure line is important because that's where Debian keeps its status file. Be warned that the --prefix setting will overwrite what is installed by apt-get.

Stand up and do 10 jumping jacks; the build takes about a minute on my original Raspberry Pi B+.

Now, give it a try again:

sudo /usr/sbin/logrotate -d /etc/pihole/logrotate

reading config file /etc/pihole/logrotate
olddir is now /mnt/dns-backup
Reading state from file: /var/lib/logrotate/status
Allocating hash table for state file, size 64 entries
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state

Handling 1 logs

rotating pattern: /var/log/pihole.log  after 1 days (no old logs will be kept)
olddir is /mnt/dns-backup, empty log files are not rotated, old logs are removed
considering log /var/log/pihole.log
  Now: 2017-02-27 13:24
  Last rotated at 2017-02-24 00:00
  log needs rotating
rotating log /var/log/pihole.log, log->rotateCount is 0
Converted ' -%Y%m%d' -> '-%Y%m%d'
dateext suffix '-20170227'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
glob finding logs to compress failed
removing /mnt/dns-backup/pihole.log-20170220.gz
removing old log /mnt/dns-backup/pihole.log-20170220.gz
removing /mnt/dns-backup/pihole.log-20170221.gz
removing old log /mnt/dns-backup/pihole.log-20170221.gz
removing /mnt/dns-backup/pihole.log-20170222.gz
removing old log /mnt/dns-backup/pihole.log-20170222.gz
copying /var/log/pihole.log.tmp to /mnt/dns-backup/pihole.log-20170227
Not truncating /var/log/pihole.log.tmp
removing tmp log /var/log/pihole.log.tmp
removing old log /mnt/dns-backup/pihole.log-20170223.gz

You'll notice a ghastly thing! We need a rotate directive, because otherwise logrotate will delete our old log files! Imagine my terror when I momentarily forgot that the -d option prevents logrotate from actually taking any action and thought it had just removed my log files from previous days!

So, let's change our logrotate config to include an absurdly generous rotate directive. 10 years ought to be enough for anyone.

/var/log/pihole.log {
	su root root
	daily
	rotate 3650
	copytruncate
	renamecopy
	compress
	delaycompress
	notifempty
	nomail
	olddir /mnt/dns-backup
	missingok
	dateext
	dateformat -%Y%m%d
}

Do another dry run and inspect the output to make sure it's doing what you want:

sudo /usr/sbin/logrotate -d /etc/pihole/logrotate

I think this looks OK, because I don't see anything getting deleted now:

reading config file /etc/pihole/logrotate
olddir is now /mnt/dns-backup
Reading state from file: /var/lib/logrotate/status
Allocating hash table for state file, size 64 entries
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state
Creating new state

Handling 1 logs

rotating pattern: /var/log/pihole.log  after 1 days (3650 rotations)
olddir is /mnt/dns-backup, empty log files are not rotated, old logs are removed
considering log /var/log/pihole.log
  Now: 2017-02-27 13:30
  Last rotated at 2017-02-24 00:00
  log needs rotating
rotating log /var/log/pihole.log, log->rotateCount is 3650
Converted ' -%Y%m%d' -> '-%Y%m%d'
dateext suffix '-20170227'
glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
glob finding logs to compress failed
copying /var/log/pihole.log.tmp to /mnt/dns-backup/pihole.log-20170227
Not truncating /var/log/pihole.log.tmp
removing tmp log /var/log/pihole.log.tmp

So, we'll see how this runs tonight!


This is getting wild. My logs didn't rotate tonight, so I went and tried to run it manually:

$ sudo /usr/sbin/logrotate -f -v /etc/pihole/logrotate
…
copying /var/log/pihole.log.tmp to /mnt/dns-backup/pihole.log-20170228
error: error opening /var/log/pihole.log.tmp: No such file or directory
removing tmp log /var/log/pihole.log.tmp
…

logrotate is apparently not creating the tmp file that it then tries to copy from.

I added dateyesterday and removed renamecopy from the config and tried it, expecting logrotate to fail, because renamecopy is supposedly what enables olddir to be a path on a different physical device. I guess that limitation is gone, because it worked.
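
For reference, that leaves the config looking like this (the one above, minus renamecopy, plus dateyesterday):

/var/log/pihole.log {
	su root root
	daily
	rotate 3650
	copytruncate
	compress
	delaycompress
	notifempty
	nomail
	olddir /mnt/dns-backup
	missingok
	dateext
	dateyesterday
	dateformat -%Y%m%d
}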

We'll see if it works again tomorrow night.

I've been away for a few days and it seems to be working just fine now.

-rw-r--r-- 1 root    root 807K Feb 24 21:20 pihole.log-20170222.gz
-rw-r--r-- 1 root    root 731K Feb 24 21:20 pihole.log-20170223.gz
-rw-r--r-- 1 dnsmasq root 2.9M Feb 28 00:48 pihole.log-20170227.gz
-rw-r--r-- 1 dnsmasq root 990K Mar  1 00:00 pihole.log-20170228.gz
-rw-r--r-- 1 dnsmasq root 715K Mar  2 00:00 pihole.log-20170301.gz
-rw-r--r-- 1 dnsmasq root 379K Mar  3 00:00 pihole.log-20170302.gz
-rw-r--r-- 1 dnsmasq root 549K Mar  4 00:00 pihole.log-20170303.gz
-rw-r--r-- 1 dnsmasq root 6.8M Mar  5 00:00 pihole.log-20170304
-rw-r--r-- 1 pi      pi   2.0M Feb 24 19:36 zeros

I'll simplify the above into a nice tutorial when I get a chance.

Maybe the tutorial on how to move the Pi-hole log file will come in handy. You could move the log entirely to an NFS share, where you would then also store the backups. You just have to make sure that the share is always available!

Hey, I know this topic is old, but I've been messing around with the logs as well. What I've been doing is taking the log file and piping it to another file in a bash script. It seems you could set up a cron job to copy the log file over a minute before the logs are flushed. You could even append the file to the end of an existing file to make one giant log. You would lose a whole minute of logs, so I'm not sure how useful this idea is.
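
Just to sketch the idea (times and paths are placeholders, and I'm assuming the nightly flush runs at midnight), an /etc/cron.d entry might look like:

# append the day's queries to one big archive at 23:59, just before the nightly flush
59 23 * * * root cat /var/log/pihole.log >> /mnt/dns-backup/pihole-all.log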

I tried setting this up tonight with the logs going to an external USB drive; hopefully it works!

What tool would be best to analyze these logs?
When I started I thought the logs would be formatted in a way I could use Sheets/Excel, but I don't see how that will be doable given the size and the format.
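
It seems you could at least pull rough counts with standard command-line tools, since each line follows dnsmasq's query log format (timestamp, process, "query[TYPE] domain from client"). A sketch, assuming the dated, compressed backups from earlier in this thread, though I'm hoping for something friendlier:

# top 20 most-queried domains across all compressed backups
zcat /mnt/dns-backup/pihole.log-*.gz | awk '/: query\[/ {print $6}' | sort | uniq -c | sort -rn | head -20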