Can I share the query log file to a local computer or FTP via an automated script?

**The issue I am facing:**
I am looking to share the query log file (the SQLite 3 file with all the logs) from the Raspberry Pi device running Pi-hole, so that my Windows computer can access the log file in real time (or an updated copy every minute) and parse the data using a program on my computer.

I am trying to determine whether I can use a cron job, started on Raspberry Pi boot, that runs a Python script every minute to upload the query file to FTP or to a local computer, so that a program running on my computer can read, parse, and analyze the data. Will this kind of setup prevent Pi-hole from writing to the query log file? If so, will I need an alternate solution to get the query file automatically sent to my computer, or is there a workaround?
I was reading this article (Moving the Pi-hole log to another location/device) but can't understand its limitations, or how it would automatically upload the file via FTP if I turned it into a Python script.

**Details about my system:**
Raspberry Pi 4 with 4 GB RAM, running Pi-hole on Raspberry Pi OS Lite

**What I have changed since installing Pi-hole:**
Standard install

You're going to break things if you let another app besides the pihole-FTL binary have write access to the DB.
pihole-FTL needs exclusive write access to the DB!
If another app locks tables in the DB, pihole-FTL is going to complain when it tries to write to those same tables.

You could publish that DB via Samba or NFS as a read-only share.

You probably shouldn't use an NFS share - SQLite3's documentation for hosting database files on NFS shares currently reads as follows:

> Your best defense is to not use SQLite for files on a network filesystem.
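
If you go the Samba route instead, the Windows side can open the shared file strictly read-only. Here's a minimal sketch, assuming the share is mapped to drive `Z:` (a hypothetical path) and using column names from FTL's `queries` table - verify both against your own setup. Note that even read-only access to a live SQLite file over a network share is still subject to the locking caveats above:

```python
import sqlite3
from pathlib import Path

# Hypothetical location: assumes the read-only Samba share is mapped to Z: on Windows.
db_path = Path(r"Z:\pihole-FTL.db")

# Open via a SQLite URI with mode=ro, so this connection can never take a write lock.
uri = db_path.as_uri() + "?mode=ro"
con = sqlite3.connect(uri, uri=True)

# Fetch the last minute of queries from the long-term database
# (column names per FTL's queries table; check against your schema).
rows = con.execute(
    "SELECT timestamp, type, domain, client, status "
    "FROM queries "
    "WHERE timestamp > strftime('%s', 'now') - 60"
).fetchall()
con.close()

for ts, qtype, domain, client, status in rows:
    print(ts, qtype, domain, client, status)
```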

You could create a clean copy of Pi-hole's long-term query database to a new file with the following command:

```
pihole-FTL sqlite3 /etc/pihole/pihole-FTL.db "VACUUM INTO '/path/to/pihole-FTL.backup.db'"
```

Replace /path/to/ with an existing location as required (e.g. with a network mount).
This results in a potentially smaller file than a direct copy, and it can be executed while your Pi-hole is running.
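
Tying that back to your cron-plus-Python idea, here's a minimal sketch of a script that takes a clean snapshot with `VACUUM INTO` and pushes it to an FTP server. The host, credentials, and paths are placeholders, the script needs read access to the database (run it as root or a suitably privileged user), and plain FTP is unencrypted, so FTPS or SFTP would be preferable:

```python
import subprocess
from ftplib import FTP
from pathlib import Path

# Hypothetical values: adjust host, credentials, and paths for your setup.
FTP_HOST = "192.168.1.50"
BACKUP = Path("/tmp/pihole-FTL.backup.db")

# VACUUM INTO refuses to overwrite an existing file, so remove any previous snapshot.
BACKUP.unlink(missing_ok=True)

# Create a clean, consistent copy while Pi-hole keeps running.
subprocess.run(
    ["pihole-FTL", "sqlite3", "/etc/pihole/pihole-FTL.db",
     f"VACUUM INTO '{BACKUP}'"],
    check=True,
)

# Upload the snapshot. Plain FTP is unencrypted; prefer FTPS/SFTP where possible.
with FTP(FTP_HOST) as ftp:
    ftp.login(user="pi", passwd="change-me")  # hypothetical credentials
    with BACKUP.open("rb") as fh:
        ftp.storbinary("STOR pihole-FTL.backup.db", fh)
```

You could then schedule it with a crontab entry such as `* * * * * python3 /home/pi/upload_db.py` to run it every minute.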

However, note that Pi-hole's long-term query db can grow quite large, and copying it to a remote machine every minute may consume considerable amounts of bandwidth.

For an alternative approach, you should be aware that Pi-hole also writes to an actual log file at /var/log/pihole/pihole.log.

It may be easier and less bandwidth-intensive to send entries from that log file to a remote location via rsyslog, as some users have done successfully; see e.g. REQUEST: Option to send logs to a remote logserver - #27 by Tim_Dockter.
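
In practice you'd point rsyslog at a proper syslog daemon on the receiving machine, but just to illustrate the receiving end, here's a minimal sketch (my own stand-in, not what the linked users ran) of a UDP listener that appends forwarded messages to a local file. It assumes rsyslog on the Pi forwards to it with an action like `*.* @windows-host:5140`:

```python
import socketserver

LOG_FILE = "pihole-remote.log"  # hypothetical output file on the receiving machine

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For UDP servers, self.request is a (data, socket) pair.
        data = self.request[0].decode("utf-8", errors="replace").strip()
        with open(LOG_FILE, "a", encoding="utf-8") as fh:
            fh.write(data + "\n")

if __name__ == "__main__":
    # Listen on an unprivileged port; configure rsyslog to send here.
    with socketserver.UDPServer(("0.0.0.0", 5140), SyslogHandler) as srv:
        srv.serve_forever()
```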

Of course, you'd have to parse log files instead of querying a database, but you could always build your own database from those logs.
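
For example, here's a minimal sketch of such a parser, assuming the typical dnsmasq query-line format (verify against your own pihole.log) and a hypothetical local database file `my_queries.db`:

```python
import re
import sqlite3

# Matches dnsmasq-style query lines as typically seen in pihole.log, e.g.
#   Apr 21 10:42:01 dnsmasq[512]: query[A] example.com from 192.168.0.10
# The exact line format is an assumption; check it against your own log first.
QUERY_RE = re.compile(
    r"^(?P<ts>\w{3}\s+\d+\s+[\d:]+)\s+dnsmasq\[\d+\]:\s+"
    r"query\[(?P<qtype>[^\]]+)\]\s+(?P<domain>\S+)\s+from\s+(?P<client>\S+)$"
)

con = sqlite3.connect("my_queries.db")  # hypothetical local database file
con.execute(
    "CREATE TABLE IF NOT EXISTS queries (ts TEXT, qtype TEXT, domain TEXT, client TEXT)"
)

with open("pihole.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        m = QUERY_RE.match(line.strip())
        if m:
            con.execute(
                "INSERT INTO queries VALUES (?, ?, ?, ?)",
                (m["ts"], m["qtype"], m["domain"], m["client"]),
            )

con.commit()
con.close()
```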

I am also aware of a user effort to collect and visualise Pi-hole's log data via Kibana/Elasticsearch, employing FileBeat to send logs, which may be similar to what you're trying to achieve - see GitHub - nin9s/elk-hole: elasticsearch, logstash and kibana configuration for pi-hole visualiziation.
It's somewhat dated, so it may or may not work with current Pi-hole releases, but you could probably still glean some inspiration from it, or get in touch with that third-party maintainer to exchange ideas.
