Currently, the pihole-FTL service uses systemd/journalctl for its logging.
Is it possible to use systemd/journalctl for all logs instead of the "old-fashioned" log files in /var/log/pihole (FTL.log, pihole.log, pihole_updateGravity.log and webserver.log)?
By doing so, all log information would be in one modern facility, which would ease system maintenance and debugging.
There is currently just the pihole-FTL.service so there's only one log file associated with that service. Dumping multiple log files to one service would be very chatty and very confusing.
You could try using rsyslog to create the logging pipeline that works for you though.
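As a sketch of what such an rsyslog pipeline could look like: rsyslog's `imfile` module can tail plain log files and feed them into syslog (and, on systemd machines, from there into the journal). The file path of the config snippet and the tags below are assumptions for illustration, not an official Pi-hole configuration.

```
# /etc/rsyslog.d/30-pihole.conf -- illustrative sketch only
module(load="imfile")

input(type="imfile"
      File="/var/log/pihole/pihole.log"
      Tag="pihole:"
      Severity="info"
      Facility="daemon")

input(type="imfile"
      File="/var/log/pihole/FTL.log"
      Tag="pihole-FTL:"
      Severity="info"
      Facility="daemon")
```

After dropping in such a file you would restart rsyslog for it to take effect; each tailed file then shows up in syslog under its own tag.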
Thanks for your reply @DanSchaper, but maybe I've been unclear. To be precise, I am referring to the log files in /var/log/pihole. Since they live at this location, I had the impression that they are part of Pi-hole, or more specifically, part of the pihole-FTL service.
pi@vpn:/var/log/pihole $ ls -l
total 78052
-rw-r----- 1 pihole pihole 2622 Jan 16 08:14 FTL.log
-rw-r----- 1 pihole pihole 52659 Jan 7 08:36 pihole_debug.log
-rw-r----- 1 pihole pihole 79859294 Jan 16 09:08 pihole.log
-rw-r----- 1 pihole pihole 1484 Jan 11 04:43 pihole_updateGravity.log
-rw-r----- 1 pihole pihole 0 Jan 13 00:00 webserver.log
pi@vpn:/var/log/pihole $
The most interesting one is pihole.log, let’s dig into this file:
pi@vpn:/var/log/pihole $ sudo lsof ./pihole.log
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
pihole-FT 610 pihole 41w REG 179,2 79872554 261453 ./pihole.log
pi@vpn:/var/log/pihole $ ps -fp 610
UID PID PPID C STIME TTY TIME CMD
pihole 610 1 3 Jan12 ? 03:22:29 /usr/bin/pihole-FTL -f
pi@vpn:/var/log/pihole $
I think (at least) this one is part of the pihole-FTL service. So my FR is to have the service write these messages not into pihole.log but into the systemd journal.
Background: I had a serious challenge with the functionality provided by pihole-FTL. It has been resolved, but it would have been so much easier if pihole.log had been written to the systemd journal. Besides that, I think it is best practice for Linux processes/daemons to use the systemd journal instead of separate log files.
It might be what systemd wishes as best practice on distros making use of it, but it's not Linux best practice by any means. There are multiple init systems, and Pi-hole supports Alpine Linux, which does not use systemd.
On my Pi I see that pihole-FTL is a systemd service and that it already writes to the systemd journal, so I think I am asking a reasonable FR: have all logs created by the pihole-FTL service go into the journal as well.
Whether the developers of Pi-hole agree is something I leave to their discretion. I understand it is "work", and maybe there are many other things to do on Pi-hole which are much more important.
I leave the holy war about init systems (and emacs vs. vi) to others.
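For what it's worth, the part that already reaches the journal does so because systemd captures a service's stdout/stderr. A drop-in like the one below (file name assumed, purely illustrative) would pin that behaviour down explicitly; note it would still not capture pihole.log, which FTL writes directly to disk itself.

```
# /etc/systemd/system/pihole-FTL.service.d/journal.conf (illustrative)
[Service]
StandardOutput=journal
StandardError=journal
SyslogIdentifier=pihole-FTL
```

After adding a drop-in like this, `systemctl daemon-reload` and a service restart would be needed for it to apply.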
Nor am I. My main desktop system uses systemd, some of my other systems do not. Each has their place but the different systems and their limitations are an important consideration when writing code.
Here is a shell script that you could run on startup to duplicate the log information in real time into the systemd journal, tagged by inferred severity and by the file each line came from. Sadly this approach is necessary because systemd's journal, by design and by stated policy, does not and will never ingest data from log files the way rsyslog can.
Execute it as root (or any other user of your choice that has access to the Pi-hole log files and authority to write to the system journal) on startup, or at any other time you would like Pi-hole's log contents duplicated in the systemd journal.
#!/bin/bash
#
# Tail Pi-hole log files and forward entries into systemd-journald
# with inferred severity and per-file SYSLOG_IDENTIFIER tags.
#
# This file is copyright under the European Union Public License version 1.2 or newer
#

LOG_FILES="
/var/log/pihole/FTL.log
/var/log/pihole/pihole.log
/var/log/pihole/pihole_debug.log
/var/log/pihole/webserver.log
/var/log/pihole/pihole_updateGravity.log
"

# Map a log line to a syslog priority. Most severe patterns are checked
# first so that e.g. "fatal error" classifies as critical, not error.
infer_priority() {
    line="$1"
    case "$(printf '%s\n' "$line" | tr '[:upper:]' '[:lower:]')" in
        *fatal*|*critical*) echo 2 ;;
        *error*)            echo 3 ;;
        *warn*)             echo 4 ;;  # also matches "warning"
        *info*)             echo 6 ;;
        *debug*)            echo 7 ;;
        *)                  echo 5 ;;  # default: notice
    esac
}

# Choose a SYSLOG_IDENTIFIER tag based on which file a line came from.
tag_for_file() {
    file="$1"
    case "$file" in
        */FTL.log)                  echo "pihole-FTL" ;;
        */pihole_debug.log)         echo "pihole-debug" ;;
        */webserver.log)            echo "pihole-web" ;;
        */pihole_updateGravity.log) echo "pihole-updategravity" ;;
        *)                          echo "pihole" ;;
    esac
}

cleanup() {
    echo "Stopping Pi-hole log forwarder."
    exit 0
}
trap cleanup INT TERM HUP

# Ensure all log files exist so tail -F can follow them from the start
for f in $LOG_FILES; do
    [ -e "$f" ] || touch "$f"
done

echo "Forwarding Pi-hole logs to systemd journal (Ctrl+C to stop)..."

current_tag="pihole"  # fallback tag until the first tail header is seen

# $LOG_FILES is deliberately unquoted so the shell splits it into
# separate arguments; -v forces tail to print "==> file <==" headers.
tail -F -v $LOG_FILES 2>/dev/null | \
while IFS= read -r line; do
    case "$line" in
        "==> "*" <==")
            # tail's file header: remember which file we are reading now
            current_file=$(printf '%s\n' "$line" | sed 's/^==> //; s/ <==$//')
            current_tag=$(tag_for_file "$current_file")
            continue
            ;;
    esac
    prio=$(infer_priority "$line")
    printf '%s\n' "$line" | systemd-cat -t "$current_tag" -p "$prio"
done
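To sanity-check the two classification helpers used above without writing anything to the journal, their logic can be exercised in isolation. The sample log lines below are invented for illustration:

```shell
#!/bin/sh
# Standalone check of the severity inference and tail-header parsing
# used by the forwarder script above (sample lines are made up).

infer_priority() {
    case "$(printf '%s\n' "$1" | tr '[:upper:]' '[:lower:]')" in
        *fatal*|*critical*) echo 2 ;;   # most severe patterns first
        *error*)            echo 3 ;;
        *warn*)             echo 4 ;;   # also matches "warning"
        *info*)             echo 6 ;;
        *debug*)            echo 7 ;;
        *)                  echo 5 ;;   # default: notice
    esac
}

# Strip the "==> file <==" headers that `tail -v` prints between files
header_file() {
    printf '%s\n' "$1" | sed 's/^==> //; s/ <==$//'
}

infer_priority "2024-01-16 WARNING: rate limit reached"   # 4
infer_priority "FATAL ERROR in database handler"          # 2 (fatal wins over error)
header_file "==> /var/log/pihole/FTL.log <=="             # /var/log/pihole/FTL.log
```

Running the snippet prints the inferred priorities and the parsed file name, which makes it easy to tweak the patterns before wiring them into the live forwarder.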
Thanks @robgill. A great workaround for now, but no more than a workaround.
As you can see from my post, the files are created by pihole-FTL (started by the systemd service pihole-FTL). I am just asking to make sure that all logs created by pihole-FTL (the systemd service) go into the journal. What's the problem with this? Is it not a clear and genuine FR that makes sense on systems with systemd?
Most users probably don't know or care where Pi-hole stores its log files.
From my support point of view, the current advantage of Pi-hole storing log files in a default location, regardless of the distro it's running on, is that it allows us to easily request information from and/or point users to the correct location to help them analyse issues, should that be needed.
For my use case, once Pi-hole (or most anything else) is set up and humming along, I have no need for log file inspection. When I do, I like that I can then search in a targeted location/file, without needing to filter out the rest of my system's activity. This is just my perspective though.
@Bucking_Horn, normal users will probably never be interested in log files, let alone their location. They do get interested once things go "wrong" and they are asked for log info. Since it is Linux, it will probably never be completely the same on all hardware/OS combinations. Let's make use of the appropriate logging facility on each system; I think generally there are two: the systemd journal and separate files.
@tomporter518 I fully agree that if an issue is bound to one component, e.g. pihole-FTL, a single file is good enough. But very often it is related to, or initiated by, something else happening on the system, and you do not know upfront which components come into play.
In that case you need to manually correlate many files, and I haven't spoken to anybody who enjoys that. With the systemd journal you do not have to correlate; it's all there already, and often root cause and symptom are easily identified.
I think we should take advantage of modern facilities where they are available. The pihole-FTL service does that already; now let's take the next step and make sure that all logging of the service goes into the systemd journal.
If there is no systemd journal, you have to fall back to files.