Expected Behaviour:
pihole -t
will output a live view of the logs, as will the option in the web GUI
Actual Behaviour:
I see no output at all
Your Pi-hole log is shown as empty, but there are also some other errors in your debug log that indicate your storage device is full:
-rw-r--r-- 1 www-data www-data 4096 Jun 14 19:51 /var/log/lighttpd/error.log
2019-06-10 06:25:10: (server.c.1534) logfiles cycled UID = 0 PID = 3313
2019-06-14 19:35:37: (mod_fastcgi.c.2543) FastCGI-stderr: PHP Warning: Unknown: write failed: No space left on device (28) in Unknown on line 0
2019-06-14 19:35:37: (mod_fastcgi.c.2543) FastCGI-stderr: PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/sessions) in Unknown on line 0
2019-06-14 19:35:37: (mod_accesslog.c.180) writing access log entry failed: /var/log/lighttpd/access.log No space left on device
2019-06-14 19:35:37: (mod_compress.c.616) writing cachefile /var/cache/lighttpd/compress//admin/scripts/vendor/jquery.min.js-gzip-127831-84345-1537606924 failed: No space left on device
2019-06-14 19:35:38: (mod_accesslog.c.180) writing access log entry failed: /var/log/lighttpd/access.log No space left on device
2019-06-14 19:35:38: (mod_compress.c.616) writing cachefile /var/cache/lighttpd/compress//admin/scripts/vendor/jquery-ui.min.js-gzip-127826-240027-1537606924 failed: No space left on device
2019-06-14 19:35:38: (mod_compress.c.616) writing cachefile /var/cache/lighttpd/compress//admin/scripts/vendor/jquery.dataTables.min.js-gzip-127828-82638-1537606924 failed: No space left on device
2019-06-14 19:35:38: (mod_compress.c.616) writing cachefile /var/cache/lighttpd/compress//admin/scripts/vendor/Chart.bundle.min.js-gzip-127819-208221-1537606924 failed: No space left on device
2019-06-14 19:35:39: (mod_accesslog.c.180) writing access log entry failed: /var/log/lighttpd/access.log No space left on device
This may be the source of your problem.
Hmmm. Interesting.
Not sure why disk usage has suddenly jumped.
pi@pi-hole:~ $ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 3.6G 3.5G 0 100% /
devtmpfs 236M 0 236M 0% /dev
tmpfs 241M 48K 241M 1% /dev/shm
tmpfs 241M 25M 216M 11% /run
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 241M 0 241M 0% /sys/fs/cgroup
/dev/mmcblk0p1 41M 23M 19M 55% /boot
tmpfs 49M 0 49M 0% /run/user/1000
tmpfs 49M 0 49M 0% /run/user/999
Any tips on what to do to free up space?
First, figure out which files are using up the space. Walk through the directories and run ls -lh. If you find a big file that should not be that big, investigate there.
Second, move up to a bigger uSD; you can get a 32 GB uSD for less than $8 US delivered. I assume they would be similarly inexpensive in the UK.
Go hunt with the du tool:
pi@noads:~ $ sudo du --max-depth=1 -h /var
35M /var/www
4.0K /var/tmp
4.0K /var/local
12K /var/mail
7.0M /var/log
651M /var/cache
1.9M /var/backups
144M /var/lib
36K /var/spool
4.0K /var/opt
938M /var
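To rank those directories by size (largest last), du can be piped through sort with human-readable sorting; /var/cache (651M) clearly stands out above. A minimal sketch, run on a throwaway directory here since /var/cache needs root:

```shell
# On the Pi you would run: sudo du --max-depth=1 -h /var/cache | sort -h
# Harmless demo on a scratch tree (no root needed):
mkdir -p /tmp/du-demo/big /tmp/du-demo/small
dd if=/dev/zero of=/tmp/du-demo/big/blob bs=1M count=2 status=none
du --max-depth=1 -h /tmp/du-demo | sort -h   # largest entry prints last
```

If the bulk turns out to be apt's package cache (/var/cache/apt), `sudo apt-get clean` is the usual way to empty it safely.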
I've got 2 files in /var/log that are over 100 MB each.
pi@pi-hole:~ $ ls -lh /var/log/dae*
-rw-r----- 1 root adm 132M Jun 15 17:07 /var/log/daemon.log
-rw-r----- 1 root adm 191M Jun 10 06:27 /var/log/daemon.log.1
-rw-r----- 1 root adm 8.6M Jun 2 06:26 /var/log/daemon.log.2.gz
-rw-r----- 1 root adm 8.6M May 26 06:26 /var/log/daemon.log.3.gz
-rw-r----- 1 root adm 5.8M May 19 06:27 /var/log/daemon.log.4.gz
Any idea why they'd be so big?
Safe to delete?
I also had an unbound log file that was nearly 1 GB. I deleted that, but didn't see 1 GB of space freed up?
Also a lot in /usr
pi@pi-hole:~ $ sudo du --max-depth=1 -h /usr
4.0K /usr/games
4.0K /usr/src
267M /usr/share
20M /usr/include
11M /usr/sbin
25M /usr/local
395M /usr/lib
90M /usr/bin
806M /usr
sudo awk -F ':' '{print $4}' /var/log/daemon.log | sort | uniq -c | sort -n -r | head
Safe to delete the archived ones, log.1, log.2.gz, etc.
Wait a bit.
Sometimes it takes a while for the space to free up after deleting large files.
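The likely reason the unbound deletion didn't free space immediately: a deleted file's blocks are not released while a running process still holds the file open. Restarting the service that writes it releases them, or you can empty a log in place instead of deleting it. A small sketch, using a stand-in path rather than the real /var/log/daemon.log:

```shell
# Truncating keeps the inode, so a daemon's open file handle stays valid
# and the space is reclaimed immediately.
log=/tmp/demo-daemon.log            # stand-in for /var/log/daemon.log
printf 'lots of noise\n' > "$log"   # simulate a bloated log
truncate -s 0 "$log"                # empty it without removing it
wc -c < "$log"                      # prints 0
# To list deleted-but-still-open files holding space: sudo lsof +L1
```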
pi@noads:~ $ sudo du --max-depth=1 -h /usr
7.4M /usr/sbin
4.0K /usr/src
160K /usr/local
4.0K /usr/games
348M /usr/lib
96M /usr/bin
324M /usr/share
19M /usr/include
793M /usr
EDIT: added sudo just in case
You will fight this battle again if you don't get a bigger card. The same thing that filled this card will do it again. For a couple of dollars/pounds/rubles/shekels, you can make your life a lot easier.
Completely agree, and I will get a bigger SD card in there.
My only 'concern' is that it's suddenly become an issue. I've been running the same Pi, same card, same hardware etc. for over a year with no problems.
Then I get an issue where unbound stopped working and the SD card is full.
To move to a bigger card, I can just image the existing one and burn that to a new card, right?
My opinion is that it's better to fix the issue (if any) that's bloating those logs.
My setup uses just over 2GB:
pi@noads:~ $ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/root 7.4G 2.0G 5.1G 29% /
A larger card would also take longer to back up with, for example, the "disk destroyer" tool dd.
If with normal usage it comes close to 80% full, it's generally recommended to expand.
My mindset is to make optimal use of the resources you have, if the issue can be fixed easily.
If it's not easy to fix, expand.
PS: the reason why the unbound logs grew so large is probably that there is no logrotate configuration in place for them:
pi@noads:~ $ man logrotate
[..]
DESCRIPTION
logrotate is designed to ease administration of systems that generate large numbers of log files. It allows automatic rotation, compression, removal, and mailing of log files. Each log file may be handled daily, weekly, monthly, or when it grows too large.
pi@noads:~ $ cat /etc/logrotate.d/lighttpd
/var/log/lighttpd/*.log {
    weekly
    missingok
    rotate 12
    compress
    delaycompress
    notifempty
    sharedscripts
    postrotate
        if [ -x /usr/sbin/invoke-rc.d ]; then \
            invoke-rc.d lighttpd reopen-logs > /dev/null 2>&1; \
        else \
            /etc/init.d/lighttpd reopen-logs > /dev/null 2>&1; \
        fi; \
    endscript
}
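There is no such file for unbound on this install, which is presumably why its log could grow to nearly 1 GB. A hedged sketch of what an /etc/logrotate.d/unbound could look like, assuming unbound logs to /var/log/unbound.log (check the logfile: setting in your unbound.conf first):

```
/var/log/unbound.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    postrotate
        # tell unbound to reopen its log file after rotation
        unbound-control log_reopen > /dev/null 2>&1 || true
    endscript
}
```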
Exactly. And if at some point the verbosity was increased (as I think was the case in this install), that can make for a very big log very quickly. And it stays big forever.
Yeah, I had increased the verbosity to 4 due to the problems I'm having with my unbound install.
I've now set it back to 1, and am actually not using unbound at all.
Couldn't agree more. But I'm just not sure how to begin figuring out what's bloated the logs.
This might shed some light:
pi@pi-hole:~ $ sudo awk -F ':' '{print $4}' /var/log/daemon.log | sort | uniq -c | sort -n -r | head
245316 cpumonitor.service
61329 Traceback (most recent call last)
61329 Stopped CPU Temp Monitor.
61329 Started CPU Temp Monitor.
61329 sock = socket.create_connection((self._host, self._port), source_address=(self._bind_address, 0))
61329 sock.connect(sa)
61329 return self.reconnect()
61329 raise err
61329 publish.single(topic, payload_json, hostname="192.168.0.133", port=1883, auth=auth)
61329 protocol, transport)
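For anyone following along: that pipeline splits each syslog line on ':' and counts the distinct fourth fields. Because the timestamp (HH:MM:SS) consumes the first three fields, $4 lands on the text after the process name, which for systemd status lines is the unit name. A tiny reproducible example with made-up sample lines:

```shell
# Build a three-line sample log in syslog format (contents are invented):
printf '%s\n' \
  'Jun 15 17:00:01 pi systemd[1]: cpumonitor.service: Main process exited' \
  'Jun 15 17:00:02 pi systemd[1]: cpumonitor.service: Failed with result' \
  'Jun 15 17:00:03 pi systemd[1]: unbound.service: Succeeded' \
  > /tmp/sample.log
# The "HH:MM:SS" timestamp eats the first three ':'-fields, so $4 is the unit:
awk -F ':' '{print $4}' /tmp/sample.log | sort | uniq -c | sort -nr | head
# top line: 2 cpumonitor.service
```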
I'm guessing it's related to cpumonitor.service, which is one I created?
Yup.
It's always human error.
Schoolboy error.
Now to try and figure out what's up with my service.
I just realised, this service was publishing to an MQTT server which I had running on a Pi.
I took that Pi offline, so every time the script tried to publish I guess it failed?
Could this be the reason for the bloated log entries?
I'm not familiar with MQTT.
But yeah, from the couple of lines you posted from the daemon log, it appears to try to open a connection to 192.168.0.133, fail, and for some reason restart the cpumonitor service.
And this over and over again.
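If the unit really is crash-looping like that, systemd can be told to back off instead of flooding daemon.log with restart messages. A hypothetical tweak to the unit file (these directives are standard systemd options, but the values and the idea that they fit this cpumonitor.service are assumptions; StartLimit* lived in [Service] on older systemd versions):

```
[Unit]
# Give up after 5 failed starts within 10 minutes
StartLimitIntervalSec=600
StartLimitBurst=5

[Service]
Restart=on-failure
# Wait 30 s between restart attempts instead of hammering the broker
RestartSec=30
```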
Better ask the folks you got that cpumonitor.service code from, as this is a bit off-topic.
This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.