The issue I am facing:
Pi-hole has been showing daily "disk almost full" errors for pihole-FTL.db and FTL.log (reporting 98% full), and I can't delete these error messages in the Pi-hole diagnosis page, even though Ubuntu shows plenty of free disk space.
Details about my system:
Ubuntu 22.04 on a Linux/Celeron amd64 system, 8 GB memory, 2 TB SSD.
*** [ DIAGNOSING ]: Pi-hole diagnosis messages
count  last timestamp       type  message                    blob1  blob2                                 blob3  blob4  blob5
-----  -------------------  ----  -------------------------  -----  ------------------------------------  -----  -----  -----
1      2023-02-21 15:57:14  DISK  /etc/pihole/pihole-FTL.db  98     /etc/pihole: 1.9TB used, 2.0TB total
1      2023-02-21 15:57:14  DISK  /var/log/pihole/FTL.log    98
What I have changed since installing Pi-hole:
Pi-hole with DoH via cloudflared, keepalived failover installed; this is the primary Pi-hole.
The commands listed here will show the directories using the most space. In that thread it turned out to be /var/log/pihole/pihole.log: log rotation wasn't working, and the log was eating the space. Worth checking in case it's the same for you.
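A sketch of the kind of commands that thread uses (the paths are examples; adjust to your layout):

```shell
# List the largest directories one level below /. The -x flag stays on
# this filesystem, so the external drive mounted at /media isn't counted.
sudo du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -n 10

# Then drill into the suspect from that thread:
sudo du -h /var/log/pihole/
```

Repeat the first command on whichever directory tops the list until you find the actual space hog.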
[2023-02-21 00:05:19.617 1278/T1320] WARNING: Disk shortage (/etc/pihole/pihole-FTL.db) ahead: 99% is used (/etc/pihole: 2.0TB used, 2.0TB total)
[2023-02-21 00:05:20.618 1278/T1320] add_message(type=8, message=/etc/pihole/pihole-FTL.db) - SQL error step DELETE: database is locked
[2023-02-21 00:05:20.618 1278/T1320] Error while trying to close database: database is locked
[2023-02-21 00:05:20.618 1278/T1320] WARNING: Disk shortage (/var/log/pihole/FTL.log) ahead: 99% is used (/var/log/pihole: 2.0TB used, 2.0TB total)
[2023-02-21 00:05:21.619 1278/T1320] add_message(type=8, message=/var/log/pihole/FTL.log) - SQL error step DELETE: database is locked
[2023-02-21 00:05:21.619 1278/T1320] Error while trying to close database: database is locked
[2023-02-21 00:06:00.007 1278/T1319] Encountered error while trying to store client in long-term database
df does show that the NVMe drive has space, but that drive's size doesn't match up with what we are seeing.
There's an additional drive mounted at /media but it doesn't appear to be part of this issue.
What is your mount output showing?
You can also install the ncdu utility with sudo apt install ncdu and then run sudo ncdu / to see disk utilization and drill down into which files are eating up space.
I also see that you are using this same server for media and ADS-B. Plex can chew through space really quickly with thumbnails and other metadata. It's possible that you are pushing the drive to full utilization and then another process dumps that extra data to get back to some available space.
Throwing a watch df -h in a terminal could show you whether the disk really is filling up and then being purged.
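For example, something like this (the two paths are taken from the FTL warnings above) refreshes usage every couple of seconds:

```shell
# Watch the filesystems holding the FTL database and the logs. A transient
# Plex/transfer spike would show Use% climbing toward 98-99% and then
# dropping again once space is reclaimed.
watch -n 2 df -h /etc/pihole /var/log/pihole
```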
I have two separate boxes, .39 and .40. When .39 dies, it fails over to .40. .39 is the real IP; .50 is the virtual one. I have my router pointing to .50 as the DNS.
I think this is the answer. I've been adding to Plex, and the media is on this SSD. The /media drive is an external SSD that serves as a backup.
I bet Plex is dumping a ton of data at once (or it happens when I use WinSCP to transfer files to .39 from the Windows computer I'm digitizing old VHS videos on).
I think at some point I'll need to get a second external drive and keep the media on there, off the main SSD.
Ok. I had a very large log file that I deleted, and I emptied the trash, which had a 5 GB file in it. Since I RDP in, I don't see the desktop trash can.
ncdu seems to be stymied, maybe by the time shift: it's reporting some crazy high apparent size in TiB of files, something that isn't possible.
I think it's a transient spike from Plex that's freaking out Pi-hole, and the longer-term solution is getting the media files off the main SSD so it has more room to do what it needs to do.