Saving your SD card (a bit), parts 1..3

I recently had an SD card wear out and was looking for ways to reduce writes.

It probably doesn't help much, but every write counts...

reference here.

Edit /etc/rsyslog.conf, replace the line

*.*;auth,authpriv.none		-/var/log/syslog

with

*.*;cron,auth,authpriv.none		-/var/log/syslog

and run

sudo service rsyslog restart

This will eliminate all cron logging to /var/log/syslog; more specifically, entries like:

raspberrypi CRON[17657]: (root) CMD (   PATH="$PATH:/usr/local/bin/" pihole updatechecker local)

which is run (and logged by default) every 10 minutes.

Script to implement this:

#!/bin/bash
sudo sed -i 's/;auth,authpriv.none/;cron,auth,authpriv.none/' /etc/rsyslog.conf
sudo service rsyslog restart

Of course, this will NOT help if you enable logging of cron messages to a separate file, as suggested in the above reference, unless you create a TMPFS on a location such as /var/log/cron and log to /var/log/cron/cron.log.
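
If you do want a separate cron log, a minimal sketch of that tmpfs setup could look like this; the 4M size and the rsyslog line are my own assumptions, adjust them to your setup and to the directives used in the above reference:

sudo mkdir -p /var/log/cron
# add a small tmpfs entry for the cron log directory to /etc/fstab
sudo sed -i '$ a tmpfs /var/log/cron tmpfs nodev,nosuid,size=4M 0 0' /etc/fstab
sudo mount /var/log/cron
# then point the cron facility at it in /etc/rsyslog.conf, e.g.
#   cron.*    -/var/log/cron/cron.log
sudo service rsyslog restart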

To check whether your SD card is really defective, use this. Mine was really defective...
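
If you just want a quick, non-destructive read check before digging into the linked tool (this is not the tool from that reference, and /dev/mmcblk0 is only the usual device name for the Pi's SD card, check yours first):

# read-only surface scan, slow but non-destructive
sudo badblocks -sv /dev/mmcblk0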

NOT all that interested in the content of the lighttpd logs (/var/log/lighttpd) after a reboot?

This will save your SD card a bit more.

You need sufficient free memory for this. I'm running it on a Raspberry Pi 3B; I'm not sure whether older models have sufficient resources!

First, look at the content of /var/log/lighttpd; this will give you an idea of the size you need for the TMPFS mount.
We will change the number of rotated logs from 12 (default on the Raspbian version of April 2019) to 1, so you only need to account for access.log, access.log.1, access.log.1.gz, error.log, error.log.1 and error.log.1.gz.
If you're also using munin, or any other application that logs to /var/log/lighttpd, you also need to take the sizes of those log files into account.
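
A quick way to get those numbers (plain coreutils, nothing lighttpd-specific):

sudo ls -lh /var/log/lighttpd/
sudo du -sh /var/log/lighttpd/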

Instructions to create the TMPFS for lighttpd (example: 16 MB):
WARNING: ensure you don't already have an entry for /var/log/lighttpd in /etc/fstab!

sudo service lighttpd stop
# add a tmpfs entry for /var/log/lighttpd to /etc/fstab
sudo sed -i '$ a tmpfs /var/log/lighttpd tmpfs nodev,nosuid,gid=www-data,uid=www-data,mode=0750,size=16M 0 0' /etc/fstab
sudo mount /var/log/lighttpd
sudo service lighttpd start

As soon as lighttpd has started, it will log to the TMPFS mount (which lives in RAM), so there are no more writes to the SD card. Simply browse to the Pi-hole admin page to see the log file grow.
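
To verify that the logs really live in RAM now, and to keep an eye on how full the tmpfs is, something like this should do:

mount | grep /var/log/lighttpd
df -h /var/log/lighttpd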

Now we will change the number of rotated lighttpd logs kept by the automatic log rotation:

Instructions to change the default (on the Raspbian version of April 2019) log rotation:

sudo sed -i '/weekly/s/weekly/daily/g' /etc/logrotate.d/lighttpd
sudo sed -i '/12/s/12/1/g' /etc/logrotate.d/lighttpd

Log rotation will now occur daily instead of weekly, and only 3 logs will be available (the current one, yesterday's - uncompressed, and the day before yesterday's - compressed).
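
You can check that the sed commands did what you expected; on the Raspbian version mentioned above, the relevant lines of /etc/logrotate.d/lighttpd should now read roughly as shown in the comment (exact file contents may differ per release):

grep -E 'daily|rotate' /etc/logrotate.d/lighttpd
# expected output, approximately:
#   daily
#   rotate 1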

If your TMPFS fills up (chosen size too small), lighttpd will stop functioning. Increase the size in /etc/fstab and reboot. All lighttpd logs are gone (memory is cleared), but lighttpd will recreate them and start with empty log files.
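
If you prefer not to reboot, a tmpfs can usually be grown in place (sketch below, 32M chosen arbitrarily); remember to also update the size in /etc/fstab so it survives the next reboot:

sudo mount -o remount,size=32M /var/log/lighttpd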

Undo this? Remove the line from /etc/fstab.
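
A possible way to do that from the command line (a sketch, assuming the fstab entry added above):

sudo service lighttpd stop
sudo sed -i '\|tmpfs /var/log/lighttpd|d' /etc/fstab
sudo umount /var/log/lighttpd
sudo service lighttpd start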

An SSD does not cost much anymore :+1:

Neither does a 32 GB uSD - $8 US delivered.

Replacing a broken card means re-installing and configuring everything again. That takes time, which is more precious than money.

Use a second card as a backup. If the first fails, swap in the second and buy a new backup card.
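
One way to prepare such a backup card is a raw clone with dd; the device names below are examples only, check yours with lsblk before running anything:

# /dev/mmcblk0 = the running SD card, /dev/sda = the backup card in a USB reader (example names)
sudo dd if=/dev/mmcblk0 of=/dev/sda bs=4M status=progress conv=fsync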

Probably worth noting that a crappy power supply is more likely to be the primary cause of failed SD cards.

I have been using the same 4 GB uSD card for the 3 years since I started using the project, have used that card for extensive testing (beta) etc., and have yet to have the card itself fail.

Same thoughts from me. I have had 4 Pi-holes running 24/7 for over a year with no SD card problems. The two Pi 3B+ boards use the Adafruit high-power 5.3 V, 2.5 A power supplies; the two Pi Zeros use Apple 2 A power supplies.

Also, running a larger card spreads the writes over more of the card, reducing writes to individual card locations. With the low cost of 32 GB cards, that's my standard setup. Lots of space on the card to absorb lots of writes. And all that for $8.

Persistent as I am, despite the negative comments, here is a method and a script to save your logfiles to TMPFS. Many thanks to @DL6ER for the instructions.

Of course, again, if the TMPFS is full (no more RAM disk space), pihole-FTL will probably fail, so choose your TMPFS size carefully.
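
I can't reproduce the referenced script here, but a rough sketch of the idea could look like this; LOGFILE is a documented pihole-FTL.conf setting, while the directory name and the 16M size are my own choices:

sudo mkdir -p /var/log/pihole-ram
sudo sed -i '$ a tmpfs /var/log/pihole-ram tmpfs nodev,nosuid,size=16M 0 0' /etc/fstab
sudo mount /var/log/pihole-ram
# the chown is lost on reboot; re-apply it, or use uid=/gid= mount options as in the lighttpd example above
sudo chown pihole:pihole /var/log/pihole-ram
# point FTL's log at the tmpfs location
echo "LOGFILE=/var/log/pihole-ram/pihole-FTL.log" | sudo tee -a /etc/pihole/pihole-FTL.conf
sudo service pihole-FTL restart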

Whenever a new version of any of the software I use on my Pi (raspbian, webmin, unbound, knot-resolver, munin, wireguard, pihole, …) comes out, I reinstall everything on another SD card, scripted; it takes about an hour. Worth the effort to have the latest versions running...

FTL won't fail if it cannot write to its log. However, the system itself might fail if no memory is available.

Please do not interpret proposed alternate courses of action as negative comments. They were not intended to be negative.

Disclaimer: I am just starting and don't know what I am doing!!! :slight_smile:

I installed log2ram.

I did not configure it at first.
After adding OpenHAB and OpenVPN to my Pi-hole, I noticed the RAM disk was always full. I increased the size from the default 40 MB to 64 MB... but that was almost instantly full too.
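
For reference, that size lives in /etc/log2ram.conf; a change like the one below (128M is just an example) followed by a restart of the log2ram service, or a reboot, should apply it:

sudo sed -i 's/^SIZE=.*/SIZE=128M/' /etc/log2ram.conf
sudo systemctl restart log2ram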

I disabled all logging, everything, including by running the command over SSH in Pi-hole (the only reason to have it is debugging; I don't care about spying or statistics on people/devices in my house, plus I have a FingBox for that if I did).
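
For the Pi-hole part specifically, that was presumably something like the stock pihole command:

pihole logging off
# optionally flush the existing query log
pihole -f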

I stopped OpenHAB until I can learn how to configure it better. I think it writes a ton of logs by default, though I'm not sure. No idea about the OpenVPN logs either; I haven't configured anything there yet, but it works and I only use it with my phone.

This is the first time I have heard people say it is not an issue with a modern 32 GB SD card. I am using an older 16 GB card at the moment and need to make a backup of it (tool recommendations?). Thanks to the OP for this thread and the info, as well as the commenters... and I think a clone-tool recommendation would be OT for this thread, IMHO.

My installation manual is here; see chapter 20 for backup instructions...

Found a problem with the custom log cron job, explained here.

If you have implemented this solution, you should make the necessary modifications. Apologies for the inconvenience.
