Using MySQL as the database

Well, tell me what I am missing, but the dashboard does not do any heavy lifting, does it? Only the Query Log does. So why is the dashboard slow? It should always be very fast. AFAIK the database is never involved on the dashboard, guys?

It may be a language issue. When someone says dashboard, I take that to mean the entire admin interface, not just the index page.

It depends on how many clients and queries you have. In my case, I had more than 1200 clients with more than 1M queries per day. So:

  • I have to purge the database daily via crontab (see the crontab sketch after this list). If I don't do that, the admin page cannot load the next day:

sudo service pihole-FTL stop
sudo mv /etc/pihole/pihole-FTL.db /scripts/backup/pihole-FTL_$(date +"%m-%y-%d").db
sudo service pihole-FTL start

  • I am running on CentOS under VMware, and RAM/CPU is not the issue. I have the memory limit set to 2GB, but the admin page still takes a long time to load (maybe 30s to 1min); sometimes it times out and I have to refresh again.

memory_limit => 2048M => 2048M
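
For reference, a crontab sketch for the daily purge above (the 4 a.m. schedule and the script path are assumptions; put the three commands into a script and point the entry at it):

0 4 * * * /scripts/purge-pihole-db.sh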

Set the memory allocation to everything you've got. See if that's quick enough. I think you'll find the limitation is PHP and not the database.
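
A minimal sketch of that change, assuming Pi-hole's default lighttpd/PHP setup (the php.ini path varies by distro):

sudo sed -i 's/^memory_limit.*/memory_limit = 2048M/' /etc/php.ini
sudo service lighttpd restart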

Look at the Pi-hole config options around the database:

https://docs.pi-hole.net/ftldns/configfile/#long-term-database-settings

As you are purging the database anyway, you may simply want to disable it altogether? If you want to keep it, you should maybe not be fully restarting pihole-FTL; that seems a bit harsh. I would suggest using the MAXDBDAYS=1 setting or, if you want to keep the data around, just do

sudo sqlite3 /etc/pihole/pihole-FTL.db ".backup /scripts/backup/pihole-FTL_$(date +"%m-%y-%d").db"
sudo sqlite3 /etc/pihole/pihole-FTL.db "DELETE FROM queries;"
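
For completeness, the MAXDBDAYS route is a one-line change in /etc/pihole/pihole-FTL.conf; FTL then prunes old queries on its own, so no cron job is needed (one restart to pick up the setting):

echo "MAXDBDAYS=1" | sudo tee -a /etc/pihole/pihole-FTL.conf
sudo service pihole-FTL restart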

Using the .backup command is the best option here:

The online backup API allows the contents of one database to be copied into another database, overwriting the original contents of the target database. The copy operation may be done incrementally, in which case the source database does not need to be locked for the duration of the copy, only for the brief periods of time when it is actually being read from. This allows other database users to continue uninterrupted while a backup of an online database is made.

Just to add: if you also want to reduce the file size, you have to vacuum the database, because deleting alone does not free space.

Unless SQLite is running in "auto_vacuum=FULL" mode, when a large amount of data is deleted from the database file it leaves behind empty space, or "free" database pages. This means the database file might be larger than strictly necessary. Running VACUUM to rebuild the database reclaims this space and reduces the size of the database file.
Source
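
For reference, reclaiming the space is a one-liner along these lines (note that VACUUM rewrites the whole file, so it needs free disk space and can take a while on a large database):

sudo sqlite3 /etc/pihole/pihole-FTL.db "VACUUM;"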

The request was not about freeing space. I would not use VACUUM here, as it will take a long time and the database will grow again afterwards anyway. Just do the DELETE and let the database reuse the existing (empty) pages.

This would be useful for setting up clustering: having multiple Pi-holes run on one database.

Unless there is another way to cluster Pi-hole, that is. I need a way to configure 3 or more Pi-holes exactly the same without manually configuring each one.

I run 5 Pi-holes: 3 at one location, and the other 2 tag along with a VPN server. Each one is on a different internet connection. The 3 are on the same physical (but routed and VLANed) network, and each uses a different internet source, so DNS still works if one or more connections go down. At any rate, I want all 3 configured identically 100% of the time, and the same for the 2 on the redundant VPN networks; they all have routing in place to communicate with each other.

@blindrain See GitHub - stevejenkins/pihole-cloudsync: Syncs blocklists, blacklists, and whitelists across multiple Pi-holes using a private GitHub repo

One reason I can think of where MySQL would be preferable to SQLite is when you want to scale, or have high availability, by separating the application and the database: 2 Pi-hole instances pointing to the same MariaDB instance, or 2 Pi-hole instances pointing to 2 MariaDB instances with either one-way replication (1 master and 1 slave) or two-way replication (a Galera cluster with a multi-master setup?).

This is what my Nextcloud setup looks like. If Pi-hole does switch to MySQL, then a similar setup can be done, minus HAProxy and Redis.
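
For the curious, the Galera-relevant part of such a multi-master my.cnf might look roughly like this (node names are placeholders, the provider path is distro-dependent, and this is a sketch rather than a tested Pi-hole setup):

[galera]
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address = gcomm://pihole-db1,pihole-db2
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2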

The only critical component in Pi-hole is the gravity database. Duplicating/syncing it periodically seems easy.
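
A minimal sketch of such a sync, assuming SSH access between the hosts (hostnames and paths are placeholders; .backup is used to get a consistent copy, as discussed above):

sudo sqlite3 /etc/pihole/gravity.db ".backup /tmp/gravity.db"
scp /tmp/gravity.db pi@pihole-2:/tmp/gravity.db
ssh pi@pihole-2 "sudo cp /tmp/gravity.db /etc/pihole/gravity.db && pihole restartdns"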

As they said above, this is difficult to impossible because of features missing in MySQL.

Well, there's something of the sort that could be used relatively easily.
SELECT (-1 << 8) & INET_ATON("192.168.0.1") = INET_ATON("192.168.0.0");
Though the issue with that is not knowing if it's a /16 or /8 where the third octet just happens to be a 0.
For similar functionality to the subnet_match used, (-1 << 16) should set the 16 most significant bits of the mask to 1 for the check. So 192.168.1.1 would return 1, as would 192.168.0.1, but not 192.167.0.1.
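
Spelled out, that /16 check behaves like this in MySQL/MariaDB (& binds tighter than =, so no extra parentheses are needed):

SELECT (-1 << 16) & INET_ATON("192.168.1.1") = (-1 << 16) & INET_ATON("192.168.0.1"); -- 1
SELECT (-1 << 16) & INET_ATON("192.167.0.1") = (-1 << 16) & INET_ATON("192.168.0.1"); -- 0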

And if we already know for sure what the subnet mask is, we can use that easily as well:
SELECT (-1 << (33 - INSTR(BIN(INET_ATON("255.255.255.0")), "0"))) & INET_ATON("192.168.0.1") = INET_ATON("192.168.0.0");
This returns either 1 or 0, depending on if 192.168.0.1 is part of the 192.168.0.0/24 subnet (so, in this case, returns 1).
Not quite the same as the subnet_match that was mentioned above, but..

During install, we could even run some CREATE FUNCTION calls for MySQL/MariaDB to create easier-to-use functions for the code. Hell, the returned values could even be rewritten to match what SQLite returns.
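
To illustrate, such a helper might look like this (SUBNET_MATCH is a name made up here, IPv4 only, and untested against Pi-hole's actual queries):

CREATE FUNCTION SUBNET_MATCH(addr VARCHAR(15), net VARCHAR(15), prefix INT)
RETURNS TINYINT DETERMINISTIC
RETURN (INET_ATON(addr) & (-1 << (32 - prefix))) = (INET_ATON(net) & (-1 << (32 - prefix)));

Calling SELECT SUBNET_MATCH("192.168.1.1", "192.168.0.0", 16); would then return 1.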

I see, there is also an INET6_ATON, so this may even be feasible. I'm still concerned about performance, as every DNS lookup has to interact with the database (checking whether it is in gravity, the whitelist, or the blacklist).

With SQLite3 this is typically very performant (<1 ms even on a Raspberry Pi Zero). We do a lot to achieve this performance, like preparing all the statements beforehand and keeping the index in process-local (!) memory. It turns out that only a variable binding and a step are necessary to get the result. With MySQL being the database (maybe even on a different server), I do not know what the performance would look like.

Having said that, I don't think we have the manpower ourselves to do this soon because, as I said above, the current code is tightly written around SQLite3 and a lot of its features (like local in-memory prepared statements), so it would be a lot harder than simply replacing the one function which parses the SQL string. This is not to rule it out, but rather to give a realistic estimate: this seems somewhat out of sight without an external contribution.
