4 KiB × 2^32 = 16 TiB
First I thought the same, but assuming ~100 bytes per query (not sure how much it actually is with metadata?), 1 query per second already accumulates to roughly 3 GiB in a year (100 B × 86,400 queries/day × 365 days ≈ 2.9 GiB), and on the CPU/throughput side this is well below what an RPi (even the Zero models) is capable of, isn't it? If such a database size slows down queries noticeably, and hence raises the CPU effort of each request just for logging, it might even lead to an overload/crash (?), so limiting its size seems like a reasonable step for SBCs. For long-term logs a backup/rotation script can be used (a sketch of one follows below), so for most use cases RPis will still do perfectly fine.
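Just to make the rotation idea concrete, here is a minimal sketch of such a script. The database path, the `queries` table and its Unix-epoch `timestamp` column are all assumptions for illustration, not a confirmed schema: it copies rows older than a cutoff into a backup file, deletes them from the live database, and reclaims the space with `VACUUM`.

```python
#!/usr/bin/env python3
"""Rotate old entries out of an SQLite query log.

Sketch only: the path, the table name ("queries") and the
"timestamp" column are hypothetical placeholders.
"""
import sqlite3
import time

DB_PATH = "queries.db"  # placeholder path, adjust to your setup
KEEP_DAYS = 90          # retention window for the live database

cutoff = int(time.time()) - KEEP_DAYS * 86400

con = sqlite3.connect(DB_PATH)
try:
    # Copy rows older than the cutoff into a backup database
    # before removing them from the live one.
    con.execute("ATTACH DATABASE ? AS backup", (DB_PATH + ".old",))
    con.execute(
        "CREATE TABLE IF NOT EXISTS backup.queries AS "
        "SELECT * FROM main.queries WHERE 0"  # empty copy of the schema
    )
    con.execute(
        "INSERT INTO backup.queries "
        "SELECT * FROM main.queries WHERE timestamp < ?",
        (cutoff,),
    )
    con.execute("DELETE FROM main.queries WHERE timestamp < ?", (cutoff,))
    con.commit()
    con.execute("DETACH DATABASE backup")
    con.execute("VACUUM")  # give the freed pages back to the filesystem
finally:
    con.close()
```

Run from cron (daily/weekly), this keeps the live file bounded by the retention window while the `.old` file holds the long-term history.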
Memory usage is another thing (of course tied to CPU usage/speed). Adding a log entry/new row shouldn't lead to high memory usage just because the whole database is large, should it? But when using the web UI to search and filter the logs, I guess it can. That would be another reason for a limit: it rules out that a single search through the logs can even consume all physical memory.
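To illustrate the memory side (again just a sketch under the same assumed `queries`/`timestamp`/`domain` schema): if a search streams matching rows in fixed-size batches instead of fetching the whole result set at once, the working set stays bounded by the batch size rather than by the database size.

```python
import sqlite3

def search_log(db_path: str, pattern: str, batch_size: int = 1000):
    """Yield matching rows in small batches instead of loading
    the whole result set into memory at once.

    Assumes a `queries` table with `timestamp` and `domain`
    columns (hypothetical schema).
    """
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(
            "SELECT timestamp, domain FROM queries "
            "WHERE domain LIKE ?",
            ("%" + pattern + "%",),
        )
        while True:
            rows = cur.fetchmany(batch_size)  # bounded working set
            if not rows:
                break
            yield from rows
    finally:
        con.close()

# Usage: iterate lazily; memory stays O(batch_size), not O(matches).
# for ts, domain in search_log("queries.db", "example.com"):
#     print(ts, domain)
```

Whether a given web UI actually does this is a separate question, but a hard size limit on the database caps the worst case either way.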