V6 beta seems to crash / not respond (so far randomly?)

Hi,

I'm running the beta in an LXC container (Ubuntu Server 22.04 LTS), and after some time it stops responding to DNS requests.

Are there any steps I can perform to provide you with more info than "it crashed"?

PS: To measure this, I've added Pi-hole to my monitoring (using Uptime Kuma) today.

Take a look in the following files for any entries at the time of the crash:

/var/log/syslog

/var/log/pihole/FTL.log

/var/log/pihole/pihole.log

and at the output of the command dmesg from the terminal.

pihole.log around the time:

Oct 12 09:51:13 dnsmasq[157]: query[AAAA] api.open-notify.org from 192.168.0.53
Oct 12 09:51:13 dnsmasq[157]: cached api.open-notify.org is NODATA-IPv6
Oct 12 09:51:13 dnsmasq[157]: query[A] api.open-notify.org from 192.168.0.53
Oct 12 09:51:13 dnsmasq[157]: cached api.open-notify.org is 138.68.39.196
Oct 12 09:51:14 dnsmasq[157]: query[AAAA] mqtt.zen-iot.com from 192.168.0.53
Oct 12 09:51:14 dnsmasq[157]: cached mqtt.zen-iot.com is NODATA-IPv6
Oct 12 09:51:16 dnsmasq[157]: query[SRV] _http._tcp.deb.volian.org from fe80::580c:e4ff:fec8:3b03
Oct 12 15:10:29 dnsmasq[157]: started, version pi-hole-v2.89-9461807 cachesize 10000
Oct 12 15:10:29 dnsmasq[157]: compile time options: IPv6 GNU-getopt no-DBus no-UBus no-i18n IDN DHCP DHCPv6 Lua TFTP no-conntrack ipset no-nftset auth cryptohash DNSSEC loop-detect inotify dumpfile
Oct 12 15:10:29 dnsmasq[157]: DNSSEC validation enabled
Oct 12 15:10:29 dnsmasq[157]: configured with trust anchor for <root> keytag 20326
Oct 12 15:10:29 dnsmasq[157]: using nameserver 8.8.8.8#53
Oct 12 15:10:29 dnsmasq[157]: using only locally-known addresses for onion
Oct 12 15:10:29 dnsmasq[157]: using only locally-known addresses for bind
Oct 12 15:10:29 dnsmasq[157]: using only locally-known addresses for invalid
Oct 12 15:10:29 dnsmasq[157]: using only locally-known addresses for localhost
Oct 12 15:10:29 dnsmasq[157]: using only locally-known addresses for test
Oct 12 15:10:29 dnsmasq[157]: using only locally-known addresses for lan
Oct 12 15:10:29 dnsmasq[157]: read /etc/hosts - 15 names
Oct 12 15:10:29 dnsmasq[157]: read /etc/pihole/custom.list - 57 names
Oct 12 15:10:29 dnsmasq[157]: read /etc/pihole/local.list - 0 names

FTL.log around the same time:

2023-10-12 09:15:08.096 [157/T326] WARNING: Long-term load (15min avg) larger than number of processors: 2.4 > 2
2023-10-12 09:20:08.674 [157/T326] WARNING: Long-term load (15min avg) larger than number of processors: 2.5 > 2
2023-10-12 09:20:09.680 [157/T326] ERR: add_message(type=6, message=excessive load) - SQL error step DELETE: database is locked
2023-10-12 09:20:09.681 [157/T326] ERR: Error while trying to close database: database is locked
2023-10-12 09:20:09.681 [157/T326] ERR: log_resource_shortage(): Failed to add message to database
2023-10-12 09:20:13.645 [157/T325] INFO: Size of /etc/pihole/pihole-FTL.db is 1426.60 MB, deleted 30915 rows
2023-10-12 09:25:08.916 [157/T326] WARNING: Long-term load (15min avg) larger than number of processors: 2.4 > 2
2023-10-12 09:28:07.673 [157/T325] INFO: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
2023-10-12 09:28:07.673 [157/T325] INFO: ---------------------------->  FTL crashed!  <----------------------------
2023-10-12 09:28:07.673 [157/T325] INFO: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
2023-10-12 09:28:07.673 [157/T325] INFO: Please report a bug at https://github.com/pi-hole/FTL/issues
2023-10-12 09:28:07.673 [157/T325] INFO: and include in your report already the following details:
2023-10-12 09:28:07.674 [157/T325] INFO: FTL has been running for 1087 seconds
2023-10-12 09:28:07.674 [157/T325] INFO: FTL branch: development-v6
2023-10-12 09:28:07.674 [157/T325] INFO: FTL version: vDev-80d4a0e
2023-10-12 09:28:07.674 [157/T325] INFO: FTL commit: 80d4a0ef
2023-10-12 09:28:07.674 [157/T325] INFO: FTL date: 2023-10-10 20:36:04 +0200
2023-10-12 09:28:07.674 [157/T325] INFO: FTL user: started as pihole, ended as pihole
2023-10-12 09:28:07.674 [157/T325] INFO: Compiled for linux/amd64 (compiled on CI) using cc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924
2023-10-12 09:28:07.674 [157/T325] INFO: Process details: MID: 157
2023-10-12 09:28:07.674 [157/T325] INFO:                  PID: 157
2023-10-12 09:28:07.674 [157/T325] INFO:                  TID: 325
2023-10-12 09:28:07.674 [157/T325] INFO:                  Name: database
2023-10-12 09:28:07.674 [157/T325] INFO: Received signal: Segmentation fault
2023-10-12 09:28:07.674 [157/T325] INFO:      at address: 0
2023-10-12 09:28:07.674 [157/T325] INFO:      with code:  Unknown (128)
2023-10-12 09:28:07.674 [157/T325] INFO: !!! INFO: pihole-FTL has not been compiled with glibc/backtrace support, not generating one !!!
2023-10-12 09:28:07.675 [157/T325] INFO: ------ Listing content of directory /dev/shm ------
2023-10-12 09:28:07.675 [157/T325] INFO: File Mode User:Group      Size  Filename
2023-10-12 09:28:07.675 [157/T325] INFO: rwxrwxrwx root:root       280  .
2023-10-12 09:28:07.675 [157/T325] INFO: rwxr-xr-x root:root       480  ..
2023-10-12 09:28:07.675 [157/T325] INFO: rw------- pihole:pihole   544K  FTL-fifo-log
2023-10-12 09:28:07.675 [157/T325] INFO: rw------- pihole:pihole     4K  FTL-per-client-regex
2023-10-12 09:28:07.675 [157/T325] INFO: rw------- pihole:pihole    12K  FTL-dns-cache
2023-10-12 09:28:07.675 [157/T325] INFO: rw------- pihole:pihole     8K  FTL-overTime
2023-10-12 09:28:07.675 [157/T325] INFO: rw------- pihole:pihole     2M  FTL-queries
2023-10-12 09:28:07.675 [157/T325] INFO: rw------- pihole:pihole    20K  FTL-upstreams
2023-10-12 09:28:07.676 [157/T325] INFO: rw------- pihole:pihole    86K  FTL-clients
2023-10-12 09:28:07.676 [157/T325] INFO: rw------- pihole:pihole    25K  FTL-domains
2023-10-12 09:28:07.676 [157/T325] INFO: rw------- pihole:pihole    82K  FTL-strings
2023-10-12 09:28:07.676 [157/T325] INFO: rw------- pihole:pihole    16  FTL-settings
2023-10-12 09:28:07.676 [157/T325] INFO: rw------- pihole:pihole   292  FTL-counters
2023-10-12 09:28:07.676 [157/T325] INFO: rw------- pihole:pihole    88  FTL-lock
2023-10-12 09:28:07.676 [157/T325] INFO: ---------------------------------------------------
2023-10-12 09:28:07.676 [157/T325] INFO: Please also include some lines from above the !!!!!!!!! header.
2023-10-12 09:28:07.676 [157/T325] INFO: Thank you for helping us to improve our FTL engine!
2023-10-12 09:28:07.676 [157/T325] INFO: Waiting for threads to join
2023-10-12 09:28:07.960 [157/T327] INFO: Terminating resolver thread
2023-10-12 09:28:08.072 [157/T326] INFO: Terminating GC thread
2023-10-12 09:28:09.677 [157/T325] INFO: Thread database (0) is still busy, cancelling it.
2023-10-12 09:28:09.677 [157/T325] INFO: All threads joined
2023-10-12 09:28:10.827 [157M] ERR: Error when obtaining outer SHM lock: Previous owner died
2023-10-12 09:28:10.827 [157M] ERR: Error when obtaining inner SHM lock: Previous owner died
2023-10-12 09:40:06.281 [157M] WARNING: Found database entries in the future (2023-10-12 09:45:00 (1697096700), last timestamp for importing: 2023-10-12 09:25:00 (1697095500)). Your over-time statistics may be incorrect (found in src/dnsmasq_interface.c:712)
2023-10-12 15:08:22.669 [157/T332] WARNING: API: Unauthorized
2023-10-12 15:08:22.708 [157/T331] WARNING: API: Unauthorized
2023-10-12 15:08:22.714 [157/T330] WARNING: API: Unauthorized
2023-10-12 15:08:22.718 [157/T333] WARNING: API: Unauthorized
2023-10-12 15:08:22.719 [157/T332] WARNING: API: Unauthorized
2023-10-12 15:08:22.719 [157/T331] WARNING: API: Unauthorized
2023-10-12 15:08:22.721 [157/T330] WARNING: API: Unauthorized
2023-10-12 15:08:22.723 [157/T333] WARNING: API: Unauthorized
2023-10-12 15:08:22.730 [157/T332] WARNING: API: Unauthorized
2023-10-12 15:08:22.745 [157/T331] WARNING: API: Unauthorized
2023-10-12 15:08:22.745 [157/T330] WARNING: API: Unauthorized
2023-10-12 15:08:22.746 [157/T333] WARNING: API: Unauthorized
2023-10-12 15:08:22.748 [157/T332] WARNING: API: Unauthorized
2023-10-12 15:08:22.749 [157/T330] WARNING: API: Unauthorized
2023-10-12 15:08:22.750 [157/T333] WARNING: API: Unauthorized
2023-10-12 15:10:21.512 [157M] INFO: ########## FTL started on lxc-pihole! ##########
2023-10-12 15:10:21.514 [157M] INFO: FTL branch: development-v6
2023-10-12 15:10:21.514 [157M] INFO: FTL version: vDev-80d4a0e
2023-10-12 15:10:21.514 [157M] INFO: FTL commit: 80d4a0ef
2023-10-12 15:10:21.514 [157M] INFO: FTL date: 2023-10-10 20:36:04 +0200
2023-10-12 15:10:21.514 [157M] INFO: FTL user: pihole
2023-10-12 15:10:21.514 [157M] INFO: Compiled for linux/amd64 (compiled on CI) using cc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924
2023-10-12 15:10:21.519 [157M] WARNING: copy_file(): Failed to open "/etc/pihole/config_backups/pihole.toml.1" for writing: Permission denied
2023-10-12 15:10:21.519 [157M] WARNING: Rotation /etc/pihole/pihole.toml -(COPY)> /etc/pihole/config_backups/pihole.toml.1 failed
2023-10-12 15:10:21.519 [157M] INFO: Writing config file
2023-10-12 15:10:21.521 [157M] WARNING: copy_file(): Failed to open "/etc/pihole/config_backups/dnsmasq.conf.1" for writing: Permission denied
2023-10-12 15:10:21.521 [157M] WARNING: Rotation /etc/pihole/dnsmasq.conf -(COPY)> /etc/pihole/config_backups/dnsmasq.conf.1 failed
2023-10-12 15:10:21.523 [157M] WARNING: copy_file(): Failed to open "/etc/pihole/config_backups/custom.list.1" for writing: Permission denied
2023-10-12 15:10:21.523 [157M] WARNING: Rotation /etc/pihole/custom.list -(COPY)> /etc/pihole/config_backups/custom.list.1 failed
2023-10-12 15:10:21.523 [157M] WARNING: Cannot set process priority to -10: Permission denied. Process priority remains at 0
2023-10-12 15:10:21.526 [157M] INFO: PID of FTL process: 157
2023-10-12 15:10:21.549 [157M] INFO: Database version is 13
2023-10-12 15:10:21.549 [157M] INFO: Database successfully initialized
2023-10-12 15:10:29.744 [157M] INFO: Imported 22127 queries from the on-disk database (it has 10126799 rows)
2023-10-12 15:10:29.744 [157M] INFO: Parsing queries in database

According to FTL.log it crashed somewhere around 09:20-09:28, but according to pihole.log it still answered queries after that (until 09:51).

dmesg won't work here in the LXC (it's unprivileged). It works on the host, but there is nothing there related to the instance itself (which is lxc-2000; the 2001 that appears below is an unbound install I started again at the time, but it's unrelated: Pi-hole also crashed yesterday without the unbound LXC running).

[Oct12 09:12] EXT4-fs (dm-9): mounted filesystem 03eba489-2642-46a5-8f5c-eeae47b80acf with ordered data mode. Quota mode: none.
[  +0.678478] kauditd_printk_skb: 1 callbacks suppressed
[  +0.000019] audit: type=1400 audit(1697094745.636:330): apparmor="STATUS" operation="profile_load" profile="/usr/bin/lxc-start" name="lxc-2001_</var/lib/lxc>" pid=1737548 comm="apparmor_>
[  +1.286707] vmbr0: port 3(fwpr2001p0) entered blocking state
[  +0.000012] vmbr0: port 3(fwpr2001p0) entered disabled state
[  +0.000173] device fwpr2001p0 entered promiscuous mode
[  +0.000091] vmbr0: port 3(fwpr2001p0) entered blocking state
[  +0.000011] vmbr0: port 3(fwpr2001p0) entered forwarding state
[  +0.034377] fwbr2001i0: port 1(fwln2001i0) entered blocking state
[  +0.000035] fwbr2001i0: port 1(fwln2001i0) entered disabled state
[  +0.000148] device fwln2001i0 entered promiscuous mode
[  +0.000125] fwbr2001i0: port 1(fwln2001i0) entered blocking state
[  +0.000006] fwbr2001i0: port 1(fwln2001i0) entered forwarding state
[  +0.034755] fwbr2001i0: port 2(veth2001i0) entered blocking state
[  +0.000012] fwbr2001i0: port 2(veth2001i0) entered disabled state
[  +0.000188] device veth2001i0 entered promiscuous mode
[  +0.141462] eth0: renamed from vethW9Usg0
[  +1.220633] audit: type=1400 audit(1697094748.356:331): apparmor="STATUS" operation="profile_load" label="lxc-2001_</var/lib/lxc>//&:lxc-2001_<-var-lib-lxc>:unconfined" name="lsb_release>
[  +0.002292] audit: type=1400 audit(1697094748.356:332): apparmor="STATUS" operation="profile_load" label="lxc-2001_</var/lib/lxc>//&:lxc-2001_<-var-lib-lxc>:unconfined" name="nvidia_modp>
[  +0.000016] audit: type=1400 audit(1697094748.356:333): apparmor="STATUS" operation="profile_load" label="lxc-2001_</var/lib/lxc>//&:lxc-2001_<-var-lib-lxc>:unconfined" name="nvidia_modp>
[  +0.009106] audit: type=1400 audit(1697094748.364:334): apparmor="STATUS" operation="profile_load" label="lxc-2001_</var/lib/lxc>//&:lxc-2001_<-var-lib-lxc>:unconfined" name="/usr/bin/ma>
[  +0.000039] audit: type=1400 audit(1697094748.364:335): apparmor="STATUS" operation="profile_load" label="lxc-2001_</var/lib/lxc>//&:lxc-2001_<-var-lib-lxc>:unconfined" name="man_filter">
[  +0.000016] audit: type=1400 audit(1697094748.364:336): apparmor="STATUS" operation="profile_load" label="lxc-2001_</var/lib/lxc>//&:lxc-2001_<-var-lib-lxc>:unconfined" name="man_groff" >
[  +0.014495] audit: type=1400 audit(1697094748.380:337): apparmor="STATUS" operation="profile_load" label="lxc-2001_</var/lib/lxc>//&:lxc-2001_<-var-lib-lxc>:unconfined" name="/usr/lib/Ne>
[  +0.000016] audit: type=1400 audit(1697094748.380:338): apparmor="STATUS" operation="profile_load" label="lxc-2001_</var/lib/lxc>//&:lxc-2001_<-var-lib-lxc>:unconfined" name="/usr/lib/Ne>
[  +0.000010] audit: type=1400 audit(1697094748.380:339): apparmor="STATUS" operation="profile_load" label="lxc-2001_</var/lib/lxc>//&:lxc-2001_<-var-lib-lxc>:unconfined" name="/usr/lib/co>
[  +0.127653] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[  +0.000102] fwbr2001i0: port 2(veth2001i0) entered blocking state
[  +0.000008] fwbr2001i0: port 2(veth2001i0) entered forwarding state

I'm not sure you'll be able to, but it would be tremendously helpful if you could run FTL under a debugger. We have a step-by-step guide for this here: gdb - Pi-hole documentation

You will have to install gdb in your container, but this should be pretty straightforward given the instructions over there. The most important bit will be the backtrace you can get once the crash happens. From there on, we can see together how to go forward. Maybe it immediately becomes clear when looking at the mentioned code.

I'll try :slight_smile: And it has crashed 3 times since then, so that shouldn't be a problem.

Thanks for the reply, I'll start on that right away.


And now I'll wait :slight_smile:


Nothing so far, apart from this:

[screenshot of gdb output]

This is normal, I think?

Yes, perfectly fine. gdb will drop to its command-line interface awaiting input on a crash.


Great - it hasn't crashed since I've been running it with gdb.

It will eventually

Ah, finally:

[Detaching after vfork from child process 7401]
[Detaching after vfork from child process 7403]
[Detaching after vfork from child process 7408]
[Detaching after vfork from child process 7410]
[Detaching after vfork from child process 7415]
[Detaching after vfork from child process 7417]
[Detaching after vfork from child process 7423]
[Detaching after vfork from child process 7425]
[Detaching after vfork from child process 7433]

Thread 2 "database" received signal SIGSEGV, Segmentation fault.
[Switching to LWP 381]
get_nominal_size (end=0x7fd53b0d4a0c "", p=0x7fd53b0d49d0 "\001\001\001") at src/malloc/mallocng/meta.h:169
169 src/malloc/mallocng/meta.h: No such file or directory.
(gdb) backtrace
#0 get_nominal_size (end=0x7fd53b0d4a0c "", p=0x7fd53b0d49d0 "\001\001\001") at src/malloc/mallocng/meta.h:169
#1 __libc_free (p=p@entry=0x7fd53b0d49d0) at src/malloc/mallocng/free.c:110
#2 0x000000000077bf25 in free (p=p@entry=0x7fd53b0d49d0) at src/malloc/free.c:5
#3 0x0000000000687eca in FTLfree (ptr=ptr@entry=0x7fd53b0d49d0, file=file@entry=0x9083f8 "/app/src/database/network-table.c", func=func@entry=0x90b160 <__FUNCTION__.14> "parse_neighbor_cache",
line=line@entry=1543) at /app/src/syscalls/free.c:28
#4 0x000000000048816b in parse_neighbor_cache (db=<optimized out>) at /app/src/database/network-table.c:1543
#5 0x000000000047cf4d in DB_thread (val=<optimized out>) at /app/src/database/database-thread.c:159
#6 0x0000000000791c97 in start (p=0x7fd53e28eb00) at src/thread/pthread_create.c:207
#7 0x00000000007930ee in __clone () at src/thread/x86_64/clone.s:22
Backtrace stopped: frame did not save the PC
(gdb)

Okay, thanks. That's a region we have not worked on in quite some time, and I am asking myself whether the bug merely gets triggered here while the root cause is somewhere else. This is rather often (virtually always) the case when a free() triggers a crash: the issue may have been around for longer, but it only gets noticed when the memory is released and the allocator recognizes that something isn't quite right.
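
To make that concrete, here's a toy example (illustration only, not FTL code) of how a heap overflow can stay silent until a much later free():

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *a = malloc(16);
    char *b = malloc(16);
    if (!a || !b)
        return 1;

    /* The real bug: a 32-byte write into a 16-byte buffer. Nothing
     * crashes at this point; the overflow silently corrupts the
     * allocator's bookkeeping for the neighboring allocation. */
    memset(a, 0x41, 32);

    /* The crash only happens here, when free() validates the (now
     * corrupted) heap metadata: the fault shows up in free(), far
     * away from the code that actually caused it. */
    free(b);
    free(a);
    return 0;
}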

If you don't mind, it'd be tremendously helpful if you could repeat this so we can check if the backtrace always shows the same location.

The next logical step would be running FTL under the supervision of a memory watchdog, we have a guide prepared for this, too: valgrind - Pi-hole documentation

Sure, will do

Crashed again this morning:

[Detaching after vfork from child process 7115]
[Detaching after vfork from child process 7121]
[Detaching after vfork from child process 7123]
[Detaching after vfork from child process 7128]

Thread 2 "database" received signal SIGSEGV, Segmentation fault.
[Switching to LWP 325]
get_nominal_size (end=0x7fef2dd93e6c "", p=0x7fef2dd93e30 "\001\001\001\001\001") at src/malloc/mallocng/meta.h:169
169     src/malloc/mallocng/meta.h: No such file or directory.
(gdb) backtrace
#0  get_nominal_size (end=0x7fef2dd93e6c "", p=0x7fef2dd93e30 "\001\001\001\001\001") at src/malloc/mallocng/meta.h:169
#1  __libc_free (p=p@entry=0x7fef2dd93e30) at src/malloc/mallocng/free.c:110
#2  0x000000000077bf25 in free (p=p@entry=0x7fef2dd93e30) at src/malloc/free.c:5
#3  0x0000000000687eca in FTLfree (ptr=ptr@entry=0x7fef2dd93e30, file=file@entry=0x9083f8 "/app/src/database/network-table.c", func=func@entry=0x90b160 <__FUNCTION__.14> "parse_neighbor_cache",
    line=line@entry=1543) at /app/src/syscalls/free.c:28
#4  0x000000000048816b in parse_neighbor_cache (db=<optimized out>) at /app/src/database/network-table.c:1543
#5  0x000000000047cf4d in DB_thread (val=<optimized out>) at /app/src/database/database-thread.c:159
#6  0x0000000000791c97 in start (p=0x7fef2de86b00) at src/thread/pthread_create.c:207
#7  0x00000000007930ee in __clone () at src/thread/x86_64/clone.s:22
Backtrace stopped: frame did not save the PC
(gdb)

As this looks identical, I'll start prepping the VM for valgrind.

And running.


valgrind.zip (835.9 KB)

Valgrind generated a 15 MB log. It's in the zip archive here (and yeah, it did crash).

Okay, I have not seen this before: valgrind did not notice when the issue happened, it just saw the crash (as did gdb before):

==7006== ERROR SUMMARY: 25797 errors from 3 contexts (suppressed: 0 from 0)
vex amd64->IR: unhandled instruction bytes: 0xF4 0x80 0x3A 0x0 0x74 0x1 0xF4 0xBE 0x1 0x0
vex amd64->IR:   REX=0 REX.W=0 REX.R=0 REX.X=0 REX.B=0
vex amd64->IR:   VEX=0 VEX.L=0 VEX.nVVVV=0x0 ESC=NONE
vex amd64->IR:   PFX.66=0 PFX.F2=0 PFX.F3=0
==573== valgrind: Unrecognised instruction at address 0x77c688.
==573==    at 0x77C688: a_crash (atomic_arch.h:108)
==573==    by 0x77C688: get_nominal_size (meta.h:169)
==573==    by 0x77C688: __libc_free (free.c:110)
==573==    by 0x48816A: parse_neighbor_cache (network-table.c:1543)
==573==    by 0x47CF4C: DB_thread (database-thread.c:159)
==573==    by 0x791C96: start (pthread_create.c:207)
==573==    by 0x7930ED: ??? (clone.s:22)
==573== Your program just tried to execute an instruction that Valgrind
==573== did not recognise.  There are two possible reasons for this.
==573== 1. Your program has a bug and erroneously jumped to a non-code
==573==    location.  If you are running Memcheck and you just saw a
==573==    warning about a bad jump, it's probably your program's fault.
==573== 2. The instruction is legitimate but Valgrind doesn't handle it,
==573==    i.e. it's Valgrind's fault.  If you think this is the case or
==573==    you are not sure, please let us know and we'll try to fix it.
==573== Either way, Valgrind will now raise a SIGILL signal which will
==573== probably kill your program.

The entire rest of the file is noise. (For what it's worth, the unhandled instruction 0xF4 is hlt, which musl's a_crash() executes deliberately once it detects inconsistent heap state, so valgrind is tripping over the same corruption check as the earlier gdb runs, not over a new problem.)

Please try using FTL from branch fix/network_clients_heap to see if this still crashes. It should become available within the next 30 minutes (if nothing fails) and can be sourced via

pihole checkout ftl fix/network_clients_heap

Since we don't really know what is happening, I looked at the code, which was last changed around two years ago. I can theoretically see a possibility for a heap overflow when new clients arrive while the ARP table is being processed. This would likely imply that your network knows about very many clients and that new ones are frequently added. My proposed fix ensures that any new clients we have not yet allocated memory for are ignored; they will be handled on the next ARP table processing.
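
If it helps to picture the fix, it is roughly of this shape (a heavily simplified sketch; all names below are made up for illustration and are not the actual FTL source):

#include <stddef.h>

struct client { int id; /* ... */ };

/* 'clients' was sized for the number of clients known when ARP table
 * processing started. If another thread registers new clients in the
 * meantime, indices beyond that snapshot would read or write past the
 * end of the allocation. */
void process_arp_table(struct client *clients, size_t allocated,
                       size_t current_count)
{
    for (size_t i = 0; i < current_count; i++) {
        /* Safety measure: skip clients we have not allocated memory
         * for yet; they are handled on the next ARP table run. */
        if (i >= allocated)
            break;
        /* ... compare clients[i] against the neighbor cache ... */
    }
}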

edit: You can monitor the build process here: Install safety-measured to prevent possible heap overflow in the netw… · pi-hole/FTL@c818adf · GitHub

OK, should I run it normally then, or with gdb attached to it?

Many clients... I wouldn't go that far; more like 40-ish? Including VMs and smart home devices.

image