Docker container is not starting and is marked as unhealthy

Hi,
since today I'm no longer able to start Pi-hole in a Docker container. It crashes without any meaningful error message, like so:

s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service cron: starting
s6-rc: info: service cron successfully started
s6-rc: info: service _uid-gid-changer: starting
s6-rc: info: service _uid-gid-changer successfully started
s6-rc: info: service _startup: starting
  [i] Starting docker specific checks & setup for docker pihole/pihole
  [i] Setting capabilities on pihole-FTL where possible
  [i] Applying the following caps to pihole-FTL:
        * CAP_CHOWN
        * CAP_NET_BIND_SERVICE
        * CAP_NET_RAW
  [i] Ensuring basic configuration by re-running select functions from basic-install.sh
  [i] Installing configs from /etc/.pihole...
  [i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
  [i] Installing /etc/dnsmasq.d/01-pihole.conf...
  [✓] Installed /etc/dnsmasq.d/01-pihole.conf
  [i] Installing /etc/.pihole/advanced/06-rfc6761.conf...
  [✓] Installed /etc/dnsmasq.d/06-rfc6761.conf
  [i] Installing latest logrotate script...
  [✓] Installing latest logrotate script
  [i] Creating empty /etc/pihole/setupVars.conf file.
  [i] Assigning password defined by Environment Variable
  [✓] New password set
  [i] Added ENV to php:
                    "TZ" => "Europe/Berlin",
                    "PIHOLE_DOCKER_TAG" => "",
                    "PHP_ERROR_LOG" => "/var/log/lighttpd/error-pihole.log",
                    "CORS_HOSTS" => "",
                    "VIRTUAL_HOST" => "5cf02c43d338",
  [i] Using IPv4 and IPv6
  [i] Installing latest Cron script...
  [✓] Installing latest Cron script
  [i] setup_blocklists now setting default blocklists up: 
  [i] TIP: Use a docker volume for /etc/pihole/adlists.list if you want to customize for first boot
  [i] Blocklists (/etc/pihole/adlists.list) now set to:
https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
  [i] Setting DNS servers based on PIHOLE_DNS_ variable
  [i] Applying pihole-FTL.conf setting LOCAL_IPV4=0.0.0.0
  [i] FTL binding to default interface: eth0
  [i] Enabling Query Logging
  [i] Testing lighttpd config: Syntax OK
  [i] All config checks passed, cleared for startup ...
  [i] Docker start setup complete
  [i] pihole-FTL (no-daemon) will be started as pihole
s6-rc: info: service _startup successfully started
s6-rc: info: service pihole-FTL: starting
s6-rc: info: service pihole-FTL successfully started
s6-rc: info: service lighttpd: starting
s6-rc: info: service lighttpd successfully started
s6-rc: info: service _postFTL: starting
s6-rc: info: service _postFTL successfully started
s6-rc: info: service legacy-services: starting
  Checking if custom gravity.db is set in /etc/pihole/pihole-FTL.conf
s6-rc: info: service legacy-services successfully started
  [i] Creating new gravity database
  [i] Migrating content of /etc/pihole/adlists.list into new database
  [i] Neutrino emissions detected...

  [✓] Pulling blocklist source list into range
  [i] Preparing new gravity database...
  [✓] Preparing new gravity database
  [i] Creating new gravity databases...
  [✓] Creating new gravity databases
  [i] Libz compression not available
  [i] Target: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
  [i] Status: Pending...
  [✗] Status: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts ()
  [✗] List download failed: no cached list available
  [i] Building tree...
  [✓] Building tree
  [i] Swapping databases...
  [✓] Swapping databases
  [✓] The old database remains available
  [i] Number of gravity domains: 0 (0 unique domains)
  [i] Number of exact blacklisted domains: 0
  [i] Number of regex blacklist filters: 0
  [i] Number of exact whitelisted domains: 0
  [i] Number of regex whitelist filters: 0
  [i] Cleaning up stray matter...
  [✓] Cleaning up stray matter
  [✓] FTL is listening on port 53
     [✓] UDP (IPv4)
     [✓] TCP (IPv4)
     [✓] UDP (IPv6)
     [✓] TCP (IPv6)
  [i] Pi-hole blocking will be enabled
  [i] Enabling blocking

  [✓] Pi-hole Enabled
fatal: 
fatal: 
fatal: 
  Pi-hole version is v5.17.1 (Latest: N/A)
  AdminLTE version is v5.20.1 (Latest: N/A)
  FTL version is v5.23 (Latest: N/A)
  Container tag is: 2023.05.2

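The empty download status and the "no cached list available" message look like the container couldn't resolve or reach raw.githubusercontent.com when gravity first ran. A quick check from inside the container (a sketch, using the container name pihole_neu from the compose file below; gravity downloads its lists with curl, so curl should be present in the image):

docker exec pihole_neu getent hosts raw.githubusercontent.com
docker exec pihole_neu curl -sI https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts | head -n 1

If name resolution already fails there, the problem is the container's upstream DNS rather than Pi-hole itself.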
I also tried starting from scratch by deleting the bind mounts, but no luck.
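For reference, the reset was roughly the following (a sketch; I actually did this through Portainer, and the paths come from the volumes section of the compose file below):

docker compose down
# wipe the bind-mounted config so Pi-hole regenerates everything on first boot
sudo rm -rf /opt/docker/pihole/etc/dnsmasq.d /opt/docker/pihole/etc/pihole
docker compose up -d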

This is my compose file (I'm actually deploying it through Portainer):

services:
  pihole_neu:
    container_name: "pihole_neu"
    environment:
      - "WEBPASSWORD=RQDQ6fVB"
      - "PIHOLE_DNS_=208.67.222.222;208.67.220.220"
      - "TZ=Europe/Berlin"
      - "PATH=/opt/pihole:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
      - "IPv6=True"
      - "DNSMASQ_USER=pihole"
      - "phpver=php"
      - "PHP_ERROR_LOG=/var/log/lighttpd/error-pihole.log"
      - "S6_KEEP_ENV=1"
      - "S6_BEHAVIOUR_IF_STAGE2_FAILS=2"
      - "S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0"
      - "FTLCONF_LOCAL_IPV4=0.0.0.0"
      - "FTL_CMD=no-daemon"
    hostname: "5cf02c43d338"
    image: "pihole/pihole:latest"
    mac_address: "02:42:ac:11:00:06"
    network_mode: "bridge"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8053:80/tcp"
    restart: "always"
    volumes:
      - "/opt/docker/pihole/etc/dnsmasq.d:/etc/dnsmasq.d"
      - "/opt/docker/pihole/etc/pihole:/etc/pihole"
    cap_drop:
      - "AUDIT_CONTROL"
      - "BLOCK_SUSPEND"
      - "DAC_READ_SEARCH"
      - "IPC_LOCK"
      - "IPC_OWNER"
      - "LEASE"
      - "LINUX_IMMUTABLE"
      - "MAC_ADMIN"
      - "MAC_OVERRIDE"
      - "NET_BROADCAST"
      - "SYSLOG"
      - "SYS_ADMIN"
      - "SYS_BOOT"
      - "SYS_MODULE"
      - "SYS_NICE"
      - "SYS_PACCT"
      - "SYS_PTRACE"
      - "SYS_RAWIO"
      - "SYS_RESOURCE"
      - "SYS_TIME"
      - "SYS_TTY_CONFIG"
      - "WAKE_ALARM"
version: "3.6"
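Since the container gets marked as unhealthy, the health state and the output of the failing probe can be read with docker inspect (a minimal sketch, using the container name from the compose file above):

docker inspect --format '{{.State.Health.Status}}' pihole_neu
# full health-check record, including the output of the last probes
docker inspect --format '{{json .State.Health}}' pihole_neu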

I was able to resolve the issue by deleting all Pi-hole related Docker images and volumes and restarting completely from scratch.
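For anyone hitting the same thing, the cleanup amounted to roughly this (a sketch; the image tag and volume names depend on your setup, and I drove it through Portainer):

docker compose down
docker image rm pihole/pihole:latest
# remove any leftover Pi-hole volumes (use "docker volume ls" to find the names)
docker volume rm <volume-name>
docker compose pull
docker compose up -d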
No big deal in my case, but it would be interesting to know what caused the issue in the first place.