Issue when installing on Ubuntu Server 20.04 in an EC2 instance

Having the same issue when installing on Ubuntu Server 20.04 in an EC2 instance, with this error:

[✗] DNS resolution is currently unavailable
[✗] DNS resolution is not available

I am able to reproduce the issue by rerunning the script on a fresh instance.

Debug output (unable to upload):

[?] Would you like to upload the log? [y/N] y
    * Using curl for transmission.
    * curl failed, contact Pi-hole support for assistance.
    * Error message: curl: (6) Could not resolve host: tricorder.pi-hole.net

[✗]  There was an error uploading your debug log.

Output of which timeout:

/usr/bin/timeout

Output of pihole -g -f:

  [✓] Deleting existing list cache
  [✗] DNS resolution is currently unavailable
  [✗] DNS resolution is not available

Happy to pull & post any part of the debug log manually. My Gravity database log looks the same as in the post above, and the database is empty.
I also noticed that it initially encountered an error when detecting the host OS:

*** [ DIAGNOSING ]: Operating system
[i] dig return code:  10
[i] dig response:  dig: couldn't get address for 'ns1.pi-hole.net': failure
Distro:  Ubuntu
Error: Ubuntu is not a supported distro 

I had no errors when running the same script a few months ago, and I haven't changed anything on my end that I'm aware of, if that helps at all!

Please generate a debug log, upload it when prompted and post the token URL here.
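
That is, run the debug tool and answer y at the upload prompt:

pihole -d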

Is it preferred that I make manual changes to allow the debug log to upload (potentially altering info in the logs), or that I pull out anything requested and post it here?

Make whatever nameserver changes you need so the log will upload. This won't alter the log contents.
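
For example, a temporary change like the following should work (1.1.1.1 is just an example upstream; this assumes the systemd-resolved stub symlink is what's blocking resolution):

sudo rm /etc/resolv.conf
echo "nameserver 1.1.1.1" | sudo tee /etc/resolv.conf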

Debug: https://tricorder.pi-hole.net/6A2HbkNo/

To finally get it to upload, I commented out DNSStubListener=no in /etc/systemd/resolved.conf and then ran systemctl restart systemd-resolved.
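
For reference, the change was roughly this (the sed pattern approximates the edit I made by hand):

sudo sed -i 's/^DNSStubListener=no/#DNSStubListener=no/' /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved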

If that was the wrong way to go about it, I can easily reproduce the issue on a new instance and make a different change.

EDIT
I reread your response after posting and immediately facepalmed. Here's the debug log from a fresh install where I actually just changed the nameserver in resolved.conf:

https://tricorder.pi-hole.net/RCOC85sP/
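
For completeness, the nameserver change that time was along these lines, setting an explicit upstream in /etc/systemd/resolved.conf (1.1.1.1 as an example) and restarting the service:

# /etc/systemd/resolved.conf
[Resolve]
DNS=1.1.1.1

sudo systemctl restart systemd-resolved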

Your gravity database is missing or damaged.

[2022-08-03 00:03:02.485 15831/T15835] gravityDB_open(): /etc/pihole/gravity.db does not exist
[2022-08-03 00:03:02.485 15831/T15835] gravityDB_count(0): Gravity database not available
[2022-08-03 00:03:02.485 15831/T15835] gravityDB_open(): /etc/pihole/gravity.db does not exist
[2022-08-03 00:03:02.485 15831/T15835] gravityDB_count(3): Gravity database not available
[2022-08-03 00:03:02.485 15831/T15835] WARN: Database query failed, assuming there are no blacklist regex entries
[2022-08-03 00:03:02.485 15831/T15835] gravityDB_open(): /etc/pihole/gravity.db does not exist
[2022-08-03 00:03:02.485 15831/T15835] gravityDB_count(4): Gravity database not available
[2022-08-03 00:03:02.485 15831/T15835] WARN: Database query failed, assuming there are no whitelist regex entries

That should be created automatically during install though, correct?
Looking through the issues on GitHub and here, I see lots of similar reports with varying potential causes. While I can manually make changes post-install, I'm trying to keep the initial install from breaking and needing manual intervention.
Any suggestions?

Which script are you referring to?

Sorry, I wasn't being clear in the initial post!
The script I was talking about there is one I made as part of an unattended install in my own project repo.

However, just running the basic

curl -sSL https://install.pi-hole.net | bash

on a brand new instance with nothing else installed encounters the same issue.
If wanted/needed, I pulled a debug log from an instance where that single command was the only thing run for install/modification, nothing else, and I can upload it via SSL.

Can you give me the AMI of the image you are using so I can try to replicate?

The AMI id I have been testing on is: ami-040a251ee9d7d1a9b

Additionally, I tried the same installation method on an older AMI, the image I used when initially creating my project four-ish months back. Older AMI id: ami-01f87c43e618bf8f0l

Thanks, and one more question:

Are you using the ARM instances? Processor type is in the debug log.


Can you create a new instance and run:

curl -sSL https://install.pi-hole.net | sudo bash -x
to get some verbose logging?

And if the install completes, can you run:

sudo ss -tlpn | grep 53
tail -n 20 /var/log/pihole/FTL.log
sudo systemctl status --full --no-pager systemd-resolved
sudo systemctl status --full --no-pager pihole-FTL

Here's the end of the verbose logging from the script; it exited after DNS resolution wasn't available:


  [i] Enabling pihole-FTL service to start on reboot...+ is_command systemctl
+ local check_command=systemctl
+ command -v systemctl
+ systemctl enable pihole-FTL
+ printf '%b  %b %s...\n' '\r' '[✓]' 'Enabling pihole-FTL service to start on reboot'
  [✓] Enabling pihole-FTL service to start on reboot...
+ stop_service pihole-FTL
+ '[' '!' -d /var/log/pihole/ ']'
+ mkdir -m 0755 /var/log/pihole/
+ '[' -f /var/log/pihole-FTL.log ']'
+ '[' -f /var/log/pihole.log ']'
+ restart_service pihole-FTL
+ local 'str=Restarting pihole-FTL service'
+ printf '  %b %s...' '[i]' 'Restarting pihole-FTL service'
  [i] Restarting pihole-FTL service...+ is_command systemctl
+ local check_command=systemctl
+ command -v systemctl
+ systemctl restart pihole-FTL
+ printf '%b  %b %s...\n' '\r' '[✓]' 'Restarting pihole-FTL service'
  [✓] Restarting pihole-FTL service...
+ runGravity
+ /opt/pihole/gravity.sh --force
  [i] Creating new gravity database
  [i] Migrating content of /etc/pihole/adlists.list into new database
  [✓] Deleting existing list cache
  [✗] DNS resolution is currently unavailable
  [✗] DNS resolution is not available

After each command:

sudo ss -tlpn | grep 53

LISTEN    0         32                 0.0.0.0:53               0.0.0.0:*        users:(("pihole-FTL",pid=12464,fd=5))                                          
LISTEN    0         32                    [::]:53                  [::]:*        users:(("pihole-FTL",pid=12464,fd=7))

tail -n 20 /var/log/pihole/FTL.log

[2022-08-04 03:42:44.114 12461M] Successfully accessed setupVars.conf
[2022-08-04 03:42:44.114 12461M] listening on 0.0.0.0 port 53
[2022-08-04 03:42:44.114 12461M] listening on :: port 53
[2022-08-04 03:42:44.127 12464M] PID of FTL process: 12464
[2022-08-04 03:42:44.127 12464M] INFO: FTL is running as user pihole (UID 997)
[2022-08-04 03:42:44.127 12464M] Reloading DNS cache
[2022-08-04 03:42:44.128 12464/T12468] gravityDB_open(): /etc/pihole/gravity.db does not exist
[2022-08-04 03:42:44.128 12464/T12468] gravityDB_open(): /etc/pihole/gravity.db does not exist
[2022-08-04 03:42:44.128 12464/T12468] gravityDB_count(0): Gravity database not available
[2022-08-04 03:42:44.128 12464/T12468] gravityDB_open(): /etc/pihole/gravity.db does not exist
[2022-08-04 03:42:44.128 12464/T12468] gravityDB_count(3): Gravity database not available
[2022-08-04 03:42:44.128 12464/T12468] WARN: Database query failed, assuming there are no blacklist regex entries
[2022-08-04 03:42:44.128 12464/T12468] gravityDB_open(): /etc/pihole/gravity.db does not exist
[2022-08-04 03:42:44.128 12464/T12468] gravityDB_count(4): Gravity database not available
[2022-08-04 03:42:44.128 12464/T12468] WARN: Database query failed, assuming there are no whitelist regex entries
[2022-08-04 03:42:44.128 12464/T12468] Compiled 0 whitelist and 0 blacklist regex filters for 0 clients in 0.1 msec
[2022-08-04 03:42:44.128 12464/T12468] Blocking status is enabled
[2022-08-04 03:42:44.128 12464/T12467] Listening on Unix socket
[2022-08-04 03:42:44.128 12464/T12466] Listening on port 4711 for incoming IPv6 telnet connections
[2022-08-04 03:42:44.128 12464/T12465] Listening on port 4711 for incoming IPv4 telnet connections

sudo systemctl status --full --no-pager systemd-resolved

● systemd-resolved.service - Network Name Resolution
     Loaded: loaded (/lib/systemd/system/systemd-resolved.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2022-08-04 03:42:41 UTC; 20min ago
       Docs: man:systemd-resolved.service(8)
             https://www.freedesktop.org/wiki/Software/systemd/resolved
             https://www.freedesktop.org/wiki/Software/systemd/writing-network-configuration-managers
             https://www.freedesktop.org/wiki/Software/systemd/writing-resolver-clients
   Main PID: 12225 (systemd-resolve)
     Status: "Processing requests..."
      Tasks: 1 (limit: 1145)
     Memory: 4.2M
     CGroup: /system.slice/systemd-resolved.service
             └─12225 /lib/systemd/systemd-resolved

Aug 04 03:42:41 ip-172-31-18-49 systemd[1]: Starting Network Name Resolution...
Aug 04 03:42:41 ip-172-31-18-49 systemd-resolved[12225]: Positive Trust Anchors:
Aug 04 03:42:41 ip-172-31-18-49 systemd-resolved[12225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 04 03:42:41 ip-172-31-18-49 systemd-resolved[12225]: Negative trust anchors: 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 04 03:42:41 ip-172-31-18-49 systemd-resolved[12225]: Using system hostname 'ip-172-31-18-49'.
Aug 04 03:42:41 ip-172-31-18-49 systemd-resolved[12225]: DNSStubListener= is disabled, but /etc/resolv.conf is a symlink to /run/systemd/resolve/stub-resolv.conf which expects DNSStubListener= to be enabled.
Aug 04 03:42:41 ip-172-31-18-49 systemd[1]: Started Network Name Resolution.

sudo systemctl status --full --no-pager pihole-FTL

● pihole-FTL.service - LSB: pihole-FTL daemon
     Loaded: loaded (/etc/init.d/pihole-FTL; generated)
     Active: active (exited) since Thu 2022-08-04 03:42:44 UTC; 21min ago
       Docs: man:systemd-sysv-generator(8)
    Process: 12426 ExecStart=/etc/init.d/pihole-FTL start (code=exited, status=0/SUCCESS)

Aug 04 03:42:43 ip-172-31-18-49 systemd[1]: Starting LSB: pihole-FTL daemon...
Aug 04 03:42:43 ip-172-31-18-49 pihole-FTL[12426]: Not running
Aug 04 03:42:43 ip-172-31-18-49 su[12452]: (to pihole) root on none
Aug 04 03:42:43 ip-172-31-18-49 su[12452]: pam_unix(su:session): session opened for user pihole by (uid=0)
Aug 04 03:42:44 ip-172-31-18-49 su[12452]: pam_unix(su:session): session closed for user pihole
Aug 04 03:42:44 ip-172-31-18-49 systemd[1]: Started LSB: pihole-FTL daemon.

Apologies if this is the wrong info/too much.

Not sure if it's related, but can you post the output of the command below?

findmnt

Sure can!
findmnt

TARGET                                SOURCE      FSTYPE     OPTIONS
/                                     /dev/xvda1  ext4       rw,relatime,discard
├─/dev                                devtmpfs    devtmpfs   rw,relatime,size=488748k,nr_inodes=122187,mode=755,
│ ├─/dev/shm                          tmpfs       tmpfs      rw,nosuid,nodev,inode64
│ ├─/dev/pts                          devpts      devpts     rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=0
│ ├─/dev/hugepages                    hugetlbfs   hugetlbfs  rw,relatime,pagesize=2M
│ └─/dev/mqueue                       mqueue      mqueue     rw,nosuid,nodev,noexec,relatime
├─/sys                                sysfs       sysfs      rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security              securityfs  securityfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup                    tmpfs       tmpfs      ro,nosuid,nodev,noexec,mode=755,inode64
│ │ ├─/sys/fs/cgroup/unified          cgroup2     cgroup2    rw,nosuid,nodev,noexec,relatime,nsdelegate
│ │ ├─/sys/fs/cgroup/systemd          cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,xattr,name=systemd
│ │ ├─/sys/fs/cgroup/cpuset           cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,cpuset
│ │ ├─/sys/fs/cgroup/net_cls,net_prio cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
│ │ ├─/sys/fs/cgroup/hugetlb          cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,hugetlb
│ │ ├─/sys/fs/cgroup/freezer          cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,freezer
│ │ ├─/sys/fs/cgroup/memory           cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,memory
│ │ ├─/sys/fs/cgroup/cpu,cpuacct      cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
│ │ ├─/sys/fs/cgroup/misc             cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,misc
│ │ ├─/sys/fs/cgroup/pids             cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,pids
│ │ ├─/sys/fs/cgroup/rdma             cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,rdma
│ │ ├─/sys/fs/cgroup/blkio            cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,blkio
│ │ ├─/sys/fs/cgroup/devices          cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,devices
│ │ └─/sys/fs/cgroup/perf_event       cgroup      cgroup     rw,nosuid,nodev,noexec,relatime,perf_event
│ ├─/sys/fs/pstore                    pstore      pstore     rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf                       none        bpf        rw,nosuid,nodev,noexec,relatime,mode=700
│ ├─/sys/kernel/debug                 debugfs     debugfs    rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/tracing               tracefs     tracefs    rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/fuse/connections          fusectl     fusectl    rw,nosuid,nodev,noexec,relatime
│ └─/sys/kernel/config                configfs    configfs   rw,nosuid,nodev,noexec,relatime
├─/proc                               proc        proc       rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc          systemd-1   autofs     rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxpr
├─/run                                tmpfs       tmpfs      rw,nosuid,nodev,size=99116k,mode=755,inode64
│ ├─/run/lock                         tmpfs       tmpfs      rw,nosuid,nodev,noexec,relatime,size=5120k,inode64
│ ├─/run/user/1000                    tmpfs       tmpfs      rw,nosuid,nodev,relatime,size=99112k,mode=700,uid=1
│ ├─/run/user/997                     tmpfs       tmpfs      rw,nosuid,nodev,relatime,size=99112k,mode=700,uid=9
│ └─/run/snapd/ns                     tmpfs[/snapd/ns]
│                                                 tmpfs      rw,nosuid,nodev,size=99116k,mode=755,inode64
│   └─/run/snapd/ns/lxd.mnt           nsfs[mnt:[4026532289]]
│                                                 nsfs       rw
├─/snap/amazon-ssm-agent/5656         /dev/loop0  squashfs   ro,nodev,relatime
├─/snap/core18/2409                   /dev/loop1  squashfs   ro,nodev,relatime
├─/snap/core20/1518                   /dev/loop2  squashfs   ro,nodev,relatime
├─/snap/lxd/22753                     /dev/loop3  squashfs   ro,nodev,relatime
├─/snap/snapd/16292                   /dev/loop4  squashfs   ro,nodev,relatime
└─/boot/efi                           /dev/xvda15 vfat       rw,relatime,fmask=0077,dmask=0077,codepage=437,ioch

Thanks!
I can't see anything out of the ordinary.
I was looking for tmpfs or zram mounts on the /tmp, /var/log and /etc/pihole folders.
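
For reference, findmnt can also report the filesystem backing a specific path directly, e.g.:

findmnt --target /etc/pihole
findmnt --target /var/log
findmnt --target /tmp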


We discovered a few bugs thanks to your output. We'll work on them.


Happy to have helped! Let me know if you would like any assistance in testing!
