Customizing Docker Images

I would also need a way to make a checkout permanent.
With the Dockerfile this is probably not possible, and with ./docker-entrypoint.sh the container simply takes too long to start.
Is it possible to put the command into a script file that the container executes at the appropriate time?

# docker exec -it dnsrec_pihole_1 bash
root@32302c83e05e:/var/log# pihole checkout ftl new/edns0
  Please note that changing branches severely alters your Pi-hole subsystems
  Features that work on the master branch, may not on a development branch
  This feature is NOT supported unless a Pi-hole developer explicitly asks!
  Have you read and understood this? [y/N] y

  [✓] Branch new/edns0 exists
  [✓] Downloading and Installing FTL
  [✓] Restarting pihole-FTL service...
  [✓] Enabling pihole-FTL service to start on reboot...

Then you'd need to use something other than Docker for your deployment. Docker just doesn't work that way.

Why shouldn't it work with Docker? I can execute the command in the container manually, so why not automate it?
It is precisely because this is not possible today that the feature request was opened.

Questions:

  • Is it possible to detect whether e.g. "new/edns0" is already installed on the system?
  • Is there a way to execute a (bash) script after Pi-hole has completely started?

Then a script could look like this

if ("new/edns0" not installed) {
   pihole checkout ftl new/edns0
}
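
For the detection part, one possibility (a sketch only, not an official mechanism; it assumes pihole checkout ftl records the selected branch in /etc/pihole/ftlbranch, the file mentioned further down in this thread) would be to compare that file against the wanted branch name:

#!/bin/bash
# Sketch: assumes the checked-out FTL branch is recorded in /etc/pihole/ftlbranch
WANTED_BRANCH="new/edns0"
CURRENT_BRANCH="$(cat /etc/pihole/ftlbranch 2>/dev/null)"

if [ "$CURRENT_BRANCH" != "$WANTED_BRANCH" ]; then
  echo y | pihole checkout ftl "$WANTED_BRANCH"
fi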

The purpose of the pihole checkout function is primarily development/testing. No branch switched to via that command really needs to be in operation for any long amount of time. How often are you restarting your containers that it would need to survive a restart? E.g. I have my Pi-hole instance running on docker, and I only ever restart it on rare occasions. It runs for weeks, even months at a time between updates... by which time the testing branch would have outlived its life cycle and I'd be back to the mainstream branch.

If it's about being on the bleeding edge then I guess it's a possible point of discussion that we could do a nightly build based on the various development branches of the three components (I don't think we have a dev-based image, @diginc could correct me here)

Don't make it so complicated.
How can I have pihole execute the following script automatically after it has completely started?

The script is simply mapped into the container, so I keep the change even when the docker image is renewed on Docker Hub and the container is recreated.

#!/bin/bash

# Marker file used to remember that the checkout has already been done
TESTFILE=/tmp/checkout-ftl-edns0

if [ ! -f "$TESTFILE" ]; then
  echo "new/edns0 not installed"
  # answer the confirmation prompt automatically
  echo y | pihole checkout ftl new/edns0
  touch "$TESTFILE"
else
  echo "new/edns0 is already installed"
fi
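
For reference, one way to trigger it from the host after the container is up (a sketch only; the container name, script path and fixed sleep are examples, not an officially supported hook):

# the script is assumed to be mapped into the container,
# e.g. -v ./checkout-ftl-edns0.sh:/usr/local/bin/checkout-ftl-edns0.sh:ro
# wait for Pi-hole to finish starting, then run the script once
sleep 60
docker exec dnsrec_pihole_1 bash /usr/local/bin/checkout-ftl-edns0.sh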

Create your own Pi-hole image based on our image. Then you have full control over every aspect you need to customize.
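
A minimal sketch of such a derived image (the script name and tag are examples, not part of the official image):

# Dockerfile
FROM pihole/pihole:latest

# bake the custom checkout script into the image so it survives image updates
COPY checkout-ftl-edns0.sh /usr/local/bin/checkout-ftl-edns0.sh
RUN chmod +x /usr/local/bin/checkout-ftl-edns0.sh

Build it with docker build -t my-pihole . and use my-pihole wherever you currently reference pihole/pihole.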


But even then you have exactly the same problem, so the question is not answered.

It's your image, load in whatever scripts or changes to the entrypoints you want.

Answer: No, Pi-hole won't be doing it.

We have already determined that; that's why we have this topic.

I'm still not sure I understand why you would want to persist it across a rebuild. If a new version of the image is released, then the testing branch will likely be moot by that point in any case...

If so, it would be OK?
I rebuild and update (pull from Docker Hub) my many docker containers once a week, but I don't want to break anything in my pihole container.

Point I'm trying to make here is that if it's stability you're after, then you need to be running one of the already tagged containers. The testing/development branches come with no guarantee of stability, and are not meant (as mentioned above) for long term use.

If you're pulling the image once a week, what are you getting from that? The image isn't updated all that often, and when it is, it will be because we have rolled the testing/development branches into the release versions, so there would be no need to then check out the testing/development branch again.

The pihole image is only renewed on my systems (currently three) when there is a new image on Docker Hub. If the next update of the image on Docker Hub includes the "new/edns0" option, everything is fine.

Obviously it depends on the feature being approved and merged first, and there are any number of reasons why something wouldn't make a new release, so don't rely on it... but it should be!

This branch was merged into development.
There have already been further merges into that branch: bug fixes, more features, etc.

The new/edns0 branch is already dead now; you should really use development.

@PromoFaux An alternative may be to have a permanent development docker container. Could it be re-built on each push to this branch, or would this be too much work? I don't know to what degree the docker image creation is automated.

Already answered your question here above :wink:

I'll take a look into this at some point, we did a nightly for the 5.0 beta containers, so it shouldn't be too much work to base one on development (if only because it would be useful to me personally).

HOWEVER, this comes with a big caveat: sometimes the container could be completely broken, which is fine from a project point of view as we have no expectations for the dev branches to be stable. It wouldn't be for the faint of heart! Likely the next version that will have a beta run will be 6.0, and that will almost certainly have nightly container builds.

It may also be the other way around. Imagine an obscure bug which only happens on very few systems (this is typically the case for FTL when you follow the issues over there). It is rock-solid for 99.99% of the users, but a few users observe crashes. When @DL6ER fixes them amazingly fast, "normal people" can benefit via pihole checkout; it's a bit harder for "docker people". Also because the majority of "docker people" actually know only barely enough to keep the containers going, let alone customize one :upside_down_face:

I'll be blunt, cause that's pretty much my job here. If you barely know enough to keep things going and you're just doing a copy/paste without understanding what and why for docker, then the last thing you need to be doing is running unstable code.

Edit: And to be honest, most of the docker users do know a lot about it. The main pihole/pihole image has over 10 million pulls and 1k stars. The pihole namespace is newish too, so that number doesn't account for all the use of the older diginc namespace images. So making changes that would only add support load for the team to handle, for a very, very small number of users, well, it's not a priority.


You got me wrong here, I think. I have a few friends, they are all excited about proxmox, docker, and various other things, but they only read just enough to be able to get things set up.

Then they come and ask me (and probably others as well who do IT as their main job) how to customize this and that. When you explain to them: "You first have to configure your firewall to forward port X", they say: "I have a firewall? Cool beans. What command do I have to run exactly? I am already logged in as root."

Sorry for generalizing my experience. It was not my intention to make Pi-hole users look bad or stupid. It may only be my personal experience that the users I know don't really know what they are doing.

In general, any time I run into an issue modifying a container at runtime via docker exec or volumes, I go to the source to understand it better and potentially modify it. That same practice/advice applies to our Pi-hole image. You could either re-build your own copy of the image (preferred IMO) from source, or steal a particular block from the install.sh that I'll explain below. Modifying the image build source is preferred since, if you do it in such a way that it doesn't interfere with the mainline build, it could be merged and re-used by others, a win-win.

See the CHECKOUT_BRANCH checkout code in install.sh. Note that we're usually testing a release where the FTL & Core versions match, which is why echo "${CORE_VERSION}" | sudo tee /etc/pihole/ftlbranch and echo y | bash -x pihole checkout core ${CORE_VERSION} are used in conjunction and the checkout ftl command is commented out.
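
Roughly, that block follows this pattern (a paraphrase of what is described above, not the literal install.sh code; FTL_BRANCH is a hypothetical variable here):

if [ "${CHECKOUT_BRANCHES}" = true ]; then
    # tell the installer which FTL branch to fetch binaries from
    echo "${CORE_VERSION}" | sudo tee /etc/pihole/ftlbranch
    # switch core to the matching branch, auto-confirming the prompt
    echo y | bash -x pihole checkout core "${CORE_VERSION}"
    # echo y | pihole checkout ftl "${FTL_BRANCH}"   # commented out, as noted above
fi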

That code may need to be modified if there is no matching 'core' version for the desired FTL branch (add a new checkout ftl $FTL_BRANCH perhaps?), but the block guarded by CHECKOUT_BRANCHES being set to true is still important because of the service and update-rc.d code that helps make it work (try building the image without it; I assume it would fail). This image, and docker debian in general, is missing those two dependencies by default (intentionally left out), but pi-hole kind of needs them. I say kind of because obviously it doesn't really need them, since I didn't install them - I just faked them by making them always return true, which works. Then I turn both links off after the fact: there is an s6-service fake 'service' script I add later in the build process, and I didn't want the links interfering.
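
As a rough illustration of the faking described above (a sketch of the idea, not the actual build code; the paths are examples):

# make `service` and `update-rc.d` always succeed so the checkout code can call them
ln -s /bin/true /usr/local/bin/service
ln -s /bin/true /usr/local/bin/update-rc.d

# ... run the CHECKOUT_BRANCHES block here ...

# remove the fakes afterwards so they don't shadow the s6-based 'service'
# script added later in the build
rm /usr/local/bin/service /usr/local/bin/update-rc.d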

Hope that helped with the original question.

Regarding the average user's knowledge of how to really use docker in-depth, I'm guessing there are plenty of experts who are capable enough to figure things out on their own who we just don't hear from very often.

Docker Pi-hole is pretty full-stack: networking (router and docker layer for pi-hole), multiple linux OSs (host/container), virtualization-like concepts, data volumes, and software. Unless they're a DevOps professional or similar, I don't expect the average individual to have mastered all of those skill sets, but we all started somewhere and got a little extra push to help us grow into experts in the areas we are skilled in, so I usually give anyone a chance. If someone is really struggling with docker, it may be because they're trying to juggle the 3-4 different balls that make up docker when they never fully learned to juggle the first 1-2 balls.