Pi-hole logging and visualization for the Elastic (ELK) stack

#1

Hi there

I’ve created an Elastic Stack configuration for Pi-hole so one can easily collect, filter, search, and visualize Pi-hole’s log data.

This is meant to be an alternative to the built-in dashboards.

So if anyone is interested, there is a GitHub repo: https://github.com/nin9s/elk-hole

This repo requires you to have set up the ELK stack beforehand, but then provides you with the files/configuration needed to implement some nice visualizations.

Feel free to try it out and/or ask if something isn’t working as expected

Thanks to @skaldenhoven for providing some nice input as well as testing and troubleshooting the parsing logic!

@DL6ER or @Mcat12, could you please move/tag this thread to the correct destination? I’m not sure where it belongs.

#2

This category (Community How-to’s) is fine.

#3

I am quite interested in this, specifically creating a heat map to visualize the locations of the query IPs, such as this one:

Would this be possible to run on a single Raspberry Pi 3 B?
You might know what resources are required. As of right now I am using 30% RAM and barely any CPU, as this RPi is only hosting Pi-hole and a MySQL DB.

It would be really sweet to see where all the queries are going.

UPDATE:
Found this:

#4

Yes, that would be possible, but you can’t run Elasticsearch, Logstash, or Kibana on the Pi itself. Of course you/we can implement a heat map on the ELK side to visualize the destinations of the queries.
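
For reference, the ELK-side piece of a heat map is usually Logstash’s geoip filter, which enriches each event with coordinates that Kibana’s map visualizations can plot. A minimal sketch - the field name "answer_ip" is a placeholder, not necessarily what elk-hole uses:

filter {
  # Enrich events with GeoIP data (coordinates, country, city) so
  # Kibana can plot them on a coordinate/heat map.
  # "answer_ip" is a placeholder; use whatever field your pipeline
  # stores the resolved/destination IP in.
  geoip {
    source => "answer_ip"
    target => "geoip"
  }
}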

#5

I am very interested in implementing this solution (thank you for contributing it). My first hurdle to overcome is getting Beats to work on my Raspberry Pi 3. Most of the steps I have followed to install Go and build it fail. I am wondering if it would be easier to install Logstash on the Pi instead.

#6

What distribution do you use?

Logstash on the Pi isn’t a viable solution, as Logstash requires lots of RAM (at least a few GB), not to mention the CPU requirements. You definitely need a separate machine. I’m running the complete ELK stack on an Intel NUC with ESXi and 11 VMs - that is sufficient.

#7

@Glen_Urbina

Note that if you are trying to use the guide you posted, the patterns and Logstash parsing logic aren’t complete.
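
Completing that parsing means one grok pattern per dnsmasq message variant. A sketch for the basic query line, with illustrative field names (not necessarily exactly what elk-hole ships):

filter {
  grok {
    # Example dnsmasq query line:
    #   May  7 00:00:05 dnsmasq[1225]: query[A] example.com from 192.168.1.2
    match => { "message" => "%{SYSLOGTIMESTAMP:date} %{PROG:program}\[%{POSINT:pid}\]: query\[%{WORD:query_type}\] %{NOTSPACE:domain_request} from %{IP:source_host}" }
  }
}

Reply lines, blocked domains, and cached answers each need their own pattern, which is why a partial guide leaves gaps.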

#8

I have
Linux raspberrypihole 4.14.98-v7+ #1200 SMP Tue Feb 12 20:27:48 GMT 2019 armv7l GNU/Linux

Debian 9.8

Is there a guide for getting Beats up and running on the Pi? I am going to keep trying and possibly abandon building it from source.

#9

It feels like it would be easier to write my own utility to tail the logs and ship them to Logstash than to mess with building Go:

Building Go cmd/dist using /home/pi/.gvm/gos/go1.8.
Building Go toolchain1 using /home/pi/.gvm/gos/go1.8.
Building Go bootstrap cmd/go (go_bootstrap) using Go toolchain1.
Building Go toolchain2 using go_bootstrap and Go toolchain1.
Building Go toolchain3 using go_bootstrap and Go toolchain2.
# cmd/compile/internal/ssa
fatal error: runtime: out of memory
#10

Before creating the project I had it running in almost the same way, but without Filebeat: I sent all of /var/log/pihole.log via rsyslog to the remote Logstash instance - that will work too.
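
If you take the rsyslog route, the Logstash side only needs a syslog input. A minimal sketch - port 5141 is an assumption, so match it to whatever port your rsyslog forwarding rule targets:

input {
  # Receive pihole.log lines forwarded from the Pi via rsyslog.
  # Port 5141 is an assumed value; use the port your rsyslog rule sends to.
  syslog {
    port => 5141
  }
}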

Did you try this: https://www.elasticice.net/?p=92 or https://github.com/dam90/pibeats

#11

@Glen_Urbina

Added the requested DNS heat map feature. Let me know if it works for you.

#12

Slowly working through the process of getting this running. Now I have data flowing into ELK, but I noticed that the timestamp is the time the data was received, not the datetime in the log. The first entry in the log shows May 7 00:00:05, and that is what I expected @timestamp to be. Is this how it works for you?

{
  "_index": "logstash-syslog-dns-2019.05",
  "_type": "doc",
  "_id": "EC-skmoBZ1gRWBzYNo6F",
  "_version": 1,
  "_score": null,
  "_source": {
    "source_fqdn": "192.168.1.2",
    "pid": "1225",
    "source_port": "52380",
    "@timestamp": "2019-05-07T14:22:00.535Z",
    "@version": "1",
    "offset": 621,
    "date": "May  7 00:00:05",
    "message": "May  7 00:00:05 dnsmasq[1225]: 4941 192.168.1.2/52380 reply gsp-ssl-frontend.ls-apple.com.akadns.net is <CNAME>",
    "tags": [
      "pihole",
      "5141",
      "beats_input_codec_plain_applied",
      "response domain to ip CNAME"
    ],
    "beat": {
      "version": "6.2.4",
      "name": "MacBook-Pro.localdomain",
      "hostname": "MacBook-Pro.localdomain"
    },
    "host": "MacBook-Pro.localdomain",
    "logrow": "4941",
    "source_host": "192.168.1.2",
    "type": "logs",
    "source": "/usr/local/var/log/pihole.log",
    "program": "dnsmasq",
    "domain_request": "gsp-ssl-frontend.ls-apple.com.akadns.net"
  },
  "fields": {
    "@timestamp": [
      "2019-05-07T14:22:00.535Z"
    ]
  },
  "sort": [
    1557238920535
  ]
}
#13

Edit - fixed: https://github.com/nin9s/elk-hole/blob/master/logstash/conf.d/20-dns-syslog.conf

@aviationfan sorry, I initially misunderstood the core of your question. You are right: as of now the ordering depends on @timestamp. We have to customize the Filebeat index mapping to change this - I will have a look at how and will get back to you.

Yes, this is expected, as @timestamp is a meta field generated by Logstash to reflect the processing time.

There is a field called “date” representing the actual dnsmasq timestamp.

#14

Here is how I solved this in another conf file for another project: I just used mutate to pull in the datetime from the log file.

  mutate {
    # overwrite "datetime" with the timestamp parsed from the log line
    replace => { "datetime" => "%{created}" }
  }

I was looking at your setup and wondering how to do the same thing. In hindsight, I would have used a better name than “created” to describe the date and time stamp from the log.

#15

Have you checked the recent version? - https://github.com/nin9s/elk-hole/blob/master/logstash/conf.d/20-dns-syslog.conf

@timestamp now represents the actual time dnsmasq was processing the request.
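
For anyone curious what that kind of fix typically looks like: Logstash’s date filter can overwrite @timestamp with the parsed syslog time. A sketch matching dnsmasq’s timestamp format (see the linked conf for the actual implementation):

filter {
  # Replace @timestamp (Logstash processing time) with the time
  # dnsmasq actually wrote the line. dnsmasq pads single-digit days
  # with an extra space, hence the two patterns.
  date {
    match => [ "date", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    # The log line carries no timezone; add a timezone => setting
    # if your Pi and the ELK host are not in the same zone.
  }
}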

#16

Oops, sorry - I should have checked the latest version first.