ELK stack in k8s cluster
Deploying ELK on Kubernetes is very useful for monitoring and log analysis. These concepts are described in the following blog post.
Logstash Features
Because it is open source, Logstash is completely free to use, though a paid license is also available for those needing additional features. You can use Elasticsearch, Kibana, and Logstash together: Kibana allows you to more easily explore and visualize the log data you bring in with Logstash, and Elasticsearch gives you powerful real-time search and analytics capabilities.
In this blog I am starting a practical study of how to deploy the ELK stack for a Kubernetes cluster. Let's start simple.
First we have to understand some of the Kubernetes technologies behind this task.
DaemonSet
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
Some typical uses of a DaemonSet are:
- running a cluster storage daemon, such as glusterd or ceph, on each node
- running a logs collection daemon on every node, such as fluentd or logstash
- running a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, Dynatrace OneAgent, AppDynamics Agent, Datadog agent, New Relic agent, Ganglia gmond or Instana agent
In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex setup might use multiple DaemonSets for a single type of daemon, but with different flags and/or different memory and cpu requests for different hardware types.
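As a minimal sketch, a DaemonSet for a per-node log collector could look like the manifest below (the name log-collector and the namespace are illustrative; the image matches the Filebeat version used later in this post):

# Minimal DaemonSet sketch: one log-collector pod per node (names are illustrative).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: log-collector
        image: docker.elastic.co/beats/filebeat:6.7.1
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log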
StatefulSets
StatefulSet is the workload API object used to manage stateful applications.
Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
A StatefulSet operates under the same pattern as any other Controller. You define your desired state in a StatefulSet object, and the StatefulSet controller makes any necessary updates to get there from the current state.
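As a minimal sketch (all names illustrative), a StatefulSet pairs a headless Service name with per-pod persistent volume claims, and its pods get stable names such as web-0 and web-1:

# Minimal StatefulSet skeleton; each replica keeps its own PersistentVolumeClaim.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web          # headless Service that gives each pod a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.15
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi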
ELK stack
The latest version ushers in new geospatial analysis and uptime monitoring capabilities. Plus, cross-cluster replication, index lifecycle management, Functionbeat, Elasticsearch SQL, the Logs UI, and the Infrastructure UI are now generally available.
How the ELK stack works together.
Now let's start the step-by-step k8s ELK configuration.
Collecting logs.
According to “nick.sarbicki.com/blog”, there are a few things to consider when we are building an ELK-based monitoring setup:
- Setup a place to store all our logs (Elasticsearch).
- Setup something to forward our logs to our storage (Logstash).
- Setup services to farm all the logs from all our nodes (Filebeat and Metricbeat).
- Setup a service to visualise the logs (Kibana).
- Setup an ingress to view the visualisations (we’ll use Traefik).
- Setup a service to regularly delete logs so our cluster never gets overwhelmed (Curator).
All of the files required to do this are in the following GitLab repository, but I have added some changes to make them work for my setup.
Setup a place to store all our logs (Elasticsearch).
>> link to elasticsearch.yaml file
The file for the Elasticsearch cluster in the logging repo is elasticsearch-ss.yaml. This sets up the stateful pods, a service for internal routing to the pods as well as the necessary permissions and configuration values to have the elasticsearch cluster run. You don’t need to change anything in the file so feel free to apply it to the cluster.
But to access it from a local browser I changed its Service to use a NodePort, as in the sketch below.
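A sketch of what that Service change can look like (the service name, namespace and labels must match what elasticsearch-ss.yaml actually defines; the NodePort 31335 is the one referred to later in this post):

# Elasticsearch Service exposed on a NodePort (values are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-system
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
  - name: http
    port: 9200        # internal cluster port
    targetPort: 9200
    nodePort: 31335   # reachable on <node-ip>:31335 from outside the cluster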
$kubectl apply -f elasticsearch-ss.yaml
Once everything is running successfully in the Kubernetes cluster, you can see the pods with the command below.
kubectl get pods -n kube-system
Setting up plugins for Logstash
References:
https://gist.github.com/danslimmon/6084415
Setting up log forwarding with Logstash
Logstash essentially acts as a single place that receives all our logs, analyses and processes them into a format Elasticsearch can understand and then forwards this to our Elasticsearch cluster.
The file we use here is logstash-deployment.yaml. The deployment section shouldn't be changed; it simply sets up the official Elastic image for Logstash and mounts the configmap as the Logstash configuration. The configmap section, however, can be changed freely to match the configuration required for your logs, as long as you keep the output and input values the same. I tend to format my logs as JSON so they can be easily parsed, and you can see me checking for that in the message variable. As we are using Traefik, we can also grab a user's geolocation from the ClientHost variable.
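As a rough sketch, the pipeline part of that configmap can look like the snippet below; the beats input port and the Elasticsearch host match the values used elsewhere in this post, while the JSON check and the ClientHost geoip lookup are the two customisations described above:

# Logstash pipeline sketch: Beats in, JSON + geoip processing, Elasticsearch out.
input {
  beats {
    port => 5044
  }
}
filter {
  # If the message looks like JSON, parse it into structured fields.
  if [message] =~ /^\{.*\}$/ {
    json {
      source => "message"
    }
  }
  # Traefik access logs carry the client address in ClientHost.
  if [ClientHost] {
    geoip {
      source => "ClientHost"
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}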
Once you've changed the configuration to what you want (you could leave it as is and it should still work well), feel free to apply the file; Logstash should start working almost instantly.
kubectl apply -f logstash-deployment.yaml
Logstash also has a powerful log filter, Grok, which can be used for advanced filtering.
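As a small, hedged example (the pattern depends entirely on your log format), a grok filter that splits a plain-text line into a timestamp, a level and the remaining message could look like this:

filter {
  grok {
    # Parses lines like: 2019-04-01T12:00:00 INFO something happened
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:log_message}" }
  }
}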
Setting up log collectors.
Now we have a place to store our logs and a log forwarder, but nothing actually collecting our logs. This is where Filebeat and Metricbeat come in. They are both from Elastic and are designed to send server metrics and service logs to Logstash. These are a bit more complex, as they need to be deployed as DaemonSets and require certain privileges to be able to access all the logs on each node.
DaemonSets are a Kubernetes concept which ensures every node runs an instance of the service. As logs are saved as files on each Kubernetes node, we require each node to have one of the Beats services running to collect the logs. We also require access to these logs, which means we have to give the services access to the node's filesystem (via hostPath mounts) and the necessary Kubernetes API permissions (via RBAC).
Luckily for us, Elastic have already documented how to do this for both Filebeat and Metricbeat. Both the resource files I used (here and here) are based heavily on Elastic's own.
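For reference, the RBAC part of those manifests looks roughly like this (a sketch based on Elastic's reference files; Metricbeat follows the same pattern with additional resources):

# ServiceAccount, ClusterRole and ClusterRoleBinding used by the Filebeat DaemonSet.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io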
Go ahead and apply them; they will spin up quickly, and as soon as they do, logs will start being sent to Logstash.
kubectl apply -f filebeat-ds.yaml
kubectl apply -f metricbeat-ds.yaml
Metricbeat
Metricbeat comes with internal modules that collect metrics from services like Apache, Jolokia, NGINX, MongoDB, MySQL, PostgreSQL, Prometheus, and more. Installation is easy, requiring absolutely zero dependencies. Just enable the modules you want in the configuration file.
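For example, a sketch of enabling a couple of modules in metricbeat.yml (module and metricset names come from the Metricbeat docs; the hosts value is illustrative):

metricbeat.modules:
# Node-level system metrics.
- module: system
  metricsets: ["cpu", "memory", "network", "filesystem"]
  period: 10s
# NGINX status metrics (requires the stub_status endpoint to be enabled).
- module: nginx
  metricsets: ["stubstatus"]
  hosts: ["http://127.0.0.1/nginx_status"]
  period: 10s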
And if you don’t see the module you’re looking for, build your own. Written in Go, creating a new Metricbeat module is simple.
Spool your metrics to disk so your pipeline doesn’t skip a data point — even when interruptions such as network issues occur. Metricbeat holds onto incoming data and then ships those metrics to Elasticsearch or Logstash when things are back online.
Setting up Kibana
By now the system is collecting all the logs from all the services as well as from the nodes themselves, having Logstash analyse them and then sending them on to Elasticsearch to be indexed. However, we still can't visualise them; this is where Kibana comes in.
This one is very simple: we set up a Kibana instance based on Elastic's Docker image and point it towards our internal Elasticsearch cluster, then set up a Service so it can be targeted by an Ingress, and finally set up the Ingress so it can be accessed from your browser.
We have a resource file for it called kibana-deployment.yaml. Nothing needs to be changed apart from the URL used in the Ingress, which needs to be one you control and which points to the cluster's load balancer.
Warning: applying this without any form of auth on an exposed endpoint will open up your logs for the world to see.
Before applying the Kibana setup, read up on how to do auth in Traefik if that suits. Alternatively, you can set up auth with Kibana itself, but that takes a lot more work with X-Pack around the actual image. Another option is to use a different ingress (or none at all), which allows you to expose it on an internal-only endpoint.
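For example, with Traefik 1.x basic auth can be enabled on the Kibana Ingress through annotations, roughly like this (the host and the secret name kibana-basic-auth are illustrative; the secret contains an htpasswd-style users file):

# Ingress sketch protecting Kibana with basic auth via Traefik annotations.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: kibana-basic-auth
spec:
  rules:
  - host: kibana.example.com
    http:
      paths:
      - backend:
          serviceName: kibana
          servicePort: 5601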
Once you are ready, go ahead and apply the file for Kibana. Again, this one starts up almost instantly; it may however take a few minutes for Traefik and your DNS to recognise it:
kubectl apply -f kibana-deployment.yaml
Deleting your logs
At this point everything is ready for you to start viewing your logs. However, this is a resource-hungry system and it won't actively delete your indexed logs as they accumulate. If you leave it for too long, this can affect your whole cluster as the nodes fill up with logs.
You can delete them manually with a curl command but that is cumbersome.
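For example, something like this deletes a single day's index (the host and index name are illustrative):

curl -XDELETE "http://elasticsearch:9200/logstash-2019.04.01"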
Luckily, Elastic helps us solve this with Curator. The final thing we have to do is set up a CronJob which checks our indices daily and deletes anything older than a week (you can adjust this to suit your needs). The final resource file we have does this for us. Take a look at curator-cronjob.yaml and tweak the conditions to fit your needs. Once you are ready, apply it:
kubectl apply -f curator-cronjob.yaml
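For reference, the heart of that configuration is a Curator action file along these lines (a sketch; adjust unit_count to change the retention period):

# Curator action: delete logstash-* indices older than 7 days.
actions:
  1:
    action: delete_indices
    description: Delete indices older than 7 days
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 7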
Setup Kibana
Everything is now set up!
The last thing to do is to get onto the Kibana dashboard and set up the index pattern.
When you load up the dashboard you should be prompted to create an index pattern. If not, head over to the Management tab and go to Index Patterns; in the top left is a button to create a new index pattern.
The index pattern to use here is logstash-*, which will capture all of the logs forwarded by the system we just built. Set the time filter field to @timestamp and that is it.
How to monitor a Postfix container with Filebeat.
Installing Filebeat on Ubuntu
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.1-linux-x86_64.tar.gz
tar xzvf filebeat-6.7.1-linux-x86_64.tar.gz
Start on macOS and Linux:
cd filebeat-6.7.1-linux-x86_64
sudo chown root filebeat.yml
sudo ./filebeat -e
Then you can watch the Filebeat service start up and dispatch events according to the output configuration. Let's check the Filebeat configuration file; it's the filebeat.yml file in the Filebeat folder. First of all we have to properly configure the event output of Filebeat: you may dispatch events to Logstash or directly to Elasticsearch. Let's see.
Dispatch events to Elasticsearch
#-------------------------- Elasticsearch output --------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
  hosts: ["elasticsearch:9200"]

  # Enabled ilm (beta) to use index lifecycle management instead daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
Alternatively, you may dispatch events to Logstash, which acts as the log manager.
#----------------------------- Logstash output -----------------------------
output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["logstash:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
Then start Filebeat and you will see the connection being established with Logstash.
In the console output you will see the connection-established message for Elasticsearch and then events being submitted to the server.
But if you don't receive that connection-established message, you have to troubleshoot the connection issue. The best way to start troubleshooting is to log in to the container where Filebeat is running and try to curl the Filebeat output URL.
Ex :
$curl http://elasticsearch.copperhub.svc.cluster.local:9200/
This url configuration can be found in your filebeat.yml configuration file.
Typically it is localhost:9200, but it can be changed according to your requirements. Furthermore, in the example above I have used the Kubernetes default DNS name for a service running in a different namespace. The DNS name is formed as follows:
<svc name>.<namespace>.svc.cluster.local
Even though your Service has a NodePort configured, you have to use the internal port together with the DNS name. For example, 9200 is the internal port and the external NodePort is 31335; when we use the DNS name we have to use the internal port (9200).
If you can open the connection with the curl command, then you have to check your filebeat.yml configuration again. If the connection fails, you have to fix that first.
How to tell Filebeat to use specific log files.
You can define many inputs. In the example below I have configured Filebeat to use some of the log files in the /var/log/ folder in the filebeat.yml file.
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /var/log/mail.log
    - /var/log/dovecot.log

- type: log
  paths:
    - "/var/log/syslog"
  #fields:
  #  postfix: true
But it is possible to configure it according to our own selection, as described in the Elastic documentation.
Ex :
- type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/mail.log
    - /var/log/dovecot.log

- type: log
  paths:
    - "/var/log/syslog"
  fields:
    postfix: true
    # this field makes it possible to select only the logs tagged as postfix
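On the Logstash side you can then branch on that field, for example with a sketch like this (the grok pattern for Postfix lines is illustrative):

filter {
  # Apply Postfix-specific parsing only to events Filebeat tagged with the custom field.
  if [fields][postfix] {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:host} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:log_message}" }
    }
  }
}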
After starting Filebeat, if any change happens in these files you will see Filebeat harvesting those changes. Whenever a monitored log file changes, Filebeat starts a harvester for it.
How to restart Filebeat.
Every time you start Filebeat, a new process is created in the Linux environment.
Check the running processes:
$ ps -A
So in such cases, stop the current Filebeat process with the following command (989 here is the Filebeat PID reported by ps -A):
$ kill -9 989
Then check again whether the Filebeat process still exists:
$ ps -A
If not, start Filebeat again:
$ ./filebeat -e
Stop the currently running Filebeat process in the foreground with:
$ Ctrl + c
Elastic REST API interface
We can use the Elasticsearch REST API. The default port is 9200, but in our example we have exposed it through the Kubernetes NodePort 31335, which maps to 9200.
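For example (a sketch; replace <node-ip> with the address of one of your nodes):

curl "http://<node-ip>:31335/_cluster/health?pretty"
curl "http://<node-ip>:31335/_cat/indices?v"
curl "http://<node-ip>:31335/logstash-*/_search?q=message:error&pretty"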
Execute queries using Postman.
This is a nice project that uses Spring Boot as an Elasticsearch client (they work well together).