Introduction
Monitoring the logs and metrics of a Kubernetes environment is crucial, and for this we need a tool that can fetch logs and metrics from the Kubernetes cluster so that we can visualize them. In this article we will see how to monitor a Kubernetes cluster end to end using one of the most popular monitoring stacks, the ELK stack.
Before you go through this article, I would suggest reading the article below first. It covers the required background on Kubernetes logging and monitoring.
Kubernetes Logging and Monitoring – a simple guide
A brief introduction to the ELK stack: it is a combination of three tools (Elasticsearch, Logstash & Kibana). You can go with either the self-hosted or the cloud-hosted option of the ELK stack. It also provides Beats, which are lightweight agents that collect logs/metrics from a target machine, server, Kubernetes cluster, etc.
Prerequisite
To complete this tutorial, you need the following –
Prerequisite Knowledge
- Hands-on knowledge of the ELK stack.
- Hands-on knowledge of Kubernetes.
- Knowledge of Kubernetes objects like Pod, Service, DaemonSet, etc.
Prerequisite setup
- Up & running Kubernetes cluster which you want to monitor.
- Up & running Elasticsearch & Kibana where data will be visualized.
What the ELK stack offers for monitoring a Kubernetes environment
The Elastic stack provides a great set of tools to achieve complete observability of your Kubernetes environment.
- Elasticsearch – the centralized store where all kinds of logs, metrics and other data are kept.
- Logstash – a pipeline tool in which logs can be parsed and enriched. Logstash has many input and output plugins that are used in the pipeline.
- Kibana – one of the most popular visualization tools. Based on your requirements you can create various pie charts, bar graphs & many more.
- Beats – lightweight agents which are installed on the target machine:
- Filebeat – installed on a target machine or server, or deployed in a Kubernetes cluster, to fetch logs.
- Metricbeat – installed on a target machine or server, or deployed in a Kubernetes cluster, to fetch various metric sets.
- Heartbeat – installed or deployed to fetch uptime-related data.
- RUM agent – installed to capture real-user experience data and user interactions with the site.
- Packetbeat – installed or deployed to fetch network packet data.
- Elastic Agent – a single agent that can replace all the Beats mentioned above: if you are using Beats there is no need for Elastic Agent, and if you are using Elastic Agent there is no need for the individual Beats (Filebeat, Metricbeat, etc.).
In this article we will implement Kubernetes cluster monitoring using the Filebeat, Metricbeat and Heartbeat agents.
Steps to implement Kubernetes monitoring using Elastic stack
Below is the step-by-step guide –
Step 1: Understanding logging of your application
This is the first step when we think about monitoring any application: we need to know which log files the application generates.
There can be two cases –
Case 1: All your application logs are written in stdout/stderr
In this case we are good, and there is no need to run an extra container alongside the application container to write the logs to stdout/stderr.
Case 2: All your application logs are not written in stdout/stderr
In this case we need to run a sidecar container alongside the main application container so that the sidecar can stream the application log files to stdout/stderr. If you are deploying your application as a pod in K8s, you then include one more container (a sidecar, e.g. with the busybox image) inside the application pod, so two containers in total run inside the application pod; see the sketch below.
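As a minimal sketch of this pattern (the pod name, image and log path are placeholders, and the application is assumed to write to /var/log/app/app.log), the manifest could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-app                  # hypothetical pod name
spec:
  containers:
  - name: app
    image: my-app:1.0           # hypothetical application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-sidecar           # sidecar that streams the log file to stdout
    image: busybox
    args: [/bin/sh, -c, 'tail -n +1 -F /var/log/app/app.log']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs              # shared volume so the sidecar can read the app's log file
    emptyDir: {}

Here the two containers share an emptyDir volume: the application writes its log file into it, and the busybox sidecar tails that file so the logs end up in the pod's stdout stream, where the logging agent can pick them up.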
Step 2: Deploy the application pods / ensure application pods are running
Now, based on step 1, you need to make a decision and write your Kubernetes deployment/pod manifest file accordingly. In our case, for simplicity, we will assume case 1, where all the logs are written to stdout; a minimal example manifest is sketched after the link below.
To deploy your application in the Kubernetes cluster you can follow the article below.
Jenkins pipeline code to build & deploy application in Kubernetes
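As a reference point, a minimal Deployment for case 1 might look like the sketch below. The name is a placeholder; nginx is used only because the official image already sends its access and error logs to stdout/stderr.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx:1.23       # example image that logs to stdout/stderr
        ports:
        - containerPort: 80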
Step 3: Deploy filebeat as daemonset to fetch the logs
Once we have our application pod up & running, we need to deploy the logging agent to fetch logs from the Kubernetes cluster.
1. Download the filebeat daemonset manifest file by running the command below.
curl -L -O https://raw.githubusercontent.com/elastic/beats/8.6/deploy/kubernetes/filebeat-kubernetes.yaml
2. Update your Elasticsearch cluster details in filebeat-kubernetes.yaml. If you want to send the logs to Logstash first, enter the Logstash server details and comment out the output.elasticsearch section (a sketch of that section is shown below). In the case of cloud-hosted Elasticsearch you need to provide the correct endpoint.
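For illustration, in recent versions of the manifest the output section of the Filebeat configuration looks roughly like the following (exact keys can vary by release, and the Logstash host shown is a placeholder):

output.elasticsearch:
  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}

# To ship to Logstash instead, comment out the section above and use:
# output.logstash:
#   hosts: ['my-logstash.example.com:5044']   # hypothetical Logstash endpoint

The ${...} placeholders are resolved from environment variables defined in the DaemonSet spec, so you can either edit the env entries there or hard-code your endpoint and credentials here.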
3. Assuming you have kubectl installed, run the below CLI command to deploy filebeat as a daemonset in the Kubernetes cluster.
kubectl apply -f filebeat-kubernetes.yaml
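To verify that the agent is running, you can check the DaemonSet and its pod logs; in the stock manifest Filebeat is deployed into the kube-system namespace with the label k8s-app=filebeat (this may differ if you customized the manifest):

kubectl get daemonset filebeat -n kube-system
kubectl logs -n kube-system -l k8s-app=filebeat --tail=20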
Step 4: Deploy metricbeat as daemonset to fetch the metric sets
To fetch various metric sets from the Kubernetes cluster we need to deploy metricbeat as a daemonset.
1. Download the metricbeat daemonset manifest file by running the curl command below.
curl -L -O https://raw.githubusercontent.com/elastic/beats/8.6/deploy/kubernetes/metricbeat-kubernetes.yaml
2. Update your Elasticsearch cluster details in the metricbeat-kubernetes.yaml file. In the case of cloud-hosted Elasticsearch you need to provide the correct endpoint.
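One thing to be aware of: the cluster-level metricsets in the default manifest (state_pod, state_deployment, etc.) pull their data from kube-state-metrics, so that service must also be running in your cluster. The relevant module block looks roughly like this (the metricset list is abbreviated here and may differ between versions):

- module: kubernetes
  metricsets:
    - state_pod
    - state_deployment
  period: 10s
  hosts: ["kube-state-metrics:8080"]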
3. Run the below CLI command to deploy metricbeat as a daemonset in the Kubernetes cluster.
kubectl apply -f metricbeat-kubernetes.yaml
Step 5: Deploy heartbeat to fetch uptime related data
To fetch uptime data for the various components in the Kubernetes cluster we need to deploy a single heartbeat pod; one heartbeat pod can collect uptime data for the whole cluster.
1. Download the heartbeat deployment manifest file by running the command below.
curl -L -O https://raw.githubusercontent.com/elastic/beats/8.6/deploy/kubernetes/heartbeat-kubernetes.yaml
2. Update your Elasticsearch cluster details in the heartbeat-kubernetes.yaml file, and define the monitors (the endpoints to probe). In the case of cloud-hosted Elasticsearch you need to provide the correct endpoint.
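The targets to probe are defined under heartbeat.monitors in the manifest's ConfigMap. As a sketch, an HTTP uptime check against an in-cluster service might look like this (the id, name and service URL are placeholders; older releases use urls instead of hosts):

heartbeat.monitors:
- type: http
  id: my-app-uptime                 # hypothetical monitor id
  name: My App uptime check
  hosts: ["http://my-app.default.svc.cluster.local:80"]   # hypothetical service URL
  schedule: '@every 10s'            # probe every 10 seconds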
3. Run the below CLI command to deploy the heartbeat pod in the Kubernetes cluster.
kubectl apply -f heartbeat-kubernetes.yaml
Step 6: Visualize monitoring data in Kibana
In the previous steps we deployed the agents into the Kubernetes cluster to fetch the various logs, metric sets, etc. Now it is time to check the data in the Kibana dashboard. Assuming your Kibana service is up & running, open your Kibana URL in the browser and explore the incoming data, for example in Discover or in the Observability Logs and Metrics apps.
Thank You.
If you are interested in learning DevOps, please have a look at the articles below, which will help you greatly.
- Kubernetes Series: Part 1 – Introduction to Kubernetes | Background of Kubernetes
- Kubernetes Series: Part 2 – Components of Kubernetes cluster | Kubernetes cluster in detail
- Kubernetes Series: Part 3 – What is Minikube and How to create a Kubernetes cluster (on Linux) using Minikube?
- Introduction to Azure DevOps – High level information
- Introduction to Ansible | High Level Understanding of Ansible
- Basics of automation using Ansible | Automate any task
- 10 frequently used ansible modules with example
- Jenkins Pipeline as code – High level information
- What is End-to-End Monitoring of any web application and Why do we need it?
- DevOps Engineer or Software Developer Engineer which is better for you?- Let’s discuss
- How To Be A Good DevOps Engineer?
- How to do git push, git pull, git add, git commit etc. with Bitbucket