Logstash – Elastic Stack Tutorial (Part 1)

When dealing with multiple servers, and especially in a high-availability environment, handling logs can get quite complex. It can become difficult to debug when logs are spread out over multiple servers, and this is one of the problems that Logstash attempts to address. Logstash is also part of the ELK (Elasticsearch, Logstash, and Kibana) Stack, nowadays referred to as the Elastic Stack, which together forms a very powerful tool for managing, reading, and visualizing logs and data. In this Elastic Stack series, we are going to have a look at each of the tools and go through some best practices.

This is the first part of the Elastic Stack tutorial. If you enjoy this post and would like to learn more about Elasticsearch and Kibana as well, make sure to read my other articles.

We are going to run each component of the Elastic Stack in Docker. If you don't know what Docker is or how it works, I recommend reading my previous article with a Docker example first.

In this first article in the series, we are going to have a look at Logstash. A simple way to describe Logstash is as a pipeline that logs flow through: you configure one (or many) input sources, and each incoming event is either sent directly to an output or run through a filter that modifies the data before it reaches the output.

(Figure: the Logstash flow from inputs through filters to outputs)
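
To make this flow concrete, here is a minimal sketch of what a Logstash pipeline configuration can look like. The file path, the grok pattern, and the Elasticsearch address are illustrative assumptions, not settings from the project we will deploy below:

input {
  # Read new lines from a log file (the path is a hypothetical example)
  file {
    path => "/var/log/myapp/app.log"
  }
}

filter {
  # Optionally parse each line into structured fields before output
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  # Ship the processed events to Elasticsearch (address is an assumption)
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}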

The input is usually a log file, and you will want to run Logstash on each machine that runs some type of server whose logs you want to gather. For example, if you are running Docker containers, you will normally want to map the log files to a volume that Logstash can reach. You can also configure Logstash with multiple input sources.
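
As a sketch of what such a mapping could look like in a docker-compose.yml, assuming a host directory /var/log/myapp and an official Logstash image (both are illustrative, not taken from the project below):

services:
  logstash:
    image: docker.elastic.co/logstash/logstash:7.17.0
    volumes:
      # Mount the host's application logs read-only so Logstash can read them
      - /var/log/myapp:/var/log/myapp:ro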

In this article, we are going to make things simpler for ourselves by deploying a pre-existing Elastic Stack configuration that is available on GitHub. Clone that project and run docker-compose up -d to start the containers.
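
In short, something like the following, where the repository URL is a placeholder for the GitHub project you choose:

git clone <repository-url> elastic-stack
cd elastic-stack
docker-compose up -d

# Optional: verify that the containers are up
docker-compose ps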

This will launch Logstash, Elasticsearch, and Kibana with a good base configuration. We will leave Elasticsearch and Kibana alone for this article.

Logstash will expose port 5000 for receiving TCP input. This means that you can send log data to Logstash over TCP, and Logstash will process it and send it to an output, which in our case will be Elasticsearch.
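
Inside the project, the pipeline configuration that enables this will look roughly like the following sketch (the exact settings depend on the repository you cloned):

input {
  # Listen for log lines arriving over TCP on port 5000
  tcp {
    port => 5000
  }
}

output {
  # Forward the processed events to the Elasticsearch container
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}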

Sending a whole log file, for example, can be done with the following command:

nc localhost 5000 < debug.log

After receiving the data over TCP, Logstash will process it, transform it into a document suitable for Elasticsearch, and send it on. Of course, you don't have to send a whole file at once; you can send any message over TCP, which means you can tail a log and forward each line to Logstash the instant it gets written.
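
For example, assuming an application that writes to /var/log/myapp/app.log, you could stream every new line to Logstash like this:

# Follow the log file and forward each new line over TCP
tail -f /var/log/myapp/app.log | nc localhost 5000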

Final words

We have now gained some basic knowledge of what the Elastic Stack is, and we have also deployed it with Docker. Logstash should now be a bit more familiar, since we know what it is and what it does. We have set up Logstash so that it can receive data over TCP, giving us an easy way to send input to Logstash from any server. Now that we have sent data to Logstash, we are going to have a look at Elasticsearch in the next article, where all the data is stored and later queried by Kibana for visualization.
