Thursday, May 28, 2015

Setting up an ELK Stack with Kibana4, Beaver, and Docker-Compose

At this point you should be familiar with setting up ELK both manually and through Dockerfiles. Well, here we go again, this time with Docker Compose. Most of what we cover here relates more to workflow than to writing the actual Docker Compose files. You'll have the files in my repository to reference if you're curious about their makeup.

Prerequisites: Make sure to carry out the steps in the original ELK Docker guide before following along here with Docker Compose. You'll also want to be comfortable with Docker Machine. This guide was written for Mac OS X users.

Preparing the Docker VM and the Logstash Certificates

# load the dev docker machine's environment into your shell
eval "$(docker-machine env dev)"

# grab the docker machine's ip
docker-machine ip dev

# clone the repo
git clone
cd docker-compose-files/elk

# generate certs
cd certs

# when prompted, enter the following
Common name: Name
DNS or IP address 1: [Boot2Docker or Docker Machine IP]
Number of days: 3656

mv selfsigned.crt logstash-forwarder.crt
mv selfsigned.key logstash-forwarder.key

Note: You may have to change the permissions for logstash-forwarder.key.
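If you'd rather skip the prompts, a roughly equivalent self-signed pair can be generated directly with openssl — this is a sketch, not the repo's script; the CN and day count mirror the prompt values above:

```shell
# generate a self-signed certificate/key pair, valid ~10 years
# (CN "Name" and 3656 days are the example prompt values from above)
openssl req -x509 -nodes -newkey rsa:2048 -days 3656 \
  -subj "/CN=Name" \
  -keyout logstash-forwarder.key \
  -out logstash-forwarder.crt

# the forwarder's private key shouldn't be world-readable
chmod 600 logstash-forwarder.key
```

Keep in mind the forwarder verifies the server against this certificate, so the Docker Machine IP still has to end up in it (that's what the DNS/IP prompt is for); on newer OpenSSL releases (1.1.1+) you could add `-addext "subjectAltName = IP:<docker-machine ip>"` to bake it in.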

Running Docker Compose

# go up one directory
cd ..

# run the containers in the foreground
docker-compose up

# open up a new terminal window, then connect it to the machine
# (if your shell is fish, use: docker-machine env dev | source)
eval "$(docker-machine env dev)"

# look at the status of the containers
docker ps
CONTAINER ID        IMAGE                             COMMAND                CREATED             STATUS              PORTS                                            NAMES
160fe3ab6585        logstash:1.4.2                    "/docker-entrypoint.   15 minutes ago      Up 15 minutes       0.0.0.0:5000->5000/tcp                           elk_logstash_1
b287cb003199        digitalwonderland/kibana:latest   "/usr/local/sbin/sta   16 minutes ago      Up 16 minutes       0.0.0.0:5601->5601/tcp                           elk_kibana4_1
401fe80b8398        elasticsearch:1.4                 "/docker-entrypoint.   17 minutes ago      Up 17 minutes       0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp   elk_elasticsearch_1

# open up a new tty in the logstash container
docker exec -it elk_logstash_1 /bin/bash

Testing Logstash

To test that logstash can communicate with elasticsearch properly, run the following command and then enter a series of strings:

logstash -e 'input { stdin { } } output { elasticsearch { host => "[docker-machine ip]" port => "9200" protocol => http } }'

Then navigate to http://[docker-machine ip]:9200 and view the added elasticsearch entries.

Logstash forwarder

Now, for every logstash-forwarder you set up, whether manually or through a Dockerfile, just make sure your logstash-forwarder.conf points to the machine logstash is running on, and you'll be good to go. So on the logstash-forwarder machine:

sudo nano /etc/logstash-forwarder.conf

{
  "network": {
    "servers": [ "[docker-machine ip]:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },

  # The list of file configurations
  "files": [
    {
      "paths": [ ... ],
      "fields": { "type": "syslog" }
    }
  ]
}
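The forwarder's parser tolerates # comments but is otherwise strict JSON, so it's worth sanity-checking an edited config before restarting the forwarder. A sketch of that check — the file path, IP, and log path below are placeholder examples, not values from my repo:

```shell
# write an example config (IP and log path are placeholders)
cat > /tmp/logstash-forwarder.conf <<'EOF'
{
  "network": {
    "servers": [ "192.168.99.100:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  # comments like this are fine for the forwarder, not for plain JSON
  "files": [
    { "paths": [ "/var/log/syslog" ], "fields": { "type": "syslog" } }
  ]
}
EOF

# strip the comments, then let a strict JSON parser flag anything else
sed 's/#.*$//' /tmp/logstash-forwarder.conf | python3 -m json.tool > /dev/null \
  && echo "config parses as JSON"
```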

To test that the logstash-forwarder can communicate with the logstash server properly, run it in the foreground and observe its STDOUT: /opt/logstash-forwarder/bin/logstash-forwarder -config="/etc/logstash-forwarder.conf". Make sure you've copied the certificate to the forwarder's machine first: scp logstash-forwarder.crt vagrant@

Using with Beaver

You also have the option of doing away with logstash-forwarders altogether and using a Logstash setup that takes redis as its input. You then set up one beaver container per host and have those populate the redis container with docker logs instead:

# clone the repo
git clone
cd docker-compose-files/elkb

# generate an htpasswd file and enter a password
sudo htpasswd -c sec/htpasswd.users kibanauser
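If htpasswd (from apache2-utils) isn't installed, openssl can produce an equivalent apr1 entry. A sketch — the username matches the command above, but 'changeme' is an example password:

```shell
# create the htpasswd file without apache2-utils
# ('changeme' is an example password; replace it with your own)
mkdir -p sec
printf 'kibanauser:%s\n' "$(openssl passwd -apr1 'changeme')" > sec/htpasswd.users
```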

# run the containers
docker-compose up

# set up a separate host
docker-machine create --driver virtualbox dev2
eval "$(docker-machine env dev2)"

# navigate to the beaver files
cd docker-compose-files/beaver
nano config/beaver.conf

[beaver]
logstash_version: 1
transport: redis
redis_url: redis://[docker-machine ip]:6379/0
redis_namespace: logstash:beaver
format: json

type: docker
format: rawjson

# spin up beaver
docker-compose up

Now navigate to http://[docker-machine dev ip]:5601.

About Data-only containers

I'm sure you've noticed data-only containers being referenced in each of our docker-compose.yml files. We utilize them to accomplish the following three goals:

  • To isolate the management of persistent data to one container
  • To share volumes with the host in order to persist data as containers are being destroyed and recreated
  • To make it possible for static files to be managed at the Docker Compose level when using Docker Machine

When using Docker Machine, you'll notice it adds one more level of abstraction that makes it harder to copy static files onto your containers. You can get around this by copying the files in each Dockerfile itself, but I don't like having to extend every single image just because I want to copy files over. So instead I make one custom Dockerfile for the data-only container, which lets me share those files across all of the containers in the cluster through the docker-compose.yml file. See the files in docker-compose-files/elkb for reference.
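The shape of the pattern in a docker-compose.yml (v1 syntax) looks roughly like this — the image names, service names, and volume path here are illustrative, not the exact contents of my repo:

```yaml
# data-only container: exists solely to own the volume
data:
  image: busybox
  volumes:
    - /data
  command: "true"

# the real service mounts the same volume via volumes_from
elasticsearch:
  image: elasticsearch:1.4
  volumes_from:
    - data
```

Because the volume lives on the data container, you can destroy and recreate the elasticsearch container freely without losing what's in /data.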

When it comes to data persistence and docker containers, you're better off using separate redundant servers and throwing them under a load balancer.
