Sunday, December 14, 2014

Setting Up a Logstash Forwarder and Logstash Server

This is a straightforward guide to setting up a logstash server and forwarder. Make sure to check out this article if you'd like an overview of the ELK stack and the importance of centralized logging. Alright, let's get to it.

Prerequisites: You'll need an Ubuntu instance, so head over to this guide first. Everything we're setting up today will live on this instance, so if you're not creating the VM on a Mac, any Ubuntu server will do. You'll also want to be familiar with logging, log rotation, and the ELK stack. Also, the VM you provision must have at least 4GB of RAM.

Java Installation

Now that you have your Ubuntu instance set up, enter the following commands in your shell:

Java 8 Installation

# fetch oracle java ppa
sudo add-apt-repository -y ppa:webupd8team/java

# update the packages
sudo apt-get update

# install the latest stable oracle java 8
sudo apt-get -y install oracle-java8-installer

Elasticsearch

Elasticsearch Installation

# fetch the elasticsearch/logstash public GPG key
wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -

# compile the elasticsearch source list
echo 'deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main' | sudo tee /etc/apt/sources.list.d/elasticsearch.list

# update the packages
sudo apt-get update

# install the elasticsearch 1.4.2 release
sudo apt-get -y install elasticsearch=1.4.2

Elasticsearch Configuration

# edit a configuration file for elasticsearch
sudo nano /etc/elasticsearch/elasticsearch.yml

# add the following lines to this file
script.disable_dynamic: true
network.host: localhost

# restart elasticsearch to take the new config into effect
sudo service elasticsearch restart

# configure elasticsearch to run on startup
sudo update-rc.d elasticsearch defaults 95 10

Now verify that elasticsearch can be reached by running curl -i localhost:9200. You should see something similar to the following output:

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 344

{
  "status" : 200,
  "name" : "American Samurai",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.4.2",
    "build_hash" : "927caff6f05403e936c20bf4529f144f0c89fd8c",
    "build_timestamp" : "2014-12-16T14:11:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}

Logstash

Logstash Installation

Note: We'll be installing Logstash 1.5.0.beta1 (not production-ready) because it includes a TCP bugfix and is compatible with the Elasticsearch 1.4.2 release.

# compile the logstash source list
echo 'deb http://packages.elasticsearch.org/logstash/1.5/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash.list

# update the packages
sudo apt-get update

# install logstash
sudo apt-get install logstash=1.5.0.beta1-1

Logstash Server Configuration

Run sudo nano /etc/logstash/conf.d/logstash.conf and add the following input, filter, and output blocks. This configures the logstash server to accept syslog events over the lumberjack protocol from any machine that hits its endpoint.

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
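To make the filter block above more concrete, here is a rough shell approximation of what the grok pattern captures from a sample syslog line. The sample line and the sed expressions are illustrative only; grok's real patterns are more forgiving than these regexes.

```shell
# a hypothetical syslog line of the shape the grok pattern expects:
# TIMESTAMP HOSTNAME PROGRAM[PID]: MESSAGE
line='Dec 14 22:10:01 webserver1 CRON[4521]: (root) CMD (cd / && run-parts /etc/cron.hourly)'

# rough sed equivalents of the grok captures (illustration, not grok itself)
syslog_hostname=$(printf '%s\n' "$line" | sed -E 's/^[A-Z][a-z]{2} +[0-9]{1,2} [0-9:]{8} ([^ ]+) .*/\1/')
syslog_program=$(printf '%s\n' "$line" | sed -E 's/^[A-Z][a-z]{2} +[0-9]{1,2} [0-9:]{8} [^ ]+ ([^ []+)\[.*/\1/')
syslog_pid=$(printf '%s\n' "$line" | sed -E 's/.*\[([0-9]+)\]:.*/\1/')

echo "$syslog_hostname $syslog_program $syslog_pid"
```

Logstash attaches these captures as the syslog_hostname, syslog_program, and syslog_pid fields on each event, which is what you'll filter on later in Kibana.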

Make sure to leverage logstash's configuration check by running /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf -t. The -f flag points at our config file, and -t validates it without starting the pipeline. You'll get the following error at this point because we haven't created any certificate key pairs yet. That's ok; we'll rerun this command later.

Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.5.0.beta1/plugin-milestones {:level=>:warn}
Invalid setting for lumberjack input plugin:

  input {
    lumberjack {
      # This setting must be a path
      # File does not exist or cannot be opened /etc/pki/tls/certs/logstash-forwarder.crt
      ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
      ...
    }
  } {:level=>:error}
Invalid setting for lumberjack input plugin:

  input {
    lumberjack {
      # This setting must be a path
      # File does not exist or cannot be opened /etc/pki/tls/private/logstash-forwarder.key
      ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      ...
    }
  } {:level=>:error}
Error: Something is wrong with your configuration.

Authenticating Forwarders

Generate an SSL Certificate

You should already be familiar with generating SSL certificates if you've seen my guide on configuring an SSL server block for Nginx. Either way, carry out the following steps:

# create the appropriate folders
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private

# create the certificate
cd /etc/pki/tls; sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
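If you'd like to see what that command produces before writing anything into /etc/pki/tls, here's a throwaway run of the same openssl invocation against a temp directory. The temp paths are scratch paths for illustration, not the ones the server uses.

```shell
# generate a throwaway self-signed key/cert pair in a temp dir
dir=$(mktemp -d)
openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 \
  -keyout "$dir/logstash-forwarder.key" \
  -out "$dir/logstash-forwarder.crt" 2>/dev/null

# inspect the subject and expiry of the generated certificate
info=$(openssl x509 -noout -subject -enddate -in "$dir/logstash-forwarder.crt")
echo "$info"
rm -rf "$dir"
```

The notAfter date should land roughly ten years out, matching the -days 3650 flag.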

Now restart logstash with sudo service logstash restart, and run sudo service logstash status to verify that it's running. You should also run curl localhost:5000 to ensure that you don't get a curl: (7) Failed to connect to localhost port 5000: Connection refused error. Now that the logstash server is up and running on this machine, we're ready to accept logs from any of our forwarders; we'll come back to that later. In the future, if you don't see your changes in Kibana, tail the logstash log by running sudo tail -f /var/log/logstash/logstash.log to make sure your logstash server restarted properly.

Kibana

Kibana Installation

Note: We'll be installing Kibana 3.0.1, which is compatible with our Elasticsearch 1.4.2 release.

# navigate to your opt directory
cd /opt

# fetch kibana
sudo wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz

# extract
sudo tar xvf kibana-3.0.1.tar.gz

# edit the kibana config
sudo vi /opt/kibana-3.0.1/config.js

# change the following line to access kibana on port 80
elasticsearch: "http://"+window.location.hostname+":80",

Nginx

Configuring Nginx to Serve Kibana

Now, I like to set up DNS records so that navigating to logs.mydomain.com opens the Kibana dashboard, but feel free to use whatever domain you'd like.

# install nginx
sudo apt-get install nginx

# create a directory for kibana
sudo mkdir -p /var/www/kibana3

# copy the kibana files into your new directory
sudo cp -R /opt/kibana-3.0.1/* /var/www/kibana3/

# navigate to your nginx config directory
cd /etc/nginx/sites-available/

# fetch the sample nginx.conf
sudo wget https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf

# delete the old default file
sudo rm default

# rename the file
sudo mv nginx.conf default

# open the file for editing
sudo nano default

# update the following lines
server_name logs.domain.com; # your domain name or localhost
access_log            /var/log/nginx/kibana.access.log;
root /var/www/kibana3;

# install the htpasswd utility
sudo apt-get install apache2-utils

# generate a login for the kibana dashboard
sudo htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd user

# restart nginx
sudo service nginx restart

Logstash Revisited

Logstash Forwarder Installation

Now the final step is to set up a forwarder. A forwarder is just a server or VM instance with the logstash forwarder software installed, pointing at a logstash server. While the server is usually set up in one centralized location, forwarders are set up on every server that needs to be logged. So if you have four total servers, you'll want a logstash server set up on one, and a forwarder set up on all four. That one server will have both a logstash server and a logstash forwarder configured, so it can send logs to itself as well as collect logs from every other forwarder.

Note: You'll want to carry out the following steps for every forwarder you want to set up, so each forwarder will need its own dedicated IP address. In this article, we're installing both the logstash server and the forwarder on the same machine.

# from the logstash server machine, scp the certificate to the logstash forwarder
# in this case, the server ip is the same machine

# since we're on the same machine, do this
cp /etc/pki/tls/certs/logstash-forwarder.crt /tmp

# if you have a forwarder on a different machine, do this instead
scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_ip:/tmp

# from the logstash forwarder machine, compile the forwarder source list
echo 'deb http://packages.elasticsearch.org/logstashforwarder/debian stable main' | sudo tee /etc/apt/sources.list.d/logstashforwarder.list

# update
sudo apt-get update

# install
sudo apt-get install logstash-forwarder

# make a directory for the certificate
sudo mkdir -p /etc/pki/tls/certs

# copy the certificate over
sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

Configuring the Logstash Forwarder

Run sudo nano /etc/logstash-forwarder and update the following block with your logstash server's IP address to ship your syslog and auth logs. Logstash will be listening on port 5000. You can also set this to localhost if everything is running on the same machine.

{
  "network": {
    "servers": [ "logstash_server_ip:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/syslog",
        "/var/log/auth.log"
       ],
      "fields": { "type": "syslog" }
    }
   ]
}
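As a quick sanity check before restarting the forwarder, you can grep the config for the keys the forwarder requires. The snippet below writes a trimmed sample config to a temp file so it's self-contained; point conf at /etc/logstash-forwarder to check the real file instead.

```shell
# write a trimmed sample config to a temp file (illustration only)
conf=$(mktemp)
cat > "$conf" <<'EOF'
{ "network": { "servers": [ "localhost:5000" ],
               "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt" },
  "files": [ { "paths": [ "/var/log/syslog" ],
               "fields": { "type": "syslog" } } ] }
EOF

# flag any required key that is missing from the config
missing=0
for key in '"servers"' '"ssl ca"' '"paths"' '"type"'; do
  grep -q "$key" "$conf" || { echo "missing key: $key"; missing=1; }
done
echo "missing=$missing"
rm -f "$conf"
```

A missing "ssl ca" path in particular will leave the forwarder stuck in a connect/retry loop, so it's worth the thirty-second check.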

Now that we have the keys set up, let's check our config file again with /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf -t. Verify that you have the following output:

Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.5.0.beta1/plugin-milestones {:level=>:warn}
Using milestone 1 filter plugin 'syslog_pri'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin.  For more information on plugin milestones, see http://logstash.net/docs/1.5.0.beta1/plugin-milestones {:level=>:warn}
Configuration OK

You should also verify that you can hit the logstash server by running curl -i localhost:5000. If you get curl: (52) Empty reply from server, then it works.

Now run sudo service logstash-forwarder restart to restart the forwarder, and navigate to the domain you set nginx up with. You should see the following Kibana dashboard:

Read the instructions on this page. If everything was set up properly, you'll see a histogram with log data streaming in from one of your servers.

If for some reason Kibana ever stops visualizing your data, make sure to check your logstash logs, first, by running sudo tail -f /var/log/logstash/logstash.log. Also, don't forget to restart your logstash and logstash forwarders whenever you make changes to the config files by running sudo service logstash restart and sudo service logstash-forwarder restart. Lastly, remember that your elasticsearch service may need to be restarted, as well.

If you start to notice your CPU usage climbing, tail your logstash logs with sudo tail -f /var/log/logstash/logstash.log and look for the error The error reported is: \n Address already in use - bind - Address already in use. This is a result of logstash-web trying to bind to the same port as kibana. Just disable logstash-web by running echo manual | sudo tee /etc/init/logstash-web.override and reboot the server.

Pruning Your Elasticsearch Logs

Elasticsearch is set to rotate its logs daily, as configured in your /etc/elasticsearch/logging.yml file. You'll want to prune these logs so you don't run out of disk space. To limit the archive to 7 days' worth, update the file block with a maxBackupIndex like so:

  file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}.log
    datePattern: "'.'yyyy-MM-dd"
    maxBackupIndex: 7
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

25 comments:

  1. Hey Rob,

Thank you for the brilliant tutorial. But I'm having difficulty getting the logstash server to work. Here's the error I get:
    {:timestamp=>"2015-03-29T08:03:49.983000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
    {:timestamp=>"2015-03-29T08:03:49.990000+0000", :message=>"Exception in lumberjack input", :exception=>#, :level=>:error}

    1. Hi Karthik K. Thanks for the compliment!

      First run the following command and let me know if that points out any errors in your config file:
      /opt/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf -t

      Then check to see if your services are running:
      sudo service logstash status
      sudo service logstash-forwarder status

      If your logstash-forwarder isn’t running, check the log with:
      sudo tail -f /var/log/logstash-forwarder/logstash-forwarder.err

      If there’s nothing recent in there, try running in the foreground:
      sudo /opt/logstash-forwarder/bin/logstash-forwarder -config=/etc/logstash-forwarder

      You may be getting a “Failure connecting to 127.0.0.1: dial tcp 127.0.0.1:5000: connection refused” if it has problems connecting to the logstash server.

      You also want to verify that elasticsearch is running by curling it with:
      curl -i localhost:9200

      Also let me know if you're installing everything on the same server or if you have a logstash server on one machine and a logstash-forwarder on another.

      By the way, I did update the steps in this article with some steps that might help with diagnostics. I'm confident we'll be able to get to a resolution, soon. Let me know if this helps.

    2. Thanks for the reply Rob. Appreciate it. You were right. I keep getting the 'Failure connecting to...' message on the forwarder. And Elasticsearch is running just fine.

      I'm running Logstash server on EC2, and forwarder on my server at home.

    3. Ok I believe we need to regenerate your certificate and key to take into account the IP address of the EC2 instance. Give me 5 minutes to add the instructions in the next reply and then I will update the article if it solves your problem. Thanks for being patient. If we figure this out, it will help others as well.

    4. Ok go into your server and edit your openssl config:
      sudo nano /etc/ssl/openssl.cnf

      Go to the [ v3_ca ] section and add your EC2 instance's IP address:
      subjectAltName = IP: ec2.instance.ip.address

      Generate the keys again:
      cd /etc/pki/tls; sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

      Copy the key to your home server
      scp /etc/pki/tls/certs/logstash-forwarder.crt user@my.home.server.ip:/tmp

      Restart all of your services. Tell me if this works. If it doesn't, we'll try something else.

    5. Hey Rob,
      That's so nice of you. My server went down, and I'm trying to bring that up right now. Will keep you posted. I have a feeling that this method would work just fine. Do you think its because I have a domain name mapped to the server? Like logs.abcdef.com ?

    6. I believe you should still be ok. My production instance also has a domain name mapped to the server. It's even easier if you do because you can just generate your keys like this:
      cd /etc/pki/tls; sudo openssl req -subj '/CN=logs.abcdef.com' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

    7. I tried even that. But I think the issue is with logstash server itself. Doing the config test says that the configuration is OK. But when I restart the logstash server, I still get the error message saying SIGTERM received, since there is an error with the lumberjack input. :/

    8. Ok another question. When you start the logstash server, does it stay running? What is the output of:
      sudo service logstash status

    9. It's running just fine. Which is why I'm not able to understand what's going wrong. This is not my first try though. Should I change something in the nginx conf for it to listen on port 5000?

    10. Hmm... Good idea, but I'm looking at the nginx for my server and I'm not proxying anything to 5000 so you shouldn't have to. Let me try something else real quick. Be back in 10 minutes.

    11. Thanks Rob! :) Will wait for your inputs.

    12. Hm we should go back to when we noticed the error with the logstash-forwarder not connecting to the server. I did notice that my logstash server stayed up and running in my sandbox, but the logstash-forwarder wasn't. So I followed this thread: https://github.com/elastic/logstash-forwarder/issues/221 and came to the conclusion that I needed to generate the keys this way:
      cd /etc/pki/tls; sudo openssl req -subj '/CN=*/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

      Then, I ran the forwarder in the foreground again with this command:
      sudo /opt/logstash-forwarder/bin/logstash-forwarder -config=/etc/logstash-forwarder

      And I got the following output:
      2015/03/29 20:31:27.407981 Waiting for 1 prospectors to initialise
      2015/03/29 20:31:27.408276 Launching harvester on new file: /var/log/syslog
      2015/03/29 20:31:27.408483 Launching harvester on new file: /var/log/auth.log
      2015/03/29 20:31:27.408532 harvest: "/var/log/syslog" (offset snapshot:0)
      2015/03/29 20:31:27.408861 harvest: "/var/log/auth.log" (offset snapshot:0)
      2015/03/29 20:31:27.408993 All prospectors initialised with 0 states to persist
      2015/03/29 20:31:27.409081 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
      2015/03/29 20:31:27.410034 Connecting to [127.0.0.1]:5000 (localhost)
      2015/03/29 20:31:27.411130 Failure connecting to 127.0.0.1: dial tcp 127.0.0.1:5000: connection refused
      2015/03/29 20:31:28.412213 Connecting to [127.0.0.1]:5000 (localhost)
      2015/03/29 20:31:28.412960 Failure connecting to 127.0.0.1: dial tcp 127.0.0.1:5000: connection refused
      2015/03/29 20:31:29.415183 Connecting to [127.0.0.1]:5000 (localhost)
      2015/03/29 20:31:29.416770 Failure connecting to 127.0.0.1: dial tcp 127.0.0.1:5000: connection refused
      2015/03/29 20:31:30.417390 Connecting to [127.0.0.1]:5000 (localhost)
      2015/03/29 20:31:30.417755 Failure connecting to 127.0.0.1: dial tcp 127.0.0.1:5000: connection refused
      2015/03/29 20:31:31.419842 Connecting to [127.0.0.1]:5000 (localhost)
      2015/03/29 20:31:31.420340 Failure connecting to 127.0.0.1: dial tcp 127.0.0.1:5000: connection refused
      2015/03/29 20:31:32.421117 Connecting to [127.0.0.1]:5000 (localhost)
      2015/03/29 20:31:33.189479 Connected to 127.0.0.1

      So it finally did connect in the end. Let me know if this helps.

    13. Also let me know how much RAM and cores have been dedicated to your ec2 instance.

    14. Hey Rob,

      I'm on a t2.micro on EC2. I tried creating a new cert and copying it to the forwarder. Still no luck. The connection is still failing.

    15. Ok. It has gotten to a whole new level now. I tried installing logstash-forwarder on the same machine as the server. But this is what happens now in the logstash-forwarder.err log:

      panic: runtime error: integer divide by zero
      [signal 0x8 code=0x1 addr=0x40b33c pc=0x40b33c]

      goroutine 10 [running]:
      main.connect(0xc208010150, 0xc208078000)
      /home/jenkins/workspace/logstash-forwarder/publisher1.go:169 +0xa2c
      main.Publishv1(0xc208050120, 0xc208050180, 0xc208010150)
      /home/jenkins/workspace/logstash-forwarder/publisher1.go:41 +0x96
      created by main.main
      /home/jenkins/workspace/logstash-forwarder/logstash-forwarder.go:208 +0x1175

      goroutine 1 [chan receive]:
      main.Registrar(0xc20800a840, 0xc208050180)
      /home/jenkins/workspace/logstash-forwarder/registrar.go:9 +0x7d
      main.main()
      /home/jenkins/workspace/logstash-forwarder/logstash-forwarder.go:211 +0x1195

      goroutine 5 [sleep]:
      main.(*Prospector).Prospect(0xc208050240, 0xc20802a5a0, 0xc2080500c0)
      /home/jenkins/workspace/logstash-forwarder/prospector.go:67 +0x6c6
      created by main.main
      /home/jenkins/workspace/logstash-forwarder/logstash-forwarder.go:182 +0xe26

      goroutine 6 [syscall]:
      syscall.Syscall(0x0, 0x3, 0xc20807a000, 0x4000, 0x1, 0x1e, 0x0)
      /usr/local/go/src/syscall/asm_linux_amd64.s:21 +0x5
      syscall.read(0x3, 0xc20807a000, 0x4000, 0x4000, 0x68db88, 0x0, 0x0)
      /usr/local/go/src/syscall/zsyscall_linux_amd64.go:867 +0x6e
      syscall.Read(0x3, 0xc20807a000, 0x4000, 0x4000, 0x0, 0x0, 0x0)
      /usr/local/go/src/syscall/syscall_unix.go:136 +0x58
      os.(*File).read(0xc208036028, 0xc20807a000, 0x4000, 0x4000, 0x7f516aa3b010, 0x0, 0x0)
      /usr/local/go/src/os/file_unix.go:191 +0x5e
      os.(*File).Read(0xc208036028, 0xc20807a000, 0x4000, 0x4000, 0x7f516aa3b010, 0x0, 0x0)
      /usr/local/go/src/os/file.go:95 +0x91
      bufio.(*Reader).fill(0xc2080503c0)
      /usr/local/go/src/bufio/bufio.go:97 +0x1ce
      bufio.(*Reader).ReadSlice(0xc2080503c0, 0xc20805fc0a, 0x0, 0x0, 0x0, 0x0, 0x0)
      /usr/local/go/src/bufio/bufio.go:295 +0x257
      bufio.(*Reader).ReadBytes(0xc2080503c0, 0x379c880a, 0x0, 0x0, 0x0, 0x0, 0x0)
      /usr/local/go/src/bufio/bufio.go:374 +0xd2
      main.(*Harvester).readline(0xc208050300, 0xc2080503c0, 0xc208078150, 0x2540be400, 0x2, 0x0, 0x0, 0x0)
      /home/jenkins/workspace/logstash-forwarder/harvester.go:133 +0x8a
      main.(*Harvester).Harvest(0xc208050300, 0xc2080500c0)
      /home/jenkins/workspace/logstash-forwarder/harvester.go:53 +0x586
      created by main.(*Prospector).scan
      /home/jenkins/workspace/logstash-forwarder/prospector.go:164 +0x1119

      goroutine 7 [syscall]:
      syscall.Syscall(0x0, 0x4, 0xc208086000, 0x4000, 0x1, 0x1e, 0x0)
      /usr/local/go/src/syscall/asm_linux_amd64.s:21 +0x5
      syscall.read(0x4, 0xc208086000, 0x4000, 0x4000, 0x0, 0x0, 0x0)
      /usr/local/go/src/syscall/zsyscall_linux_amd64.go:867 +0x6e
      syscall.Read(0x4, 0xc208086000, 0x4000, 0x4000, 0x0, 0x0, 0x0)
      /usr/local/go/src/syscall/syscall_unix.go:136 +0x58
      os.(*File).read(0xc208036030, 0xc208086000, 0x4000, 0x4000, 0x7f516aa3b010, 0x0, 0x0)
      /usr/local/go/src/os/file_unix.go:191 +0x5e
      os.(*File).Read(0xc208036030, 0xc208086000, 0x4000, 0x4000, 0x7f516aa3b010, 0x0, 0x0)
      /usr/local/go/src/os/file.go:95 +0x91
      bufio.(*Reader).fill(0xc208050420)
      /usr/local/go/src/bufio/bufio.go:97 +0x1ce
      bufio.(*Reader).ReadSlice(0xc208050420, 0xc208060c0a, 0x0, 0x0, 0x0, 0x0, 0x0)
      /usr/local/go/src/bufio/bufio.go:295 +0x257
      bufio.(*Reader).ReadBytes(0xc208050420, 0x37a36f0a, 0x0, 0x0, 0x0, 0x0, 0x0)
      /usr/local/go/src/bufio/bufio.go:374 +0xd2
      main.(*Harvester).readline(0xc208050360, 0xc208050420, 0xc2080781c0, 0x2540be400, 0x2, 0x0, 0x0, 0x0)
      /home/jenkins/workspace/logstash-forwarder/harvester.go:133 +0x8a
      main.(*Harvester).Harvest(0xc208050360, 0xc2080500c0)
      /home/jenkins/workspace/logstash-forwarder/harvester.go:53 +0x586
      created by main.(*Prospector).scan
      /home/jenkins/workspace/logstash-forwarder/prospector.go:164 +0x1119

      goroutine 9 [select]:
      main.Spool(0xc2080500c0, 0xc208050120, 0x400, 0x12a05f200)
      /home/jenkins/workspace/logstash-forwarder/spooler.go:26 +0x6c6
      created by main.main
      /home/jenkins/workspace/logstash-forwarder/logstash-forwarder.go:206 +0x113e

    16. Haha wow. Maybe we're both running two different versions? A lot of these problems with provisioning environments and dependencies is what Docker aims to solve. Here's what we should do. I'm going to try to reproduce both of your bugs- this new one and the EC2 instance one by setting up my own EC2. Until then, please try using my pre-built Docker containers here: http://www.roblayton.com/2015/03/docker-setup-for-elasticsearch-logstash.html

      I feel like that should be much easier, but the bad news is that you have to learn Docker. Might as well try it until I can help solve your problem. Give me a few days to reproduce the errors.

  2. I don't mind learning Docker. Heard too many good reviews about it. But just wanted to check if there's any way to use the existing RabbitMQ to forward the logs to logstash server. Like I have Sensu, Graphite and Grafana setup already using that Architecture.

    1. Ok I set time aside tonight to experiment with a local forwarder and a server running on an EC2 instance. Don't forget to set your security group in the console to open up port 9200 or whatever port the logstash server is running on. You should get the following output after running sudo /opt/logstash-forwarder/bin/logstash-forwarder -config=/etc/logstash-forwarder

      2015/04/02 01:12:07.958104 Registrar: processing 1024 events
      2015/04/02 01:12:09.183028 Registrar: processing 140 events
      2015/04/02 01:15:34.441897 Registrar: processing 3 events
      2015/04/02 01:17:01.927232 Registrar: processing 3 events
      2015/04/02 01:18:34.415224 Registrar: processing 3 events

    2. Hey Rob,

      Thanks for the help. Like I'm not able to pass the logs from within the logstash server itself. I installed the forwarder on the server machine, and did the same configuration and it worked. The issue is only happening when forwarding from other clients.

    3. Damn it. I am still with the error:

      2015/04/02 13:21:20.721374 Setting trusted CA from file: /etc/pki/tls/certs/logstash-forwarder.crt
      2015/04/02 13:21:20.721832 Connecting to [54.148.32.152]:5000 (54.148.32.152)
      2015/04/02 13:21:35.722226 Failure connecting to 54.148.32.152: dial tcp 54.148.32.152:5000: i/o timeout
      2015/04/02 13:21:36.722641 Connecting to [54.148.32.152]:5000 (54.148.32.152)
      2015/04/02 13:21:51.723054 Failure connecting to 54.148.32.152: dial tcp 54.148.32.152:5000: i/o timeout
      2015/04/02 13:21:52.723418 Connecting to [54.148.32.152]:5000 (54.148.32.152)
      2015/04/02 13:22:07.723690 Failure connecting to 54.148.32.152: dial tcp 54.148.32.152:5000: i/o timeout
      2015/04/02 13:22:08.724007 Connecting to [54.148.32.152]:5000 (54.148.32.152)
      2015/04/02 13:22:23.724401 Failure connecting to 54.148.32.152: dial tcp 54.148.32.152:5000: i/o timeout
      2015/04/02 13:22:24.724754 Connecting to [54.148.32.152]:5000 (54.148.32.152)

    4. I was finally able to solve the issue. For some reason, even now, I'm not able to listen at port 5000. I had to move to 5001 or anything other than 5000 to start receiving logs. :) Really appreciate your help! :)

    5. Wow, nice work. I'm really glad that worked out.

    6. Rob, i have few issues in setting up the logstash - forwarder. Can you please help me?
