Kibana - Log Aggregation Tool - Setup
Get Kibana from the GitHub repository:
sudo su
cd /srv
git clone --branch=kibana-ruby https://github.com/rashidkpc/Kibana.git
mv Kibana kibana
cd kibana
bundle install
ln -s static public # needed to run Kibana under nginx and Passenger
Kibana also needs Elasticsearch (don't install the newest Elasticsearch release - it isn't compatible with the Logstash version used here):
cd ~
wget https://github.com/downloads/elasticsearch/elasticsearch/elasticsearch-0.19.10.deb
dpkg -i elasticsearch-0.19.10.deb
rm elasticsearch-0.19.10.deb
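Note that Elasticsearch itself needs a JVM to run. The openjdk-7-jre package used later for Logstash works here as well, so if no Java runtime is installed yet, add it before starting Elasticsearch:
apt-get install openjdk-7-jre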
Elasticsearch configuration (file /etc/elasticsearch/elasticsearch.yml):
cluster.name: elasticsearch
node.name: "your_host_name"
node.master: true
node.data: true
# defaults for a multi-node cluster:
# index.number_of_shards: 5
# index.number_of_replicas: 1
# single-node setup used here:
index.number_of_shards: 1
index.number_of_replicas: 0
bootstrap.mlockall: true
network.host: 127.0.0.1
http.max_content_length: 1000mb
index.search.slowlog.level: TRACE
index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.query.info: 5s
index.search.slowlog.threshold.query.debug: 2s
index.search.slowlog.threshold.query.trace: 500ms
index.search.slowlog.threshold.fetch.warn: 1s
index.search.slowlog.threshold.fetch.info: 800ms
index.search.slowlog.threshold.fetch.debug: 500ms
index.search.slowlog.threshold.fetch.trace: 200ms
monitor.jvm.gc.ParNew.warn: 1000ms
monitor.jvm.gc.ParNew.info: 700ms
monitor.jvm.gc.ParNew.debug: 400ms
monitor.jvm.gc.ConcurrentMarkSweep.warn: 10s
monitor.jvm.gc.ConcurrentMarkSweep.info: 5s
monitor.jvm.gc.ConcurrentMarkSweep.debug: 2s
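After saving the configuration, restart Elasticsearch (the .deb package installs an init script) and check that it answers locally:
/etc/init.d/elasticsearch restart
curl http://127.0.0.1:9200
curl 'http://127.0.0.1:9200/_cluster/health?pretty=true'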
Kibana basic configuration (file /srv/kibana/KibanaConfig.rb):
module KibanaConfig
# A Note: While the only option you really have to set is "Elasticsearch", it
# is HIGHLY recommended you glance over every option. I personally consider
# 'Facet_index_limit' really important.
# Your Elasticsearch server(s). This may be set as an array for round-robin
# load balancing
# Elasticsearch = ["elasticsearch1:9200","elasticsearch2:9200"]
Elasticsearch = "localhost:9200"
# Set the Net::HTTP read/open timeouts for the connection to the ES backend
ElasticsearchTimeout = 500
# The port Kibana should listen on
KibanaPort = 8998
# The IP address Kibana should listen on. Comment out or set to
# 0.0.0.0 to listen on all interfaces.
KibanaHost = '0.0.0.0'
# The record type as defined in your logstash configuration.
# Separate multiple types with a comma, no spaces. Leave blank
# for all.
Type = ''
# Results to show per page
Per_page = 50
# Timezone. Leave this set to 'user' to have the user's browser autocorrect.
# Otherwise, set a timezone string
# Examples: 'UTC', 'America/Phoenix', 'Europe/Athens', MST
# You can use `date +%Z` on linux to get your timezone string
Timezone = 'user'
# Format for timestamps. Defaults to mm/dd HH:MM:ss.
# For syntax see: http://blog.stevenlevithan.com/archives/date-time-format
# Time_format = 'isoDateTime'
Time_format = 'mm/dd HH:MM:ss'
# Change which fields are shown by default. Must be set as an array
# Default_fields = ['@fields.vhost','@fields.response','@fields.request']
Default_fields = ['@message']
# The default operator used if no explicit operator is specified.
# For example, with a default operator of OR, the query capital of
# Hungary is translated to capital OR of OR Hungary, and with default
# operator of AND, the same query is translated to capital AND of AND
# Hungary. The default value is OR.
Default_operator = 'OR'
# When using analyze, use this many of the most recent
# results for the user's query
Analyze_limit = 2000
# Show this many results in analyze/trend/terms/stats modes
Analyze_show = 25
# Show this many results in an rss feed
Rss_show = 25
# Show this many results in an exported file
Export_show = 2000
# Delimit exported file fields with what?
# You may want to change this to something like "\t" (tab) if you have
# commas in your logs
Export_delimiter = ","
# You may wish to insert a default search which all user searches
# must match. For example @source_host:www1 might only show results
# from www1.
Filter = ''
# When searching, Kibana will attempt to only search indices
# that match your timeframe, to make searches faster. You can
# turn this behavior off if you use something other than daily
# indexing
Smart_index = true
# You can define your custom pattern here for index names if you
# use something other than daily indexing. Pattern needs to have
# date formatting like '%Y.%m.%d'. Will accept an array of smart
# indexes.
# Smart_index_pattern = ['logstash-web-%Y.%m.%d', 'logstash-mail-%Y.%m.%d']
Smart_index_pattern = 'logstash-%Y.%m.%d'
# Number of seconds between each index. 86400 = 1 day.
Smart_index_step = 86400
# Elasticsearch has a default limit on URL size for REST calls,
# so Kibana will fall back to _all if a search spans too many
# indices. Use this to set that 'too many' number. By default this
# is set quite high; ES might not like this
Smart_index_limit = 150
# Elasticsearch has an internal mechanism called "faceting" for performing
# analysis that we use for the "Stats" and "Terms" modes. However, on large
# data sets/queries faceting can cause ES to crash if there isn't enough
# memory available. It is suggested that you limit the number of indices that
# Kibana will use for the "Stats" and "Terms" modes to prevent ES crashes. For very
# large data sets and undersized ES clusters, a limit of 1 is not unreasonable.
# Default is 0 (unlimited)
Facet_index_limit = 0
# You probably don't want to touch anything below this line
# unless you really know what you're doing
# Primary field. By default Elasticsearch has a special
# field called _all that is searched when no field is specified.
# Dropping _all can reduce index size significantly. If you do that
# you'll need to change primary_field to be '@message'
Primary_field = '_all'
# Default Elasticsearch index to query
Default_index = '_all'
# TODO: This isn't functional yet
# Prevent wildcard search terms which result in extremely slow queries
# See: http://www.elasticsearch.org/guide/reference/query-dsl/wildcard-query.html
Disable_fullscan = false
# Set headers to allow kibana to be loaded in an iframe from a different origin.
Allow_iframed = false
# Use this interval as fallback.
Fallback_interval = 900
end
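Before putting Kibana behind nginx, you can make sure it runs on its own. A quick smoke test (assuming the Sinatra entry point in the checkout is kibana.rb; adjust if yours differs):
cd /srv/kibana
bundle exec ruby kibana.rb &   # listens on KibanaPort (8998)
curl -I http://127.0.0.1:8998/
kill %1                        # stop the test instance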
Install Logstash
sudo su
apt-get install openjdk-7-jre
mkdir /etc/logstash
cd /etc/logstash
wget https://logstash.objects.dreamhost.com/release/logstash-1.1.5-monolithic.jar -O logstash.jar
mkdir /var/log/logstash
Configure Logstash (file /etc/logstash/logstash.conf):
input {
  file {
    type => "nginx_web"
    path => [ "/var/log/nginx/*" ]
    exclude => [ "*.gz" ]
    sincedb_path => "$HOME/.sincedb"
  }
}

input {
  file {
    type => "rails_app"
    path => [ "/var/log/rails_app*.log" ]
    exclude => [ "*.gz" ]
    sincedb_path => "$HOME/.sincedb"
  }
}

output {
  elasticsearch {
    host => "127.0.0.1"
    port => 9300
  }
}
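You can try this configuration in the foreground first, using the same command the init script below will run; stop it with Ctrl-C when done:
java -jar /etc/logstash/logstash.jar agent -f /etc/logstash/logstash.conf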
Init script for Logstash (file /etc/init.d/logstash):
#! /bin/sh
### BEGIN INIT INFO
# Provides: logstash-shipper
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start daemon at boot time
# Description: Enable service provided by daemon.
### END INIT INFO
. /lib/lsb/init-functions
name="logstash"
# the "--" ends start-stop-daemon's own options below; everything after it is passed to java
logstash_bin="/usr/bin/java -- -jar /etc/logstash/logstash.jar"
logstash_conf="/etc/logstash/logstash.conf"
logstash_log="/var/log/logstash/logstash.log"
pid_file="/var/run/$name.pid"
start () {
  command="${logstash_bin} agent -f $logstash_conf --log ${logstash_log}"
  log_daemon_msg "Starting $name"
  if start-stop-daemon --start --quiet --oknodo --pidfile "$pid_file" -b -m --exec $command; then
    log_end_msg 0
  else
    log_end_msg 1
  fi
}

stop () {
  start-stop-daemon --stop --quiet --oknodo --pidfile "$pid_file"
}

status () {
  status_of_proc -p $pid_file "" "$name"
}
case $1 in
  start)
    if status; then exit 0; fi
    start
    ;;
  stop)
    stop
    ;;
  restart|reload)
    stop
    start
    ;;
  status)
    status && exit 0 || exit $?
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|reload|status}"
    exit 1
    ;;
esac
exit 0
You also have to make the script executable and then start the Logstash process:
$ chmod +x /etc/init.d/logstash
$ /etc/init.d/logstash start
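To have Logstash come up at boot as well, register the init script with the default runlevels (Debian/Ubuntu). Once it has been running for a while, you can check that events actually reach Elasticsearch by querying today's daily index directly (a _search with no query defaults to match_all):
update-rc.d logstash defaults
curl "http://127.0.0.1:9200/logstash-$(date +%Y.%m.%d)/_search?pretty=true"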
Nginx configuration for Kibana (file /etc/nginx/nginx.conf):
worker_processes 2;

events {
  worker_connections 1024;
}

http {
  passenger_root /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.18;
  passenger_ruby /usr/local/bin/ruby;

  include mime.types;
  default_type application/octet-stream;

  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for" '
                  '$ssl_cipher $request_time $host';

  sendfile on;
  keepalive_timeout 65;
  gzip on;

  server {
    listen 80;
    server_name kibana;
    root /srv/kibana/public;
    passenger_enabled on;
    rack_env production;
  }
}
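Check the configuration and (re)start nginx afterwards; adjust the command to however nginx is managed on your machine:
nginx -t
nginx -s reload   # or: service nginx restart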
Set up the logger for your Rails application:
* install SyslogLogger, just add the following line to your Gemfile:
gem "SyslogLogger", "~> 2.0", :require => 'syslog/logger'
* set up the logger in your application, in config/environments/production.rb:
config.logger = Syslog::Logger.new "yourappname_#{Rails.env}"
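Note that the Logstash input above tails /var/log/rails_app*.log, while Syslog::Logger writes to the system syslog, so those messages still have to be routed into such a file. One way to do it, assuming rsyslog is the system logger (the file name and the "yourappname_" tag below are just examples matching this setup), is a small rule keyed on the program name:
cat > /etc/rsyslog.d/30-rails-app.conf <<'EOF'
# send everything whose program name starts with "yourappname_" to the
# file watched by Logstash, then stop processing those messages
:programname, startswith, "yourappname_" /var/log/rails_app.log
& ~
EOF
service rsyslog restart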
Enjoy!
Written by Bartłomiej Danek