Slick log aggregation with Kibana
Log aggregation is awesome-cool, you already know this. There’s no shortage of solutions for doing it, and you’ve probably heard of a few like Splunk, Graylog, Logstash and Scribe. There are plenty more that integrate and gather data in various ways, and you can always roll your own.
Having all your logs in one place makes it much easier to spot patterns and perform analysis. This is particularly useful when you have a cluster of machines doing roughly the same thing, as it makes anomalies more obvious and less prone to noise from false positives.
One of our devs was itching for a frontend tie-in to the service-oriented architecture stuff that we’re building at the moment, particularly one that could be easily hacked up and modified, so he went looking and came back with Kibana.
Kibana is interesting because it’s not too tightly coupled to the collection and aggregation framework, which in this case is Logstash for collection and ElasticSearch for indexing and search (check out their suggested infrastructure diagram to see how it can fit into a much larger ecosystem). We’ve got some pretty solid knowledge of ElasticSearch these days, so it’s a fine match for us.
We’ve only been running it for the last week, but we reckon Kibana is definitely worth checking out, especially if you’re already using Logstash. It’s just so dead simple to set up and easy to use, there’s no excuse not to.
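To give a feel for how little setup is involved, here’s a sketch of a minimal Logstash configuration that tails a log file and ships entries to ElasticSearch for Kibana to read. The file path, type name and host are placeholders, not our actual setup:

```
# Hypothetical minimal Logstash config: tail one log file,
# tag each entry with a type, and index it into ElasticSearch.
input {
  file {
    path => "/var/log/app/api.log"   # placeholder path
    type => "api-hits"               # placeholder type label
  }
}
output {
  elasticsearch {
    host => "localhost"              # your ElasticSearch node
  }
}
```

Once entries are flowing into ElasticSearch, Kibana just points at the same index and everything is searchable.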
At the moment we’re only collecting hits against the service APIs that we’re developing, but there’s nothing stopping us from throwing the entire system-wide logs from each server at Logstash. Log entries get categorised according to their pedigree, so we can be drilling down into network errors one moment, then jump into a customer’s PHP error logs the next. Very smooth.
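That categorisation comes down to tagging each input with a type when Logstash picks it up. A sketch of what that might look like, with placeholder paths and type names:

```
# Hypothetical inputs, each tagged by pedigree so Kibana can
# filter on the type field later.
input {
  file {
    path => "/var/log/syslog"                 # placeholder path
    type => "syslog"
  }
  file {
    path => "/var/www/logs/php_errors.log"    # placeholder path
    type => "php-error"
  }
}
```

In Kibana you can then narrow a search to one source with a query like type:"php-error", which is exactly the network-errors-one-moment, PHP-logs-the-next drill-down described above.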