
Log Management and Analytics

A Quick Guide to Logging Basics


Logging is in our DNA

If you are reading this booklet then there is a good chance that you
are looking to replace Splunk with Elasticsearch, Logstash, and
Kibana (ELK) or an alternative logging stack. If so, then you’ve come
to the right place.
Complementary to our Search and Big Data consulting services and
Logsene -- our Cloud / On Premises log management and analytics
solution -- Sematext provides Logging Consulting services. We have
deep expertise not only in Elasticsearch (the “E” in ELK), but also with
a number of open-source logging tools, such as:
• Logstash - the "L" in ELK
• Rsyslog
• Kibana - the "K" in ELK
• Flume
• Fluentd
Our consulting customers range from large, multinational household
name companies, governmental and financial institutions, to smaller
companies and startups around the world.



Here's what's inside:

• 5-Minute Logstash: Parsing and Sending a Log File
Read it online: http://wp.me/pwdA7-Rp
• Encrypting Logs on Their Way to Elasticsearch (Part 1)
Read it online: http://wp.me/pwdA7-VF
• Encrypting Logs on Their Way to Elasticsearch (Part 2)
Read it online: http://wp.me/pwdA7-VY
• Recipe: rsyslog + Elasticsearch + Kibana
Read it online: http://wp.me/pwdA7-MJ
• Structured Logging with rsyslog and Elasticsearch
Read it online: http://wp.me/pwdA7-Gq



5-Minute Logstash: Parsing and Sending a Log File

We like Logstash a lot at Sematext, because it’s a good (if not the) Swiss
Army knife for logs. Plus, it’s one of the easiest logging tools to get started
with, which is exactly what this post is about. In less than 5 minutes, you’ll
learn how to send logs from a file, parse them to extract metrics, and send
them to Logsene, our logging SaaS.
NOTE: Because Logsene exposes the Elasticsearch API, the same steps
will work if you have a local Elasticsearch cluster.
NOTE: If this sort of stuff excites you, we are hiring worldwide for
positions ranging from devops and core product engineering to marketing
and sales.
Overview
As an example, we’ll take an Apache log written in the combined logging
format. Your Logstash configuration would be made up of three parts:

• a file input, which will tail the log file
• a grok filter, which will parse its contents into a structured event
• an elasticsearch output, which will send your logs to Logsene via HTTP, so
you can use Kibana or the native Logsene UI to explore those logs. For
example, with Kibana you can make a pie chart of response codes.

The Input
The first part of your configuration file is about inputs. Inputs
are the Logstash modules responsible for ingesting data. You can use the file
input to tail your files. There are a lot of options for this input; read
the documentation for help. For now, let’s assume you want to send the
existing contents of that file, in addition to the new content. To do that,
you’d set the start_position to beginning. Here’s what the whole input
configuration will look like:

input {
  file {
    path => "/var/log/apache.log"
    type => "apache-access"  # a type to identify those logs (will need this later)
    start_position => "beginning"
  }
}

The Filter
Filters are modules that can take your raw data and try to make sense of it.
Logstash has lots of such plugins, and one of the most useful is grok. Grok
makes it easy for you to parse logs with regular expressions, by assigning
labels to commonly used patterns. One such pattern is
COMBINEDAPACHELOG, which is exactly what we need:

filter {
  if [type] == "apache-access" {  # this is where we use the type from the input section
    grok {
      match => [ "message", "%{COMBINEDAPACHELOG}" ]
    }
  }
}

If you need to use more complicated grok patterns, we suggest trying the
grok debugger.
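
For instance, if your logs don't match COMBINEDAPACHELOG, you could build a
custom expression in the debugger first. A hypothetical filter for a simpler,
space-separated log (the field names here are made up for illustration) might
look roughly like this:

filter {
  grok {
    match => [ "message", "%{IP:client_ip} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:duration}" ]
  }
}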

The Output
To send logs to Logsene (or your own Elasticsearch cluster) via HTTP, you
can use the elasticsearch output. You’ll need to specify that you want the
HTTP protocol, plus the host and port of an Elasticsearch server.

For Logsene, those would be logsene-receiver.sematext.com and port 80.


Another Logsene-specific requirement is to specify the access token for
your Logsene app as the Elasticsearch index. You can find that token in
your Sematext account, under Services -> Logsene.
The complete output configuration would be:

output {
  elasticsearch {
    host => "logsene-receiver.sematext.com"
    port => 80
    index => "your Logsene app token goes here"
    protocol => "http"
    manage_template => false
  }
}

Wrapping Up
To start sending your logs, you’d have to download Logstash and put the
three configuration snippets above in a file (let’s say,
/etc/logstash/conf.d/logstash.conf). Then start Logstash. Once your logs
are in, you can start exploring your data by using Kibana or the native
Logsene UI.
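
As a minimal sketch of that last step, assuming you've unpacked a recent
Logstash tarball in the current directory and saved the three snippets above
as /etc/logstash/conf.d/logstash.conf:

# start Logstash pointing at the configuration file
bin/logstash -f /etc/logstash/conf.d/logstash.conf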

Encrypting Logs on Their Way to Elasticsearch -- Part 1
Let’s assume you want to send your logs to Elasticsearch, so you can
search or analyze them in realtime. If your Elasticsearch cluster is in a
remote location (EC2?) or is our log analytics service, Logsene (which
exposes the Elasticsearch API), you might need to forward your data over
an encrypted channel.
There’s more than one way to forward over SSL, and this post is part 1 of a
series explaining how.
Today’s method is about sending data over HTTPS to Elasticsearch (or
Logsene), instead of plain HTTP. You’ll need two pieces to achieve this:
1. a tool that can send logs over HTTPS
2. the Elasticsearch REST API exposed over HTTPS
You can build your own tool or use existing ones. In today's method we’ll
show you how to use rsyslog’s Elasticsearch output to do that. For the API,
you can use Nginx or Apache as a reverse proxy for HTTPS in front of your
Elasticsearch, or you can use Logsene’s HTTPS endpoint.

Rsyslog Configuration
To get rsyslog’s omelasticsearch plugin, you need at least version 6.6.
HTTPS support was just added to master, and it’s expected to land in
version 8.2.0. Once that is out, you’ll be able to use the Ubuntu, Debian or
RHEL/CentOS packages to install both the base rsyslog and the rsyslog-
elasticsearch packages you need. Otherwise, you can always install from
sources:
• clone from the rsyslog github repository
• run `autogen.sh --enable-elasticsearch && make && make install`
(depending on your system, it might ask for some dependencies)
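
Spelled out, that source install might look roughly like this, assuming a
Debian-like system with the usual build tools already installed (package
names and prerequisites vary by distro):

# fetch the sources and build with the Elasticsearch output enabled
git clone https://github.com/rsyslog/rsyslog.git
cd rsyslog
./autogen.sh --enable-elasticsearch
make
sudo make install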
With omelasticsearch in place (the om part comes from output module, if
you’re wondering about the weird name), you can try the configuration
below to take all your logs from your local /dev/log and forward them to
Elasticsearch/Logsene:
# load needed input and output modules
module(load="imuxsock.so")        # listen to /dev/log
module(load="omelasticsearch.so") # provides Elasticsearch output capability

# template that will build a JSON out of syslog properties.
# Resulting JSON will be in Logstash format
# so it plays nicely with Logsene and Kibana
template(name="plain-syslog"
         type="list") {
  constant(value="{")
  constant(value="\"@timestamp\":\"")
  property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"host\":\"")
  property(name="hostname")
  constant(value="\",\"severity\":\"")
  property(name="syslogseverity-text")
  constant(value="\",\"facility\":\"")
  property(name="syslogfacility-text")
  constant(value="\",\"syslogtag\":\"")
  property(name="syslogtag" format="json")
  constant(value="\",\"message\":\"")
  property(name="msg" format="json")
  constant(value="\"}")
}

# send resulting JSON documents to Elasticsearch
action(type="omelasticsearch"
       template="plain-syslog"
       # Elasticsearch index (or Logsene token)
       searchIndex="YOUR-LOGSENE-TOKEN-GOES-HERE"
       # bulk requests
       bulkmode="on"
       queue.dequeuebatchsize="100"
       # buffer and retry indefinitely if Elasticsearch is unreachable
       action.resumeretrycount="-1"
       # Elasticsearch/Logsene endpoint
       server="logsene-receiver.sematext.com"
       serverport="443"
       usehttps="on")

Exploring Your Data
After restarting rsyslog, you should be able to see your logs flowing into the
Logsene UI, where you can search and graph them.

Wrapping Up
If you’re using Logsene, all you need to do is to make sure you add your
Logsene application token as the Elasticsearch index name in rsyslog’s
configuration.
If you’re running your own Elasticsearch cluster, there are some nice
tutorials about setting up reverse HTTPS proxies with Nginx and Apache
respectively. You can also try Elasticsearch plugins that support HTTPS,
such as the jetty and security plugins.
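
As a rough illustration of the reverse proxy approach, here is a minimal
Nginx server block that terminates HTTPS in front of a local Elasticsearch
node (the server name and certificate paths are placeholders):

server {
    listen 443 ssl;
    server_name es.example.com;
    ssl_certificate     /etc/nginx/ssl/es.crt;
    ssl_certificate_key /etc/nginx/ssl/es.key;

    location / {
        # forward decrypted traffic to Elasticsearch on the default HTTP port
        proxy_pass http://localhost:9200;
    }
}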
Feel free to contact us if you need any help. We’d be happy to answer any
Logsene questions you may have, as well as help you with your local setup
through professional services and production support. If you just find this
stuff exciting, you may want to join us, wherever you are.
Continue reading for part 2, which will show you how to use RFC-5425 TLS
syslog to encrypt your messages from one syslog daemon to the other.

Encrypting Logs on Their Way to Elasticsearch --
Part 2: TLS Syslog

In part 1 of the “encrypted logs” series we discussed sending logs to
Elasticsearch over HTTPS. This second part is about TLS syslog.
If you wonder what this has to do with Elasticsearch, the point is that TLS
syslog is a standard (RFC-5425): any decent version of rsyslog, syslog-ng
or nxlog works with it. So you can forward logs over TLS to a recent,
“intermediary” rsyslog. Then, you can either use omelasticsearch with
HTTPS to ship your logs to Elasticsearch, or you can install rsyslog on an
Elasticsearch node (and index logs over HTTP to localhost).
Such a setup will give you the following benefits:
• it will work with most syslog daemons, because TLS syslog is so
widely supported
• the “intermediate” rsyslog can act as a buffer, taking that
pressure off your application servers
• the “intermediate” rsyslog can be used for processing, like parsing
CEE-formatted JSON over syslog, again taking load off your
application servers
Our log analytics SaaS, Logsene, gives you all the benefits listed above
through its syslog endpoint.

Client Setup
Before you start, you’ll need a Certificate Authority’s public key, which will
be used to validate the encryption certificate from the syslog destination
(more about the server side later).
If you’re using Logsene, you can download the CA certificates directly. If
you’re on a local setup, or you just want to consolidate your logs before
shipping them to Logsene, you can use your own certificates or generate
self-signed ones.
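
If you go the self-signed route, here's one hedged way to generate a CA
certificate with OpenSSL (the file names and subject are just examples):

# create a self-signed CA certificate valid for one year
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca-key.pem -out ca.pem -days 365 \
  -subj "/CN=my-syslog-ca"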
With the CA certificate(s) in hand, you can start configuring your syslog
daemon.

# listens for local logs on /dev/log
module(load="imuxsock")

# use TLS driver when it comes to transporting over TCP
global( # global settings
  defaultNetstreamDriver="gtls"
  # CA certificate. Concatenate if you have more
  defaultNetstreamDriverCAFile="/opt/rsyslog/ca_bundle.pem"
)

# Forward them
action( # how to send logs
  type="omfwd"
  # to Logsene's syslog endpoint
  target="logsene-receiver-syslog.sematext.com"
  port="10514"    # on port X
  protocol="tcp"  # over TCP
  # using the RFC-5424 syslog format
  template="RSYSLOG_SyslogProtocol23Format"
  # via the TLS mode of the driver defined above.
  StreamDriverMode="1"
  # Request the machine certificate of the server
  StreamDriverAuthMode="x509/name"
  # and based on it, just allow Sematext hosts
  StreamDriverPermittedPeers="*.sematext.com"
)
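
After saving this configuration, a quick way to exercise the client side (the
service command assumes a typical Linux distro; the exact name may differ):

# restart rsyslog so it picks up the new configuration, then send a test message
sudo service rsyslog restart
logger "hello from TLS syslog"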

Server Setup
This is the new-style configuration format for rsyslog, which works with
version 6 or above. For the pre-v6 (BSD-style) format, check out the
Logsene documentation. You can also find the syslog-ng equivalent
there.

Explore
Once you start logging, the end result should be just like in part 1. You can
use Logsene’s hosted Kibana, your own Kibana or the Logsene UI to
explore your logs.

As always, feel free to contact us if you need any help:


• Logsene questions and feedback are always welcome
• if you need help with your local setup, we’d be glad to offer you
logging consulting and production support

Recipe: rsyslog + Elasticsearch + Kibana
In this post you’ll see how you can take your logs with rsyslog and ship
them directly to Elasticsearch (running on your own servers, or the one
behind the Logsene Elasticsearch API) in such a way that you can use Kibana
to search, analyze and make pretty graphs out of them.
This is especially useful when you have a lot of servers logging [a lot of
data] to their syslog daemons and you want a way to search them quickly
or do statistics on the logs. You can use rsyslog’s Elasticsearch output to
get your logs into Elasticsearch, and Kibana to visualize them. The only
challenge is to get your rsyslog configuration right, so your logs end up
where Kibana is expecting them. And this is exactly what we’re doing here.
Note: if this sort of stuff excites you, we are both hiring (from devops
and core product engineering to marketing and sales) and working on
Logsene – a log and data analytics product/service to complement SPM.
Getting all the Ingredients
Here’s what you’ll need:
• a recent version of rsyslog (v7+, if you ask me; the Elasticsearch
output is available since 6.4.0). You can download and compile it
yourself, or you can get it from the RHEL/CentOS or Ubuntu
repositories provided by the maintainers
• the Elasticsearch output plugin for rsyslog. If you compile rsyslog
from sources, you’ll need to add the --enable-elasticsearch
parameter to the configure script. If you use the repositories, just
install the rsyslog-elasticsearch package
• Elasticsearch :). You have a DEB and an RPM there, which should
get you started in no time. If you choose the tar.gz archive, you
might find the installation instructions useful
• Kibana 3 and a web server to serve it. There are installation
instructions on the GitHub page. To get started quickly, you can
just download and unpack the archive, then go into the “kibana”
directory:

wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.1.tar.gz
tar zxf kibana-3.1.1.tar.gz
cd kibana-3.1.1

Then, you’ll probably need to edit config.js to change the Elasticsearch host
name from “localhost” to the actual FQDN of the host that’s running
Elasticsearch. This applies even if Kibana is on the same machine as
Elasticsearch. “localhost” only works if your browser is on the same
machine as Elasticsearch, because Kibana talks to Elasticsearch directly
from your browser.
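
For illustration, the relevant line in Kibana 3's config.js would end up
looking something like this (the host name is a placeholder; 9200 is
Elasticsearch's default HTTP port):

elasticsearch: "http://es.example.com:9200",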
Finally, you can serve the Kibana page with any HTTP server you prefer. If
you want to get started quickly, you can try SimpleHTTPServer, which
comes bundled with Python 2, by running this command from
the “kibana” directory:

python -m SimpleHTTPServer

Putting them all together


Kibana, by default, expects Logstash to send logs to Elasticsearch. So
“putting them all together” here means “configuring rsyslog to send logs to
Elasticsearch in the same manner Logstash does”. And Logstash, by
default, has some particular conventions for naming indices and
formatting logs:
• indices should be formatted like logstash-YYYY.MM.DD. You can
change the pattern Kibana is looking for, but we won’t do that here
• logs must have a timestamp, and that timestamp must be stored in
the @timestamp field. It’s also nice to put the message part in the
message field – because Kibana shows it by default
To satisfy the requirements above, here’s a rsyslog configuration that
should work for sending your local syslog logs to Elasticsearch in a
Logstash/Kibana-friendly way:

module(load="imuxsock") # for listening to /dev/log
module(load="omelasticsearch") # for outputting to Elasticsearch
# this is for index names to be like: logstash-YYYY.MM.DD
template(name="logstash-index"
type="list") {
constant(value="logstash-")
property(name="timereported" dateFormat="rfc3339" position.from="1 " position.to="4")
constant(value=".")
property(name="timereported" dateFormat="rfc3339" position.from="6" position.to="7")
constant(value=".")
property(name="timereported" dateFormat="rfc3339" position.from="9" position.to="1 0")
}
# this is for formatting our syslog in JSON with @timestamp
template(name="plain-syslog"
type="list") {
constant(value="{")
constant(value="\"@timestamp\":\"") property(name="timereported" dateFormat="rfc3339")
constant(value="\",\"host\":\"") property(name="hostname")
constant(value="\",\"severity\":\"") property(name="syslogseverity-text")
constant(value="\",\"facility\":\"") property(name="syslogfacility-text")
constant(value="\",\"tag\":\"") property(name="syslogtag" format="json")
constant(value="\",\"message\":\"") property(name="msg" format="json")
constant(value="\"}")
}
# this is where we actually send the logs to Elasticsearch (localhost:9200 by default)
action(type="omelasticsearch"
template="plain-syslog"
searchIndex="logstash-index"
dynSearchIndex="on")

After restarting rsyslog, you can go to http://host-serving-Kibana:8000/ in
your browser and start searching and graphing your logs:

Digging into syslog with Kibana


Tips
Now that you’ve got the essentials working, here are some tips that might help
you go even further with your centralized logging setup:
• you might not want to put the new rsyslog and omelasticsearch on
all your servers. In this case you can forward logs over the
network to a central rsyslog that has omelasticsearch, and push
your logs to Elasticsearch from there (see the sketch after this list).
Some information on forwarding logs via TCP can be found on the
rsyslog website
• you might want rsyslog to buffer your logs (in memory, on disk, or
some combination of the two), in case Elasticsearch is not
available for some reason. Buffering will also help performance, as
you can send messages in bulk instead of one by one. There’s a
reference on buffers with rsyslog & omelasticsearch here
• you might want to parse JSON-formatted (CEE) syslog messages. If
you’re using them, check our earlier post on the subject
If you don’t want to worry about any of that, you might want to check out
Logsene. This is our new data & log analytics service, where you can just
send your syslog messages (CEE-formatted or not) and not worry about
running and maintaining a logging cluster in house. We’ll index them for
you, and provide a nice interface to search and graph those logs. We also
expose an Elasticsearch HTTP API, so Logsene plays nicely with Logstash,
rsyslog+omelasticsearch, Kibana, and virtually any other logging tool that
can send logs to Elasticsearch.
Structured Logging with Rsyslog and Elasticsearch
As more and more organizations are starting to use our Performance
Monitoring and Search Analytics services, we have more and more logs
from various components that make up these applications. So what do we
do? Do we just keep logging everything to files, rotating them, and
grepping them when we need to troubleshoot something?
There must be something better we can do! And indeed, there is – so
much so, that we built Logsene – a Log Analytics service to complement
SPM. When your applications generate a lot of logs, you’d probably want
to make some sense of them through search and/or statistics. This is where
structured logging comes in handy, and I would like to share some
thoughts and configuration examples of how you could use a popular
syslog daemon like rsyslog to handle both structured and unstructured
logs.
Then I’m going to look at how you can take those logs, format them in
JSON, and index them with Elasticsearch – for some fast and easy
searching and statistics.
On structured logging
If we take an unstructured log message, like:
Joe bought 2 apples
And compare it with a similar one in JSON, like:
{"name": "Joe", "action": "bought", "item": "apples", "quantity": 2}

We can immediately spot a couple of advantages and disadvantages of
structured logging: if we index these logs, it will be faster and more precise
to search for “apples” in the “item” field, rather than in the whole document.
At the same time, the structured log will take up more space than the
unstructured one.
But in most use cases there will be many applications that log the
same subset of fields. So if you want to search for the same user across
those applications, it’s nice to be able to pinpoint the “name” field

everywhere. And when you add statistics, like who’s the user buying most
of our apples, that’s when structured logging really becomes useful.
Finally, it helps to have a structure when it comes to maintenance. If a new
version of the application adds a new field, and your log becomes:

Joe bought 2 red apples

it might break some log-parsing, while structured logs rarely suffer from the
same problem.

Enter CEE & Lumberjack: structured logging within syslog


With syslog, as defined by RFC 3164, there is already a structure, in the
sense that there’s a priority value (facility*8 + severity; a local0 info
message, for example, has priority 16*8 + 6 = 134), a header (timestamp
and hostname) and a message. But this usually isn’t the structure we’re
looking for.
CEE and Lumberjack are efforts to introduce structured logging to syslog in
a backwards-compatible way. The process is quite simple: in the message
part of the log, one would start with a cookie string “@cee:”, followed by an
optional space and then a JSON or XML. From this point on I will talk about
JSON, since it’s the format that both rsyslog and Elasticsearch prefer.
Here’s a sample CEE-enhanced syslog message:

@cee: {"foo": "bar"}

This makes it quite easy to use CEE-enhanced syslog with existing syslog
libraries, although there are specific libraries like liblumberlog, which make
it even easier. They’ve also defined a list of standard fields, and applications
should use those fields where they’re applicable – so that you get the same
field names for all applications. But the schema is free, so you can add
custom fields at will.
CEE-enhanced syslog with rsyslog
rsyslog has a module named mmjsonparse for handling CEE-enhanced
syslog messages. It checks for the “CEE cookie” at the beginning of the
message, and then tries to parse the following JSON. If all is well, the fields
from that JSON are loaded and you can then use them in templates to
extract whatever information seems important. Fields from your JSON can
be accessed like this: $!field-name.
To get started, you need to have at least rsyslog version 6.6.0, and I’d
recommend using version 7 or higher. If you don’t already have that, check
out Adiscon’s repositories for RHEL/CentOS and Ubuntu.
Also, mmjsonparse is not enabled by default. If you use the repositories,
install the rsyslog-mmjsonparse package. If you compile rsyslog from
sources, specify --enable-mmjsonparse when you run the configure script.
In order for that to work you’d probably have to install libjson and
liblognorm first, depending on your operating system.
For a proof of concept, we can take this config:
#load needed modules
module(load="imuxsock")    # provides support for local system logging
module(load="imklog")      # provides kernel logging support
module(load="mmjsonparse") # for parsing CEE-enhanced syslog messages

#try to parse structured logs
action(type="mmjsonparse")

#define a template to print field "foo"
template(name="justFoo" type="list") {
  property(name="$!foo")
  constant(value="\n") # we'll separate logs with a newline
}

#and now let's write the contents of field "foo" in a file
action(type="omfile"
       template="justFoo"
       file="/var/log/foo")

To see things better, you can start rsyslog in the foreground and in debug
mode:
rsyslogd -dn

And in another terminal, you can send a structured log, then see the value
in your file:
# logger '@cee: {"foo":"bar"}'
# cat /var/log/foo
bar

If we send an unstructured log, or an invalid JSON, nothing will be added:

# logger 'test'
# logger '@cee: test2'
# cat /var/log/foo
bar

But you can see in the debug output of rsyslog why:

mmjsonparse: no JSON cookie: 'test'


[...]
mmjsonparse: toParse: ' test2'
mmjsonparse: Error parsing JSON ' test2': boolean expected

Indexing Logs in Elasticsearch


To index our logs in Elasticsearch, we will use an output module of rsyslog
called omelasticsearch.
Like mmjsonparse, it’s not compiled in by default, so you will have to add the
--enable-elasticsearch parameter to the configure script to get it built when
you run make. If you use the repositories, you can simply install the rsyslog-
elasticsearch package.
omelasticsearch expects a valid JSON from your template, to send it via
HTTP to Elasticsearch. You can select individual fields, like we did in the
previous scenario, but you can also select the JSON part of the message
via the $!all-json property. That would produce the message part of the log,
without the “CEE cookie”.
The configuration below should be good for inserting the syslog messages
into an Elasticsearch instance running on localhost:9200, under the index
“system” and type “events”. These are the default options, and you can
take a look at this tutorial if you need some info on changing them.

#load needed modules
module(load="imuxsock")        # provides support for local system logging
module(load="imklog")          # provides kernel logging support
module(load="mmjsonparse")     # for parsing CEE-enhanced syslog messages
module(load="omelasticsearch") # for indexing to Elasticsearch

#try to parse a structured log
action(type="mmjsonparse")

#define a template to print all fields of the message
template(name="messageToES" type="list") {
  property(name="$!all-json")
}

#write the JSON message to the local ES node
action(type="omelasticsearch"
       template="messageToES")

After restarting rsyslog, you can see your JSON will be indexed:

# logger '@cee: {"foo": "bar", "foo2": "bar2"}'


# curl -XPOST localhost:9200/system/events/_search?q=foo2:bar2 2>/dev/null |
sed s/.*_source//
" : { "foo": "bar", "foo2": "bar2" }}]}}

As for unstructured logs, $!all-json will produce a JSON with a field named
“msg”, having the message as a value:
# logger test
# curl -XPOST localhost:9200/system/events/_search?q=test 2>/dev/null | sed s/.*_source//
" : { "msg": "test" }}]}}

It’s “msg” because that’s rsyslog’s property name for the syslog message.

Including other properties


But the message isn’t the only interesting property. I would assume most
people would want to index other information, like the timestamp, severity,
or the host which generated that message.
To do that, one needs to play with templates and properties. In the future it
might be made easier, but at the time of this writing (rsyslog 7.2.3), you
need to manually craft a valid JSON to pass it to omelasticsearch.

For example, if we want to add the timestamp and the syslogtag, a working
template might look like this:

template(name="customTemplate" type="list") {
#- open the curly brackets,
#- add the timestamp field surrounded with quotes
#- add the colon which separates field from value
#- open the quotes for the timestamp itself
constant(value="{\"timestamp\":\"")
#- add the timestamp from the log,
# format it in RFC-3339, so that ES detects it by default
property(name="timereported" dateFormat="rfc3339")
#- close the quotes for timestamp,
#- add a comma, then the syslogtag field in the same manner
constant(value="\",\"syslogtag\":\"")
#- now the syslogtag field itself
# and format="json" will ensure special characters
# are escaped so they won't break our JSON
property(name="syslogtag" format="json")
#- close the quotes for syslogtag
#- add a comma
#- then add our JSON-formatted syslog message,
# but start from the 2nd position to omit the left
# curly bracket
constant(value="\",")
property(name="$!all-json" position.from="2")
}
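
To make the result concrete: with this template, a structured message sent
with logger '@cee: {"foo":"bar"}' would end up in Elasticsearch roughly as
the document below (the timestamp and tag values are illustrative):

{"timestamp":"2013-12-01T12:00:00+02:00","syslogtag":"root:", "foo": "bar" }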

Summary
If you’re interested in searching or analyzing lots of logs, structured logging
might help. And you can do it with the existing syslog libraries, via CEE-
enhanced syslog. If you use a newer version of rsyslog, you can parse
these logs with mmjsonparse and index them in Elasticsearch with
omelasticsearch. If you want to use Logsene, it will consume your
structured logs as described in this post.

Like what you see here? Then you’ll love Logsene Log Management &
Analytics. With Logsene, centralized logging, log management and
analytics have never been easier.

Correlate Performance Metrics and Logs for Better IT & Business
Decisions

Sematext has recently combined the power of SPM Performance
Monitoring and Logsene to make the integration of performance metrics,
logs, events and anomalies more robust for those looking for a single pane
of glass.
Together, SPM and Logsene tell you not only WHEN something happened
-- via performance metrics graphs and alerts -- but they also show you
exactly WHAT and WHERE it happened by providing immediate access to
all relevant event logs and metrics right there! Now engineers and
operations can spend less time finding problems and more time fixing them.

sematext.com
blog.sematext.com
twitter.com/sematext
info@sematext.com
P: +1 (347) 480-1610
F: +1 (718) 679-9190
540 President Street, 3rd Floor
Brooklyn, NY 11215 USA

Single Pane of Glass for Monitoring, Logging & Analytics


Search and Big Data Consulting
Production Support for Solr and Elasticsearch

© Sematext Group - All rights reserved
