
VERSION 1.0
AUGUST 11, 2017

ELASTIC STACK
Elasticsearch, Logstash, and Kibana

PRESENTED BY: VIKRAM SHINDE


CONTENTS
Elastic Stack
1. Overview
2. Components
3. Architecture
4. How can I use the Elastic Stack to manage my log data?
5. Elasticsearch
5.1. Features
5.2. Elasticsearch Terminology
5.3. Elasticsearch RESTful API
5.4. Elasticsearch CRUD Operations
5.5. Installation
5.6. Configuration
5.7. Elasticsearch Basic Commands
5.8. Who Uses Elasticsearch?
6. Logstash
6.1. Features
6.2. Plug-ins
6.3. Installation
6.4. Configuration
6.5. Logstash Basic Commands
7. Kibana
7.1. Features
7.2. Installation
7.3. Configuration
7.4. Discover
7.5. Visualize
7.6. Dashboard
8. Beats
8.1. Metricbeat
8.2. Filebeat
9. References



ELASTIC STACK
1. OVERVIEW
Elastic Stack is a group of open source products from Elastic designed to help users take data
from any type of source and in any format and search, analyze, and visualize that data in real
time. It uses Logstash for log aggregation, Elasticsearch for searching, and Kibana for visualizing
and analyzing data.

2. COMPONENTS

Elastic Stack components:

• Elasticsearch is a real-time, distributed, open source full-text search and analytics engine. It is a RESTful search engine built on top of Apache Lucene and released under an Apache license. It is Java-based and can search and index document files in diverse formats.

• Logstash is a data collection engine that unifies data from disparate sources, normalizes it, and distributes it. The product was originally optimized for log data but has since expanded its scope to take data from all sources.

• Beats are "data shippers" installed on servers as agents that send different types of operational data to Elasticsearch, either directly or through Logstash, where the data might be enhanced or archived.

• Kibana is an open source data visualization and exploration tool that is specialized for large volumes of streaming and real-time data. It makes huge and complex data streams more easily and quickly understandable through graphic representation.



3. ARCHITECTURE

Packetbeat is a network packet analyzer that ships information about the transactions exchanged
between your application servers.
Filebeat ships log files from your servers.
Metricbeat is a server monitoring agent that periodically collects metrics from the operating systems and
services running on your servers.
Winlogbeat ships Windows event logs.
The Beats are open source data shippers that you install as agents on your servers to send different types
of operational data to Elasticsearch. Beats can send data directly to Elasticsearch or send it to
Elasticsearch via Logstash, which you can use to parse and transform the data.
Then this data is available to Kibana for visualization.



4. HOW CAN I USE THE ELASTIC STACK TO MANAGE MY LOG DATA?
The logs of your applications and systems hold the answers to critical business questions, yet many potential users assume that the barrier to accessing the data in those logs is too high. The Elastic Stack lowers that barrier, giving you answers to questions such as:

• How many account signups were there this week?

• How effective is our ad campaign?

• What is the best time to perform system maintenance?

• Why is my database performance slow?

5. ELASTICSEARCH
Elasticsearch is a highly available and distributed search engine.
• Built on top of Apache Lucene
• NoSQL Datastore
• Schema-free
• JSON Document
• RESTful APIs

5.1. FEATURES
• Distributed
• Scalable
• Highly available
• Near Real Time (NRT) search
• Full-text search
• Clients for Java, .NET, PHP, Python, Perl, Ruby, and more (or plain curl against the REST API)
• Hadoop and Spark integration via Elasticsearch-Hadoop (ES-Hadoop)

Elasticsearch is distributed, which means that indices can be divided into shards and each shard can have
zero or more replicas. By default, an index is created with 5 shards and 1 replica per shard (5/1).
Rebalancing and routing of shards are done automatically.
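As a quick illustration, these defaults could also be set explicitly at index creation time (the index name myindex is made up for this sketch):

curl -XPUT 'localhost:9200/myindex?pretty' -H 'Content-Type: application/json' -d'
{
"settings": {
"number_of_shards": 5,
"number_of_replicas": 1
}
}'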
5.2. ELASTICSEARCH TERMINOLOGY
We will discuss a few important Elasticsearch terms: Index, Type, Document, Key, and Value. The analogy between Elasticsearch and an RDBMS is summarized below and spelled out in the definitions that follow.
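Elasticsearch      RDBMS
Index              Database
Type               Table
Document           Row
Key / Value        Column name / Column value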



Index: In Elasticsearch, an Index is a collection of Documents. For instance, "library" could be an index. An Index is used for indexing, searching, updating, and deleting Documents, and its name must be in lower case. An Index is similar to a Database in the relational database world.
Type: In Elasticsearch, a Type is a category of similar Documents; that is, we can group a set of similar Documents into a Type. A Type is similar to a Table in the relational database world.

Document: In Elasticsearch, a Document is an instance of a Type. It contains data as Key and Value pairs. A Document is similar to a Row in a Table in the relational database world: a Key is a column name and a Value is the column value.

5.3. ELASTICSEARCH RESTFUL API


Elasticsearch provides a very comprehensive and powerful REST API that you can use to interact with your cluster. It allows you to do the following:
• Check your cluster, node, and index health, status, and statistics
• Administer your cluster, node, and index data and metadata
• Perform CRUD (Create, Read, Update, and Delete) and search operations against your indexes
• Execute advanced search operations such as paging, sorting, filtering, scripting, aggregations, and many others

It is recommended to use index and type names in lower case.



5.4. ELASTICSEARCH CRUD OPERATIONS

Operation   curl command
Create      curl -XPUT    "http://localhost:9200/<index>/<type>/<id>"
Read        curl -XGET    "http://localhost:9200/<index>/<type>/<id>"
Update      curl -XPOST   "http://localhost:9200/<index>/<type>/<id>"
Delete      curl -XDELETE "http://localhost:9200/<index>/<type>/<id>"

5.5. INSTALLATION
At the time of writing, Elastic Stack 5.5.1 is the latest version.
On Ubuntu

sudo apt-get install openjdk-8-jre

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.1.deb

sudo dpkg -i elasticsearch-5.5.1.deb


sudo service elasticsearch start

On CentOS

sudo yum install java-1.8.0-openjdk

curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.1.rpm

sudo rpm -i elasticsearch-5.5.1.rpm

sudo service elasticsearch start

5.6. CONFIGURATION
Open the configuration file /etc/elasticsearch/elasticsearch.yml:
Change cluster.name to a descriptive name for your cluster (optional).
Change node.name to a descriptive name for your node (optional).
Change network.host to bind Elasticsearch to a specific IP (e.g. localhost).
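A minimal sketch of the relevant settings (the cluster and node names are illustrative; my-application matches the cluster shown later in this guide):

cluster.name: my-application
node.name: node-1
network.host: localhost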
Then restart Elasticsearch using the following command.

sudo service elasticsearch restart



5.7. ELASTICSEARCH BASIC COMMANDS
Once Elasticsearch is installed, configured, and restarted, let's try some basic commands.
Note: replace localhost with the IP address you set in the network.host property.
a) Test it out by running the following:
curl 'http://localhost:9200/?pretty'
You should see a response similar to the following.
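A trimmed example of the response (names and version reflect your setup; some fields are omitted here):

{
"name" : "node-1",
"cluster_name" : "my-application",
"version" : {
"number" : "5.5.1"
},
"tagline" : "You Know, for Search"
}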

That means you have Elasticsearch up and running.


b) Cluster health
curl -XGET 'localhost:9200/_cat/health?v&pretty'
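The output resembles the following (epoch, timestamp, and counts will differ on your machine):

epoch      timestamp cluster        status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1502438400 07:20:00  my-application green  1          1         0      0   0    0    0        0             -                  100.0%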

We can see that our cluster named "my-application" is up with a green status.
c) Node status
curl -XGET 'localhost:9200/_cat/nodes?v&pretty'

d) List of all indices


curl -XGET 'localhost:9200/_cat/indices?v&pretty'
Since we have not created any index yet, the response will be empty.
Let's create an index now:
curl -XPUT 'localhost:9200/customer?pretty'
curl -XGET 'localhost:9200/_cat/indices?v&pretty'
The first command creates the customer index and the second lists the indices.



The first response shows "acknowledged": true, which means the index was created.
The second response shows that we now have one index named customer with 5 primary shards and 1 replica (the defaults) and 0 documents in it. Since our cluster has only a single node, the replica cannot be allocated, hence the health is yellow. Let's add a document to the index.

e) Index the document. (CREATE)


curl -XPUT 'localhost:9200/customer/external/1?pretty' -H 'Content-Type: application/json' -d'
{ "name": "Vikram Shinde"}'

The response includes "created": true, which means a new document was added to the index customer, with type external and document ID 1. This information is called the metadata of the document.
f) Get the document (READ)
curl -XGET 'localhost:9200/customer/external/1?pretty'

It returns the metadata, and the _source field holds the actual document content.


It is important to note that Elasticsearch does not require you to explicitly create an index before you can index documents into it. In the following example, Elasticsearch automatically creates the product index because it does not already exist; Elasticsearch is schema-free.
curl -XPUT 'http://localhost:9200/product/fruits/1?pretty' -d'
{
"Product": "Apple",
"Price": "10.00"
}'

g) Modify the document (UPDATE)


curl -XPOST 'localhost:9200/customer/external/1/_update?pretty' -H 'Content-Type: application/json' -d'
{
"doc": { "name": "Vikram Shinde", "age": 30 }
}'

The response shows "result": "updated", and the document gets a new version number.

h) Delete the Document (DELETE)


curl -XDELETE 'localhost:9200/customer/external/1?pretty'



The response shows "found": true and "result": "deleted".
i) Batch processing
Elasticsearch also provides the ability to perform operations in bulk.
First download the file accounts.json:

wget https://raw.githubusercontent.com/elastic/elasticsearch/master/docs/src/test/resources/accounts.json

Then load the file using the _bulk API:

curl -H "Content-Type: application/json" -XPOST 'localhost:9200/bank/account/_bulk?pretty&refresh' --data-binary "@accounts.json"

Now you can list the indices and see the new bank index:
curl 'localhost:9200/_cat/indices?v'

j) Search the Document


Using the _search API, you can search documents based on their content:

curl -XGET 'http://localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"query": { "match": { "account_number": 20 } }
}'
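The _search API also supports aggregations. As a sketch, the following groups the bank accounts by state (the state field comes from the sample accounts.json data; the .keyword suffix assumes the default 5.x dynamic mapping for text fields):

curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
"size": 0,
"aggs": {
"group_by_state": {
"terms": { "field": "state.keyword" }
}
}
}'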



5.8. WHO USES ELASTICSEARCH?
In the past few years, adoption of the Elastic Stack has grown very rapidly. In this section we look at a few case studies of how organizations use Elasticsearch for full-text search, structured search, analytics, and all three in combination:
• Wikipedia uses Elasticsearch to provide full-text search with highlighted search snippets, and search-as-you-type and did-you-mean suggestions.
• The Guardian uses Elasticsearch to combine visitor logs with social-network data to provide real-time feedback to its editors about the public's response to new articles.
• Stack Overflow combines full-text search with geolocation queries and uses more-like-this to find related questions and answers.
• GitHub uses Elasticsearch to query 130 billion lines of code.



6. LOGSTASH
Logstash helps you collect data from multiple systems into a central place where it can be parsed and processed as required, and stored in a common format that is easily consumed by Elasticsearch and Kibana.
Logstash lets you pipeline data so that it can be extracted, cleansed, transformed, and loaded to gain valuable insights. In this way, Logstash does the work of Extract, Transform, and Load (ETL), a popular term from data warehousing.
6.1. FEATURES
Logstash can dynamically unify data from disparate sources and normalize the data into destinations of your choice. It lets you cleanse and democratize all your data for diverse advanced downstream analytics and visualization use cases.

Source: Logstash can collect data from a variety of sources using input plugins, e.g. syslog, files, websites, databases (RDBMS and NoSQL), sensors, and IoT devices.
Target: Logstash can send data to a variety of targets using output plugins, e.g.:
For analysis: datastores such as Elasticsearch and MongoDB
For archiving: HDFS, S3, Google Cloud Storage
For monitoring: Nagios, Graphite, Ganglia
For alerting: Email, Slack, HipChat, Watcher (Elastic Stack)



6.2. PLUG-INS
INPUT PLUGIN
An input plugin is used to get data from a source or multiple sources and to feed data into Logstash. It
acts as the first section, which is required in the Logstash configuration file.
• file: reads from a file on the filesystem
• beats: processes events sent by Filebeat
• stdin
The stdin is a fairly simple plugin, which reads the data from a standard input. It reads the data we enter
in the console, which then acts as an input to Logstash. This is mostly used to validate whether the
installation of Logstash is done properly and whether we are able to access Logstash.
The basic configuration for stdin is as follows:

stdin {
}

FILTER PLUGIN (OPTIONAL)


A filter plugin is used to perform transformations on the data. If you want to process the data fetched by your input before sending it to the output, a filter plugin lets you do so. It acts as the intermediate section between input and output in the Logstash configuration file, and it is optional.
Let's have a look at a few of the filter plugins.
grok
The grok plugin is the most commonly used filter in Logstash and has powerful capabilities to transform
your data from unstructured to structured data. Even if your data is structured, you can streamline the
data using this pattern. Due to the powerful nature of the grok pattern, Logstash is referred to as a Swiss
Army Knife. Grok is used to parse the data and structure the data in the way you want. It is used to parse
any type of log that is in human-readable format.

filter {
grok { match => { "message" => "Duration: %{NUMBER:duration}" } }
}
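The built-in patterns cover many common log formats. For instance, a sketch using the stock COMBINEDAPACHELOG pattern to break an Apache access-log line into named fields such as clientip, verb, and response:

filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
}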

OUTPUT PLUGIN
The output plugin is used to send data to a destination. It acts as the final section required in the
Logstash configuration file. Some of the most used output plugins are as follows.
• elasticsearch: sends event data to Elasticsearch
• file: writes event data to a file on disk
• stdout
This is a fairly simple plugin, which outputs the data to the standard output of the shell. It is useful for
debugging the configurations used for the plugins. This is mostly used to validate whether Logstash is
parsing the input and applying filters (if any) properly to provide output as required.



The basic configuration for stdout is as follows:

stdout {
}

CODEC PLUGIN
Codec plugins are used to encode/decode the data. The input data can come in various formats, hence,
to read and store the data of different formats, we use codec. Some of the codec plugins are as follows.
rubydebug
The rubydebug codec is a fairly simple plugin that pretty-prints event data using the Ruby Awesome Print library. It is typically attached to the stdout output.
The basic configuration for rubydebug is as follows:

stdout {
codec => rubydebug
}
6.3. INSTALLATION

On Ubuntu

sudo apt-get install openjdk-8-jre

curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-5.5.1.deb

sudo dpkg -i logstash-5.5.1.deb

On CentOS

sudo yum install java-1.8.0-openjdk

curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-5.5.1.rpm
sudo rpm -i logstash-5.5.1.rpm

6.4. CONFIGURATION
A Logstash configuration has three sections; the input and output plugins are mandatory and the filter plugin is optional. The input plugins consume data from a source, the filter plugins modify the data as you specify, and the output plugins write the data to a destination.
Let's look at an example Logstash pipeline configuration. Here the input is a file, read with the file plugin from a given path starting at the beginning of the log. The format of the raw log messages is not ideal, so to parse them into specific, named fields you use the grok filter plugin, one of several plugins available by default in Logstash.
The output of the pipeline is Elasticsearch; Logstash uses the HTTP protocol to connect to it. The example assumes that Logstash and Elasticsearch are running on the same instance; you can specify a remote Elasticsearch instance using the hosts option.
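A minimal sketch of such a pipeline (the log path is an assumption; substitute the path to your own file):

input {
file {
path => "/var/log/apache2/access.log"
start_position => "beginning"
}
}

filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
}

output {
elasticsearch {
hosts => ["localhost:9200"]
}
}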



6.5. LOGSTASH BASIC COMMANDS
Logstash takes data from a source, transforms it (optionally), and stashes it in a target. This process is called a pipeline.
a) First, let’s test our Logstash installation by running the most basic Logstash pipeline. Collecting
the data from command line standard input (stdin) and sending it to command line standard output
(stdout)

cd /usr/share/logstash
bin/logstash -e 'input { stdin { } } output { stdout{ } }'
The -e flag enables you to specify a configuration directly from the command line. The pipeline in the
example takes input from the standard input, stdin, and moves that input to the standard
output, stdout, in a structured format.



Press CTRL+D to stop the pipeline.
b) Stashing to Elasticsearch

bin/logstash -e 'input { stdin { } } output { stdout { } elasticsearch { hosts => ["35.202.15.47:9200"] index => "testing" } }'

This takes data from standard input and sends it both to standard output and to Elasticsearch, using the index "testing", the default type logs, and an auto-generated document ID.

c) Collecting from file and stashing to file

Create a file first_logstash.conf and add the following content to it:

input {
file {
path => "/tmp/input.log"
}
}

output {
file {
path => "/tmp/output.log"
}
}

Execute it as follows:

bin/logstash -f first_logstash.conf
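Before running it, you can optionally validate the configuration file with bin/logstash -f first_logstash.conf --config.test_and_exit.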



7. KIBANA
Kibana is an open source tool for visualizing log data and statistics stored in Elasticsearch. Statistical graphs such as histograms, line graphs, pie charts, and sunbursts are core capabilities of Kibana.
You use Kibana to search, view, and interact with data stored in Elasticsearch indices.
7.1. FEATURES
• Discover: explore the data interactively.
• Visualize: create visualizations of the data in your Elasticsearch indices.
• Dashboard: display collections of saved visualizations.
• Timelion: a time series data visualizer that enables you to combine totally independent data sources within a single visualization.
• Dev Tools: development tools, such as the Console for interacting with the REST API.
• Management: change the Kibana configuration at runtime.

7.2. INSTALLATION

On Ubuntu

sudo apt-get install openjdk-8-jre

curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-5.5.1-amd64.deb

sudo dpkg -i kibana-5.5.1-amd64.deb

On CentOS

sudo yum install java-1.8.0-openjdk

curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-5.5.1-x86_64.rpm

sudo rpm --install kibana-5.5.1-x86_64.rpm

7.3. CONFIGURATION
Open the Kibana configuration file /etc/kibana/kibana.yml:
Change server.host to your IP address.
Change elasticsearch.url to the address of your Elasticsearch instance.
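A minimal sketch of the two settings (values are illustrative):

server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"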

Restart the Kibana service using the following command.


sudo service kibana restart

Once Kibana is started, you have to tell it about the Elasticsearch indices that you want to explore by
configuring one or more index patterns.
To create an index pattern to connect to Elasticsearch:



• Go to the Settings > Indices tab.
• Specify an index pattern that matches the name of one or more of your Elasticsearch indices. In this case we created the bank index earlier, so search for that.
• Click Create to add the index pattern.
• To designate the new pattern as the default pattern to load when you view the Discover tab, click the favorite button.

7.4. DISCOVER
Time Filter: This filters the data for a specific time range
Search Box: This is used to search and query the data
Toolbar: This contains options such as new search, save search, open saved search, and share
Index Name: This displays the name of the selected index
Fields List: This displays the name of all the fields that are present within the selected index
Number of Hits: This displays the total number of documents matching the search query within the specified time interval
Filter: You can filter the search results to display only those documents that contain a particular value in a
field. You can also create negative filters that exclude documents that contain the specified field value
Viewing the data stats: From the Fields list, you can see how many of the documents in the Documents
table contain a particular field, what the top 5 values are, and what percentage of documents contain
each value.



7.5. VISUALIZE
Visualize enables you to create visualizations of the data in your Elasticsearch indices. You can then build dashboards that display related visualizations. Several visualization types are available by default.

You can add other visualization types using the kibana-plugin tool:

bin/kibana-plugin install <package name or URL>

e.g. Kibana Sankey plugin


cd KIBANA_FOLDER_PATH/plugins/
git clone https://github.com/chenryn/kbn_sankey_vis.git

7.6. DASHBOARD
A Kibana dashboard displays a collection of saved visualizations.
In edit mode you can arrange and resize the visualizations as needed, and save dashboards so they can be reloaded and shared.
Building a dashboard:



a. Click Dashboard in the side navigation panel, then click the + sign to add an existing saved visualization to the dashboard.
b. Enter a name for the dashboard.
c. To store the time period specified in the time filter with the dashboard, select Store time with
dashboard.
d. Click the Save button to store it as a Kibana saved object.

You can edit, delete, resize, and move the visualizations within the dashboard.

You can also share the dashboard with colleagues or the outside world using a link.

8. BEATS
The Beats are open source data shippers that you install as agents on your servers to send different types
of operational data to Elasticsearch. Beats can send data directly to Elasticsearch or send it to
Elasticsearch via Logstash, which you can use to parse and transform the data.

8.1. METRICBEAT
Metricbeat helps you monitor your servers and the services they host by collecting metrics from the
operating system and services.
INSTALLATION
On Ubuntu:

curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.5.1-amd64.deb

sudo dpkg -i metricbeat-5.5.1-amd64.deb

On CentOS

curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.5.1-x86_64.rpm

sudo rpm -vi metricbeat-5.5.1-x86_64.rpm

CONFIGURATION
To configure Metricbeat, edit the configuration file /etc/metricbeat/metricbeat.yml.
Metricbeat uses modules to collect metrics. Uncomment or comment out the modules and metricsets you require for your monitoring.
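As a sketch, enabling the system module with a few common metricsets looks like this (the selection of metricsets is illustrative):

metricbeat.modules:
- module: system
  metricsets:
    - cpu
    - memory
    - network
    - filesystem
  period: 10s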



Send the output to Elasticsearch, changing the IP accordingly:

output.elasticsearch:
  hosts: ["localhost:9200"]

Once the configuration is done, restart Metricbeat using the following command.

sudo service metricbeat restart


IMPORT THE DASHBOARD
Sample dashboards ship with Metricbeat. Import them using the following commands:

cd /usr/share/metricbeat
./script/import_dashboards

This will import the following dashboards for Metricbeat:


• Metricbeat Docker
• Metricbeat MongoDB
• Metricbeat MySQL
• Metricbeat filesystem per Host
• Metricbeat system overview
• Metricbeat-cpu
• Metricbeat-filesystem
• Metricbeat-memory
• Metricbeat-network
• Metricbeat-overview
• Metricbeat-processes



8.2. FILEBEAT
Filebeat is a log data shipper for local files. Installed as an agent on your servers, Filebeat monitors the
log directories or specific log files, tails the files, and forwards them either to Elasticsearch or Logstash
for indexing.

INSTALLATION
On Ubuntu

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.1-amd64.deb

sudo dpkg -i filebeat-5.5.1-amd64.deb

On CentOS

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.1-x86_64.rpm

sudo rpm -vi filebeat-5.5.1-x86_64.rpm

CONFIGURATION
To configure Filebeat, edit the configuration file /etc/filebeat/filebeat.yml.
Define the paths to your log files, as sketched below.
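A minimal prospector section might look like this (the path is an assumption; point it at your own logs):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log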

Define the output location, changing the Elasticsearch IP as needed:

output.elasticsearch:
  hosts: ["localhost:9200"]

If you want to use Logstash to perform additional processing on the data collected by Filebeat, you need
to configure Filebeat to use Logstash.

output.logstash:
  hosts: ["127.0.0.1:5044"]

Once the configuration is done, restart Filebeat using the following command.

sudo service filebeat restart

IMPORT THE DASHBOARD


Sample dashboards ship with Filebeat. Import them using the following commands:



cd /usr/share/filebeat
./script/import_dashboards

The following dashboards will be imported:


• Filebeat Apache2 Dashboard
• Filebeat Auditd
• Filebeat Auth - Sudo commands
• Filebeat MySQL Dashboard
• Filebeat New users and groups
• Filebeat Nginx Dashboard
• Filebeat SSH login attempts
• Filebeat syslog dashboard

9. REFERENCES

Elastic Stack: https://www.elastic.co/learn
Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
Logstash: https://www.elastic.co/guide/en/logstash/current/index.html
Kibana: https://www.elastic.co/guide/en/kibana/current/index.html
Beats: https://www.elastic.co/guide/en/beats/libbeat/current/index.html

Contact me @vikshinde

