
Splunk Overview Training

Duration: 3 days
Skill Level: Introductory and beyond
Hands-On Format: This hands-on class combines engaging lecture, demos, group activities, and discussions with machine-based practical student labs and project work, at an approximately 50% lab to 50% lecture ratio.

Course Overview
Are you in charge of creating Splunk knowledge objects for your organization? Then you will benefit from this course, which walks you through the various
knowledge objects and how to create them. Working with Splunk is a comprehensive hands-on course that teaches students how to search, navigate, tag,
build alerts, create simple reports and dashboards in Splunk, and how to use Splunk's Pivot interface.
Working in a hands-on learning environment, students will learn how to use Splunk Analytics to provide an efficient way to search large volumes of data.
Students will learn how to run Basic Searches, Save and Share Search Results, Create Tags and Event Types, Create Reports, Create Different Charts, Perform
Calculations and Format Search Data, and Enrich Data with Lookups. Examples center on financial-institution use cases.

What You’ll Learn: Course Objectives
After completion of this Splunk course, you will be able to:
 Get insight into Splunk Search App
 Learn to save and share Search Results
 Understand the use of fields in searching
 Learn Search Fundamentals using Splunk
 Explore the available visualizations on the software
 Create Reports and different Chart Types
 Perform Data Analysis, Calculation and Formatting
 Understand and execute various techniques for enriching data with lookups


Recommended Audience & Pre-Requisites
This is a technical class for technical people, geared for Users, Administrators, Architects, Developers & Support Engineers who are new to Splunk. This course
is ideal for anyone in your organization who needs to examine and use IT data.
Ideal attendees would include:
 Beginners in Splunk who want to enhance their knowledge about this Software usage
 System Administrators and Software Developers
 Professionals who are eager to learn to search and analyze machine-generated data using fast, agile software

Course Topics & Agenda
Course Modules 1-4 Day 1 - Morning
Module 1 - Basic Understanding of Architecture (Overview)
 What are the components?
 Discussion on Forwarders- UF/HF
 Common ports for the set up
 License Master/Slave relationship
 Understanding of Deployment Server and Indexer
Module 2 - Introduction to Splunk's User Interface
 Understand the uses of Splunk
 Define Splunk Apps
 Learn basic navigation in Splunk
 Hands on Lab covering: Basic Navigation
 End of Module Hands-on Quiz
Module 3 - Searching
 Run basic searches
 Set the time range of a search
 Hands on Lab covering: Run basic searches, Set the time range of a search
 Identify the contents of search results
 Refine searches

 Hands on Lab covering: Identify the contents of search results, Refine searches
 Use the timeline
 Work with events
 Hands on Lab covering: Use the timeline, Work with events
 Control a search job
 Save search results
 Hands on Lab covering: Control a search job, Save search results
 End of Module Hands-on Quiz

Module 4 - Using Fields in Searches
 Understand fields
 Use fields in searches
 Use the fields sidebar
 Hands on Lab covering: Understand Fields, Use fields in searches, Use the fields sidebar
 End of Module Hands-on Quiz

Course Modules 5-7 Day 1 - Afternoon
Module 5- Creating Reports and Visualizations
 Save a search as a report
 Edit reports
 Create reports that include visualizations such as charts and tables
 Hands on Lab covering: Save a search as a report, Edit Reports, Create reports that include visualizations such as charts and tables.
 Add reports to a dashboard
 Create an instant pivot from a search
 Hands on Lab covering: Add reports to a dashboard, Create an instant pivot from a search.
 End of Module Hands on Quiz
Module 6 - Working with Dashboards
 Create a dashboard
 Add a report to a dashboard
 Hands on Lab covering: Create a dashboard, Add a report to a dashboard
 Add a pivot report to a dashboard

 Edit a dashboard
 Hands on Lab covering: Add a pivot report to a dashboard, Edit a dashboard
 End of Module Hands on Quiz

Module 7 - Search Fundamentals
 Review basic search commands and general search practices
 Examine the anatomy of a search
 Use the following commands to perform searches:
 Fields
 Table
 Rename
 Rex
 Multikv
 Hands on Lab covering: Review basic search commands and general search practices, Examine the anatomy of a search, Use the following commands to
perform searches: Fields, Table, Rename, Rex, Multikv.
 End of Module Hands on Quiz.
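The Module 7 commands above can be combined into a single search pipeline. A minimal SPL sketch (the sourcetype, field names, and regular expression are hypothetical, not part of the course lab data):

```spl
# keep a few fields, rename one, extract a new field with rex,
# and display the result as a table
sourcetype=access_combined
| fields clientip, status, uri
| rename clientip AS src_ip
| rex field=uri "(?<endpoint>/[^?]+)"
| table src_ip, status, endpoint
```

The multikv command (not shown here) extracts fields from table-formatted events, such as the output of top or ps.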
Course Modules 8-10 Day 2 – Morning (Deep Dive Topics)
Module 8 - Reporting Commands, Part 1
 Use the following commands and their functions:
 Top
 Rare
 Hands on Lab covering: Top, Rare
 Stats
 Addcoltotals
 Hands on Lab covering: Stats, Addcoltotals
 End of Module Hands on Quiz
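SPL sketches of the Module 8 reporting commands (the sourcetype is hypothetical):

```spl
# the ten most frequent URIs, then the least frequent ones
sourcetype=access_combined | top limit=10 uri
sourcetype=access_combined | rare uri

# event counts per HTTP status, with a summary row appended
sourcetype=access_combined | stats count by status | addcoltotals
```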
Module 9 - Reporting Commands, Part 2
 Explore the available visualizations
 Create a basic chart
 Split values into multiple series
 Hands on Lab covering: Explore the available visualizations, Create a basic chart, Split values into multiple series
 Omit null and other values from charts

 Create a time chart
 Chart multiple values on the same timeline
 Hands on Lab covering: Omit null and other values from charts, Create a time chart, Chart multiple values on the same timeline
 Format charts
 Explain when to use each type of reporting command
 Hands on Lab covering: Format Charts, Explain when to use each type of reporting command
 End of Module Hands on Quiz
Module 10 - Analyzing, Calculating, and Formatting Results
 Using the eval command
 Perform calculations
 Convert values
 Hands on Lab covering: Using the eval command, Perform calculations, Convert values
 Round values
 Format values
 Hands on Lab covering: Round values, Format values
 Use conditional statements
 Further filter calculated results
 Hands on Lab covering: Use conditional statements, Further filter calculated results
 End of Module Hands on Quiz
Course Modules 11-12 Day 2 – Afternoon (Deep Dive Topics)
Module 11 - Creating Field Extractions
 Perform field extractions using Field Extractor
 Hands on Lab covering: Perform field extractions using Field Extractor
 End of Module Hands on Quiz
Module 12 - Creating Field Aliases and Calculated Fields
 Define naming conventions
 Create and use field aliases
 Create and use calculated fields
 Hands on Lab covering: Define naming conventions, Create and use field aliases, Create and use calculated fields
 End of Module Hands on Quiz

Course Modules 13-15 Day 3 - Morning
Module 13 - Creating Tags and Event Types
 Create and use tags
 Describe event types and their uses
 Create an event type
 Hands on Lab covering: Create and use tags, Describe event types and their uses, Create an event type
 End of Module Hands on Quiz
Module 14 - Creating and Using Macros
 Describe macros
 Manage macros
 Create and use a basic macro
 Hands on Lab covering: Describe macros, Manage macros, Create and use a basic macro
 Define arguments and variables for a macro
 Add and use arguments with a macro
 Hands on Lab covering: Define arguments and variables for a macro, Add and use arguments with a macro
 End of Module Hands on Quiz
Module 15 - Creating Workflow Actions
 Describe the function of a workflow action
 Create a GET workflow action
 Hands on Lab covering: Describe the function of a workflow action, Create a GET workflow action
 Create a POST workflow action
 Create a Search workflow action
 Hands on Lab covering: Create a POST workflow action, Create a Search workflow action
 End of Module Hands on Quiz
Course Modules 16-17 Day 3 - Afternoon
Module 16 - Creating and Managing Alerts
 Describe alerts
 Create alerts
 View fired alerts
 Hands on Lab covering: Describe alerts, Create alerts, View fired alerts
 End of Module Hands on Quiz

Module 17 - Using Pivot
 Describe Pivot
 Understand the relationship between data models and pivot
 Select a data model object
 Hands on Lab covering: Describe Pivot, Understand the relationship between data models and pivot, Select a data model object
 Create a pivot report
 Save pivot report as a dashboard
 Hands on Lab covering: Create a pivot report, Save pivot report as a dashboard
 End of Module Hands on Quiz

Post Course Final Quiz
At the end of class, each attendee will take a Post Course Quiz that will gauge the student's retention of the skills and topics covered throughout the course. The quiz will be distributed either on paper or online at the end of class and graded promptly.

Module 1 - Basic Understanding of Architecture (Overview)
 What are the components?
 Discussion on Forwarders - UF/HF
 Common ports for the set up
 License Master/Slave relationship
 Understanding of Deployment Server and Indexer

Section 1-What are the components?

Splunk Enterprise performs three key functions as it moves data through the data pipeline. First, it consumes data from files, the network, or elsewhere. Then it indexes the data. (Actually, it first parses and then indexes the data, but for purposes of this discussion, we consider parsing to be part of the indexing process.) Finally, it runs interactive or scheduled searches on the indexed data.

You can split this functionality across multiple specialized instances of Splunk Enterprise, ranging in number from just a few to thousands, depending on the quantity of data you're dealing with and other variables in your environment. You might, for example, create a deployment with many instances that only consume data, several other instances that index the data, and one or more instances that handle search requests. These specialized instances are known collectively as components. There are several types of components.

For a typical mid-size deployment, for example, you can deploy lightweight versions of Splunk Enterprise, called forwarders, on the machines where the data originates. The forwarders consume data locally and then forward the data across the network to another Splunk Enterprise component, called the indexer. The indexer does the heavy lifting; it indexes the data and runs searches. It should reside on a machine by itself. The forwarders, on the other hand, can easily co-exist on the machines generating the data, because the data-consuming function has minimal impact on machine performance.

This diagram shows several forwarders sending data to a single indexer:

As you scale up, you can add more forwarders and indexers. For a larger deployment, you might have hundreds of forwarders sending data to a number of indexers. You can use load balancing on the forwarders, so that they distribute their data across some or all of the indexers. Not only does load balancing help with scaling, but it also provides a fail-over capability if one of the indexers goes down. The forwarders automatically switch to sending their data to any indexers that remain alive.

In this diagram, each forwarder load-balances its data across two indexers:
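In configuration terms, load balancing is a matter of listing more than one receiver in the forwarder's outputs.conf. A minimal sketch (the hostnames are hypothetical):

```ini
# outputs.conf on each forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# two receivers listed -> the forwarder automatically load-balances
# between them and fails over if one goes down
server = idx1.example.com:9997, idx2.example.com:9997
```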

These are the fundamental components and features of a Splunk Enterprise distributed environment:
 Indexers
 Forwarders
 Search heads
 Deployment server

Indexer
A Splunk Enterprise instance that indexes data, transforming raw data into events and placing the results into an index. It also searches the indexed data in response to search requests. The indexer also frequently performs the other fundamental Splunk Enterprise functions: data input and search management. In larger deployments, forwarders handle data input and forward the data to the indexer for indexing. Similarly, in larger deployments, a specialized Splunk Enterprise instance, called a search head, handles search management and coordinates searches across multiple indexers, although indexers always perform searches across their own data.

Forwarder
A Splunk Enterprise instance that forwards data to another Splunk Enterprise instance, such as an indexer or another forwarder, or to a third-party system. There are three types of forwarders:

 A universal forwarder is a dedicated, streamlined version of Splunk Enterprise that contains only the essential components needed to send data. The universal forwarder is the best tool for forwarding data to indexers. Its main limitation is that it forwards only unparsed data. To send event-based data to indexers, you must use a heavy forwarder.
 A heavy forwarder is a full Splunk Enterprise instance, with some features disabled to achieve a smaller footprint.
 A light forwarder is a full Splunk Enterprise instance, with most features disabled to achieve a small footprint. The universal forwarder supersedes the light forwarder for nearly all purposes. The light forwarder has been deprecated as of Splunk Enterprise version 6.0.

Search Heads
In a distributed search environment, a search head is a Splunk Enterprise instance that handles search management functions, directing search requests to a set of search peers and then merging the results back to the user. A search head that performs only searching, and not any indexing, is referred to as a dedicated search head. A Splunk Enterprise instance can function as both a search head and a search peer. Search head clusters are groups of search heads that coordinate their activities.

Deployment Server
A Splunk Enterprise instance that acts as a centralized configuration manager, grouping together and collectively managing any number of Splunk Enterprise instances. Instances that are remotely configured by deployment servers are called deployment clients. The deployment server downloads updated content, such as configuration files and apps, to deployment clients. Units of such content are known as deployment apps.

Section 2-Discussion on Forwarders - UF/HF

The universal forwarder
The universal forwarder is Splunk's new lightweight forwarder, an entirely separate, streamlined executable. You use it to gather data from a variety of inputs and forward the data to a Splunk Enterprise server for indexing and searching. You can also forward data to another forwarder, as an intermediate step before sending the data onwards to an indexer.

The universal forwarder's sole purpose is to forward data. Unlike a full Splunk Enterprise instance, you cannot use the universal forwarder to index or search data. To achieve higher performance and a lighter footprint, it has several limitations:
 The universal forwarder has no searching, indexing, or alerting capability.
 The universal forwarder does not parse data.

Heavy and light forwarders
While the universal forwarder is generally the preferred way to forward data, you might have reason (legacy-based or otherwise) to use heavy forwarders as well. Unlike the universal forwarder, both heavy and light forwarders are actually full Splunk Enterprise instances with certain features disabled.

A heavy forwarder (sometimes referred to as a "regular forwarder") has a smaller footprint than a Splunk Enterprise indexer but retains most of the capability, except that it lacks the ability to perform distributed searches. Much of its default functionality, such as Splunk Web, can be disabled, if necessary, to reduce the size of its footprint. A heavy forwarder parses data before forwarding it and can route data based on criteria such as source or type of event.

This table summarizes the similarities and differences between the universal forwarder and the heavy forwarder:

Features and capabilities | Universal forwarder | Heavy forwarder
Type of Splunk Enterprise instance | Dedicated executable | Full Splunk Enterprise, with some features disabled
Footprint (memory, CPU load) | Smallest | Medium-to-large (depending on enabled features)
Bundles Python? | No | Yes
Handles data inputs? | All types (but scripted inputs might require Python installation) | All types
Forwards to Splunk Enterprise? | Yes | Yes
Forwards to 3rd party systems? | Yes | Yes
Serves as intermediate forwarder? | Yes | Yes
Indexer acknowledgment (guaranteed delivery)? | Optional | Optional (version 4.2+)
Load balancing? | Yes | Yes
Data cloning? | Yes | Yes
Per-event filtering? | No | Yes
Event routing? | No | Yes
Event parsing? | No | Yes
Local indexing? | No | Optional, by setting the indexAndForward attribute in outputs.conf
Searching/alerting? | No | Optional
Splunk Web? | No | Optional
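For example, the "Local indexing" capability corresponds to the indexAndForward attribute in outputs.conf on a heavy forwarder. A sketch (the hostname is hypothetical):

```ini
# outputs.conf on a heavy forwarder: keep a local indexed copy of the
# data while still forwarding it to the indexers
[tcpout]
defaultGroup = indexers
indexAndForward = true

[tcpout:indexers]
server = idx1.example.com:9997
```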

Section 3-Common ports for the set up

Splunk configures two ports at installation time:
 The HTTP/HTTPS port. This port provides the socket for Splunk Web. It defaults to 8000.
 The management port. This port is used to communicate with the splunkd daemon. Splunk Web talks to splunkd on this port, as does the command line interface and any distributed connections from other servers. This port defaults to 8089.

Let's log in to our lab environment
Please go to: http://www.uxcreate.com/guacamole
User name: admin
Password: admin
 Your instructor will give you your machine number. Please remember your machine number throughout the training session.
 Then please go to Start > All Programs > Splunk Enterprise > Splunk Enterprise.
 The Splunk web interface should come up. The login details: username: admin, password: admin.
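The two default ports described above map to settings in web.conf; a minimal sketch:

```ini
# web.conf on the Splunk instance
[settings]
httpport = 8000                # Splunk Web HTTP/HTTPS port
mgmtHostPort = 127.0.0.1:8089  # where Splunk Web reaches the splunkd management port
```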

Section 4-License Master/Slave relationship

Splunk Enterprise takes in data from sources you designate and processes it so that you can analyze it. We call this process indexing. Any host in your Splunk Enterprise infrastructure that performs indexing must be licensed to do so. Splunk Enterprise licenses specify how much data you can index per calendar day (from midnight to midnight by the clock on the license master). You can either run a standalone indexer with a license installed locally, or you can configure one of your Splunk Enterprise instances as a license master and set up a license pool from which other indexers, configured as license slaves, can draw.

When a license master instance is configured, and license slaves are added to it, the license slaves communicate their usage to the license master every minute. If the license master is unreachable for any reason, the license slave starts a 72 hour timer. If the license slave cannot reach the license master for 72 hours, search is blocked on the license slave (although indexing continues). Users cannot search data in the indexes on the license slave until that slave can reach the license master again.
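On each license slave, the relationship is configured in server.conf; a minimal sketch (the master's hostname is hypothetical):

```ini
# server.conf on a license slave
[license]
master_uri = https://license-master.example.com:8089
```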

Section 5-Understanding of Deployment Server and Indexer

The indexer is the Splunk Enterprise component that creates and manages indexes. The primary functions of an indexer are:
 Indexing incoming data.
 Searching the indexed data.

In single-machine deployments consisting of just one Splunk Enterprise instance, the indexer also handles the data input and search management functions. For larger-scale needs, indexing is split out from the data input function and sometimes from the search management function as well. In these larger, distributed deployments, the indexer might reside on its own machine and handle only indexing, along with searching of its indexed data. In those cases, other Splunk Enterprise components take over the non-indexing roles.

For instance, you might have a set of Windows and Linux machines generating events, which need to go to a central indexer for consolidation. Usually the best way to do this is to install a lightweight instance of Splunk Enterprise, known as a forwarder, on each of the event-generating machines. These forwarders handle data input and send the data across the network to the indexer residing on its own machine.

Similarly, in cases where you have a large amount of indexed data and numerous concurrent users searching on it, it can make sense to split off the search management function from indexing. In this type of scenario, known as distributed search, one or more search heads distribute search requests across multiple indexers. The indexers still perform the actual searching of their own indexes, but the search heads manage the overall search process across all the indexers and present the consolidated search results to the user.

Here's an example of a scaled-out deployment:

A deployment server uses server classes to determine what content to deploy to groups of deployment clients. The forwarder management interface offers an easy way to create, edit, and manage server classes.
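A sketch of the two .conf files behind this arrangement (the server class, app, and hostnames are hypothetical):

```ini
# serverclass.conf on the deployment server
[serverClass:linux_forwarders]
whitelist.0 = linux-host-*

[serverClass:linux_forwarders:app:my_inputs_app]
restartSplunkd = true

# deploymentclient.conf on each deployment client
[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089
```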

Module 2 - Introduction to Splunk's User Interface
 Understand the uses of Splunk
 Define Splunk Apps
 Learn basic navigation in Splunk
 Hands on Lab covering: Basic Navigation
 End of Module Hands-on Quiz

Section 1-Understand the uses of Splunk

Splunk Enterprise makes it simple to collect, analyze and act upon the untapped value of the big data generated by your technology infrastructure, security systems and business applications, giving you the insights to drive operational performance and business results. By monitoring and analyzing everything from customer clickstreams and transactions to security events and network activity, Splunk Enterprise helps you gain valuable Operational Intelligence from your machine-generated data. And with a full range of powerful search, visualization and pre-packaged content for use cases, any user can quickly discover and share insights. Just point your raw data at Splunk Enterprise and start analyzing your world.

 Collects and indexes log and machine data from any source
 Powerful search, analysis and visualization capabilities empower users of all types
 Apps provide solutions for security, IT ops, business analysis and more
 Enables visibility across on-premises, cloud and hybrid environments
 Delivers the scale, security and availability to suit any organization
 Available as a software or SaaS (Software as a Service) solution

Section 2-Define Splunk Apps

A Splunk App is a prebuilt collection of dashboards, panels and UI elements powered by saved searches and packaged for a specific technology or use case to make Splunk immediately useful and relevant to different roles. As an alternative to using Splunk for searching and exploring, you can use Splunk Apps to gain the specific insights you need from your machine data.

Apps can be opened from the Splunk Enterprise Home Page, from the App menu, or from the Apps section of Settings. You can also apply user/role based permissions and access controls to Splunk Apps, thus providing a level of control when you are deploying and sharing Apps across your organization.

Section 3-Learn basic navigation in Splunk

About Splunk Home
Splunk Home is your interactive portal to the data and apps accessible from this Splunk instance. The main parts of Home include the Splunk Enterprise navigation bar, the Apps menu, the Explore Splunk Enterprise panel, and a custom default dashboard (not shown here).

Apps
The Apps panel lists the apps that are installed on your Splunk instance that you have permission to view. For an out-of-the-box Splunk Enterprise installation, you see one App in the workspace: Search & Reporting. Select the app from the list to open it. When you have more than one app, you can drag and drop the apps within the workspace to rearrange them. You can do two actions on this panel:
 Click the gear icon to view and manage the apps that are installed in your Splunk instance.

 Click the plus icon to browse for more apps to install.

Explore Splunk Enterprise
The options in the Explore Splunk Enterprise panel help you to get started using Splunk Enterprise. Click on the icons to open the Add Data view, browse for new apps, open the Splunk Enterprise Documentation, or open Splunk Answers.

About the Splunk bar
Use the Splunk bar to navigate your Splunk instance. It appears on every page in Splunk Enterprise. You can use it to switch between apps, manage and edit your Splunk configuration, view system-level messages, and monitor the progress of search jobs. The following screenshot shows the Splunk bar in Splunk Home. The Splunk bar in another view, such as the Search & Reporting app's Search view, also includes an App menu next to the Splunk logo.

Return to Splunk Home
Click the Splunk logo on the navigation bar to return to Splunk Home from any other view in Splunk Web.

Settings menu
The Settings menu lists the configuration pages for Knowledge objects, Distributed environment settings, System and licensing, Data, and Authentication settings. If you do not see some of these options, you do not have the permissions to view or edit them.

User menu
The User menu here is called "Administrator" because that is the default user name for a new installation. You can change this display name by selecting Edit account and changing the Full name. You can also edit the time zone settings, select a default app for this account, and change the account's password. The User menu is also where you Logout of this Splunk installation.

Messages menu
All system-level error messages are listed here. When there is a new message to review, a notification displays as a count next to the Messages menu. Click the X to remove the message.

Activity menu
The Activity menu lists shortcuts to the Jobs, Triggered alerts, and System Activity views.
 Click Jobs to open the search jobs manager window, where you can view and manage currently running searches.

 Click Triggered Alerts to view scheduled alerts that are triggered. This tutorial does not discuss saving and scheduling alerts. See "About alerts" in the Alerting Manual.
 Click System Activity to see Dashboards about user activity and status of the system.

Help
Click Help to see links to Video Tutorials, Splunk Answers, the Splunk Support Portal, and online Documentation.

Find
Use Find to search for objects within your Splunk Enterprise instance. These saved objects include Reports, Dashboards, Alerts, and Data models. Find performs non-case sensitive matches on the ID, labels, and descriptions in saved objects. The results appear in the list separated by the categories where they exist. For example, if you type in "error", it returns the saved objects that contain the term "error".

Hands on Lab covering: Basic Navigation Take your time exploring the Splunk Web interface 28 .You can also run a search for error in the Search & Reporting app by clicking Open error in search.

End of Module Hands-on Quiz
Please refer to your virtual machine for the quiz.

Module 3 - Searching
 Run basic searches
 Set the time range of a search
 Hands on Lab covering: Run basic searches, Set the time range of a search
 Identify the contents of search results
 Refine searches
 Hands on Lab covering: Identify the contents of search results, Refine searches
 Use the timeline
 Work with events
 Hands on Lab covering: Use the timeline, Work with events
 Control a search job
 Save search results
 Hands on Lab covering: Control a search job, Save search results
 End of Module Hands-on Quiz

Run basic searches

Types of searches
Before delving into the language and syntax of search, you should ask what you are trying to accomplish. Generally, after getting data into Splunk, you want to:
 Investigate to learn more about the data you just indexed or to find the root cause of an issue.
 Summarize your search results into a report, whether tabular or other visualization format.

Because of this, you might hear us refer to two types of searches: Raw event searches and Report-generating searches.

Raw event searches
Raw event searches are searches that just retrieve events from an index or indexes and are typically done when you want to analyze a problem. Some examples of these searches include: checking error codes, correlating events, investigating security issues, and analyzing failures. These searches do not usually include search commands (except search, itself), and the results are typically a list of raw events.

Transforming searches
Transforming searches are searches that perform some type of statistical calculation against a set of results. These are searches where you first retrieve events from an index and then pass them into one or more search commands. These searches will always require fields and at least one of a set of statistical commands. Some examples include: getting a daily count of error events, counting the number of times a specific user has logged in, or calculating the 95th percentile of field values.
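The two search types can be contrasted with a pair of SPL sketches (the sourcetype is hypothetical, not part of the course lab data):

```spl
# raw event search: returns the matching events themselves
sourcetype=access_combined error

# transforming search: the same events, summarized statistically
sourcetype=access_combined error | stats count by host
```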

Information density
Whether you're retrieving raw events or building a report, you should also consider whether you are running a search for sparse or dense information:
 Sparse searches are searches that look for a single event or an event that occurs infrequently within a large set of data. Some examples of these searches include: searching for a specific and unique IP address or error code. You've probably heard these referred to as 'needle in a haystack' or "rare term" searches.
 Dense searches are searches that scan through and report on many events. Some examples of these searches include: counting the number of errors that occurred or finding all events from a specific host.

Search and knowledge
As you search, you may begin to recognize patterns and identify more information that could be useful as searchable fields. You can configure Splunk to recognize these new fields as you index new data or you can create new fields as you search. Whatever you learn, you can use, add, and edit this knowledge about fields, events, and transactions to your event data. This capturing of knowledge helps you to construct more efficient searches and build more detailed reports.
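Sketches of each density profile (the index name and IP address are hypothetical):

```spl
# sparse, "needle in a haystack": a single rare term
index=web 10.2.3.4

# dense: scans and summarizes many events
index=web error | stats count
```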

The anatomy of a search

To better understand how search commands act on your data, it helps to visualize all your indexed data as a table. The disk represents all of your indexed data: a table of a certain size, with columns representing fields and rows representing events. Each search command redefines the shape of your table.

For example, let's take a look at the following search:

sourcetype=syslog ERROR | top user | fields - percent

The first intermediate results table shows fewer rows, representing the subset of events retrieved from the index that matched the search terms "sourcetype=syslog ERROR". The second intermediate results table shows fewer columns, representing the results of the top command, "top user", which summarizes the events into a list of the top 10 users and displays the user, count, and percentage. Finally, "fields - percent" removes the column that shows the percentage, so you are left with a smaller final results table.

Quotes and escaping characters

Generally, you want to use quotes around keywords and phrases if you don't want to search for their default meaning, such as Boolean operators and field/value pairs. For example:
 A search for the keyword AND without meaning the Boolean operator: error "AND"
 A search for this field/value phrase: error "startswith=foo"

Additionally, you need quotes around phrases and field values that include white spaces, commas, pipes, quotes, and/or brackets. Quotes must be balanced; an opening quote must be followed by an unescaped closing quote. For example:
 A search such as error | stats count will find the number of events containing the string error.
 A search such as ... | search "error | stats count" would return the raw events containing error, a pipe, stats, and count, in that order, instead of having the pipe split between commands.

The backslash character (\) is used to escape quotes, pipes, and itself. Backslash escape sequences are still expanded inside quotes. For example:
 The sequence \| as part of a search will send a pipe character to the command.
 The sequence \" will send a literal quote to the command, for example for searching for a literal quotation mark or inserting a literal quotation mark into a field using rex.
 The \\ sequence will be available as a literal backslash in the command.

If Splunk does not recognize a backslash sequence, it will not alter it. For example, \s in a search string will be available as \s to the command, because \s is not a known escape sequence. However, in the search string \\s will be available as \s to the command, because \\ is a known escape sequence that is converted to \.

Asterisks
Splunk treats the asterisk character as a major breaker. Because of this, it will never be in the index, and it cannot be searched for using a backslash to escape the character. If you want to search for the asterisk character, you will need to run a post-filtering regex search on your data:

index=_internal | regex ".*\*.*"

Examples
Example 1: myfield is created with the value of 6.
... | eval myfield="6"
Example 2: myfield is created with the value of ".
... | eval myfield="\""
Example 3: myfield is created with the value of \.
... | eval myfield="\\"
Example 4: This would produce an error because of unbalanced quotes.
... | eval myfield="\"
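The post-filtering idea — retrieve events first, then apply a regex to the raw text — looks like this in plain Python (the sample events are invented):

```python
import re

# Equivalent of `| regex ".*\*.*"`: keep only events whose raw text
# contains a literal asterisk, which cannot be found via the index.
events = ["GET /a.html 200", "SELECT * FROM logs", "rm -rf *"]
with_star = [e for e in events if re.search(r".*\*.*", e)]
```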

Set the time range of a search
Time is crucial for determining what went wrong. You often know when something happened, if not exactly what happened. Looking at events that happened around the same time can help correlate results and find the root cause. Searches run with an overly broad time range waste system resources and produce more results than you can handle.

Select time ranges to apply to your search
Use the time range picker to set time boundaries on your searches. You can restrict a search with preset time ranges, create custom time ranges, specify time ranges based on date or date and time, or work with advanced features in the time range picker. These options are described in the following sections.

Note: If you are located in a different timezone, time-based searches use the timestamp of the event from the instance that indexed the data.

Select from a list of Preset time ranges

Define custom Relative time ranges
Use the custom Relative time range options to specify a time range for your search that is relative to Now. You can select from the list of time range units, for example "Seconds ago", "Minutes ago", and so on.

The labels for Earliest and Latest update to match your selection. The preview boxes below the fields update to the time range as you set it.

Define custom Real-time time ranges
The custom Real-time option enables you to specify the start time for your real-time time range window.

Define custom Date ranges
Use the custom Date Range option to specify calendar dates in your search. You can choose among options to return events: Between a beginning and end date, Before a date, and Since a date. For these fields, you can type the date into the text box or select the date from a calendar.

42 .

Define custom Date & Time ranges
Use the custom Date & Time Range option to specify calendar dates and times for the beginning and ending of your search. You can type the date into the text box or select the date from a calendar.

Use Advanced time range options
Use the Advanced option to specify the earliest and latest search times. You can write the times in Unix (epoch) time or relative time notation. The epoch time value you enter is converted to local time. This timestamp is displayed under the text field so that you can verify your entry.
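The epoch-to-timestamp conversion Splunk performs for display can be reproduced in a few lines of Python (shown in UTC here so the result is deterministic; Splunk would convert to the local timezone):

```python
from datetime import datetime, timezone

epoch = 1609459200  # an epoch value as you might type it in the Advanced tab
stamp = datetime.fromtimestamp(epoch, tz=timezone.utc)
label = stamp.strftime("%Y-%m-%d %H:%M:%S")  # human-readable verification string
```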

Hands on Lab
Part 1 - Basic Concepts
There are a few concepts in the Splunk world that will be helpful for you to understand, so try to pay attention. I'll cover them in a few sentences. If you want more details, see the "Concepts" section near the end of this document.

Processing at the time the data is indexed: Splunk reads data from a source, such as a file or port, on a host (e.g., "my machine"), classifies that source into a sourcetype (e.g., "syslog", "access_combined", "apache_error", ...), then extracts timestamps, breaks up the source into individual events (e.g., log events, alerts, ...), which can be a single line or multiple lines, and writes each event into an index on disk, for later retrieval with a search.

Processing at the time the data is searched: When a search starts, matching indexed events are retrieved from disk, fields (e.g., code=404, user=david, ...) are extracted from the event's text, and the event is classified by matching against eventtype definitions (e.g., 'error', 'login', ...). The events returned from a search can then be powerfully transformed using Splunk's search language to generate reports that live on dashboards.
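The index-time step that "breaks up the source into individual events" can be sketched outside Splunk. This is a toy event breaker, not Splunk's: it assumes a new event starts whenever a line begins with a timestamp, and treats other lines as continuations (the timestamp pattern and log lines are invented):

```python
import re

TS = re.compile(r"^\d{4}-\d{2}-\d{2} ")  # assumed timestamp prefix

def break_events(lines):
    events = []
    for line in lines:
        if TS.match(line) or not events:
            events.append(line)          # start a new event
        else:
            events[-1] += "\n" + line    # continuation of a multi-line event
    return events

raw = ["2023-05-01 12:00:01 ERROR boom",
       "  Traceback (most recent call last):",
       "2023-05-01 12:00:02 INFO ok"]
```

`break_events(raw)` yields two events; the traceback line is folded into the first, multi-line event.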

Part 2 - Adding Data
Splunk can eat data from just about any source, including files, directories, ports, and scripts, keeping track of changes to them as they happen. We're going to start simple and just tell Splunk to index a particular file and not monitor it for updates:
1. Go to the Splunk Web interface (e.g., http://localhost:8000) and log in, if you haven't already.
2. Click Settings in the upper right-hand corner of Splunk Web.
3. Under Settings, click Add Data.
4. Click Upload Data to upload a file.
5. Click Select File. Browse and find "websample.log" on your Desktop that we previously saved.
6. Accept all the default values and just click Submit.
7. Click Start Searching.
8. Assuming all goes well, websample.log is now indexed, and all the events are timestamped and searchable.

Part 3 - Basic Searching
Splunk comes with several Apps, but the only relevant one now is the 'Search' app, which is the interface for generic searching. (More apps can be downloaded, and advanced users can build them themselves.) After logging into Splunk, select the Search app and let's get started in searching. We'll start out simple and work our way up.
To begin your Splunk search, type in terms you might expect to find in your data. For example, if you want to find events that might be HTTP 404 errors (i.e., webpage not found), type in the keywords:
http 404
You'll get back all the events that have both HTTP and 404 in their text.

Notice that search terms are implicitly AND'd together. The search was the same as "http AND 404". Let's make the search narrower:
http 404 "like gecko"

Using quotes tells Splunk to search for a literal phrase "like gecko", which returns more specific results than just searching for "like" and "gecko" because they must be adjacent as a phrase.
Splunk supports the Boolean operators AND, OR, and NOT (they must be capitalized), as well as parentheses to enforce grouping. To get all HTTP error events (i.e., not the 200 status code), not including 403 or 404, use this:
http NOT (200 OR 403 OR 404)
Again, the AND operator is implied; the previous search is the same as:
http AND NOT (200 OR 403 OR 404)
Splunk supports the asterisk (*) wildcard for searching. For example, to retrieve events that have the 40x and 50x classes of HTTP status codes, you could try:
http (40* OR 50*)
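The Boolean and wildcard behavior above can be emulated over plain strings. A rough sketch, with invented events, of `http NOT (200 OR 403 OR 404)` and of the `40*`/`50*` wildcard (approximated here with a regex):

```python
import re

events = ["HTTP GET /a 200", "http GET /b 503", "http GET /c 404"]

def has(term, event):
    # keyword matching is case-insensitive on the raw event text
    return term.lower() in event.lower()

# http NOT (200 OR 403 OR 404)
errors = [e for e in events
          if has("http", e) and not (has("200", e) or has("403", e) or has("404", e))]

# http (40* OR 50*), approximated as whole tokens starting with 40 or 50
wildcard = [e for e in events if re.search(r"\b(40\d|50\d)\b", e)]
```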

When you index data, Splunk automatically adds fields (i.e., attributes) to each of your events. It does this based on some text patterns commonly found in IT data, and intermediate users can add their own extraction rules for pulling out additional fields. To narrow results with a search, just add attribute=value to your search:
sourcetype=access_combined status=404
This search is a much more precise version of our first search (i.e., "http 404") because it will only return events that come from access_combined sources (i.e., webserver events) and that have a status code of 404. The "404" has to be found where a status code is expected in the event and not just anywhere, which is different than just having a 404 somewhere in the text. In addition to <attribute>=<value>, you can also do != (not equals), and >, >=, <, and <= for numeric fields.

Part 4 - Now it's your turn on your own:
1. Upload the file LoanStats3a.csv located on your desktop
2. Search for entries that contain the word "divorced"
3. Search for entries that are divorced and renting
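The point about position matters. A hypothetical one-field extraction in Python shows why status=404 is stricter than a bare 404: the status code is pulled from the place it occupies in a (simplified, invented) access-combined line, so a 404 elsewhere in the text does not match:

```python
import re

# A simplified access_combined-style line; note the "404" inside the URI.
LINE = '10.2.1.44 - - [01/May/2023:12:00:00] "GET /product/404-vase HTTP/1.1" 200 1024'

# Extract the status field from its position: after the quoted request,
# before the byte count at the end of the line.
m = re.search(r'" (\d{3}) \d+$', LINE)
status = m.group(1)

# A keyword search for 404 would match this event; a field search for
# status=404 would not, because the extracted status is 200.
```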

Part 5 - Search App
Now click on Search on the Main toolbar

You will get the following screen:

51

Click on the Data Summary button, you will get:

Click on the Sources tab, you will get:

52

Now you can choose websample.log, you will get:

53

Part 6 - Let’s upload another sample file:
1. Please upload sampledata.zip, which is located on the Desktop.
2. Notice there is no preview.
3. Please take the defaults and start searching.
4. On the Sourcetypes panel, click access_combined_wcookie.

54

You are a member of the Customer Support team for the online Flower & Gift shop. This is your first day on the job. You want to
learn some more about the shop. Some questions you want answered are:




What does the store sell?
How much does each item cost?
How many people visited the site?
How many bought something today?
What is the most popular item that is purchased each day?

It's your first day of work with the Customer Support team for the online Flower & Gift shop. You're just starting to dig into the Web
access logs for the shop, when you receive a call from a customer who complains about trouble buying a gift for
his girlfriend--he keeps hitting a server error when he tries to complete a purchase. He gives you his IP address, 10.2.1.44.
55

Type the customer's IP address into the search bar:
sourcetype="access_combined_wcookie" 10.2.1.44
As you type into the search bar, Splunk's search assistant opens. Search assistant shows you typeahead, or contextual matches and completions for each keyword as you type it into the search bar. These contextual matches are based on what's in your data. The entries under matching terms update as you continue to type, because the possible completions for your term change as well.

Part 7 - Time Ranges
Try different time ranges, like the previous week, within the search toolbar.

Identify the contents of search results and refine searches
Splunk supports the Boolean operators AND, OR, and NOT. When you include Boolean expressions in your search, the operators have to be capitalized. You can also mouse over results to refine searches.

Hands on Lab
1. Choose the data source LoanStats3a.csv. Remember: click Search on the Toolbar and then click the Data Summary button.
2. Search for the word: Status
3. Then click on the word Paid and add it to the search
4. Click on the word RENT and exclude it from the search
BONUS LAB:
1. Without the use of fields, find the status of Not Paid and Not Mortgage

Use the timeline
The timeline is a visual representation of the number of events returned by a search over a selected time range. It shows the count of events over the time range that the search was run. Here, the timeline shows web access events over the Previous business week. You can use the timeline to highlight patterns or clusters of events, or to investigate peaks (spikes in activity) and lows (possible server downtime) in event activity.
The timeline is a type of histogram, where the range is broken up into smaller time intervals (such as seconds, minutes, hours, or days), and the count of events for each interval appears in column form. Mouse over a bar to see the count of events. Click on a bar to drill down to that time range. Drilling down in this way does not run a new search; it just filters the results from the previous search. When you use the timeline to display the results of real-time searches, the timeline represents the sliding time range window covered by the real-time search.
Change the timeline format
The timeline is located in the Events tab above the events listing.
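A timeline column is just a count of events per time bucket. A minimal sketch of the binning, using one-minute buckets and invented timestamps:

```python
from collections import Counter
from datetime import datetime

stamps = ["2023-05-01 12:00:05", "2023-05-01 12:00:40", "2023-05-01 12:01:10"]

# Truncate each timestamp to its minute to form the bucket key, then count.
bins = Counter(datetime.strptime(s, "%Y-%m-%d %H:%M:%S").strftime("%H:%M")
               for s in stamps)
```

Here `bins` maps "12:00" to 2 and "12:01" to 1; drawing one column per key gives the histogram.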

Format options are located in the Format Timeline menu: You can hide the timeline (Hidden) and display a Compact or Full view of it. 61 . You can also toggle the timeline scale between linear (Linear Scale) or logarithmic (Log Scale).

When Full is selected, the timeline is taller and displays the count on the y-axis and time on the x-axis.
Zoom in and zoom out to investigate events
Zoom and selection options are located above the timeline. At first, only the Zoom Out option is available.

The timeline legend is on the top right corner of the timeline. This indicates the scale of the timeline. For example, 1 minute per
column indicates that each column represents a count of events during that minute. Zooming in and out changes the time scale. For
example, if you click Zoom Out the legend will indicate that each column now represents an hour instead of a minute.
When you mouse over and select bars in the timeline, the Zoom to Selection or Deselect options become available.

62

Mouse over and click on the tallest bar or drag your mouse over a cluster of bars in the timeline. The events list updates to display
only the events that occurred in that selected time range. The time range picker also updates to the selected time range. You can cancel
this selection by clicking Deselect.
When you Zoom to Selection, you filter the results of your previous search for your selected time period. The timeline and events list
update to show the results of the new search.

63

You cannot Deselect after you have zoomed into a selected time range, but you can Zoom Out again.

64

Work with events
An event is a single piece of data in Splunk software, similar to a record in a log file or other data input. When data
is indexed, it is divided into individual events. Each event is given a timestamp, host, source, and source type.
Often, a single event corresponds to a single line in your inputs, but some inputs (for example, XML logs) have
multiline events, and some inputs have multiple events on a single line. When you run a successful search, you get
back events.

65

Hands on Lab

Back at the Flower & Gift shop, let's continue with the customer (10.2.1.44) you were assisting. He reported an error while
purchasing a gift for his girlfriend. You confirmed his error, and now you want to find the cause of it.
Continue with the last search, which showed you the customer's failed purchase attempts.
1. Type purchase into the search bar and run the search:
sourcetype="access_combined_wcookie" 10.2.1.44 purchase

When you search for keywords, your search is not case-sensitive, and Splunk retrieves the events that contain those
keywords anywhere in the raw text of the event's data.
Use Boolean operators
If you're familiar with Apache server logs, in this case the access_combined format, you'll notice that
most of these events have an HTTP status of 200, or Successful. These events are not interesting for
you right now, because the customer is reporting a problem.

Splunk supports the Boolean operators: AND, OR, and NOT. When you include
Boolean expressions in your search, the operators have to be capitalized.
2. Use the Boolean NOT operator to quickly remove all of these Successful page
requests. Type in:
66

sourcetype="access_combined_wcookie" 10.2.1.44 purchase NOT 200

The AND operator is always implied between search terms. So the search in Step 2 is
the same as:
sourcetype="access_combined_wcookie" AND 10.2.1.44 AND purchase NOT 200

You notice that the customer is getting HTTP server (503) and client (404) errors. But, he specifically
mentioned a server error, so let's quickly remove events that are irrelevant.
Another way to add Boolean clauses quickly and interactively to your search is to use your search
results. Splunk lets you highlight and select any segment from

67

Timeline Usage
Continue with the last search, which showed you the customer's failed purchase attempts.
1. Search for:
sourcetype="access_combined_wcookie" 10.2.1.44 purchase NOT 200 NOT 404
In the last topic, you really just focused on the search results listed in the events viewer area of this dashboard. Now, let's take a look at the timeline.
2. Mouse over one of the bars. A tooltip pops up and displays the number of events that Splunk found during the time span of that bar (1 bar = 1 hour). The location of each bar on the timeline corresponds to an instance when the events that match your search occurred. If there are no bars at a time period, no events were found then.

3. Click one of the bars, for example the tallest bar. The taller the bar, the more events occurred at that time. Often, seeing spikes in the number of events, or no events at all, is a good indication that something has happened. Splunk does not run the search when you click on the bar. Instead, it gives you a preview of the results zoomed in at that time range. You can still select other bars at this point.
4. Double-click on the same bar. Splunk runs the search again and retrieves only events during the one-hour span you selected. This updates your search results to show you only the events in that time span. One hour is still a wide time period to search, so let's narrow the search down more.

5. Double-click another bar. You should see the same search results in the Event viewer, but notice that each bar now represents one minute of time (1 bar = 1 min). Once again, this updates your search to retrieve only events during that one-minute span of time. Also, notice that the search overrides the time range picker, which now shows "Custom time". (You'll see more of the time range picker later.)
6. Double-click another bar. Each bar now represents the number of events for one second of time. Now, you want to expand your search to see everything else, if anything, that happened during this second. Without changing the time range, replace your previous search in the search bar with:
*
Splunk supports using the asterisk (*) wildcard to search for "all" or to retrieve events based on parts of a keyword.

Up to now, you've just searched for Web access logs. This search tells Splunk that you want to see everything that occurred in this time range:

Control search job progress
After you launch a search, you can access and manage information about the search's job without leaving the Search page. Once your search is running, paused, or finalized, click Job and choose from the available options there. You can:
• Edit the job settings. Select this to open the Job Settings dialog, where you can change the job read permissions, extend the job lifetime, and get a URL for the job that you can use to share the job with others or put a link to the job in your browser's bookmark bar.
• Send the job to the background. Select this if the search job is slow to complete and you would like to run the job in the background while you work on other Splunk activities (including running a new search job).
• Inspect the job. Opens a separate window and displays information and metrics for the search job using the Search Job Inspector. You can select this action while the search is running or after it completes.
• Delete the job. Use this to delete a job that is currently running, is paused, or has finalized. After you have deleted the job, you can still save the search as a report.

Change the search mode
The Search mode controls the search experience. You can set it to speed up searches by cutting down on the event data it returns (Fast mode), or you can set it to return as much event information as possible (Verbose mode). In Smart mode (the default setting), it automatically toggles search behavior based on the type of search you're running.

Save the results
The Save as menu lists options for saving the results of a search as a Report, Dashboard Panel, Alert, and Event type.
• Report: If you would like to make the search available for later use, you can save it as a report. You can run the report again on an ad hoc basis by finding the report on the Reports listing page and clicking its name.
• Dashboard Panel...: Click this if you'd like to generate a dashboard panel based on your search and add it to a new or existing dashboard.
• Alert: Click to define an alert based on your search. Alerts run saved searches in the background (either on a schedule or in real time). When the search returns results that meet a condition you have set in the alert definition, the alert is triggered.
• Event type: Event types let you classify events that have common characteristics. If the search doesn't include a pipe operator or a subsearch, you can use this to save it as an event type.
Other search actions
Between the job progress controls and the search mode selector are three buttons which enable you to Share, Export, and Print the results of a search.

• Click Share to share the job. When you select this, the job's lifetime is extended to 7 days and read permissions are set to Everyone.
• Click Export to export the results. You can select to output to CSV, XML, raw events, or JSON, and specify the number of results to export.
• Click Print to send the results to a printer that has been configured.
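The CSV and JSON export formats are easy to reproduce for a result set held as a list of dictionaries. This sketches the output shape, not Splunk's exporter; the rows are invented:

```python
import csv, io, json

rows = [{"user": "amy", "count": 2}, {"user": "bob", "count": 1}]

# CSV: a header row followed by one line per result
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["user", "count"])
writer.writeheader()
writer.writerows(rows)
as_csv = buf.getvalue()

# JSON: the same rows as an array of objects
as_json = json.dumps(rows)
```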

Hands on Lab
1. Using your file LoanStats3a.csv, save your last search as an event type.
2. Go to Settings, and click on Event types to view your saved event type.

End of Module Hands-on Quiz
Please refer to your virtual machine for the test.

Module 4 - Using Fields in Searches
• Understand fields
• Use fields in searches
• Use the fields sidebar
• Hands on Lab covering: Understand fields, Use fields in searches, Use the fields sidebar
• End of Module Hands-on Quiz

Understand fields
Fields exist in machine data in many forms. Often, a field is a value (with a fixed, delimited position on the line) or a name and value pair, where there is a single value to each field name. In Splunk Enterprise, fields are searchable name and value pairings that distinguish one event from another, because not all events will have the same fields and field values. Fields let you write more tailored searches to retrieve the specific events that you want. Some examples of fields are clientip for IP addresses accessing your Web server, _time for the timestamp of an event, and host for the domain name of a server.
A field can be multivalued; that is, it can appear more than once in an event and have a different value for each appearance. One of the more common examples of multivalue fields is email address fields. While the From field will contain only a single email address, the To and Cc fields have one or more email addresses associated with them.

Use fields in searches
Use the following syntax to search for a field: fieldname="fieldvalue". Field names are case sensitive, but field values are not.
1. Go to the Search dashboard and type the following into the search bar:
sourcetype="access_*"
This indicates that you want to retrieve only events from your web access logs and nothing else. Apache web access logs are formatted as access_common, access_combined, or access_combined_wcookie. Here, sourcetype is a field name and access_* is a wildcarded field value used to match any Apache web access event.
2. In the Events tab, scroll through the list of events. If you are familiar with the access_combined format of Apache logs, you recognize some of the information in each event, such as:
• IP addresses for the users accessing the website.
• URIs and URLs for the pages requested and referring pages.
• GET or POST page request methods.
• HTTP status codes for each page request.

Use the fields sidebar
To the left of the events list is the Fields sidebar. As Splunk Enterprise retrieves the events that match your search, the Fields sidebar updates with Selected fields and Interesting fields. These are the fields that Splunk Enterprise extracted from your data. Selected Fields are the fields that appear in your search results; the default fields host, source, and sourcetype are selected. You can hide and show the fields sidebar by clicking Hide Fields and Show Fields, respectively.
3. Click All Fields. The Select Fields dialog box opens, where you can edit the fields to show in the events list. You see the default fields that Splunk defined. Some of these fields are based on each event's timestamp (everything beginning with date_*), punctuation (punct), and location (index).

Other field names apply to the web access logs; for example, there are clientip, method, and status. These are not default fields. They are extracted at search time.
In this set of search results, Splunk Enterprise found five values for action, and the action field appears in 49.9% of your search results. Clicking the field name opens the field summary for the action field.

Hands on Lab
1. Go back to the Search dashboard and search for web access activity. Select Other > Yesterday from the time range picker:
sourcetype="access_*"
You were actually using fields all along! Each time you searched for sourcetype=access_*, you told Splunk to only retrieve events from your web access logs and nothing else. To search for a particular field, specify the field name and value: fieldname="fieldvalue". Here, sourcetype is a field name and access_combined_wcookie is a field value; the wildcarded value access_* is used to match all field values beginning with access_ (which would include access_common, access_combined, and access_combined_wcookie).
Note: Field names are case sensitive, but field values are not!
2. Scroll through the search results. If you're familiar with the access_combined format of Apache logs, you will recognize some of the information in each event, such as:
• IP addresses for the users accessing the website.
• URIs and URLs for the page request and referring page.
• Page request methods.
• HTTP status codes for each page request.

As Splunk retrieves these events, the Fields sidebar updates with selected fields and interesting fields. These are the fields that Splunk extracted from your data. Notice that the default fields host, source, and sourcetype are selected fields and are displayed in your search results.
3. Scroll through the interesting fields to see what else Splunk extracted. You should recognize the field names that apply to the Web access logs. For example, there's clientip, method, and status. These are not default fields; they have (most likely) been extracted at search time.
4. Click the Edit link in the fields sidebar. The Fields dialogue opens and displays all the fields that Splunk extracted.
• Available Fields are the fields that Splunk identified from the events in your current search (some of these fields were listed under interesting fields).
• Selected Fields are the fields you picked (from the available fields) to show in your search results.

By default, host, source, and sourcetype are selected. You should also see other default fields that Splunk defined; some of these fields are based on each event's timestamp (everything beginning with date_*), punctuation (punct), and location (index).
5. Scroll through the list of Available Fields. You're already familiar with the fields that Splunk extracted from the Web access logs based on your search. But you should also notice other extracted fields that are related to the online store. For example, there are action, category_id, and product_id. From conversations with your coworker, you may know that these fields are:

Field name — Description
action — what a user does at the online shop.
category_id — the type of product a user is viewing or buying.
product_id — the catalog number of the product the user is viewing or buying.

Different events will have different fields; the fields you selected will be included in your search results if they exist in that particular event.
6. From the Available fields list, select action, category_id, and product_id.
7. Click Save, and you return to the Search view.

The fields sidebar doesn't just show you what fields Splunk has captured from your data. It also displays how many values exist for each of these fields. For the fields you just selected, there are 2 for action, 5 for category_id, and 9 for product_id. This doesn't mean that these are all the values that exist for each of the fields; these are just the values that Splunk knows about from the results of your search. What are some of these values?
8. Under selected fields, click action. This opens the field summary for the action field. This window tells you that, in this set of search results, Splunk found two values for action, and they are purchase and update. Also, it tells you that the action field appears in 71% of your search results.

This means that roughly three-quarters of the Web access events are related to the purchase of an item or an update (of the item quantity in the cart, perhaps).
9. Close this window and look at the other two fields you selected, category_id (what types of products the shop sells) and product_id (specific catalog names for products). The online shop sells a selection of flowers, gifts, plants, candy, and balloons. Now you know a little bit more about the information in your data relating to the online Flower and Gift shop. Let's use these fields to see what people are buying.
Use fields to run more targeted searches
These next two examples compare the results when searching with and without fields.
Example 1
Return to the search you ran to check for errors in your data. Select Other > Yesterday from the time range picker:
error OR failed OR severe OR (sourcetype=access_* (404 OR 500 OR 503))

Run this search again, but this time, use fields in your search. The HTTP error codes are values of the status field. Now your search looks like this:
error OR failed OR severe OR (sourcetype=access_* (status=404 OR status=500 OR status=503))
Notice the difference in the count of events between the two searches: because it's a more targeted search, the second search returns fewer events. When you run simple searches based on arbitrary keywords, Splunk matches the raw text of your data. When you add fields to your search, Splunk looks for events that have those specific field/value pairs.
Example 2
Before you learned about the fields in your data, you might have run this search to see how many times flowers were purchased from the online shop:
sourcetype=access_* purchase flower*
As you typed in "flower", search assistant shows you both "flower" and "flowers" in the typeahead.

Since you don't know which is the one you want, you use the wildcard to match both. If you scroll through the (many) search results, you'll see that some of the events have action=update and a category_id with a value other than flowers. These are not events that you wanted! Run this search instead. Select Other > Yesterday from the time range picker:
sourcetype=access_* action=purchase category_id=flower*
For the second search, even though you still used the wildcarded word "flower*", there is only one value of category_id that it matches (FLOWERS).

Notice the difference in the number of events that Splunk retrieved for each search: the second search returns significantly fewer events. Searches with fields are more targeted and retrieve more exact matches against your data.
Now on your own:
1. Bring up the Loan data file.
2. Using fields, find entries whose annual salary is less than 20,000 and who live in the state of CA. Use addr_state for the state.
3. Refine the search for the field emp_title where it equals Walmart.
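The difference the examples demonstrate — keyword searches match raw text anywhere, while field searches match only extracted field values — can be sketched with invented events:

```python
import re

events = [
    {"_raw": "purchase flowers ok",  "action": "purchase", "category_id": "FLOWERS"},
    {"_raw": "update cart flowers",  "action": "update",   "category_id": "FLOWERS"},
    {"_raw": "purchase a flowerpot", "action": "purchase", "category_id": "GIFTS"},
]

# Keyword search `purchase flower*` approximated: match raw text anywhere.
keyword_hits = [e for e in events
                if "purchase" in e["_raw"] and re.search(r"flower\w*", e["_raw"])]

# Field search `action=purchase category_id=flower*`: match extracted fields.
field_hits = [e for e in events
              if e["action"] == "purchase"
              and e["category_id"].lower().startswith("flower")]
```

The keyword version also matches the flowerpot gift purchase; the field version matches only genuine flower purchases, so it returns fewer events.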

End of Module Quiz
Please refer to your virtual machine for the test.

Module 5 - Creating Reports and Visualizations
• Save a search as a report
• Edit reports
• Create reports that include visualizations such as charts and tables
• Add reports to a dashboard
• Create an instant pivot from a search
• Hands on Lab covering: Save a search as a report, Edit reports, Create reports that include visualizations such as charts and tables, Add reports to a dashboard, Create an instant pivot from a search
• End of Module Hands-on Quiz

Save a search as a report
To save your search as a report, click on the Report link. This opens the Save As Report dialog. From here, you need to do the following:
1. Enter a Title (or name) for your report.
2. Enter an optional Description to remind users what your report does.
3. Indicate if you'd like to include the Splunk Time Range Picker as a part of your report.

Once you click Save, Splunk prompts you to either review Additional Settings for your newly created report (Permissions, Schedule, Acceleration, and Embed), Add (the report) to Dashboard, View the report, or Continue Editing the search:

The additional settings that can be made to the report are as follows:
• Permissions: Allows you to set how the saved report is displayed: by owner, by app, or for all apps. In addition, you can make the report read only or writeable (can be edited).
• Schedule: Allows you to schedule the report (for Splunk to run/refresh it based upon your schedule); for example, at an interval like every week, on Monday at 6 AM, and for a particular time range.
• Acceleration: Not all saved reports qualify for acceleration, and not all users (not even admins) have the ability to accelerate reports. Generally speaking, Splunk Enterprise will build a report acceleration summary for the report if it determines that the report would benefit from summarization (acceleration).
• Embed: Report embedding lets you bring the results of your reports to large numbers of report stakeholders. With report embedding, you can embed scheduled reports in external (non-Splunk) websites, dashboards, and portals. Embedded reports can display results in the form of event views, tables, charts, maps, single values, or any other visualization type. They use the same formatting as the originating report. When you embed a saved report, you do this by copying a Splunk-generated URL into an HTML-based web page.

Edit reports
You can easily edit an existing report. You can edit a report's definition (its search string, pivot setup, time range, or report formatting). You can also edit its description, permissions, schedule, and acceleration settings.

To edit a report's definition
If you want to edit a report's definition, there are two ways to start, depending on whether you're on the Reports listing page or looking at the report itself.
 If you're on the Reports listing page, locate the report you want to edit, go to the Actions column, and click Open in Search or Open in Pivot (you'll see one or the other depending on which tool you used to create the report).
 If you've entered the report to review its results, click Edit and select Open in Search or Open in Pivot (you'll see one or the other depending on which tool you used to create the report).

Edit the definition of a report opened in Search
After you open a report in Search, you can change the search string, time range, or result formatting. After you rerun the report, a Save button will be enabled towards the upper right of the report. Click this to save the report. You also have the option of saving your edited search as a new report.

Create reports that include visualizations such as charts and tables
A visualization is a representation of data returned from a search. Most visualizations are graphical representations; however, a visualization can also be non-graphical. In dashboards, a panel contains one or more visualizations.
Visualizations available for simple XML dashboards include:
 chart
 event listing
 map
 table
 single value
A chart visualization has several types:
 area
 bar
 bubble
 column
 filler gauge
 line
 marker gauge
 pie
 radial gauge
 scatter

Hands on Lab covering:
1. Click Search on the Toolbar, then click the Data Summary button:
2. Choose the SourceType Tab, and click on access_combined_wcookie:

3. Under Interesting Fields, select category_id. Then, under Reports, click top values:


4. It should yield a report:

5. Now click on Statistics, and notice the table of values:
6. Go back to the Visualization tab. Under the Bar Chart dropdown, under Format, investigate all the different options.
7. Investigate all the different chart types as well.
Bonus Lab: Using the LoanStats3a.csv file, create a report from the data that shows top values across all the states.

Add reports to a dashboard
Once you have created your reports, you can easily add them to a dashboard by clicking the Add to Dashboard button.

Create an instant pivot from a search
From any search, simply select the Statistics tab and click on the Pivot icon. Let's take a walkthrough:
1. Make sure to pick the interesting fields to be selected fields.

2. Click the Statistics tab after you have the search you want.
3. Then click the Pivot icon:

4. Then you can choose the fields you have selected to Pivot, and click OK:

5. Then you can choose a field like annual_inc, with a default of Sum, to be part of your Pivot column values:

6. And then pick a field like addr_state for the row column.

7. Finally, pick a bar chart on the left side.

Hands on Lab
1. Create a report out of the LoanStats3a.csv source that looks into the annual income < 70000 and the addr_state of CA, FL, NY.
2. Create an instant pivot out of the search from #1 above.

End of Module Hands on Quiz
Please refer to your virtual machine for test.

Module 6. Working with Dashboards
 Create a dashboard
 Add a report to a dashboard
 Hands on Lab covering: Create a dashboard. Add a report to a dashboard.
 Add a pivot report to a dashboard
 Edit a dashboard
 Hands on Lab covering: Add a pivot report to a dashboard. Edit a dashboard.
 End of Module Hands on Quiz

Create a dashboard
You can create a dashboard from the search, OR you can click on the Dashboard option on the Toolbar.

Add a report to a dashboard
Click on Add to Dashboard from your report.

Hands on Lab: Let's use the flower shop transactions to create a dashboard and add a report to it
Before you learned about the fields in your data, you might have run this search to see how many times flowers were purchased from the online shop:
sourcetype=access_* purchase flower* | top limit=20 category_id
1. Let's save the report of this search as Flowers Category.
2. Click on the view button to view the report.
3. Click Add to Dashboard to add the report to a dashboard.
4. Name the dashboard Flowers Dashboard.
Bonus Lab: Take the report out of the LoanStats3a.csv source that looks into the annual income < 70000 and the addr_state of CA, FL, NY from the last module and create a dashboard.

Add a pivot report to a dashboard
From your pivot, you can save it as a dashboard panel.

Edit a dashboard
From your dashboard, you can edit your dashboard from the menu. And then you could, for example, edit Panels.

Hands on Lab:
1. Create an instant pivot, like the one from the previous module out of the LoanStats3a.csv source that looks into the annual income < 70000 and the addr_state of CA, FL, NY.
2. Then add that pivot report to the dashboard.
3. Create another report that looks at ALL the annual incomes in the states of CA, FL, NY.
4. Add that report to the dashboard created in exercise #1.
5. Edit the dashboard panels and add titles to your panels.
Bonus Lab:
1. Create another instant pivot or report and add it to the existing dashboard.

End of Module Hands on Quiz
Please refer to your virtual machine for test.

Module 7. Search Fundamentals
 Review basic search commands and general search practices
 Examine the anatomy of a search
 Use the following commands to perform searches:
 Fields
 Table
 Rename
 Rex
 Multikv

Review basic search commands and general search practices
To successfully use Splunk, it is vital that you write effective searches. Using the index efficiently will make your initial discoveries faster, and the reports you create will run faster for you and for others. In this chapter, we will cover the following topics:
 How to write effective searches
 How to search using fields
 Understanding time
 Saving and sharing searches

Using search terms effectively
The key to creating an effective search is to take advantage of the index. The Splunk index is effectively a huge word index, sliced by time. The single most important factor for the performance of your searches is how many events are pulled from the disk. The following few key points should be committed to memory:
 Search terms are case insensitive: Searches for error, Error, ERROR, and ErRoR are all the same thing.
 Search terms are additive: Given the search item mary error, only events that contain both words will be found. There are Boolean and grouping operators to change this behavior, which we will discuss in this chapter under Boolean and grouping operators.
 Only the time frame specified is queried: This may seem obvious, but it's very different from a database, which would always have a single index across all events in a table. Since each index is sliced into new buckets over time, only the buckets that contain events for the time frame in question need to be queried.
 Search terms are words, including parts of words: A search for foo will also match foobar.

With just these concepts, you can write fairly effective searches. Let's dig a little deeper, though:
 A word is anything surrounded by whitespace or punctuation: For instance, given the log line 2012-02-07T01:03:31.104-0600 INFO AuthClass Hello world. [user=Bobby, ip=1.2.3.3], the "words" indexed are 2012, 02, 07T01, 03, 31, 104, 0600, INFO, AuthClass, Hello, world, user, Bobby, ip, 1, 2, 3, and 3. This may seem strange, and possibly a bit wasteful, but this is what Splunk's index is really, really good at: dealing with huge numbers of words across a huge number of events.
 Field names are case sensitive: When searching for host=myhost, host must be lowercase. Likewise, any extracted or configured fields have case sensitive field names, but the values are case insensitive.
  Host=myhost will not work
  host=myhost will work
  host=MyHost will work
 Fields do not have to be defined before indexing data: An indexed field is a field that is added to the metadata of an event at index time. There are legitimate reasons to define indexed fields, but in the vast majority of cases it is unnecessary and is actually wasteful.
 Numbers are not numbers until after they have been parsed at search time: This means that searching for foo>5 will not use the index, as the value of foo is not known until it has been parsed out of the event at search time. There are different ways to deal with this behavior, depending on the question you're trying to answer.
 Splunk is not grep with an interface: One of the most common questions is whether Splunk uses regular expressions for your searches. Technically, the answer is no. Splunk does use regex internally to extract fields, including the auto generated fields, but most of what you would do with regular expressions is available in other ways. Using the index as it is designed is the best way to build fast searches. Regular expressions can then be used to further filter results or extract fields.
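The "word is anything surrounded by whitespace or punctuation" idea can be illustrated outside Splunk. The following is a rough Python sketch, not Splunk's actual segmenter: it treats every run of letters and digits as an indexed term, which is close enough to show why one log line produces many searchable words (and why the raw index terms keep their case, even though Splunk searches are case insensitive).

```python
import re

def rough_segments(event: str) -> list[str]:
    # Crude stand-in for Splunk's segmenter: any run of characters
    # that is not whitespace or punctuation becomes a "word".
    return re.findall(r"[A-Za-z0-9]+", event)

line = "2012-02-07T01:03:31.104-0600 INFO AuthClass Hello world. [user=Bobby, ip=1.2.3.3]"
words = rough_segments(line)
print(words[:8])   # the timestamp alone yields several indexed terms
print("Bobby" in words)
```

Note that the timestamp fragments (2012, 02, 07T01, ...) and the field names and values all become separate terms, matching the indexed-word list described above.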

For instance. For instance. NOT applies to the next term or group.Examine the anatomy of a search Boolean and grouping operators There are a few operators that you can use to refine your searches (note that these operators must be in uppercase to not be considered search terms):      AND is implied between terms. Searching for an equal sign can be accomplished by wrapping it in quotes. Brackets ( [ ] ) are used to perform a subsearch. For instance. error mary (two words separated by a space) is the same as error AND mary. error OR mary means find any event that contains either word. "Out of this world" will find this exact sequence of words. \= is the same as "=". Parentheses can help avoid confusion in logic. The quote marks ("") identify a phrase. OR allows you to specify multiple values. You can also escape characters to search for them. these two statements are equivalent:     bob error OR warn NOT debug bob AND (error OR warn)) AND NOT debug The equal sign (=) is reserved for specifying fields. 124 . error NOT mary would find events that contain error but do not contain mary. Parentheses ( ( ) ) is used for grouping terms. For example. For example. Out of this world would find any event that contains all of these words. but not necessarily in that order.

You can use these operators in fairly complicated ways if you want to be very specific, or even to find multiple sets of events in a single query. The following are a few examples:
 error mary NOT jacky
 error NOT (mary warn) NOT (jacky error)
 index=myapplicationindex ( sourcetype=sourcetype1 AND ( (bob NOT error) OR (mary AND warn) ) ) OR ( sourcetype=sourcetype2 (jacky info) )
This can also be written with some whitespace for clarity:
index=myapplicationindex
( sourcetype=security
AND
( (bob NOT error) OR (mary AND warn) )
)
OR
( sourcetype=application (jacky info) )

Clicking to modify your search
Though you can probably figure it out by just clicking around, it is worth discussing the behavior of the GUI when moving your mouse around and clicking.
 Clicking on any word or field value will give you the option to Add to search or Exclude from search (the existing search) or (create a) New search:
 Clicking on a word or a field value that is already in the query will give you the option to remove it (from the existing query) or, as above, (create a) new (search):

Event segmentation
In previous versions of Splunk, event segmentation was configurable through a setting in the Options dialog. In version 6.2, the options dialog is not present; although segmentation (discussed later in this chapter under the field widgets section) is still an important concept, it is not accessible through the web interface/options dialog in this version.

Field widgets
Clicking on values in the Select Fields dialog (the field picker), or in the field value widgets underneath an event, will again give us an option to append (add to) or exclude (remove from) our search or, as before, to start a new search. For instance, if source="C:\Test Data\TM1ProcessError_20140623213757_temp.log" appears under your event, clicking on that value and selecting Add to search will append source="C:\\Test Data\\TM1ProcessError_20140623213757_temp.log" to your search:

To use the field picker, you can click on the link All Fields (see the following image). Expand the results window by clicking on > in the far-left column. Clicking on a result will append that item to the current search:

If a field value looks like key=value in the text of an event, you will want to use one of the field widgets instead of clicking on the raw text of the event. Depending on your event segmentation setting, clicking on the word will either add the value or key=value. The former will not take advantage of the field definition; instead, it will simply search for the word. The latter will work for events that contain the exact quoted text, but not for other events that actually contain the same field value extracted in a different way.

Time
Clicking on the time next to an event will open the _time dialog (shown in the following image), allowing you to change the search to select Events Before or After a particular time period, and will also have the following choices:
 Before this time
 After this time
 At this time
In addition, you can select Nearby Events within plus, minus, or plus or minus, a number of seconds (the default), milliseconds, minutes, hours, days, or weeks:

One search trick is to click on the time of an event, select At this time, and then use the Zoom out (above the timeline) until the appropriate time frame is reached.

Fields command
Description
Keeps (+) or removes (-) fields from search results based on the field list criteria. If + is specified, only the fields that match one of the fields in the list are kept. If - is specified, only the fields that match one of the fields in the list are removed. If neither is specified, defaults to +.

Syntax
fields [+|-] <wc-field-list>

Required arguments
<wc-field-list>
Syntax: <string>, <string>, ...
Description: Comma-delimited list of fields to keep (+) or remove (-). You can use wild card characters in the field names.

Usage
By default, the internal fields _raw and _time are included in output. The fields command does not remove internal fields unless explicitly specified with:
... | fields - _raw, _time
or more explicitly, with:
... | fields - _*
Note: Be cautious removing the _time field. Statistical commands, such as timechart and chart, cannot display date or time information without the _time field.
Important: The leading underscore is reserved for all internal Splunk Enterprise field names, such as _raw and _time.

Examples
Example 1: Remove the "host" and "ip" fields.
... | fields - host, ip
Example 2: Keep only the host and ip fields. Remove all of the internal fields. The internal fields begin with an underscore character, for example _time.
... | fields host, ip | fields - _*
Example 3: Keep only the fields 'source', 'sourcetype', 'host', and all fields beginning with 'error'.
... | fields source, sourcetype, host, error*
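The keep/remove semantics above can be sketched outside Splunk. This is a minimal Python analogue (not Splunk code): an event is modeled as a plain dict, wildcards are matched with fnmatch, and, mimicking the usage note above, internal fields are kept on a `fields +` unless explicitly removed.

```python
from fnmatch import fnmatch

def fields(event: dict, field_list: list[str], keep: bool = True) -> dict:
    """Rough analogue of `| fields [+|-] <wc-field-list>` on one event."""
    matches = {k for k in event if any(fnmatch(k, pat) for pat in field_list)}
    if keep:
        # `fields +` keeps matching fields; internal fields (_raw, _time)
        # survive unless they are explicitly removed, which we mimic here.
        internal = {k for k in event if k.startswith("_")}
        wanted = matches | internal
        return {k: v for k, v in event.items() if k in wanted}
    # `fields -` removes only the matching fields.
    return {k: v for k, v in event.items() if k not in matches}

event = {"_time": 1, "_raw": "...", "host": "web01", "ip": "1.2.3.3", "error_code": 404}
print(fields(event, ["host", "ip"]))           # keeps host, ip, and the internals
print(fields(event, ["error*"], keep=False))   # removes error_code only
```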

Table command
Description
The table command is similar to the fields command in that it lets you specify the fields you want to keep in your results. Use the table command when you want to retain data in tabular format.

The table command can be used to build a scatter plot to show trends in the relationships between discrete values of your data. Otherwise, you should not use it for charts (such as chart or timechart) because the UI requires the internal fields (which are the fields beginning with an underscore, _*) to render the charts, and the table command strips these fields out of the results by default. Instead, you should use the fields command because it always retains all the internal fields.

Command type: The table command is a non-streaming command. If you are looking for a streaming command similar to the table command, use the fields command.

Field renaming: The table command doesn't let you rename fields, only specify the fields that you want to show in your tabulated results. If you're going to rename a field, do it before piping the results to table.

Syntax
table <wc-field-list>

Arguments
<wc-field-list>
Syntax: <wc-field> <wc-field> ...
Description: A list of field names. You can use wild card characters in the field names.

Usage
The table command returns a table formed by only the fields specified in the arguments. Columns are displayed in the same order that fields are specified. Column headers are the field names. Rows are the field values. Each row represents an event.

Rename command
Description
Use the rename command to rename a specified field or multiple fields. This command is useful for giving fields more meaningful names, such as "Product ID" instead of "pid". If you want to rename multiple fields, you can use wildcards.

Use quotes to rename a field to a phrase:
... | rename SESSIONID AS sessionID
Use wildcards to rename multiple fields:
... | rename *ip AS *IPaddress
If both the source and destination fields are wildcard expressions with the same number of wildcards, the renaming will carry over the wildcarded portions to the destination expression. See Example 2, below.

Note: You cannot use this command to merge multiple fields into one field because null, or non-present, fields are brought along with the values. For example, if you had events with either product_id or pid fields, ... | rename pid AS product_id would not merge the pid values into the product_id field. It overwrites product_id with Null values where pid does not exist for the event.

Note: You cannot rename one field with multiple names. For example, if you had a field A, you cannot do "A as B, A as C" in one string.
... | stats first(host) AS site, first(host) AS report

Syntax
rename <wc-field> AS <wc-field>...

Required arguments
wc-field
Syntax: <string>
Description: The name of a field and the name to replace it. You can use wild card characters in the field names. Names with spaces must be enclosed in quotation marks.
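The wildcard-carryover rule (`*ip AS *IPaddress` turning src_ip into src_IPaddress) can be sketched in Python. This is a hedged illustration of the rule, not Splunk's implementation; it handles the single-wildcard case by converting `*` into a regex capture group and substituting the captured text into the destination pattern.

```python
import re

def rename_field(name: str, src_pat: str, dst_pat: str) -> str:
    """Rough sketch of `| rename <src> AS <dst>` wildcard carryover:
    text matched by `*` in the source pattern is carried into the `*`
    position of the destination pattern."""
    rx = re.compile("^" + re.escape(src_pat).replace(r"\*", "(.*)") + "$")
    m = rx.match(name)
    if not m:
        return name                   # pattern doesn't apply; keep the name
    out = dst_pat
    for captured in m.groups():
        out = out.replace("*", captured, 1)  # fill wildcards left to right
    return out

print(rename_field("src_ip", "*ip", "*IPaddress"))        # carries over "src_"
print(rename_field("SESSIONID", "SESSIONID", "sessionID"))
print(rename_field("host", "*ip", "*IPaddress"))          # unchanged: no match
```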

Rex command
Description
Use this command to either extract fields using regular expression named groups, or replace or substitute characters in a field using sed expressions.

The rex command matches the value of the specified field against the unanchored regular expression and extracts the named groups into fields of the corresponding names. If a field is not specified, the regular expression is applied to the _raw field. Note: Running rex against the _raw field might have a performance impact.

When mode=sed, the given sed expression used to replace or substitute characters is applied to the value of the chosen field. If a field is not specified, the sed expression is applied to _raw. This sed-syntax is also used to mask sensitive data at index-time. Use the rex command for search-time field extraction or string replacement and character substitution.

Syntax
rex [field=<field>] ( <regex-expression> [max_match=<int>] [offset_field=<string>] ) | (mode=sed <sed-expression>)

Required arguments
regex-expression
Syntax: "<string>"
Description: The PCRE regular expression that defines the information to match and extract from the specified field. Quotation marks are required.
mode
Syntax: mode=sed
Description: Specify to indicate that you are using a sed (UNIX stream editor) expression.
sed-expression
Syntax: "<string>"
Description: When mode=sed, specify whether to replace strings (s) or substitute characters (y) in the

matching regular expression. No other sed commands are implemented. Quotation marks are required.

Optional arguments
field
Syntax: field=<field>
Description: The field that you want to extract information from.
Default: _raw
max_match
Syntax: max_match=<int>
Description: Controls the number of times the regex is matched. If greater than 1, the resulting fields are multivalued fields; use 0 to mean unlimited.
Default: 1
offset_field
Syntax: offset_field=<string>
Description: If provided, a field is created with the name specified by <string>. This value of the field has the endpoints of the match in terms of zero-offset characters into the matched field. For example, if the rex expression is "(?<tenchars>.{10})", this matches the first ten characters of the field, and the offset_field contents is "0-9".
Default: unset

Sed expression
When using the rex command in sed mode, you have two options: replace (s) or character substitution (y).
The syntax for using sed to replace (s) text in your data is: "s/<regex>/<replacement>/<flags>"
 <regex> is a PCRE regular expression, which can include capturing groups.
 <replacement> is a string to replace the regex match. Use \n for backreferences, where "n" is a single digit.
 <flags> can be either: g to replace all matches, or a number N to replace a specified match, where N is a number that is the match location in the string.
Sed mode supports the following flags: global (g) and Nth occurrence (N).

The syntax for using sed to substitute characters is: "y/<string1>/<string2>/"
 This substitutes the characters that match <string1> with the characters in <string2>.

Usage
Splunk Enterprise uses perl-compatible regular expressions (PCRE). When you use regular expressions in searches, you need to be aware of how characters such as pipe ( | ) and backslash ( \ ) are handled.

Examples
Example 1: Extract "from" and "to" fields using regular expressions. If a raw event contains "From: Susan To: Bob", then from=Susan and to=Bob.
... | rex field=_raw "From: (?<from>.*) To: (?<to>.*)"
Example 2: Extract "user", "app" and "SavedSearchName" from a field called "savedsearch_id" in scheduler.log events. If savedsearch_id=bob;search;my_saved_search, then user=bob, app=search, and SavedSearchName=my_saved_search.
... | rex field=savedsearch_id "(?<user>\w+);(?<app>\w+);(?<SavedSearchName>\w+)"

Example 3: Use sed syntax to match the regex to a series of numbers and replace them with an anonymized string.
... | rex field=ccnumber mode=sed "s/(\d{4}-){3}/XXXX-XXXX-XXXX-/g"
Example 4: Display IP address and ports of potential attackers.
sourcetype=linux_secure port "failed password" | rex "\s+(?<ports>port \d+)" | top src_ip ports showperc=0
This search uses rex to extract the port field and values. Then, it displays a table of the top source IP addresses (src_ip) and ports returned with the search for potential attackers.
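Because rex is PCRE-based, its two modes map closely onto Python's re module, which you can use to prototype an expression before putting it in a search. Below is an illustrative sketch of Examples 1 and 3 (Python spells named groups `(?P<name>...)` where Splunk's PCRE also accepts `(?<name>...)`; the group name `from_` and the sample card number are assumptions for the demo, since `from` is a Python keyword):

```python
import re

# Named-group extraction, as in Example 1:
event = "From: Susan To: Bob"
m = re.search(r"From: (?P<from_>.*) To: (?P<to>.*)", event)
print(m.group("from_"), m.group("to"))

# sed-style replacement with the g flag, as in Example 3: mask the
# first three groups of a card number everywhere they occur.
ccnumber = "4111-2222-3333-4444"
masked = re.sub(r"(\d{4}-){3}", "XXXX-XXXX-XXXX-", ccnumber)
print(masked)
```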

Multikv command
Description
Extracts field-values from table-formatted events, such as the results of top, ps, netstat, and so on. The multikv command creates a new event for each table row and assigns field names from the title row of the table.

An example of the type of data multikv is designed to handle:
Name      Age  Occupation
Josh      42   SoftwareEngineer
Francine  35   CEO
Samantha  22   ProjectManager

The key properties here are:
 Each line of text represents a conceptual record.
 The columns are aligned.
 The first line of text provides the names for the data in the columns.
multikv can transform this table from one event into three events with the relevant fields. It works more easily with the fixed alignment, though it can sometimes handle merely ordered fields.

The general strategy is to identify a header, offsets, and field counts, and then determine which components of subsequent lines should be included into those field names. Multiple tables in a single event can be handled (if multitable=true), but may require ensuring that the secondary tables have capitalized or ALLCAPS names in a header row. Auto-detection of header rows favors rows that are text, and are ALLCAPS or Capitalized.
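The header-and-offsets strategy described above can be sketched in Python. This is a simplified illustration (single table, no header auto-detection, no filtering): the first line supplies the field names, each name's column offset defines a slice, and every later line is sliced at those offsets to produce one event per row.

```python
def multikv(raw: str) -> list[dict]:
    """Toy sketch of multikv's fixed-alignment strategy."""
    lines = [ln for ln in raw.splitlines() if ln.strip()]
    header = lines[0]
    names = header.split()
    # Column offset of each header word; None lets the last slice run out.
    offsets = [header.index(name) for name in names] + [None]
    events = []
    for row in lines[1:]:
        cells = [row[offsets[i]:offsets[i + 1]].strip() for i in range(len(names))]
        events.append(dict(zip(names, cells)))
    return events

raw = """\
Name      Age  Occupation
Josh      42   SoftwareEngineer
Francine  35   CEO
Samantha  22   ProjectManager"""
for event in multikv(raw):
    print(event)
```

One event (the table) becomes three events, one per row, mirroring the transformation described above.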

Syntax
multikv [conf=<stanza_name>] [<multikv-option>...]

Optional arguments
conf
Syntax: conf=<stanza_name>
Description: If you have a field extraction defined in multikv.conf, use this argument to reference the stanza in your search.
<multikv-option>
Syntax: copyattrs=<bool> | fields <field-list> | filter <field-list> | forceheader=<int> | multitable=<bool> | noheader=<bool> | rmorig=<bool>
Description: Options for extracting fields from tabular events.

Descriptions for multikv options
copyattrs
Syntax: copyattrs=<bool>
Description: When true, multikv copies all fields from the original event to the events generated from that event. When false, no fields are copied from the original event. This means that the events will have no _time field and the UI will not know how to display them.
Default: true
fields
Syntax: fields <field-list>
Description: Limit the fields set by the multikv extraction to this list. Ignores any fields in the table which are not on this list.
filter
Syntax: filter <term-list>
Description: If specified, multikv skips over table rows that do not contain at least one of the strings in the filter list. Quoted expressions are permitted, such as "multiple words" or "trailing_space ".
forceheader

Syntax: forceheader=<int>
Description: Forces the use of the given line number (1 based) as the table's header. Does not include empty lines in the count.
Default: The multikv command attempts to determine the header line automatically.
multitable
Syntax: multitable=<bool>
Description: Controls whether or not there can be multiple tables in a single _raw in the original events.
Default: true
noheader
Syntax: noheader=<bool>
Description: Handle a table without header row identification. The size of the table will be inferred from the first row, and fields will be named Column_1, Column_2, and so on. noheader=true implies multitable=false.
Default: false
rmorig
Syntax: rmorig=<bool>
Description: When true, the original events will not be included in the output results. When false, the original events are retained in the output results, with each original emitted after the batch of generated results from that original.
Default: true

Examples
Example 1: Extract the "COMMAND" field when it occurs in rows that contain "splunkd".
... | multikv fields COMMAND filter splunkd
Example 2: Extract the "pid" and "command" fields.
... | multikv fields pid command

Hands-on Lab
1. Use the source LoanStats3a.csv and only take a look at some fields out of the data.
2. Use the source LoanStats3a.csv and the table command on the same fields in #1.
3. Use the source LoanStats3a.csv and use the rename command to rename fields in #1.
4. Use the source LoanStats3a.csv and use the rex command for:
a. source="LoanStats3a.csv" annual_inc=60000 | rex "Does not meet the credit policy.(?<all_util>.*)"
b. Then click on the all_util field to demonstrate the rex results.

End of Module Quiz
Please refer to your virtual machine for test.

Module 8. Reporting Commands, Part 1
 Use the following commands and their functions:
 Top
 Rare
 Hands on Lab covering: Top. Rare.
 Stats
 Add coltotals
 Hands on Lab covering: Stats. Add Coltotals.
 End of Module Hands on Quiz

Top command
Description
Displays the most common values of a field, along with a count and percentage. Finds the most frequent tuple of values of all fields in the field list. If the optional by-clause is included, the command finds the most frequent values for each distinct tuple of values of the group-by fields.

Syntax
top [<N>] [<top-options>...] <field-list> [<by-clause>]

Required arguments
<field-list>
Syntax: <field>, <field>, ...
Description: Comma-delimited list of field names.

Optional arguments
<N>
Syntax: <int>
Description: The number of results to return.
<top-options>
Syntax: countfield=<string> | limit=<int> | otherstr=<string> | percentfield=<string> | showcount=<bool> | showperc=<bool> | useother=<bool>
Description: Options for the top command. See Top options.
<by-clause>

Syntax: BY <field-list>
Description: The name of one or more fields to group by.

Top options
countfield
Syntax: countfield=<string>
Description: The name of a new field that the value of count is written to.
Default: "count"
limit
Syntax: limit=<int>
Description: Specifies how many tuples to return. "0" returns all values.
Default: "10"
otherstr
Syntax: otherstr=<string>
Description: If useother is true, specify the value that is written into the row representing all other values.
Default: "OTHER"
percentfield
Syntax: percentfield=<string>
Description: Name of a new field to write the value of percentage.
Default: "percent"
showcount
Syntax: showcount=<bool>
Description: Specify whether to create a field called "count" (see "countfield" option) with the count of that tuple.
Default: true
showperc
Syntax: showperc=<bool>
Description: Specify whether to create a field called "percent" (see "percentfield" option) with the relative prevalence of that tuple.
Default: true

useother
Syntax: useother=<bool>
Description: Specify whether or not to add a row that represents all values not included due to the limit cutoff.
Default: false

Examples
Example 1: Return the 20 most common values of the "referer" field.
sourcetype=access_* | top limit=20 referer
Example 2: Return top "action" values for each "referer_domain".

sourcetype=access_* | top action by referer_domain
Because a limit is not specified, this returns all the combinations of values for "action" and "referer_domain", as well as the counts and percentages.

Example 3: Return the top product purchased for each category. Do not show the percent field. Rename the count field to "total".
sourcetype=access_* status=200 action=purchase | top 1 productName by categoryId showperc=f countfield=total
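What top computes, minus the by-clause and option handling, is a frequency count with a percentage column. The following is an illustrative Python sketch using Counter (the sample "actions" values are made up for the demo):

```python
from collections import Counter

def top(values: list[str], limit: int = 10) -> list[dict]:
    """Rough analogue of `| top limit=N <field>`: the most common
    values with the count and percent columns top adds by default."""
    total = len(values)
    return [
        {"value": v, "count": c, "percent": round(100 * c / total, 6)}
        for v, c in Counter(values).most_common(limit)
    ]

actions = ["purchase", "view", "view", "addtocart", "view", "purchase"]
print(top(actions, limit=2))
```

Here "view" comes first with a count of 3 (50% of six events), mirroring the count and percent fields in the examples above.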

Rare command
Description
Displays the least common values of a field. Finds the least frequent tuple of values of all fields in the field list. If the <by-clause> is specified, this command returns rare tuples of values for each distinct tuple of values of the group-by fields.

This command operates identically to the top command, except that the rare command finds the least frequent instead of the most frequent.

Syntax
rare [<top-options>...] <field-list> [<by-clause>]

Required arguments
<field-list>
Syntax: <string>, ...
Description: Comma-delimited list of field names.

Optional arguments
<top-options>
Syntax: countfield=<string> | limit=<int> | percentfield=<string> | showcount=<bool> | showperc=<bool>
Description: Options that specify the type and number of values to display. These are the same <top-options> used by the top command.
<by-clause>
Syntax: BY <field-list>
Description: The name of one or more fields to group by.

Top options
countfield
Syntax: countfield=<string>
Description: The name of a new field to write the value of count into.
Default: "count"
limit
Syntax: limit=<int>
Description: Specifies how many tuples to return. If you specify limit=0, all values up to maxresultrows are returned. Specifying a value larger than maxresultrows produces an error. See the Limits section.
Default: 10
percentfield
Syntax: percentfield=<string>
Description: Name of a new field to write the value of percentage.
Default: "percent"
showcount
Syntax: showcount=<bool>
Description: Specify whether to create a field called "count" (see "countfield" option) with the count of that tuple.
Default: true
showperc
Syntax: showperc=<bool>
Description: Specify whether to create a field called "percent" (see "percentfield" option) with the relative prevalence of that tuple.
Default: true

Limits
There is a limit on the number of results which rare returns. By default this limit is 10, but other values can be selected with the limit option, up to a further constraint expressed in limits.conf, in the [rare] stanza, maxresultrows. This ceiling is 50,000 by default, and effectively keeps a ceiling on the memory that rare will use.

Examples
Example 1: Return the least common values of the "url" field.
... | rare url
Example 2: Find the least common "user" value for a "host".
... | rare user by host
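Since rare is top inverted, the same Counter-based sketch works with the sort order reversed. This is an illustrative analogue only (the sample URLs are made up):

```python
from collections import Counter

def rare(values: list[str], limit: int = 10) -> list[dict]:
    """Rough analogue of `| rare <field>`: same counting as top, but
    sorted so the least frequent values come first."""
    counts = Counter(values)
    least_first = sorted(counts.items(), key=lambda kv: kv[1])
    return [{"value": v, "count": c} for v, c in least_first[:limit]]

urls = ["/home", "/home", "/home", "/checkout", "/admin", "/admin"]
print(rare(urls, limit=2))   # /checkout (count 1) before /admin (count 2)
```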

Hands on Lab covering: Top, Rare
1. Run source="C:\\LoanStats3a.csv" | top limit=20 addr_state
   Now, show the rare addr_state instead.
2. Run another search on your own demonstrating your use of the top and rare functions.

Stats command

Description
Calculates aggregate statistics over the results set, such as average, count, and sum. This is similar to SQL aggregation. If stats is used without a by clause, only one row is returned, which is the aggregation over the entire incoming result set. If you use a by clause, one row is returned for each distinct value specified in the by clause.

Syntax
Simple: stats (stats-function(field) [AS field])... [BY field-list]
Complete: stats [partitions=<num>] [allnum=<bool>] [delim=<string>] ( <stats-agg-term>... | <sparkline-agg-term>... ) [<by-clause>]

Required arguments
stats-agg-term
Syntax: <stats-function>(<evaled-field> | <wc-field>) [AS <wc-field>]
Description: A statistical aggregation function. The function can be applied to an eval expression, or to a field or set of fields. Use the AS clause to place the result into a new field with a name that you specify. You can use wildcard characters in field names. For more information on eval expressions, see Types of eval expressions in the Search Manual.

sparkline-agg-term
Syntax: <sparkline-agg> [AS <wc-field>]
Description: A sparkline aggregation function. Use the AS clause to place the result into a new field with a name that you specify.

Optional arguments
allnum
Syntax: allnum=<bool>
Description: If true, computes numerical statistics on each field if and only if all of the values of that field are numerical.
Default: false

delim
Syntax: delim=<string>
Description: Specifies how the values in the list() or values() aggregation are delimited.
Default: a single space

partitions
Syntax: partitions=<num>
Description: If specified, partitions the input data based on the split-by fields for multithreaded reduce.
Default: 1

Stats function options
stats-function
Syntax: avg() | c() | count() | dc() | distinct_count() | earliest() | estdc() | estdc_error() | exactperc<int>() | first() | last() | latest() | list() | max() | median() | min() | mode() | perc<int>() | p<int>() | range() | stdev() | stdevp() | sum() | sumsq() | upperperc<int>() | values() | var() | varp()
Description: Functions used with the stats command. Each time you invoke the stats command, you can use more than one function. However, you can only use one by clause.

Usage
The stats command does not support wildcard characters in field values in BY clauses. For example, you cannot specify | stats count BY source*. You cannot use a wildcard character to specify multiple fields with similar names. You must specify each field separately.

Basic Examples

1. Return the average transfer rate for each host
sourcetype=access* | stats avg(kbps) by host

2. Search the access logs, and return the total number of hits from the top 100 values of "referer_domain"
The "top" command returns a count and percent value for each "referer_domain".
sourcetype=access_combined | top limit=100 referer_domain | stats sum(count) AS total

3. Calculate the average time for each hour for similar fields using wildcard characters
Return the average, for each hour, of any unique field that ends with the string "lay". For example: delay, xdelay, relay, etc.
... | stats avg(*lay) BY date_hour

4. Remove duplicates in the result set and return the total count for the unique results
Remove duplicates of results with the same "host" value and return the total count of the remaining results.
... | stats dc(host)
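For intuition, the grouping logic of the first example (`stats avg(kbps) by host`) can be sketched in plain Python. This is an illustrative analogy, not how Splunk executes the search; the helper name and the sample event dictionaries are made up.

```python
from collections import defaultdict

def stats_avg_by(events, value_field, by_field):
    """Rough analogue of `... | stats avg(<value_field>) by <by_field>`:
    one output row per distinct value of the by field."""
    groups = defaultdict(list)
    for event in events:
        groups[event[by_field]].append(event[value_field])
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

events = [
    {"host": "web01", "kbps": 10.0},
    {"host": "web01", "kbps": 20.0},
    {"host": "web02", "kbps": 30.0},
]
print(stats_avg_by(events, "kbps", "host"))  # {'web01': 15.0, 'web02': 30.0}
```

Without a by field there would be a single group, which matches the documentation's point that stats without a by clause returns only one row.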

Addcoltotals command

Description
The addcoltotals command appends a new result to the end of the search result set. The result contains the sum of each numeric field, or you can specify which fields to summarize; in that case the addcoltotals command calculates the sum only for the fields in the list you specify. Results are displayed on the Statistics tab.

Syntax
addcoltotals [labelfield=<field>] [label=<string>] [<fieldlist>]

Optional arguments
<fieldlist>
Syntax: <field> ...
Description: A space-delimited list of valid field names. You can use the asterisk ( * ) as a wildcard in the field names.
Default: Calculates the sum for all of the fields.

labelfield
Syntax: labelfield=<fieldname>
Description: Specify a field name to add to the result set. If the labelfield argument is specified, a column is added to the statistical results table with the name specified.
Default: none

label
Syntax: label=<string>
Description: Used with the labelfield argument to add a label in the summary event. If the labelfield argument is absent, the label argument has no effect.
Default: Total

Examples

Example 1: Compute the sums of all the fields, and put the sums in a summary event called "change_name".
... | addcoltotals labelfield=change_name label=ALL

Example 2: Add a column total for two specific fields in a table.
sourcetype=access_* | table userId bytes avgTime duration | addcoltotals bytes duration

Example 3: Filter fields for two name-patterns, and get totals for one of them.
... | fields user*, *size | addcoltotals *size

Example 4: Augment a chart with a total of the values present.
index=_internal source=*metrics.log group=pipeline | stats avg(cpu_seconds) by processor | addcoltotals labelfield=processor
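Conceptually, addcoltotals just appends one extra row holding the column sums. A rough Python sketch of Example 2's behavior (hypothetical helper and sample rows, for illustration only):

```python
def addcoltotals(rows, fields):
    """Append a summary row containing sums of the named numeric fields,
    like `... | addcoltotals bytes duration`."""
    totals = {f: sum(row.get(f, 0) for row in rows) for f in fields}
    return rows + [totals]

table = [
    {"userId": "a", "bytes": 100, "duration": 2},
    {"userId": "b", "bytes": 300, "duration": 4},
]
print(addcoltotals(table, ["bytes", "duration"])[-1])  # {'bytes': 400, 'duration': 6}
```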

Hands on Lab
1. Run a search query that uses the top and stats functions with the Loan file to get the count:
source="C:\\LoanStats3a.csv" | top limit=20 addr_state | stats count
2. Run:
source="C:\\LoanStats3a.csv" addr_state=CA | stats count
3. Try running:
sourcetype=access_* | table userId bytes avgTime duration | addcoltotals bytes duration
4. Come up with your own example for the Loans file.

End of Module Hands on Quiz
Please refer to your virtual machine for test details

Module 9 - Reporting Commands, Part 2

 Explore the available visualizations
 Create a basic chart
 Split values into multiple series
 Hands on Lab covering: Explore the available visualizations, Create a basic chart, Split values into multiple series
 Omit null and other values from charts
 Create a time chart
 Chart multiple values on the same timeline
 Hands on Lab covering: Omit null and other values from charts, Create a time chart, Chart multiple values on the same timeline
 Format charts
 Explain when to use each type of reporting command
 Hands on Lab covering: Format charts, Explain when to use each type of reporting command
 End of Module hands on Quiz

Explore the available visualizations

Accessing Splunk's visualization definition features
Splunk provides user interface tools to create and modify visualizations. You can access these tools from various places in Splunk Web:
 Search
 Dashboards
 Dashboard visual editor
 Pivot
 Reports

Visualizations from Splunk Search
You can modify how Splunk displays search results in the Search page. After running a search, select the Visualization tab, then select the type of visualization to display and specify formatting options for the selected visualization. The search must be a reporting search that returns results that can be formatted as a visualization.

Dashboard panel visualizations
When you base a new dashboard panel on search results, you can choose the visualization that best represents the data returned by the search. To create a dashboard panel from search results, after you run the search click Save As > Dashboard Panel. You can then use the Visualization Editor to fine-tune the way the panel visualization displays.

Events visualizations
Events visualizations are essentially raw lists of events. You can get events visualizations from any search that does not include a transform operation.

Tables
You can pick table visualizations from just about any search, but the most interesting tables are generated by searches that include transform operations, such as a search that uses reporting commands like stats, chart, timechart, top, or rare.

Charts
Splunk provides a variety of chart visualizations, such as column, line, area, scatter, and pie charts. These visualizations require transforming searches (searches that use reporting commands) whose results involve one or more series, such as a search that uses reporting commands like stats, chart, timechart, top, or rare.

Maps
Splunk provides a map visualization that lets you plot geographic coordinates as interactive markers on a world map. Searches for map visualizations should use the geostats search command to plot markers on a map. Events generated from the geostats command include latitude and longitude coordinates for markers. The geostats command is similar to the stats command, but provides options for zoom levels and cells for mapping.

Create a basic chart

Charts
Splunk provides a variety of chart visualizations, such as column, line, area, scatter, and pie charts. These visualizations require transforming searches (searches that use reporting commands) whose results involve one or more series.

A series is a sequence of related data points that can be plotted on a chart. For example, each line plotted on a line chart represents an individual series. You can design transforming searches that produce a single series, or you can set them up so the results provide data for multiple series.

It may help to think of the tables that can be generated by transforming searches. A "single series" search would produce a table with only two columns, while a "multiple series" search would produce a table with three or more columns. Every column in the table after the first one represents a different series.

All of the chart visualizations can handle single-series searches, though you'll find that bar, column, line, area, and pie chart visualizations are usually best for such searches. On the other hand, if your search produces multiple series, you'll want to go with a bar, column, line, area, or scatter chart visualization. Pie charts can only display data from single-series searches.

Column and bar charts
Use a column chart or bar chart to compare the frequency of values of fields in your data. In a column chart, the x-axis values are typically field values (or time, especially if your search uses the timechart reporting command) and the y-axis can be any other field value, count of values, or statistical calculation of a field value. Bar charts are exactly the same, except that the x-axis and y-axis values are reversed.

The following bar chart presents the results of this search, which uses internal Splunk metrics. It finds the total sum of cpu_seconds by processor in the last 15 minutes, and then arranges the processors with the top ten sums in descending order:

index=_internal "group=pipeline" | stats sum(cpu_seconds) as totalCPUSeconds by processor | sort 10 totalCPUSeconds desc

Note that in this example, we've also demonstrated how you can roll over a single bar or column to get detail information about it.

When you define the properties of your bar and column charts, you can:
 set the chart titles, as well as the titles of the x-axis and y-axis.
 set the minimum y-axis values for the y-axis (for example, if all the y-axis values of your search are above 100 it may improve clarity to have the chart start at 100).
 set the unit scale to Log (logarithmic) to improve clarity of charts where you have a mix of very small and very large y-axis values.

If you are formatting bar or column charts in dashboards with the Visualization Editor you can additionally:
 set the major unit for the y-axis (for example, you can arrange to have tick marks appear in units of 10, or 20, or 45...whatever works best).
 determine the position of the chart legend and the manner in which the legend labels are truncated.
 determine whether charts are stacked, 100% stacked, or unstacked, and turn their drilldown functionality on or off. Bar and column charts are always unstacked by default. See the following subsection for details on stacking bar and column charts.

Stacked column and bar charts
When your base search involves more than one data series, you can use stacked column charts and stacked bar charts to compare the frequency of field values in your data.

In an unstacked column chart, the columns for different series are placed alongside each other. This may be fine if your chart is relatively simple--total counts of sales by month for two or three items in a store over the course of a year, for example--but when the series count increases it can make for a cluttered, confusing chart.

In a column chart set to a Stack mode of Stacked, all of the series columns for a single datapoint (such as a specific month in the chart described in the preceding paragraph) are stacked to become segments of a single column (one column per month, to reference that example again). The total value of the column is the sum of the segments.

Note: You use a stacked column or bar chart to highlight the relative weight (importance) of the different types of data that make up a specific dataset.

The following chart illustrates the customer views of pages in the website of MyFlowerShop, a hypothetical web-based flower store, broken out by product category over a 7-day period. Here's the search that built that stacked chart:

sourcetype=access_* method=GET | timechart count by categoryId | fields _time BOUQUETS FLOWERS GIFTS SURPRISE TEDDY

Note the usage of the fields command. It ensures that the chart only displays counts of events with a product category ID; events without one (categorized as null by Splunk) are excluded.

The third Stack mode option, Stacked 100%, enables you to compare data distributions within a column or bar by making it fit to 100% of the length or width of the chart and then presenting its segments in terms of their proportion of the total "100%" of the column or bar. Stacked 100% can help you to better see data distributions between segments in a column or bar chart that contains a mix of very small and very large stacks when Stack mode is just set to Stacked.

Line and area charts
Line and area charts are commonly used to show data trends over time, though the x-axis can be set to any field value. If your chart includes more than one series, each series will be represented by a differently colored line or area.

This chart is based on a simple search that reports on internal Splunk metrics:

index=_internal | timechart count by sourcetype

The shaded areas in area charts can help to emphasize quantities. The following area chart is derived from this search, which also makes use of internal Splunk metrics:

index=_internal source=*metrics.log group=search_concurrency "system total" NOT user=* | timechart max(active_hist_searches) as "Historical Searches" max(active_realtime_searches) as "Real-time Searches"

When you define the properties of your line and area charts, you can:
 set the chart titles, as well as the titles of the x-axis and y-axis.
 set the minimum y-axis values (for example, if all the y-axis values of your search are above 100 it may improve clarity to have the chart start at 100).
 set the unit scale to Log (logarithmic) to improve clarity of charts where you have a mix of very small and very large y-axis values.
 determine what Splunk does with missing (null) y-axis values. You can have the system leave gaps for null datapoints, have connect to zero datapoints, or just connect to the next positive datapoint. If you choose to leave gaps, Splunk will display markers for datapoints that are disconnected because they are not adjacent to other positive datapoints.
 determine whether charts are stacked, 100% stacked, or unstacked. Bar and column charts are always unstacked by default. See the following subsection for details on stacking bar and column charts.

If you are formatting line or area charts in dashboards with the Visualization Editor you can additionally:
 set the major unit for the y-axis (for example, you can arrange to have tick marks appear in units of 10, or 20, or 45...whatever works best).
 determine the position of the chart legend and the manner in which the legend labels are truncated.
 turn their drilldown functionality on or off.

Stacked line and area charts
Stacked line and area charts operate along the same principles of stacked column and row charts (see above). Stacked line and area charts can help readers when several series are involved; it makes it easier to see how each data series relates to the entire set of data as a whole.

The following chart is another example of a chart that presents information from internal Splunk metrics. The search used to create it is:

index=_internal per_sourcetype_thruput | timechart sum(kb) by series useother=f

Pie chart
Use a pie chart to show the relationship of parts of your data to the entire set of data as a whole. The size of a slice in a pie graph is determined by the size of a value of part of your data as a percentage of the total of all values.

The following pie chart presents the views by referrer domain for a hypothetical online store for the previous day. Note that you can get metrics for individual pie chart wedges by mousing over them.

When you define the properties of pie charts you can set the chart title. If you are formatting pie charts in dashboards with the Visualization Editor you can additionally:
 determine the position of the chart legend.
 turn pie chart drilldown functionality on or off.

Scatter chart
Use a scatter chart (or "scatter plot") to show trends in the relationships between discrete values of your data. This is different from a line graph, which usually plots a regular series of points. Generally, a scatter plot shows discrete values that do not occur at regular intervals or belong to a series.

Here's an example of a search that can be used to generate a scatter chart. It looks at USGS earthquake data (in this case a CSV file that presents all magnitude 2.5+ quakes recorded over a given 7-day period, worldwide), pulls out just the Californian quakes, plots out the quakes by magnitude and quake depth, and then color-codes them by region. As you can see, the majority of quakes recorded during this period were fairly shallow--10 or fewer meters in depth--with the exception of one quake that was around 27 meters deep. None of the quakes exceeded a magnitude of 4.0.

To generate the chart for this example, we've used the table command, followed by three fields. The first field is what appears in the legend (Region). The second field is the x-axis value (Magnitude), which leaves the third field (Depth) to be the y-axis value. Note that when you use table the latter two fields must be numeric in nature.

source=usgs Region=*California | table Region Magnitude Depth | sort Region

You can download a current CSV file from the USGS Earthquake Feeds and add it as an input to Splunk, but the field names and format will be slightly different from the example shown here.

When you define the properties of your scatter charts, you can:
 set the chart titles, as well as the titles of the x-axis and y-axis.
 set the minimum y-axis values for the y-axis (for example, if all the y-axis values of your search are above 100 it may improve clarity to have the chart start at 100).
 set the unit scale to Log (logarithmic) to improve clarity of charts where you have a mix of very small and very large y-axis values.

If you are formatting bar or column charts in dashboards with the Visualization Editor you can additionally:
 set the major unit for the y-axis (for example, you can arrange to have tick marks appear in units of 10, or 20, or 45...whatever works best).
 determine the position of the chart legend and the manner in which the legend labels are truncated.
 turn their drilldown functionality on or off.

Split values into multiple series

Run for example:
sourcetype=access_* | timechart count(eval(method="GET")) AS GET, count(eval(method="POST")) AS POST

Then click the Visualization tab to see the result of this search having two series. Make sure to select Line Chart.

Hands on Lab:
1. Upload a data file called ImplementingSplunkDataGenerator.tgz located on the desktop.
Run:
source="ImplementingSplunkDataGenerator.tgz:*" host="WIN-SQM8ERRKEIJ" | chart count over date_month by date_wday

If you look back at the results from stats, the data is presented as one row per combination. chart generates the intersection of the two fields. You can specify multiple functions, but you may only specify one field each for over and by. Switching the fields (by rearranging our search statement a bit) turns the data the other way. By simply clicking on the Visualization tab (to the right of the Statistics tab), we can see these results in a chart:

This is an Area chart, with particular format options set. Within the chart area, you can click on Area to change the chart type (Line, Area, Column, Bar, and so on) or Format to change the format options (Stack, Multi-series Mode, Null Values, and Drilldown).

Bonus Lab: Create a chart from the Loan file csv on your desktop

Omit null and other values from charts
Sometimes Splunk has extra null fields floating around. If your records have a unique Id field, then the following snippet removes null fields:

| stats values(*) as * by Id

The reason is that "stats values won't show fields that don't have at least one non-null value". If your records don't have a unique Id field, then you should create one first using streamstats:

| streamstats count as Id | stats values(*) as * by Id

Create a time chart

The timechart command
The timechart command generates a table of summary statistics which can then be formatted as a chart visualization where your data is plotted against an x-axis that is always a time field. Use timechart to display statistical trends over time, with the option of splitting the data with another field as a separate series in the chart. Timechart visualizations are usually line, area, or column charts.

Examples
Example 1: This report uses internal Splunk log data to visualize the average indexing thruput (indexing kbps) of Splunk processes over time, broken out by processor:

index=_internal "group=thruput" | timechart avg(instantaneous_eps) by processor
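Under the hood, a timechart is essentially "bucket _time into fixed spans, then run a stats function per bucket". A rough Python analogy of a one-hour-span count (illustrative helper only; the span size and the epoch timestamps are made-up assumptions, and Splunk handles this natively):

```python
from collections import defaultdict

def timechart_count(timestamps, span_seconds=3600):
    """Snap each epoch timestamp down to the start of its time span,
    then count events per span -- roughly `timechart span=1h count`."""
    buckets = defaultdict(int)
    for ts in timestamps:
        buckets[ts - ts % span_seconds] += 1
    return dict(sorted(buckets.items()))

timestamps = [0, 10, 3700, 3800, 7300]
print(timechart_count(timestamps))  # {0: 2, 3600: 2, 7200: 1}
```

Splitting by another field (the `by processor` part) would simply mean keeping one such bucket map per distinct field value, which is what produces the multiple series in the chart.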

Chart multiple values on the same timeline
Refer to the lesson above for multiple series. Run for example:

sourcetype=access_* | timechart count(eval(method="GET")) AS GET, count(eval(method="POST")) AS POST

Then click the Visualization tab to see the result of this search having two series. Make sure to select Line Chart.

Hands on Lab
Run:
sourcetype=access_* | timechart count(eval(method="GET")) AS GET, count(eval(method="POST")) AS POST

Create another example of a timechart with the Loan csv file.

Format charts
Let's go ahead and take a look at the (chart) Format options. These options are grouped as:

 General: Under General, you have the option to set the Stack Mode (which indicates how Splunk will display your chart columns for different series: alongside each other or as a single column), set the Multi-series mode (Yes or No), determine how to handle Null Values (you can leave gaps for null data points, connect to zero data points, or just connect to the next positive data point), and turn Drilldown (active or inactive) on or off.
 X-Axis: Is mostly visual. You can set a custom title, allow truncation of label captions, and set the rotation of the text for your chart labels.
 Y-Axis: Here you can set not just a custom title, but also the scale (linear or log), the interval, and the min and max values.
 Chart Overlay: Here you can set the following options:
   Overlay: Select a field to show as an overlay.
   View as Axis: Select On to map the overlay to a second Y-axis.
   Title: Specify a title for the overlay.
   Scale: Select Inherit, Linear, or Log. Inherit uses the scale for the base chart. Log provides a logarithmic scale, useful for minimizing the display of large peak values.
   Interval: Enter the units between tick marks in the axis.
   Min Value: The minimum value to display. Values less than the Min Value do not appear on the chart.
   Max Value: The maximum value to display. Values greater than the Max Value do not appear on the chart.
 Legend: Finally, under Legend, you can set Position (where to place the legend in the visualization, or to not include the legend) and Truncation (set how to represent names that are too long to display).

Keep in mind that, depending on your search results and the visualization options that you select, you may or may not get a useable result. Some experimentation with the various options is recommended.

Explain when to use each type of reporting command

A reporting command primer
This subsection covers the major categories of reporting commands and provides examples of how they can be used in a search. The primary reporting commands are:

 chart: used to create charts that can display any series of data that you want to plot. You can decide what field is tracked on the x-axis of the chart.
 timechart: used to create "trend over time" reports, which means that _time is always the x-axis.
 top: generates charts that display the most common values of a field.
 rare: creates charts that display the least common values of a field.
 stats, eventstats, and streamstats: generate reports that display summary statistics.
 associate, correlate, and diff: create reports that enable you to see associations, correlations, and differences between fields in your data.

Note: As you'll see in the following examples, you always place your reporting commands after your search commands, linking them with a pipe operator ("|").

stats, chart, timechart, eventstats, and streamstats are all designed to work in conjunction with statistical functions. The list of available statistical functions includes:
 count, distinct count
 mean, median, mode
 min, max, range, percentiles
 standard deviation, variance
 sum
 first occurrence, last occurrence
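These function families are not Splunk-specific. For intuition about what each one computes, Python's statistics module implements the same ideas (the sample values below are made up for illustration):

```python
import statistics

values = [2, 4, 4, 4, 5, 5, 7, 9]
print(statistics.mean(values))     # mean: 5
print(statistics.median(values))   # median: 4.5
print(statistics.mode(values))     # mode (most common value): 4
print(max(values) - min(values))   # range: 7
print(statistics.pstdev(values))   # population standard deviation: 2.0
```

The Splunk equivalents would be avg(), median(), mode(), range(), and stdevp() applied inside a stats, chart, or timechart command.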

Hands on Lab
Please format your chart from the last lab exercise

End of Module hands on Quiz

Module 10 - Analyzing, Calculating, and Formatting Results

 Using the eval command
 Perform calculations
 Convert values
 Hands on Lab covering: Using the eval command, Perform calculations, Convert values
 Round values
 Format values
 Hands on Lab covering: Round values, Format values
 Use conditional statements
 Further filter calculated results
 Hands on Lab covering: Use conditional statements, Further filter calculated results
 End of Module Hands on Quiz

Using the eval command and perform calculations

Use the eval command and functions
The eval command enables you to devise arbitrary expressions that use automatically extracted fields to create a new field that takes the value that is the result of the expression's evaluation. The eval command is immensely versatile and useful. But while some eval expressions are relatively simple, they often can be quite complex.

Types of eval expressions
An eval expression is a combination of literals, fields, operators, and functions that represent the value of your destination field. The expression can involve a mathematical operation, a string concatenation, a boolean expression, a comparison expression, or a call to one of the eval functions.

Eval expressions require that the field's values are valid for the type of operation. For example, with the exception of addition, arithmetic operations may not produce valid results if the values are not numerical. For addition, eval can concatenate the two operands if they are both strings. When concatenating values with '.', eval treats both values as strings, regardless of their actual type.

Example 1: Use eval to define a field that is the sum of the areas of two circles, A and B.

... | eval sum_of_areas = pi() * pow(radius_a, 2) + pi() * pow(radius_b, 2)

The area of a circle is πr^2, where r is the radius. For circles A and B, the radii are radius_a and radius_b, respectively. This eval expression uses the pi and pow functions to calculate the area of each circle, adds them together, and saves the result in a field named sum_of_areas.
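The same computation outside Splunk, to check the arithmetic. This is a Python stand-in for the eval expression, for illustration only; the field names from the search become function parameters here.

```python
import math

# Mirrors: eval sum_of_areas = pi() * pow(radius_a, 2) + pi() * pow(radius_b, 2)
def sum_of_areas(radius_a, radius_b):
    return math.pi * radius_a ** 2 + math.pi * radius_b ** 2

print(round(sum_of_areas(1.0, 2.0), 3))  # pi*1 + pi*4 = 5*pi, about 15.708
```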

Example 2: Use eval to define a location field using the city and state fields.

... | eval location=city.", ".state

This eval expression is a simple string concatenation. For example, if city=Philadelphia and state=PA, then location="Philadelphia, PA".

Convert values
The convert command converts field values into numerical values. Unless you use the AS clause, the original values are replaced by the new values.

Example 1
This example uses sendmail email server logs and refers to the logs with sourcetype=sendmail. The sendmail logs have two duration fields, delay and xdelay. The delay is the total amount of time a message took to deliver or bounce. The delay is expressed as "D+HH:MM:SS", which indicates the time it took in hours (HH), minutes (MM), and seconds (SS) to handle delivery or rejection of the message. If the delay exceeds 24 hours, the time expression is prefixed with the number of days and a plus character (D+). The xdelay is the total amount of time the message took to be transmitted during final delivery, and its time is expressed as "HH:MM:SS".

Change the sendmail duration format of delay and xdelay to seconds:

sourcetype=sendmail | convert dur2sec(delay) dur2sec(xdelay)

This search pipes all the sendmail events into the convert command and uses the dur2sec() function to convert the duration times of the fields, delay and xdelay, into seconds.
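To see concretely what dur2sec() computes, here is a hypothetical Python re-implementation of the D+HH:MM:SS conversion described above. It is illustrative only; Splunk's convert command does this natively.

```python
def dur2sec(duration):
    """Convert a sendmail-style duration ("HH:MM:SS" or "D+HH:MM:SS")
    to seconds, mirroring convert's dur2sec() behavior."""
    days = 0
    if "+" in duration:
        day_part, duration = duration.split("+")
        days = int(day_part)
    hours, minutes, seconds = (int(part) for part in duration.split(":"))
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

print(dur2sec("00:10:15"))    # 615, matching the delay example later in this module
print(dur2sec("1+00:00:30"))  # 86430 (one day plus thirty seconds)
```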

Here is how your search results look after you use the fields sidebar to add the fields to your events. You can compare the converted field values to the original field values in the events list.

Example 2
This example uses syslog data. Convert a UNIX epoch time to a more readable time formatted to show hours, minutes, and seconds.

sourcetype=syslog | convert timeformat="%H:%M:%S" ctime(_time) AS c_time | table _time, c_time

The ctime() function converts the _time value of syslog (sourcetype=syslog) events to the format specified by the timeformat argument. The timeformat="%H:%M:%S" argument tells the search to format the _time value as HH:MM:SS. Here, the table command is used to show the original _time value and the converted time, which is renamed c_time:

The ctime() function changes the timestamp to a non-numerical value. This is useful for display in a report or for readability in your events list.

Example 3
This example uses syslog data. Convert a time in MM:SS.SSS (minutes, seconds, and subseconds) to a number in seconds.

sourcetype=syslog | convert mstime(_time) AS ms_time | table _time, ms_time

The mstime() function converts the _time value of syslog (sourcetype=syslog) events from a minutes-and-seconds format to just seconds.
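The MM:SS.SSS-to-seconds arithmetic that mstime() performs can be sketched as follows (hypothetical helper, for illustration only):

```python
def mstime_to_seconds(value):
    """Convert an "MM:SS.SSS" string (minutes, seconds, subseconds)
    to a number of seconds, in the spirit of convert's mstime()."""
    minutes, seconds = value.split(":")
    return int(minutes) * 60 + float(seconds)

print(mstime_to_seconds("02:30.500"))  # 150.5
```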

Here, the table command is used to show the original _time value and the converted time, which is renamed ms_time. The mstime() function changes the timestamp to a numerical value. This is useful if you want to use it for more calculations.

More examples
Example 1: Convert values of the "duration" field into a number value by removing string values in the field value. For example, if "duration="212 sec"", the resulting value is "duration="212"".

... | convert rmunit(duration)

Example 2: Change the sendmail syslog duration format (D+HH:MM:SS) to seconds. For example, if "delay="00:10:15"", the resulting value is "delay="615"".

.. | convert auto(*) none(foo) 197 .. | convert dur2sec(delay) Example 3: Change all memory values in the "virt" field to Kilobytes..... . . | convert memk(virt) Example 4: Convert every field value to a number value except for values in the field "foo" Use the "none" argument to specify fields to ignore.

Hands on Lab

1. Run the following search and explain the options:

source="ImplementingSplunkDataGenerator.tgz:*" host="WIN-SQM8ERRKEIJ" error | stats count by logger user | eventstats sum(count) as totalcount | eval percent=count/totalcount*100 | sort -count

2. Take the Loan csv file and develop some eval functions.

Usage

 All functions that accept strings can accept literal strings or any field.
 All functions that accept numbers can accept literal numbers or any numeric field.

Comparison and Conditional functions

case(X,"Y",...)
This function takes pairs of arguments X and Y. The X arguments are Boolean expressions that are evaluated from first to last. When the first X expression that evaluates to TRUE is encountered, the corresponding Y argument is returned. The function defaults to NULL if none are true. This example returns descriptions for the corresponding http status code:
... | eval description=case(error == 404, "Not found", error == 200, "OK", error == 500, "Internal Server Error")

cidrmatch("X",Y)
This function returns true when IP address Y belongs to a particular subnet X. The function uses two string arguments: the first is the CIDR subnet; the second is the IP address to match. This example uses cidrmatch to set a field, isLocal, to "local" if the field ip matches the subnet, or "not local" if it does not:
... | eval isLocal=if(cidrmatch("123.132.32.0/25",ip), "local", "not local")
This example uses cidrmatch as a filter:
... | where cidrmatch("123.132.32.0/25", ip)

coalesce(X,...)
This function takes an arbitrary number of arguments and returns the first value that is not null. Let's say you have a set of events where the IP address is extracted to either clientip or ipaddress. This example defines a new field called ip that takes the value of either clientip or ipaddress, depending on which is not NULL (exists in that event):
... | eval ip=coalesce(clientip,ipaddress)

if(X,Y,Z)
This function takes three arguments. The first argument X must be a Boolean expression. If X evaluates to TRUE, the result is the second argument Y. If X evaluates to FALSE, the result evaluates to the third argument Z. This example looks at the values of error and returns err=OK if error=200, otherwise returns err=Error:
... | eval err=if(error == 200, "OK", "Error")

like(TEXT, PATTERN)
This function takes two arguments, a string to match TEXT and a match expression string PATTERN. It returns TRUE if and only if the first argument is like the SQLite pattern in PATTERN. The pattern language supports exact text match, as well as % characters for wildcards and _ characters for a single character match. This example returns islike=TRUE if the field value starts with foo:
... | eval is_a_foo=if(like(field, "foo%"), "yes a foo", "not a foo")
or
... | where like(field, "foo%")

match(SUBJECT, "REGEX")
This function compares the regex string REGEX to the value of SUBJECT and returns a Boolean value. It returns TRUE if the REGEX can find a match against any substring of SUBJECT. This example returns true if and only if field matches the basic pattern of an IP address. Note that the example uses ^ and $ to perform a full match:
... | eval n=if(match(field, "^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$"), 1, 0)

null()
This function takes no arguments and returns NULL. The evaluation engine uses NULL to represent "no value"; setting a field to NULL clears its value.

nullif(X,Y)
This function is used to compare fields. The function takes two arguments, X and Y, and returns NULL if X = Y. Otherwise it returns X:
... | eval n=nullif(fieldA,fieldB)

searchmatch(X)
This function takes one argument X, which is a search string. The function returns true if and only if the event matches the search string:
... | eval n=searchmatch("foo AND bar")

validate(X,Y,...)
This function takes pairs of arguments: Boolean expressions X and strings Y. The function returns the string Y corresponding to the first expression X that evaluates to FALSE, and defaults to NULL if all are TRUE. This example runs a simple check for valid ports:
... | eval n=validate(isint(port), "ERROR: Port is not an integer", port >= 1 AND port <= 65535, "ERROR: Port is out of range")

Conversion functions

tonumber(NUMSTR,BASE) or tonumber(NUMSTR)
This function converts the input string NUMSTR to a number, where BASE is optional and used to define the base of the number to convert to. BASE can be 2 to 36, and defaults to 10. If tonumber cannot parse a literal string to a number, it returns an error. If tonumber cannot parse a field value to a number, for example if the value contains a leading and trailing space, the function returns NULL. Use the trim function to remove leading or trailing spaces. This example returns "164":
... | eval n=tonumber("0A4",16)

tostring(X,Y)
This function converts the input value to a string. If the input value is a number, it reformats it as a string. If the input value is a Boolean value, it returns the corresponding string value, "True" or "False". This function requires at least one argument X; the second argument Y is optional and can be "hex", "commas", or "duration":
 tostring(X,"hex") converts X to hexadecimal.
 tostring(X,"commas") formats X with commas and, if the number includes decimals, rounds to the nearest two decimal places.
 tostring(X,"duration") converts seconds X to the readable time format HH:MM:SS.
This example returns "True 0xF 12,345.68":
... | eval n=tostring(1==1) + " " + tostring(15, "hex") + " " + tostring(12345.6789, "commas")
This example returns foo=615 and foo2=00:10:15:
... | eval foo=615 | eval foo2 = tostring(foo, "duration")
Note: When used with the eval command, the values might not sort as expected because the values are converted to ASCII. Use the fieldformat command with the tostring function to format the displayed values; the underlying values are not changed with the fieldformat command. This example formats the column totalSales to display values with a currency symbol and commas. You must use a period between the currency value and the tostring function:
... | fieldformat totalSales="$".tostring(totalSales,"commas")

Cryptographic functions

md5(X)
This function computes and returns the MD5 hash of a string value X.
... | eval n=md5(field)

sha1(X)
This function computes and returns the secure hash of a string value X based on the FIPS compliant SHA-1 hash function.
... | eval n=sha1(field)

sha256(X)
This function computes and returns the secure hash of a string value X based on the FIPS compliant SHA-256 hash function.
... | eval n=sha256(field)

sha512(X)
This function computes and returns the secure hash of a string value X based on the FIPS compliant SHA-512 hash function.
... | eval n=sha512(field)

Date and Time functions

now()
This function takes no arguments and returns the time that the search was started. The time is represented in Unix time, or seconds since Epoch time.

relative_time(X,Y)
This function takes an epochtime time, X, as the first argument and a relative time specifier, Y, as the second argument, and returns the epochtime value of Y applied to X.
... | eval n=relative_time(now(), "-1d@d")

strftime(X,Y)
This function takes an epochtime value, X, as the first argument and renders it as a string using the format specified by Y. This example returns the hour and minute from the _time field:
... | eval n=strftime(_time, "%H:%M")

strptime(X,Y)
This function takes a time represented by a string, X, parses it into a timestamp using the format specified by Y, and returns it as a timestamp. If timeStr is in the form "11:59", this returns it as a timestamp:
... | eval n=strptime(timeStr, "%H:%M")

time()
This function returns the wall-clock time with microsecond resolution. The value of time() will be different for each event, based on when that event was processed by the eval command.

Informational functions

isbool(X)
This function takes one argument X and returns TRUE if X is Boolean.
... | eval n=if(isbool(field),"yes","no")
or
... | where isbool(field)

isint(X)
This function takes one argument X and returns TRUE if X is an integer.
... | eval n=if(isint(field), "int", "not int")
or
... | where isint(field)

isnotnull(X)
This function takes one argument X and returns TRUE if X is not NULL. This is a useful check for whether or not a field (X) contains a value.
... | eval n=if(isnotnull(field),"yes","no")
or
... | where isnotnull(field)

isnull(X)
This function takes one argument X and returns TRUE if X is NULL.
... | eval n=if(isnull(field),"yes","no")
or
... | where isnull(field)

isnum(X)
This function takes one argument X and returns TRUE if X is a number.
... | eval n=if(isnum(field),"yes","no")
or
... | where isnum(field)

isstr(X)
This function takes one argument X and returns TRUE if X is a string.
... | eval n=if(isstr(field),"yes","no")
or
... | where isstr(field)

typeof(X)
This function takes one argument and returns a string representation of its type. This example returns "NumberStringBoolInvalid":
... | eval n=typeof(12) + typeof("string") + typeof(1==2) + typeof(badfield)

Mathematical functions

abs(X)
This function takes a number X and returns its absolute value. This example returns the field absnum, whose values are the absolute values of the numeric field number:
... | eval absnum=abs(number)

ceil(X), ceiling(X)
This function rounds a number X up to the next highest integer. This example returns n=2:
... | eval n=ceil(1.9)

exact(X)
This function renders the result of a numeric eval calculation with a larger amount of precision in the formatted output.
... | eval n=exact(3.14 * num)

exp(X)
This function takes a number X and returns the exponential function e^X. The following example returns y=e^3:
... | eval y=exp(3)

floor(X)
This function rounds a number X down to the nearest whole integer. This example returns 1:
... | eval n=floor(1.9)

ln(X)
This function takes a number X and returns its natural log. This example returns the natural log of the values of bytes:
... | eval lnBytes=ln(bytes)

log(X,Y) or log(X)
This function takes either one or two numeric arguments and returns the log of the first argument X using the second argument Y as the base. If the second argument Y is omitted, this function evaluates the log of number X with base 10.
... | eval num=log(number,2)

pi()
This function takes no arguments and returns the constant pi to 11 digits of precision.
... | eval area_circle=pi()*pow(radius,2)

pow(X,Y)
This function takes two numeric arguments X and Y and returns X to the power of Y.
... | eval area_circle=pi()*pow(radius,2)

round(X,Y)
This function takes one or two numeric arguments X and Y, returning X rounded to the number of decimal places specified by Y. The default is to round to an integer. This example returns n=4:
... | eval n=round(3.5)
This example returns n=2.56:
... | eval n=round(2.555, 2)

sigfig(X)
This function takes one argument X, a number, and rounds that number to the appropriate number of significant figures. 1.00*1111 = 1111, but:
... | eval n=sigfig(1.00*1111)
returns n=1110.

sqrt(X)
This function takes one numeric argument X and returns its square root. This example returns 3:
... | eval n=sqrt(9)

Multivalue functions

commands(X)
This function takes a search string, or field that contains a search string, X, and returns a multivalued field containing a list of the commands used in X. (This is generally not recommended for use except for analysis of audit.log events.) This returns a multivalued field X that contains 'search', 'stats', and 'sort':
... | eval x=commands("search foo | stats count | sort count")

mvappend(X,...)
This function takes an arbitrary number of arguments and returns a multivalue result of all the values. The arguments can be strings, multivalue fields, or single value fields.
... | eval fullName=mvappend(initial_values, "middle value", last_values)

mvcount(MVFIELD)
This function takes a field MVFIELD. The function returns the number of values if it is a multivalue field, 1 if it is a single value field, and NULL otherwise.
... | eval n=mvcount(multifield)

mvdedup(X)
This function takes a multivalue field X and returns a multivalue field with its duplicate values removed.
... | eval s=mvdedup(mvfield)

mvfilter(X)
This function filters a multivalue field based on an arbitrary Boolean expression X. The Boolean expression X can reference ONLY ONE field at a time. Note: This function will return NULL values of the field x as well. If you don't want the NULL values, use the expression mvfilter(x!=NULL). This example returns all of the values in field email that end in .net or .org:
... | eval n=mvfilter(match(email, "\.net$") OR match(email, "\.org$"))

mvfind(MVFIELD,"REGEX")
This function tries to find a value in multivalue field MVFIELD that matches the regular expression REGEX. If a match exists, the index of the first matching value is returned (beginning with zero). If no values match, NULL is returned.
... | eval n=mvfind(mymvfield, "err\d+")

mvindex(MVFIELD,STARTINDEX,ENDINDEX) or mvindex(MVFIELD,STARTINDEX)
This function takes two or three arguments, field MVFIELD and numbers STARTINDEX and ENDINDEX, and returns a subset of the multivalue field using the indexes provided. For mvindex(mvfield, startindex, [endindex]), endindex is inclusive and optional. Both startindex and endindex can be negative, where -1 is the last element. If endindex is not specified, it returns only the value at startindex. If the indexes are out of range or invalid, the result is NULL. Since indexes start at zero, this example returns the third value in "multifield", if it exists:
... | eval n=mvindex(multifield, 2)

mvjoin(MVFIELD,STR)
This function takes two arguments, multivalue field MVFIELD and string delimiter STR. The function concatenates the individual values of MVFIELD with copies of STR in between as separators. This example joins together the individual values of "foo" using a semicolon as the delimiter:
... | eval n=mvjoin(foo, ";")

mvrange(X,Y,Z)
This function creates a multivalue field for a range of numbers. This function can contain up to three arguments: a starting number X, an ending number Y (exclusive), and an optional step increment Z. If the increment is a timespan such as '7'd, the starting and ending numbers are treated as epoch times. This example returns a multivalue field with the values 1, 3, 5, 7, 9:
... | eval mv=mvrange(1,11,2)

mvsort(X)
This function uses a multivalue field X and returns a multivalue field with the values sorted lexicographically.
... | eval s=mvsort(mvfield)

mvzip(X,Y,"Z")
This function takes two multivalue fields, X and Y, and combines them by stitching together the first value of X with the first value of field Y, then the second with the second, and so on. This is similar to Python's zip command. The third argument, Z, is optional and is used to specify a delimiting character to join the two values. The default delimiter is a comma.
... | eval nserver=mvzip(hosts,ports)

Statistical functions

In addition to these functions, a comprehensive set of statistical functions is available to use with the stats, chart, and related commands.

max(X,...)
This function takes an arbitrary number of numeric or string arguments, and returns the max; strings are greater than numbers. This example returns either "foo" or field, depending on the value of field:
... | eval n=max(1, 3, 6, 7, "foo", field)

min(X,...)
This function takes an arbitrary number of numeric or string arguments, and returns the min; strings are greater than numbers. This example returns either 1 or field, depending on the value of field:
... | eval n=min(1, 3, 6, 7, "foo", field)

random()
This function takes no arguments and returns a pseudo-random integer ranging from zero to 2^31-1, for example: 0…2147483647.

Text functions

len(X)
This function returns the character length of a string X.
... | eval n=len(field)

lower(X)
This function takes one string argument and returns the lowercase version. The upper() function also exists for returning the uppercase version. This example returns the value provided by the field username in lowercase:
... | eval username=lower(username)

ltrim(X,Y) or ltrim(X)
This function takes one or two arguments X and Y and returns X with the characters in Y trimmed from the left side. If Y is not specified, spaces and tabs are removed. This example returns x="abcZZ":
... | eval x=ltrim(" ZZZZabcZZ ", " Z")

replace(X,Y,Z)
This function returns a string formed by substituting string Z for every occurrence of regex string Y in string X. The third argument Z can also reference groups that are matched in the regex. This example returns date with the month and day numbers switched, so if the input was 1/14/2015 the return value would be 14/1/2015:
... | eval n=replace(date, "^(\d{1,2})/(\d{1,2})/", "\2/\1/")

rtrim(X,Y) or rtrim(X)
This function takes one or two arguments X and Y and returns X with the characters in Y trimmed from the right side. If Y is not specified, spaces and tabs are removed. This example returns n="ZZZZabc":
... | eval n=rtrim(" ZZZZabcZZ ", " Z")

spath(X,Y)
This function takes two arguments: an input source field X and an spath expression Y, that is the XML or JSON formatted location path to the value that you want to extract from X. If Y is a literal string, it needs quotes, spath(X,"Y"). If Y is a field name (with values that are the location paths), it doesn't need quotes. This may result in a multivalued field. Read more about the spath search command. This example returns the values of locDesc elements:
... | eval locDesc=spath(_raw, "vendorProductSet.product.desc.locDesc")
This example returns the hashtags from a twitter event:
index=twitter | eval output=spath(_raw, "entities.hashtags")

split(X,"Y")
This function takes two arguments, field X and delimiting character Y. It splits the value(s) of X on the delimiter Y and returns X as a multivalue field.
... | eval n=split(foo, ";")

substr(X,Y,Z)
This function takes either two or three arguments, where X is a string and Y and Z are numeric. It returns a substring of X, starting at the index specified by Y, with the number of characters specified by Z. If Z is not given, it returns the rest of the string. The indexes follow SQLite semantics; they start at 1. Negative indexes can be used to indicate a start from the end of the string. This example concatenates "str" and "ing" together, returning "string":
... | eval n=substr("string", 1, 3) + substr("string", -3)

trim(X,Y) or trim(X)
This function takes one or two arguments X and Y and returns X with the characters in Y trimmed from both sides. If Y is not specified, spaces and tabs are removed. This example returns "abc":
... | eval n=trim(" ZZZZabcZZ ", " Z")

upper(X)
This function takes one string argument and returns the uppercase version. The lower() function also exists for returning the lowercase version. This example returns the value provided by the field username in uppercase:
... | eval n=upper(username)

urldecode(X)
This function takes one URL string argument X and returns the unescaped or decoded URL string. This example returns "http://www.splunk.com/download?r=header":
... | eval n=urldecode("http%3A%2F%2Fwww.splunk.com%2Fdownload%3Fr%3Dheader")

Trigonometry and Hyperbolic functions

acos(X)
This function computes the arc cosine of X, in the interval [0,pi] radians.
... | eval n=acos(0)
... | eval degrees=acos(0)*180/pi()

acosh(X)
This function computes the arc hyperbolic cosine of X, in radians.
... | eval n=acosh(2)

asin(X)
This function computes the arc sine of X, in the interval [-pi/2,+pi/2] radians.
... | eval n=asin(1)
... | eval degrees=asin(1)*180/pi()

asinh(X)
This function computes the arc hyperbolic sine of X, in radians.
... | eval n=asinh(1)

atan(X)
This function computes the arc tangent of X, in radians.
... | eval n=atan(0.50)

atan2(Y, X)
This function computes the arc tangent of Y, X in the interval [-pi,+pi] radians. Y is a value that represents the proportion of the y-coordinate. X is the value that represents the proportion of the x-coordinate. To compute the value, the function takes into account the sign of both arguments to determine the quadrant.
... | eval n=atan2(0.50, 0.75)

atanh(X)
This function computes the arc hyperbolic tangent of X, in radians.
... | eval n=atanh(0.500)

cos(X)
This function computes the cosine of an angle of X radians.
... | eval n=cos(-1)
... | eval n=cos(pi())

cosh(X)
This function computes the hyperbolic cosine of X radians.
... | eval n=cosh(1)

hypot(X,Y)
This function computes the hypotenuse of a right-angled triangle whose legs are X and Y. The function returns the square root of the sum of the squares of X and Y, as described in the Pythagorean theorem.
... | eval n=hypot(3,4)

sin(X)
This function computes the sine.
... | eval n=sin(1)
... | eval n=sin(90 * pi()/180)

sinh(X)
This function computes the hyperbolic sine.
... | eval n=sinh(1)

tan(X)
This function computes the tangent.
... | eval n=tan(1)

tanh(X)
This function computes the hyperbolic tangent.
... | eval n=tanh(1)
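To see several of these functions working together, here is a hedged sketch that chains rounding, conditional, and formatting functions in one search. The bytes field and the size thresholds are assumptions for illustration, not taken from any particular dataset:

```spl
... | eval kb=round(bytes/1024, 2)
    | eval size_label=case(kb<=64, "small", kb<=1024, "medium", 1==1, "large")
    | eval pretty_bytes=tostring(bytes, "commas")
```

The final 1==1 pair acts as the default branch of case(), catching any value the earlier expressions did not match.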

Hands-on Lab

Please take a look at the Loan csv file. Use that file and some of the functions in the table in the manual. Take a look at round and some other functions that are very popular.

End of Module Quiz

Please refer to the virtual machine for the quiz.

End of Module Hands on Quiz 219 .Creating Field Aliases and Calculated Fields      Define naming conventions Create and use field aliases Create and use calculated fields Hands on Lab covering: Define naming conventions. Create and use calculated fields.Module 11 . Create and use field aliases.

Define naming conventions

Example - Set up a naming convention for reports

You work in the systems engineering group of your company, and as the knowledge manager for your Splunk Enterprise implementation, it's up to you to come up with a naming convention for the reports produced by your team. In the end you develop a naming convention that pulls together:

 Group: Corresponds to the working group(s) of the user saving the search.
 Search type: Indicates the type of search (alert, report, summary-index-populating).
 Platform: Corresponds to the platform subjected to the search.
 Category: Corresponds to the concern areas for the prevailing platforms.
 Time interval: The interval over which the search runs (or on which the search runs, if it is a scheduled search).
 Description: A meaningful description of the context and intent of the search, limited to one or two words if possible. Ensures the search name is unique.

Group   Search type   Platform   Category     Time interval   Description
SEG     Alert         Windows    Disk         <arbitrary>     <arbitrary>
NEG     Report        iSeries    Exchange
OPS     Summary       Network    SQL
NOC                              Event log
                                 CPU
                                 Jobs
                                 Subsystems
                                 Services
                                 Security

Possible reports using this naming convention:

 SEG_Alert_Windows_Eventlog_15m_Failures
 SEG_Report_iSeries_Jobs_12hr_Failed_Batch
 NOC_Summary_Network_Security_24hr_Top_src_ip

Create and use field aliases

You can define aliases for fields that are extracted at index time as well as those that are extracted at search time. This process enables you to search for the original field using any of its aliases. You can create multiple aliases for a field. The original field is not removed.

When you define a lookup, you can specify the lookup table based on a field alias. This can be helpful if there are one or more fields in the lookup table that are identical to fields in your data, but have been named differently.

Note: Splunk Enterprise's field aliasing functionality does not currently support multivalue fields.

Important: Field aliasing is performed after key/value extraction but before field lookups.

You add your field aliases to props.conf, which you edit in $SPLUNK_HOME/etc/system/local/, or your own custom app directory in $SPLUNK_HOME/etc/apps/. (We recommend using the latter directory if you want to make it easy to transfer your data customizations to other index servers.)

To alias fields:

1. Add the following line to a stanza in props.conf:

FIELDALIAS-<class> = <orig_field_name> AS <new_field_name>

 <orig_field_name> is the original name of the field.
 <new_field_name> is the alias to assign to the field.
 You can include multiple field alias renames in one stanza.

2. Restart Splunk Enterprise for your changes to take effect.

Example of field alias additions for a lookup

Say you're creating a lookup for an external static table CSV file, where the field you've extracted at search time as "ip" is referred to as "ipaddress." In the props.conf file where you've defined the extraction, you would add a line that defines "ipaddress" as an alias for "ip," as follows:

[accesslog]
EXTRACT-extract_ip = (?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})
FIELDALIAS-extract_ip = ip AS ipaddress

When you set up the lookup in props.conf, you can just use ipaddress where you'd otherwise have used ip:

[dns]
lookup_ip = dnsLookup ipaddress OUTPUT host
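Since multiple alias renames can live in one FIELDALIAS setting, a single stanza can expose several aliases at once. The following is a minimal sketch — the stanza name and the second field pair are assumptions for illustration:

```ini
[accesslog]
FIELDALIAS-access_aliases = ip AS ipaddress clientip AS src_ip
```

Each "orig AS alias" pair is space-separated, and every alias becomes searchable alongside its original field.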

Create and use calculated fields

The eval command is immensely versatile and useful. But while some eval expressions are relatively simple, they often can be quite complex. For example, take this example search, which examines earthquake data and classifies quakes by their depth by creating a new Description field:

source=eqs7day-M1.csv | eval Description=case(Depth<=70, "Shallow", Depth>70 AND Depth<=300, "Mid", Depth>300 AND Depth<=700, "Deep") | table Datetime, Region, Depth, Description

If you find that you need to use a particularly long and complex eval expression on a regular basis, you may find that retyping the expression accurately in search after search is tedious business. This is where calculated fields come to the rescue. Calculated fields enable you to define fields with eval expressions in props.conf. Then, when you're writing out a search, you can cut out the eval expression entirely and reference the field like you would any other extracted field. Using calculated fields, you could define the eval expression for the Description field in props.conf and write the search as:

source=eqs7day-M1.csv | table Datetime, Region, Depth, Description

You can also run searches like this:

source=eqs7day-M1.csv Description=Deep

When you run the search, Splunk Enterprise will find the calculated field key in props.conf and evaluate it for every event that contains a Depth field. The fields will be extracted at search time and will be added to the events that include the fields in the eval expressions. You can now search on Description as if it is any other extracted field.

Note: In the next section we show you how the Description calculated field would be set up in props.conf.
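For reference, here is a props.conf sketch of how the Description calculated field could be defined. The source:: stanza form is an assumption about how this earthquake data is identified — use whatever sourcetype or source your data actually carries:

```ini
# Hypothetical stanza; match it to your data's sourcetype or source.
[source::eqs7day-M1.csv]
EVAL-Description = case(Depth<=70, "Shallow", Depth>70 AND Depth<=300, "Mid", Depth>300 AND Depth<=700, "Deep")
```

With this in place, Description is evaluated at search time for any event that has a Depth field, without retyping the eval expression.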

Hands on Lab: To create a calculated field, go to: Settings -> Fields -> Add new (under the Calculated Fields section)

 Sourcetype: csv
 Name of field: a_test
 Eval function: annual_inc * 2

Save it. When you bring up the csv sourcetype in search, you will see that the field a_test doubled the amount of annual_inc. Now you can try other calculated fields, if you like.
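The same lab field can equivalently be defined directly in props.conf rather than through Settings. A sketch, assuming the sourcetype is literally named csv as in the lab above:

```ini
# Equivalent to the Settings-based lab definition of a_test.
[csv]
EVAL-a_test = annual_inc * 2
```

Defining it in the UI and defining it in props.conf produce the same calculated field; the UI simply writes this configuration for you.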

End of Module Hands on Quiz

Please refer to the virtual machine for the test.

Module 12 - Creating Field Extractions

 Perform field extractions using Field Extractor
 Hands on Lab covering: Perform field extractions using Field Extractor
 End of Module Hands on Quiz

Perform field extractions using Field Extractor

As Splunk Enterprise processes events, it extracts fields from them. This process is called field extraction.

Splunk Enterprise automatically extracts some fields

Splunk Enterprise extracts some fields from your events without assistance. It automatically extracts host, source, sourcetype, timestamps, and several other default fields when it indexes incoming events. It also extracts fields that appear in your event data as key=value pairs. This process of recognizing and extracting k/v pairs is called field discovery. When fields appear in events without their keys, Splunk Enterprise uses pattern-matching rules called regular expressions to extract those fields as complete k/v pairs. For example, with a properly configured regular expression, Splunk Enterprise can extract user_id=johnz from the previous sample event. You can disable field discovery to improve search performance.

To get all of the fields in your data, create custom field extractions

To use the power of Splunk Enterprise search, create additional field extractions. Custom field extractions allow you to capture and track information that is important to your needs, but which is not automatically discovered and extracted by Splunk Enterprise. All field extractions, including custom field extractions, are tied to a specific source, sourcetype, or host value. For example, if you create an ip field extraction, you might tie the extraction configuration for ip to sourcetype=access_combined. Any field extraction configuration you provide must include a regular expression that tells Splunk Enterprise how to find the field that you want to extract. Splunk Enterprise comes with several field extraction configurations that use regular expressions to identify and extract fields from event data.

Custom field extractions should take place at search time, but in certain rare circumstances you can arrange for some custom field extractions to take place at index time.

Before you create custom field extractions, get to know your data

Before you begin to create field extractions, ensure that you are familiar with the formats and patterns of the event data associated with the source, sourcetype, or host that you are working with. One way is to investigate the predominant event patterns in your data with the Patterns tab. When your events have consistent and reliable formats, you can create a field extraction that accurately captures multiple field values from them.

Here are two events from the same source type, an apache server web access log:

131.253.24.135 - - [03/Jun/2014:20:49:53 -0700] "GET /wp-content/themes/aurora/style.css HTTP/1.1" 200 7464 "http://www.splunk.com/download" "Mozilla/5.0 (compatible; MSIE 9.0; Trident/5.0)"

...14 - - [03/Jun/2014:20:49:33 -0700] "GET / HTTP/1.1" 200 75017 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; http://nmap.org/book/nse.html)"

While these events contain different strings and characters, they are formatted in a consistent manner. They both present values for fields such as clientIP, method, status, bytes, and so on in a reliable order. Reliable means that the method value is always followed by the URI value, the URI value is always followed by the status value, the status value is always followed by the bytes value, and so on.

For contrast, look at this set of Cisco ASA firewall log events:

Jul 15 20:10:27 10.11.36.31 %ASA-6-113003: AAA group policy for user AmorAubrey is being set to Acme_techoutbound

Jul 15 20:12:42 10.11.36.11 %ASA-7-710006: IGMP request discarded from 10.11.36.36 to outside:87.194.216.51

Jul 15 20:13:52 10.11.36.28 %ASA-6-302014: Teardown TCP connection 517934 for Outside:128.241.220.82/1561 to Inside:10.123.124.28/8443 duration 0:05:02 bytes 297 Tunnel has been torn down (AMOSORTILEGIO)

Apr 19 11:24:32 PROD-MFS-002 %ASA-4-106103: access-list fmVPN-1300 denied udp for user 'sdewilde7' outside/12.130.60.4(137) -> inside1/10.157.200.154(137) hit-cnt 1 first hit [0x286364c7, 0x0]

While these events contain field values that are always space-delimited, they do not share a reliable format like the preceding two
events. In order, these events represent:
1. A group policy change
2. An IGMP request
3. A TCP connection
4. A firewall access denial for a request from a specific IP
Because these events differ so widely, it is difficult to create a single field extraction that can apply to each of these event patterns and
extract relevant field values.
In situations like this, where a specific host, source type, or source contains multiple event patterns, you may want to define field
extractions that match each pattern, rather than designing a single extraction that can apply to all of the patterns. Inspect the events to
identify text that is common and reliable for each pattern.
Using required text in field extractions
In the last four events, the strings that follow %ASA (such as 6-302014) have specific meanings. You can find their definitions in the Cisco documentation. When you have unique event identifiers like these in your data, specify them as required text in your field extraction. Required text strings limit the events that can match the regular expression in your field extraction.
Specifying required text is optional, but it offers multiple benefits. Because required text reduces the set of events that the extraction scans, it improves field extraction efficiency and decreases the number of false-positive field extractions.

The field extractor utility enables you to highlight text in a sample event and specify that it is required text.
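At the configuration-file level, required text is simply a literal portion of the extraction's regular expression. As a hedged sketch (the source type name and field names below are illustrative assumptions, not from this course's lab data), a props.conf extraction that matches only teardown events could look like:

```ini
# props.conf -- illustrative sketch; stanza and field names are assumptions
[cisco:asa]
# The literal %ASA-6-302014 acts as required text: only teardown events
# can match, which reduces false positives and the number of events scanned.
EXTRACT-teardown = %ASA-6-302014: Teardown TCP connection (?<conn_id>\d+) for \w+:(?<src_ip>\d+\.\d+\.\d+\.\d+)
```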
Methods of custom field extraction in Splunk Enterprise
As a knowledge manager you oversee the set of custom field extractions created by users of your Splunk Enterprise implementation,
and you might define specialized groups of custom field extractions yourself. The ways that you can do this include:


 The field extractor utility, which generates regular expressions for your field extractions.
 Adding field extractions through pages in Settings. You must provide a regular expression.
 Manual addition of field extraction configurations at the .conf file level. This provides the most flexibility for field extraction.

The field extraction methods that are available to Splunk Enterprise users are described in the following sections. All of these methods
enable you to create search-time field extractions. To create an index-time field extraction, choose the third option: Configure field
extractions directly in configuration files.
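For reference, a minimal search-time extraction defined at the .conf file level pairs a transform with a REPORT setting. This is a hedged sketch; the stanza names, source type, and field name are hypothetical:

```ini
# transforms.conf -- hypothetical transform
[asa_user]
REGEX  = for user '([^']+)'
FORMAT = asa_user::$1

# props.conf -- bind the transform to a source type at search time
[cisco:asa]
REPORT-asa_user = asa_user
```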
Let the field extractor build extractions for you
The field extractor utility leads you step-by-step through the field extraction design process. It provides two methods of field
extraction: regular expressions and delimiter-based field extraction. The regular expression method is useful for extracting fields from
unstructured event data, where events may follow a variety of different event patterns. It is also helpful if you are unfamiliar with
regular expression syntax and usage, because it generates regular expressions and lets you validate them.
The delimiter-based field extraction method is suited to structured event data. Structured event data comes from sources like SQL
databases and CSV files, and produces events where all fields are separated by a common delimiter, such as commas, spaces, or pipe
characters. Regular expressions usually are not necessary for structured data events from a common source.
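In configuration terms, delimiter-based extraction uses the DELIMS and FIELDS settings in place of a regular expression. A sketch, assuming a hypothetical comma-separated source type:

```ini
# transforms.conf -- sketch for an assumed CSV-style source
[order_fields]
DELIMS = ","
FIELDS = "order_id","customer","amount","status"

# props.conf
[orders_csv]
REPORT-order_fields = order_fields
```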
With the regular expression method of the field extractor you can:
 Set up a field extraction by selecting a sample event and highlighting the fields to extract from that event.
 Create individual extractions that capture multiple fields.
 Improve extraction accuracy by detecting and removing false-positive matches.
 Validate extraction results by using search filters to ensure that specific values are being extracted.
 Specify that fields be extracted only from events that have a specific string of required text.
 Review stats tables of the field values discovered by your extraction.
 Manually configure the regular expression for the field extraction yourself.

With the delimiter method of the field extractor you can:
 Identify a delimiter to extract all of the fields in an event.
 Rename specific fields as appropriate.
 Validate extraction results.

The field extractor can only build search-time field extractions that are associated with specific sources or source types in your data (not hosts).
Define field extractions with the Field Extractions and Field Transformations pages
You can use the Field Extractions and Field Transformations pages in Settings to define and maintain complex extracted fields in
Splunk Web.
This method of field extraction creation lets you create a wider range of field extractions than you can generate with the field extractor utility. It requires the following knowledge:

 How to design regular expressions.
 A basic understanding of how field extractions are configured in props.conf and transforms.conf.

If you create a custom field extraction that extracts its fields from _raw and does not require a field transform, use the field extractor
utility. The field extractor can generate regular expressions, and it can give you feedback about the accuracy of your field extractions
as you define them.

Use the Field Extractions page to create basic field extractions, or use it in conjunction with the Field Transformations page to define field extraction configurations that can do the following things:

 Reuse the same regular expression across multiple sources, source types, or hosts.
 Apply multiple regular expressions to the same source, source type, or host.
 Use a regular expression to extract fields from the values of another field.

The Field Extractions and Field Transformations pages define only search-time field extractions.
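The first capability in the list above, reusing one regular expression across several sources or source types, comes from pointing multiple REPORT settings at the same transform. A sketch with hypothetical stanza names:

```ini
# transforms.conf -- one shared regular expression
[shared_status]
REGEX  = status=(\d{3})
FORMAT = status::$1

# props.conf -- two source types reuse the same transform
[web:front]
REPORT-status = shared_status

[web:api]
REPORT-status = shared_status
```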


Hands on Lab
Please refer to Lab on desktop


End of Module Hands on Quiz
Please refer to quiz on Virtual Machine

Module 13 - Creating Tags and Event Types
 Create and use tags
 Describe event types and their uses
 Create an event type
 Hands on Lab covering: Create and use tags, Describe event types and their uses, Create an event type
 End of Module Hands on Quiz

Create and use tags
Navigate to Settings > Tags > List by tag name, then click Add new.

Describe event types and their uses
Event types are a categorization system that helps you make sense of your data. An event type is a user-defined field that simplifies search by letting you categorize events. Event types let you classify events that have common characteristics, so you can sift through huge amounts of data, find similar patterns, and create alerts and reports.
Events versus event types
An event is a single record of activity within a log file. An event typically includes a timestamp and provides information about what occurred on the system being monitored or logged. An event type is applied to an event at search time if that event matches the event type definition in eventtypes.conf. When your search results come back, they are checked against known event types.
Event type classification
There are several ways to create your own event types. You can define event types via Splunk Web or through configuration files, or you can save any search as an event type. Tag or save event types after indexing your data.
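Behind Splunk Web, an event type is a named search stored in eventtypes.conf, optionally tagged in tags.conf. A hedged sketch (the event type name and search are illustrative, not from this course's lab data):

```ini
# eventtypes.conf -- hypothetical event type
[failed_login]
search = sourcetype=linux_secure "Failed password"

# tags.conf -- tag the event type so related events are easy to group
[eventtype=failed_login]
authentication = enabled
```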

Create an event type
Complete a search, then click Save As > Event Type.

Hands on Lab
Please refer to Lab on desktop

End of Module Hands on Quiz
Please refer to quiz on virtual machine

Module 14 - Creating Workflow Actions
 Describe the function of a workflow action
 Create a GET workflow action
 Hands on Lab covering: Describe the function of a workflow action, Create a GET workflow action
 Create a POST workflow action
 Create a SEARCH workflow action
 Hands on Lab covering: Create a POST workflow action, Create a SEARCH workflow action
 End of Module Hands on Quiz

Describe the function of a workflow action
Workflow actions have a wide variety of applications. For example, you can define workflow actions that enable you to:
 Perform an external WHOIS lookup based on an IP address found in an event.
 Use the field values in an HTTP error event to create a new entry in an external issue management system.
 Perform an external search (using Google or a similar web search application) on the value of a specific field found in an event.
 Launch secondary searches that use one or more field values from selected events.
In addition, you can define workflow actions that:
 Are targeted to events that contain a specific field or set of fields, or which belong to a particular event type.
 Appear either in field menus or event menus in search results. You can also set them up to only appear in the menus of specific fields, or in all field menus in a qualifying event.
 When selected, open either in the current window or in a new one.
Define workflow actions using Splunk Web
You can set up all of the workflow actions described in the bulleted list at the top of this chapter, and many more, using Splunk Web. To begin, navigate to Settings > Fields > Workflow actions. On the Workflow actions page you can review and update existing workflow actions by clicking on their names, or you can click Add new to create a new workflow action. Both methods take you to the workflow action detail page, where you define individual workflow actions. If you're creating a new workflow action, you need to give it a Name and identify its Destination app.
There are three kinds of workflow actions that you can set up:

 GET workflow actions, which create typical HTML links to do things like perform Google searches on specific values or run domain name queries against external WHOIS databases.
 POST workflow actions, which generate an HTTP POST request to a specified URI. This action type enables you to do things like create entries in external issue management systems using a set of relevant field values.
 Search workflow actions, which launch secondary searches that use specific field values from an event, such as a search that looks for the occurrence of specific combinations of ipaddress and http_status field values in your index over a specific time range.

Create a GET workflow action
GET link workflow actions drop one or more values into an HTML link. Clicking that link performs an HTTP GET request in a browser, allowing you to pass information to an external web resource, such as a search engine or IP lookup service.
To define a GET workflow action:
1. Navigate to Settings > Fields > Workflow Actions.
2. Click New to open up a new workflow action form.
3. Define a Label for the action. The Label field enables you to define the text that is displayed in either the field or event workflow menu. Labels can be static or include the value of relevant fields.
4. Determine whether the workflow action applies to specific fields or event types in your data. Use Apply only to the following fields to identify one or more fields. When you identify fields, the workflow action only appears for events that have those fields, either in their event menu or field menus. If you leave it blank or enter an asterisk, the action appears in menus for all fields. Use Apply only to the following event types to identify one or more event types. If you identify an event type, the workflow action only appears in the event menus for events that belong to the event type.
5. For Show action in, determine whether you want the action to appear in the Event menu, the Fields menus, or Both.
6. Set Action type to link.

7. In URI, provide a URI for the location of the external resource that you want to send your field values to. Similar to the Label setting, when you declare the value of a field you use the name of the field enclosed by dollar signs. Variables passed in GET actions via URIs are automatically URL encoded during transmission, which means you can include values that have spaces between words or punctuation characters.
8. Under Open link in, determine whether the workflow action displays in the current window or if it opens the link in a new window.
9. Set the Link method to get.
10. Click Save to save your workflow action definition.
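The same GET action can be expressed in workflow_actions.conf. In this sketch the action name, field, and lookup URL are assumptions (not a real service); note how the $clientip$ token carries the field value into the URI:

```ini
# workflow_actions.conf -- illustrative GET action
[whois_lookup]
type             = link
label            = WHOIS lookup for $clientip$
fields           = clientip
display_location = both
link.method      = get
link.uri         = http://whois.example.com/lookup?ip=$clientip$
```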

Hands-on Lab
Please refer to Lab on desktop

Create a POST workflow action
You set up POST workflow actions in a manner similar to that of GET link actions. However, POST requests are typically defined by a form element in HTML along with some inputs that are converted into POST arguments. This means that you have to identify POST arguments to send to the identified URI.
1. Navigate to Settings > Fields > Workflow Actions.
2. Click New to open up a new workflow action form.
3. Define a Label for the action. The Label field enables you to define the text that is displayed in either the field or event workflow menu. Labels can be static or include the value of relevant fields.
4. Determine whether the workflow action applies to specific fields or event types in your data. Use Apply only to the following fields to identify one or more fields. When you identify fields, the workflow action only appears for events that have those fields, either in their event menu or field menus. If you leave it blank or enter an asterisk, the action appears in menus for all fields. Use Apply only to the following event types to identify one or more event types. If you identify an event type, the workflow action only appears in the event menus for events that belong to the event type.
5. For Show action in, determine whether you want the action to appear in the Event menu, the Fields menus, or Both.
6. Set Action type to Link.

7. Set Link method to Post.
8. Under URI, provide the URI for a web resource that responds to POST requests.
9. Under Post arguments, define arguments that should be sent to the web resource at the identified URI. These arguments are key and value combinations: enter the key in the first field and the value in the second field. On both the key and value sides of the argument, you can use field names enclosed in dollar signs to identify the field value from your events that should be sent over to the resource. You can define multiple key/value arguments in one POST workflow action; click Add another field to create an additional POST argument. Splunk software automatically HTTP-form encodes variables that it passes in POST link actions via URIs, which means you can include values that have spaces between words or punctuation characters.
10. Under Open link in, determine whether the workflow action displays in the current window or if it opens the link in a new window.
11. Click Save to save your workflow action definition.
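A POST action adds link.postargs key/value pairs. In this hedged sketch the ticketing endpoint and argument names are hypothetical:

```ini
# workflow_actions.conf -- illustrative POST action
[open_ticket]
type                  = link
label                 = Open ticket for $http_status$ error
fields                = http_status, uri
display_location      = event_menu
link.method           = post
link.uri              = http://tickets.example.com/api/new
link.postargs.1.key   = error_code
link.postargs.1.value = $http_status$
link.postargs.2.key   = page
link.postargs.2.value = $uri$
```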

Create a Search workflow action
To set up workflow actions that launch dynamically populated secondary searches, you start by setting Action type to search on the Workflow actions detail page. This reveals a set of Search configuration fields that you use to define the specifics of the secondary search.
In Search string, enter a search string that includes one or more placeholders for field values, bounded by dollar signs. For example, if you're setting up a workflow action that searches on client IP values that turn up in events, you might simply enter clientip=$clientip$ in that field.
Identify the app that the search runs in. If you want it to run in a view other than the current one, select that view. As with all workflow actions, you can determine whether it opens in the current window or a new one.
Be sure to set a time range for the search (or identify whether it should use the same time range as the search that created the field listing) by entering relative time modifiers in the Earliest time and Latest time fields. If these fields are left blank, the search runs over all time by default.
Finally, as with other workflow action types, you can restrict the search workflow action to events containing specific sets of fields and/or which belong to particular event types.
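A search workflow action stores the placeholder search string together with its app and time range. A sketch with assumed names:

```ini
# workflow_actions.conf -- illustrative search action
[search_clientip]
type                 = search
label                = Other events for $clientip$
fields               = clientip
display_location     = both
search.search_string = index=web clientip=$clientip$
search.app           = search
search.earliest      = -24h
search.latest        = now
```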

Hands-on Lab
Please refer to Lab on desktop

End of Module Quiz
Please refer to questions on virtual machine

Module 15 - Creating and Managing Alerts
 Describe alerts
 Create alerts
 View fired alerts
 Hands on Lab covering: Describe alerts, Create alerts, View fired alerts
 End of Module Hands on Quiz

Describe alerts
An alert is an action that a saved search triggers based on the results of the search. When you create an alert, you are creating a saved search with trigger conditions for the alert. When creating an alert, you specify a condition that triggers the alert. Typically the action is an email based on the results of the search, but you can also choose to run a script or to list the alert as a triggered alert in Settings. To avoid sending out alerts too frequently, specify a throttle condition for an alert.
The following list describes the types of alerts:
 Scheduled alert. Runs a search according to a schedule that you specify when creating the alert. You specify results of the search that trigger the alert.
 Per result alert. Based on a real-time search. The trigger condition is whenever the search returns a result.
 Rolling-window alert. Based on a real-time search. The trigger condition is a combination of specified results of the search within a specified time window.

Create alerts
A scheduled alert runs periodically at a scheduled time, responding to a condition that triggers the alert. This example uses a search to track when there are too many errors in a Splunk Enterprise instance during the last 24 hours. When the number of errors exceeds 5, the alert sends an email with information about the conditions that triggered the alert. The alert sends an email every day at 10:00 AM when the number of errors exceeds the threshold.
1. From the Search Page, create the following search:
index=_internal " error " NOT debug source=*splunkd.log* earliest=-24h latest=now
2. Click Save As > Alert.
3. Specify the following values for the fields in the Save As Alert dialog box:
Title: Errors in the last 24 hours
Alert type: Scheduled
Time Range: Run every day
Schedule: At 10:00
Trigger condition: Number of Results
Trigger if number of results: is Greater than 5

4. Click Next.
5. Click Send Email.
6. Set the following email settings, using tokens in the Subject and Message fields:
To: email recipient
Priority: Normal
Subject: Too many errors alert: $name$
Message: There were $job.resultCount$ errors reported on $trigger_date$.
Include: Link to Alert and Link to Results
Accept defaults for all other options.

7. Click Save.
After you create the alert, you can view and edit the alert in the Alerts Page. When the alert triggers, it sends the following email:
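When you click Save, the alert is persisted as a saved search. Roughly (these setting names come from savedsearches.conf; the exact values your instance writes may differ), the example above corresponds to:

```ini
# savedsearches.conf -- approximate sketch of the saved alert
[Errors in the last 24 hours]
search = index=_internal " error " NOT debug source=*splunkd.log* earliest=-24h latest=now
enableSched = 1
cron_schedule = 0 10 * * *
counttype = number of events
relation = greater than
quantity = 5
action.email = 1
action.email.to = recipient@example.com
action.email.subject = Too many errors alert: $name$
```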


View fired alerts
Go to the Alerts Page on the top toolbar.

Hands-on Lab
Please refer to Lab on desktop

End of Module Quiz
Please refer to questions on virtual machine

Module 16 - Creating and Using Macros
 Describe macros
 Manage macros
 Create and use a basic macro
 Hands on Lab covering: Describe macros, Manage macros, Create and use a basic macro
 Define arguments and variables for a macro
 Add and use arguments with a macro
 Hands on Lab covering: Define arguments and variables for a macro, Add and use arguments with a macro
 End of Module Hands on Quiz

Describe macros
Search macros are chunks of a search that you can reuse in multiple places, including saved and ad hoc searches. Search macros can be any part of a search, such as an eval statement or search term, and do not need to be a complete command. You can also specify whether or not the macro takes any arguments.

Manage and create macros
In Settings > Advanced Search > Search macros, click New to create a new search macro. By default, your search macros are restricted to the Search app.
Destination app is the name of the app you want to restrict your search macro to.
Name is the name of your search macro, such as mymacro. If your search macro takes an argument, you need to indicate this by appending the number of arguments to the name. For example, if mymacro required two arguments, it should be named mymacro(2). You can create multiple search macros that have the same name but require different numbers of arguments: foo, foo(1), foo(2), and so on.
Define the search macro and its arguments
Your search macro can be any chunk of your search string or search command pipeline that you want to re-use as part of another search.
Definition is the string that your search macro expands to when referenced in another search. If the search macro requires the user to input arguments, they are tokenized and indicated by wrapping dollar signs around the arguments, for example, $arg1$. The argument values are then specified when the search macro is invoked.
 If Eval Generated Definition? is checked, the Definition is expected to be an eval expression that returns a string that represents the expansion of this macro.
 If a macro definition includes a leading pipe character ("|"), for example "| metadata type=sources", you may not use it as the first term in searches from the UI. The UI does not do the macro expansion and so cannot correctly identify the initial pipe to differentiate it from a regular search term; it constructs the search as if the macro name were a search term, which after expansion would cause the metadata command to be incorrectly formed and therefore invalid.

Arguments are a comma-delimited list of argument names. Argument names may only contain alphanumeric characters (a-z, A-Z, 0-9), the underscore ('_'), and the dash ('-'). This list should not contain any repeated elements. How to invoke search macros is discussed in "Apply macros to saved and ad hoc searches," below.
Validate your argument values
You can verify that the argument values used to invoke the search macro are acceptable.
 Validation Expression is a string that is an eval expression that evaluates to a boolean or a string. If the validation expression is a boolean expression, validation succeeds when it returns true; if it returns false or is null, validation fails, and the Validation Error Message is returned. If the validation expression is not a boolean expression, it is expected to return a string or NULL. If it returns null, validation is considered a success; otherwise, the string returned is rendered as the error string.
Apply macros to saved and ad hoc searches
To include a search macro in your saved or ad hoc searches, use the left quote (also known as a grave accent) character. On most English-language keyboards, this character is located on the same key as the tilde (~). Note: Do NOT use the straight quote character that appears on the same key as the double quote ("). You can also reference a search macro within other search macros using this same syntax.
 If a macro argument includes quotes, you need to escape the quotes when you call the macro in your search. For example, if you wanted to pass a quoted string as your macro's argument, you would use: `my-macro("He said \"hello!\"")`.
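Putting the settings above together, a two-argument macro stored in macros.conf might look like this sketch (the macro name, fields, and validation are illustrative), invoked with left-quote characters:

```ini
# macros.conf -- hypothetical macro with two arguments and validation
[user_events(2)]
args       = user, max
definition = search user=$user$ | head $max$
validation = isnum($max$)
errormsg   = max must be a number

# Invoked from the search bar as:
#   `user_events("AmorAubrey", 20)`
```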

Hands-on Lab
Please refer to Lab on desktop

End of Module Quiz
Please refer to virtual machine for quiz

Module 17 - Using Pivot
 Describe Pivot
 Understand the relationship between data models and pivot
 Select a data model object
 Hands on Lab covering: Describe Pivot, Understand the relationship between data models and pivot, Select a data model object
 Create a pivot report
 Save pivot report as a dashboard
 Hands on Lab covering: Create a pivot report, Save pivot report as a dashboard
 End of Module Hands on Quiz

Describe Pivot
The Pivot tool lets you report on a specific data set without the Splunk Enterprise Search Processing Language (SPL™). First, identify a dataset that you want to report on, and then use a drag-and-drop interface to design and generate pivots that present different aspects of that data in the form of tables, charts, and other visualizations.
How does Pivot work? It uses data models to define the broad category of event data that you're working with, and then uses hierarchically arranged collections of data model objects to further subdivide the original dataset and define the attributes that you want Pivot to return results on. Data models and their objects are designed by the knowledge managers in your organization. They do a lot of hard work for you to enable you to quickly focus on a specific subset of event data.

Understand the relationship between data models and pivot
Data models drive the Pivot tool. They enable users of Pivot to create compelling reports and dashboards without designing the searches that generate them. Data models can have other uses, especially for Splunk Enterprise app developers.
What is a data model?
A data model is a hierarchically structured search-time mapping of semantic knowledge about one or more datasets. It encodes the domain knowledge necessary to build a variety of specialized searches of those datasets. These specialized searches are used by Splunk Enterprise to generate reports for Pivot users. When you plug them into the Pivot Editor, they let you generate statistical tables, charts, and visualizations based on column and row configurations that you select.
Splunk Enterprise knowledge managers design and maintain data models. These knowledge managers understand the format and semantics of their indexed data and are familiar with the Splunk Enterprise search language. In building a typical data model, knowledge managers use knowledge object types such as lookups, transactions, search-time field extractions, and calculated fields.
When a Pivot user designs a pivot report, she selects the data model that represents the category of event data that she wants to work with, such as Web Intelligence or Email Logs. Then she selects an object within that data model that represents the specific dataset on which she wants to report.
If you are familiar with relational database design, think of data models as analogs to database schemas.
To create an effective data model, you must understand your data sources and your data semantics. This information can affect your data model architecture--the manner in which the objects that make up the data model are organized. Data models are composed of objects, which can be arranged in hierarchical structures of parent and child objects. Each child object represents a subset of the dataset covered by its parent object.

Data Model Objects
Data models are composed of one or more objects. Here are some basic facts about data model objects:
 An object is a specification for a dataset. Each data model object corresponds in some manner to a set of data in an index. You can apply data models to different indexes and get different datasets.
 Objects break down into four types. These types are: event objects, search objects, transaction objects, and child objects. The top-level event, search, and transaction objects in data models are collectively referred to as "root objects."
 Objects are hierarchical. Objects in data models can be arranged hierarchically in parent/child relationships.
 Child objects have inheritance. Child objects inherit constraints and attributes from their parent objects and have additional constraints and attributes of their own. Data model objects are defined by characteristics that mostly break down into constraints and attributes.
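As a conceptual sketch only (real data model definitions are stored as JSON in datamodels.conf, not in this form), the constraint inheritance described above means each child adds its own constraint to everything inherited from its parent:

```ini
; Conceptual sketch -- illustrates constraint inheritance,
; not the actual datamodels.conf format.
[object:Web]                      ; root event object
constraint = sourcetype=access_combined

[object:Web.Errors]               ; child inherits the root constraint
constraint = status>=400          ; effective dataset:
                                  ;   sourcetype=access_combined status>=400

[object:Web.Errors.ServerErrors]  ; grandchild narrows further
constraint = status>=500
```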

Hands-on Lab
Please refer to Lab on desktop

End of Module Quiz
Please refer to virtual machine for Quiz

End of Course Quiz
Please refer to virtual machine for Quiz