Extended Events
Introduction
SQL Server Extended Events is a performance monitoring tool that helps collect and
monitor database engine activity to diagnose problems in SQL Server. Microsoft
introduced Extended Events with SQL Server 2008 and later decided to retire
SQL Profiler. Behind that decision were some disadvantages of SQL Profiler,
chiefly performance: it consumes many system resources, which affects database
performance negatively. SQL Server Extended Events do not impact the performance of
SQL Server the way the profiler does, and they also offer numerous events that help
troubleshoot query performance and other problems. For example, they can be the
first option for resolving deadlock problems.
What are SQL Server Extended Events?
One of the most common complaints about SQL Server is long-running queries. For this
reason, as a first step, we need to detect long-running queries so we can troubleshoot
them. To classify a query as problematic in terms of runtime, a threshold must be
determined for the relevant database, because wait tolerance varies between database
environments. Some systems cannot tolerate even microsecond waits, while others have
much higher thresholds for long waits. As we stated, we can use different tools to find
poorly performing queries; however, Extended Events is one of the most effective methods
to diagnose these queries in the database engine. At this point, the disorienting
problem is which event to use, because SQL Server offers numerous events.
After giving the session a name, we move to the Events tab. Events are the
objects that are triggered when a specific action occurs. On this screen, we can browse
all events and filter for the sql_statement_completed event. After finding the
sql_statement_completed event, we send it to the Selected events list using the
right arrow button.
Create an extended event session to identify the long-running queries
We click the Configure button to select the global event fields. Event fields are the
elements of an event that give information about a specific part of it. For example,
to find the name of the application that fired the event, we need to choose the
client_app_name field. We select client_app_name and client_hostname in the Global
Fields (Actions) list.
On the Data Storage tab, we choose where the SQL Server Extended Events session will
store the collected data. We select the event_file target to write the collected data
into an XEL file. After selecting the event_file type, we need to define the file
location and the maximum size of the file. When the Enable file rollover option is
checked, once the XEL file reaches the maximum file size limit, a new XEL file is
created with the same file name plus a sequence number appended to the end.
On the Advanced tab, we can find some detailed settings. In the event retention mode
options, we can set how much tolerance we have for data loss in the session we created.
We will choose No event loss, but this option can negatively affect system performance
on databases with heavy workloads. Another option is Maximum dispatch latency, which
specifies the maximum duration that an event can be kept in the buffer before being
written to the target. Finally, we click the OK button to create the extended event
session.
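The session built through these GUI steps can also be scripted. The following is a minimal Transact-SQL sketch of an equivalent session; the session name, file path, and the 5-second duration threshold are illustrative assumptions, not values from the walkthrough:

```sql
-- Sketch: session name, file path, and threshold are illustrative assumptions
CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.client_app_name, sqlserver.client_hostname)
    WHERE duration > 5000000   -- duration is reported in microseconds: > 5 seconds
)
ADD TARGET package0.event_file
(
    SET filename = N'C:\XEvents\LongRunningQueries.xel',
        max_file_size = (10),      -- maximum size per file, in MB
        max_rollover_files = (5)   -- corresponds to the Enable file rollover option
)
WITH (EVENT_RETENTION_MODE = NO_EVENT_LOSS,
      MAX_DISPATCH_LATENCY = 30 SECONDS);
GO

ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;
```

Putting the duration threshold in the event's WHERE predicate means only long-running statements are captured at the source, which keeps the session's overhead and output small.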
• Event retention mode: This is how we want our events to be handled. On a heavily
loaded system, this option determines whether we want to capture all events or whether
it is acceptable to lose some events while keeping system performance and reliability
intact.
• By default, the event retention mode is set to "Single event loss". This means that if
the event buffers fill up before SQL Server can write the events to the targets, it will
drop a single event instead of causing significant performance degradation on the server.
• ALLOW_SINGLE_EVENT_LOSS. An event can be dropped from the session if the buffers
are full. A compromise between performance and data loss; this is the default value.
• ALLOW_MULTIPLE_EVENT_LOSS. Full event buffers containing multiple events can be
discarded. Minimal performance impact, but high data loss.
• NO_EVENT_LOSS. Events are never discarded; tasks that trigger events must wait until
event buffer space is available. Potential for severe performance impact, but no data loss.
• If we set this option to "No event loss", it may impact system performance, so care
should be taken.
• Maximum dispatch latency: This option forces events to be flushed from memory to the
targets at a defined interval.
• In seconds. Select this option to lengthen or shorten the maximum dispatch latency.
• Unlimited. Select this option if you want the events dispatched only when the buffer
is full.
• The remaining options include:
• Maximum memory size
• Maximum event size
• Memory partition mode: how event buffer memory should be partitioned, i.e. none, per
node, or per CPU.
• NONE. Memory is not partitioned; a single group of event buffers is created.
• PER_NODE. A group of event buffers is created per NUMA node.
• PER_CPU. A group of event buffers is created per CPU.
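These Advanced-tab settings correspond to options in the WITH clause of CREATE EVENT SESSION. A sketch with illustrative values (the session name, event, and target here are placeholders; the WITH options are the point):

```sql
-- Sketch: the event and target are placeholders; the WITH options map to the Advanced tab
CREATE EVENT SESSION [AdvancedOptionsDemo] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer
WITH
(
    EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,  -- the default compromise
    MAX_DISPATCH_LATENCY = 30 SECONDS,               -- 0 SECONDS means unlimited
    MAX_MEMORY = 4 MB,                               -- memory for event buffers
    MAX_EVENT_SIZE = 0 KB,                           -- 0 = bounded by MAX_MEMORY
    MEMORY_PARTITION_MODE = NONE                     -- or PER_NODE / PER_CPU
);
```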
Configuring Targets
Event File
The event file target can be used to write session data to a file.
Event Pairing
The event pairing target is used to match events that occur in pairs (for example, statement
starting and statement completing, or lock acquired and lock released), and report on beginning
events that have no matching end event.
Ring Buffer
The ring buffer target is used to write session data into a block of memory. When the allocated
memory is full, the oldest data in the buffer is discarded and new data is written in its place.
Histogram
The histogram target is used to partition a count of events into groups based on a specified value.
Event Counter
The event counter target is used to count events in a session. It takes no configuration
parameters.
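Because the ring buffer lives in memory, its contents are read through the session DMVs rather than from a file. A sketch, assuming a running session named LongRunningQueries that has a ring_buffer target (both names are assumptions):

```sql
-- Sketch: return the ring buffer contents of a running session as XML
SELECT CAST(t.target_data AS XML) AS ring_buffer_contents
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON t.event_session_address = s.address
WHERE s.name = N'LongRunningQueries'      -- session name is an assumption
  AND t.target_name = N'ring_buffer';
```

The returned XML can then be shredded with the XQuery value() method to extract individual event fields.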
Analyze the collected data
After we create the extended event session, it starts automatically and begins to
collect data.
We can use the Watch Live Data option to analyze the collected data.
For example, after executing the following query in the AdventureWorks database, the
query will be captured by the extended event session.
SELECT SO.AccountNumber
FROM Sales.SalesOrderHeader SO
INNER JOIN Sales.SalesOrderDetail SD
    ON SO.ModifiedDate = SD.ModifiedDate
When we configured the Extended Events session, if we selected the option to show live
data after configuration, we can navigate to the newly created session, right-click it,
and select Watch Live Data.
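As an alternative to watching live data, the XEL file written by the event_file target can be queried directly with sys.fn_xe_file_target_read_file. A sketch, where the file path follows the session configured earlier and is an assumption:

```sql
-- Sketch: shred captured events out of the XEL file (path is illustrative)
SELECT
    x.event_data.value('(event/@timestamp)[1]', 'datetime2') AS event_time,
    x.event_data.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS duration_us,
    x.event_data.value('(event/data[@name="statement"]/value)[1]', 'nvarchar(max)') AS statement_text
FROM
(
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file(N'C:\XEvents\LongRunningQueries*.xel',
                                         NULL, NULL, NULL)
) AS x
ORDER BY duration_us DESC;
```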
Execution Time-outs
When a Transact-SQL statement runs for longer than the client application's command
time-out setting, the client application raises a time-out error. Without detailed
client application logging, it may be difficult to identify the statement causing a
time-out.
This scenario is an ideal use case for the Extended Events event pairing target, using either of the
following pairs:
• sqlserver.sp_statement_starting and sqlserver.sp_statement_completed (for systems using
stored procedures for database access).
• sqlserver.sql_statement_starting and sqlserver.sql_statement_completed (for systems
using ad hoc SQL for database access).
When a time-out occurs, the starting event will have no corresponding completed event, and will
be returned in the output of the event pairing target.
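For the ad hoc SQL case, a session using the pair_matching target might be sketched as follows. Matching on session_id alone is a simplifying assumption for illustration; production sessions often match on additional columns or actions to pair events more precisely:

```sql
-- Sketch: unmatched sql_statement_starting events suggest time-outs or aborts
CREATE EVENT SESSION [StatementTimeouts] ON SERVER
ADD EVENT sqlserver.sql_statement_starting
    (ACTION (sqlserver.session_id, sqlserver.sql_text)),
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.session_id))
ADD TARGET package0.pair_matching
(
    SET begin_event = N'sqlserver.sql_statement_starting',
        begin_matching_actions = N'sqlserver.session_id',
        end_event = N'sqlserver.sql_statement_completed',
        end_matching_actions = N'sqlserver.session_id'
);
```

Querying the pair_matching target's target_data (via sys.dm_xe_session_targets) then returns only the orphaned starting events.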
Usage Scenarios for Extended Events
Troubleshooting ASYNC_NETWORK_IO
The ASYNC_NETWORK_IO wait type occurs when the database engine is waiting for a client
application to consume a result set. This can occur because the client application processes a
result set row-by-row as it is returned from the database server.
To troubleshoot this issue with Extended Events, capture the sqlos.wait_info event, filtering on
wait_type value NETWORK_IO. The histogram target might be suitable for this investigation,
using either the client application name or the client host name to define histogram groups.
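A sketch of such a session follows. Note that the wait_type predicate compares against an integer map key, which varies by build; the value 99 below is an assumption and should be replaced with the value returned by the lookup query:

```sql
-- Look up the map key for NETWORK_IO on this instance first
SELECT map_key
FROM sys.dm_xe_map_values
WHERE name = N'wait_types' AND map_value = N'NETWORK_IO';

-- Sketch: group NETWORK_IO waits by client application name
CREATE EVENT SESSION [AsyncNetworkIO] ON SERVER
ADD EVENT sqlos.wait_info
(
    ACTION (sqlserver.client_app_name)
    WHERE wait_type = 99       -- assumed map key; use the value found above
)
ADD TARGET package0.histogram
(
    SET filtering_event_name = N'sqlos.wait_info',
        source = N'sqlserver.client_app_name',
        source_type = 1        -- 1 = source is an action, 0 = an event field
);
```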
Tracking Recompilations
Query execution plan recompilations occur when a plan in the plan cache is discarded and
recompiled.
High numbers of plan recompilations might be an indicator of a performance problem, and may
cause CPU pressure. Windows performance counters can be used to track overall recompilation
counts for a SQL Server instance, but more detail may be needed to investigate further.
In Extended Events, the sqlserver.sql_statement_recompile event can provide detailed
information, including the cause of recompilation.
The histogram target can be used for tracking recompilations. Group on source_database_id to
identify the database with the highest number of recompilations in an instance. Group on
statement/object_id to find the most commonly recompiled statements.
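A sketch of a session grouping recompilations by database; grouping on the statement or object_id instead only requires changing the source field (the session name is an assumption):

```sql
-- Sketch: count sql_statement_recompile events per database
CREATE EVENT SESSION [RecompileTracking] ON SERVER
ADD EVENT sqlserver.sql_statement_recompile
ADD TARGET package0.histogram
(
    SET filtering_event_name = N'sqlserver.sql_statement_recompile',
        source = N'source_database_id',
        source_type = 0        -- 0 = source is an event field (not an action)
);
```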
Logon Information
Virtual machine: 10987C-MIA-SQL
User name: ADVENTUREWORKS\Student
Password: Pa55w.rd
This module describes tools and techniques you can use to monitor and trace Microsoft® SQL
Server® performance data, and to create baselines to assess future changes in performance. It focuses
on data collection strategy and techniques to analyze the collected data.
This lesson describes the tools that you can use to monitor and trace SQL Server diagnostic data.
Monitoring SQL Server enables you to detect performance problems as and when they happen or
before they happen, so you can resolve issues proactively. You can also use the trace and monitoring
data to establish a baseline and create a benchmark for SQL Server performance.
• Dynamic Management Objects
• Windows Performance Monitor
• Activity Monitor
• Extended Events
• SQL Server Profiler
• Analyzing Trace Data
• Trace Replay
• Demonstration: Analyzing Performance Monitor Data
Dynamic Management Objects
DMOs are a collection of dynamic management views (DMVs) and dynamic management functions
(DMFs) that you can use to monitor the health of a
SQL Server instance or database.
You can query a DMV by using Transact-SQL in the same way that you can query other views. A
DMF is a table-valued function that accepts one or more parameters.
• Server-scoped DMOs are stored in the sys schema of the master database and provide
information at the instance level. Querying a server-scoped DMO requires the VIEW SERVER
STATE permission and the SELECT permission on the appropriate object.
• Database-scoped DMOs are stored in the sys schema of each database and provide information
at the database level. Querying a database-scoped DMO requires the VIEW DATABASE STATE
permission and the SELECT permission on the appropriate object.
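The permissions described above are granted with ordinary GRANT statements. A sketch, where the login and user names are illustrative assumptions:

```sql
-- Instance-level DMOs (the login name is illustrative)
GRANT VIEW SERVER STATE TO [ADVENTUREWORKS\Student];

-- Database-level DMOs (run in the target database; the user name is illustrative)
USE AdventureWorks;
GRANT VIEW DATABASE STATE TO [MonitoringUser];
```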
There are more than 150 DMOs covering many categories; they are an important and useful
tool that you can use to monitor and tune SQL Server. You can identify the category of a
DMO by its prefix, for example sys.dm_exec_* (execution), sys.dm_os_* (operating
system), sys.dm_db_* (database), and sys.dm_io_* (I/O).
The following DMVs are particularly valuable for gathering performance information:
• sys.dm_db_index_physical_stats. This provides size and fragmentation information for
all indexes.
• sys.dm_exec_requests. This provides information about currently running queries,
including start time, resource utilization, and estimated completion time. You can use
it to identify blocking issues.
• sys.dm_io_virtual_file_stats. This returns information about database file reads and
writes. You can use it to identify contention issues.
• sys.dm_os_wait_stats. This returns the total wait time for each wait type since
instance startup, or since the wait statistics were last cleared. You can clear wait
statistics by using the DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR) command.
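Typical queries against two of these DMVs might look like the following sketch:

```sql
-- Top waits accumulated since startup (or since the statistics were cleared)
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- Currently running requests, oldest first, with any blocking session
SELECT session_id, start_time, status, command, wait_type, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id > 50            -- user sessions; most system sessions are <= 50
ORDER BY start_time;

-- Reset the cumulative wait statistics
DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);
```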
Windows Performance Monitor
From the Monitoring Tools icon you can right click and launch the Resource Monitor.
Resource Monitor is another powerful tool that can help you see how your system
resources are being used. You also have the ability to launch the System Reliability
Monitor. This utility allows you to see information about software updates and
installations. You can also see critical events that occurred and on what day those
events occurred. Finally, you can see all of the problem reports that have been sent
from your computer by clicking on the "View all problem reports" link at the bottom of
the window.
Performance Monitor
The Performance Monitor is primarily for viewing real-time statistics. By default, only
one counter is selected: the % Processor Time counter. However, you can add additional
counters by clicking the green plus sign, which allows you to monitor any counters you
wish in real time.
While you can watch any performance counters you like here, the real power of
Performance Monitor is its ability to capture performance metrics over an elapsed
period of time. Capturing data over a period allows you to see trends, and these trends
are what is most useful for determining the overall performance of your system. To
capture this data, you can create what are called "Data Collector Sets".
Data Collector Sets
Data Collector Sets are aptly named. They collect data from your system so that you
can view changes in configuration information and performance information over a
specified period of time.
Activity Monitor is a lightweight monitoring tool that is built into SQL Server Management Studio.
Activity Monitor displays the current SQL Server processes and their effect on the instance. Activity
Monitor provides information about the following areas:
• Active expensive queries. This shows running queries with high resource usage in terms of CPU,
I/O, and memory.
• Data file I/O. This displays usage information for the database files.
• Processes. This shows diagnostic information about running processes in SQL Server in a tabular
format that is easy to analyze.
• Recent expensive queries. This shows recent queries with high resource usage in terms of CPU,
I/O, and memory.
• Resource waits. This displays real-time wait statistics.
• Overview. This provides an overview of the current health of the system.
Activity Monitor
SQL Server Activity Monitor presents information in a document window that consists of a number
of collapsible panes. You can customize these by rearranging or sorting columns, or by adding filters.
When you expand a tab in Activity Monitor, queries are sent to the database engine to retrieve the
required data. To ensure that unnecessary load is not placed on the server, querying stops when you
collapse a tab.
Extended Events
See module 9
SQL Server Profiler
SQL Server Profiler is an application for capturing, managing, and analyzing traces. The trace
can be saved to a trace file or a database table. SQL Server Profiler has been deprecated and
may be removed in future versions of SQL Server.
You can use SQL Server Profiler to capture relevant data points for selected events that you can
use to identify the causes of performance problems.
Trace Replay
Trace Replay helps you to assess the impact of hardware and software changes by putting
a real-life load on a test system. You can perform a trace replay by using SQL Server
Profiler or the Microsoft SQL Server Distributed Replay feature.
You can use trace replay to replay a production workload on a test server to help
identify performance issues, or to test the impact of software patches, bug fixes, or
configuration changes. For example, you can replay the trace to check the effect of
newly created indexes on the test server.
Database Engine Tuning Advisor
The Database Engine Tuning Advisor is a graphical and command-line tool that analyzes
how the SQL Server Database Engine processes queries and makes recommendations for
improving query performance through the creation of indexes, partitions, indexed
views, and statistics. It is provided with SQL Server and can consume a trace file,
analyze the captured queries, and then provide details of:
• New indexes and statistics that can increase query performance.
• A suggested partitioning strategy.
• The usage of existing indexes.
Demonstration: Analyzing Performance Monitor Data
This lesson focuses on how to establish baselines and create benchmarks for SQL Server. You must
establish a SQL Server baseline because baselines give insights into trends that occur in a SQL
Server environment. This simplifies troubleshooting.
• Stress-Testing Tools
A baseline is the normal or usual state of a SQL Server environment. A SQL Server
baseline informs you how a system typically performs on any given day. Some of the
main benefits of a baseline include the following:
• It provides insights into trends that occur in an environment, in addition to data.
• You can compare baselines with current system state to find out what has changed
and proactively tune or resolve problems.
• It provides better capacity planning.
• It makes troubleshooting swift and easy.
Methodology for Creating Baselines
What to Capture?
Many diagnostic data points are available to be collected from one or more sources. Too
much data can be overwhelming and difficult to analyze, and it adds to storage costs
without being useful.
Capture only relevant data points: the data that can help you diagnose and foresee
performance problems.
The data you collect will not be useful until you analyze it to extract meaningful
information. You can analyze Performance Monitor data by using Microsoft Excel® or by
importing the performance logs into a database:
• Microsoft Excel. The performance data, in CSV format, can be manually analyzed in
Excel. If the data is collected in binary format, the binary log file can be converted
to CSV format with the relog command-line utility. The relog utility ships with Windows
and does not require a separate installation. The following example shows a typical
command to convert a binary log file to CSV format by using the relog utility:
relog <binary file path> -f csv -o <csv file path>
Analyzing data in Excel can be a tedious task: the column headers need to be formatted,
the performance data requires formatting, and aggregate columns need to be added to get
the maximum, minimum, and standard deviation for the counter values. This becomes even
more tedious when more than one file is to be analyzed.
Analyzing Collected Data
• Database. The performance data can be imported into SQL Server and analyzed by using
Transact-SQL statements. The performance data can be imported into a database manually,
loaded by using SQL Server Integration Services, or loaded by using the relog utility.
The simplest method is probably the relog command-line utility. The following example
shows the syntax to import a performance log into a database by using the relog utility:
relog <binary file path> -f SQL -o SQL:<ODBC DSN>!<display string>
The relog utility accepts a binary log file and inserts it into the database that the
Open Database Connectivity (ODBC) connection specifies. The display string identifies
the binary log file or the data collector set within the database.
The relog utility imports data into three different tables, as follows:
• DisplayToID. This lists each data collector set that is imported into the database.
Each data collector set is uniquely identified by a GUID and by the display string
value specified when importing the data, as shown in the preceding relog command
syntax. The table also contains the number of records imported and the log start and
stop times.
• CounterDetails. This contains one row for each counter present in the performance log
file. Each counter is uniquely identified by a counter ID and has an associated machine
name, which is helpful in identifying counter values from different computers.
• CounterData. This stores the actual counter values. The important columns are GUID,
CounterID, CounterValue, and CounterDateTime. The GUID column links to the DisplayToID
GUID column, and the CounterID column links to the CounterDetails CounterID column. The
CounterValue column contains the actual counter value, and the CounterDateTime column
contains the time that the value was recorded.
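Once imported, the three tables can be joined to summarize counter values. A sketch, where the display string value is an assumption:

```sql
-- Sketch: average value per counter for one imported data collector set
SELECT det.ObjectName, det.CounterName, AVG(dat.CounterValue) AS avg_value
FROM dbo.CounterData AS dat
JOIN dbo.CounterDetails AS det ON det.CounterID = dat.CounterID
JOIN dbo.DisplayToID AS d ON d.GUID = dat.GUID
WHERE d.DisplayString = N'BaselineRun1'   -- display string given at import time
GROUP BY det.ObjectName, det.CounterName
ORDER BY avg_value DESC;
```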
Database Engine Tuning Advisor
Recommendations can be viewed or exported in two formats:
• Transact-SQL
• XML
Demonstration: Collecting Performance Data Using DMVs
• In this demonstration, you will see how to collect performance data by using DMVs.
Lab: Monitoring, Tracing, and Baselining
Logon Information
Virtual machine: 10987C-MIA-SQL
User name: ADVENTUREWORKS\Student
Password: Pa55w.rd