Setting Alerts
Database designers and administrators are often interested in
ensuring that the databases that they manage are running within
specific parameters. For example, you might want to know if the
number of open transactions on a server goes above 20, as this might
indicate code-related performance problems. Although you could hire
someone to constantly stare at performance-related statistics and tap
you on the shoulder whenever something of interest occurs, there’s a
much easier way to achieve the same results.
When you install SQL Server 2000, the SQL Server Agent service is
also installed and configured. SQL Server Agent can be used to send
alerts based on performance data. For example, if the number of user
connections exceeds a certain value, a server administrator can be
notified. Exercise 6-3 walks you through the process of
implementing performance-based alerts using SQL Server Agent.
These alerts can be used to notify you of important events on your
server when they occur.
EXERCISE 6-3
Setting Performance-Based Alerts
1. Expand the Management folder and then expand the SQL
Server Agent item. Right-click Alerts and select New Alert.
Illustration 6
Name the alert and set the type to SQL Server Performance
Condition Alert. Ensure that the Enabled box is checked.
2. To define the alert condition, choose an object and a counter.
These are the same values that can be monitored by Performance
Monitor (described earlier). Finally, set the alert to fire when a
value is lower than, higher than, or equal to a certain number. For
example, you might choose to create an alert for the following
parameters:
· Name: High number of user connections
· Type: SQL Server performance condition alert
· Object: SQLServer:General Statistics
· Counter: User Connections
· Value: Alert if counter rises above, say, 20
Illustration 7
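If you prefer to script alerts instead of clicking through Enterprise Manager, the sp_add_alert stored procedure in the msdb database can define the same performance condition. The following is a minimal sketch of the alert from this exercise (notifying an operator is a separate step, performed with sp_add_notification):

-- Define the same performance condition alert in Transact-SQL
EXEC msdb.dbo.sp_add_alert
    @name = N'High number of user connections',
    @enabled = 1,
    @performance_condition = N'SQLServer:General Statistics|User Connections||>|20'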
SCENARIO & SOLUTION

Scenario: You want to find queries that are performing table scan operations.
Solution: Create a SQL Profiler trace that includes information from the Scans object.

Scenario: You want to monitor the performance effects of locks on your database server.
Solution: Create a SQL Profiler trace that includes information from the Locks object. This will provide you with information about which processes are holding locks, and which processes are waiting for locks to be released.
Executing Traces
To execute the trace, select File | Run Traces and select either a
sample trace or one that you’ve created. A screen will appear showing
information related to monitored events, as shown in Figure 6-4. You
can run multiple traces at once to record different kinds of
information (for example, query performance and overall server
statistics). To simulate the recorded information at a later time, you
can replay a trace by using the Replay menu. Finally, to view captured
information, you can select File | Open and then select Trace File (if
you’ve saved to a file) or Trace Table (if you’ve saved to a SQL
Server table). Trace files can also be used with data analysis tools,
such as Microsoft Access and Microsoft Excel.
Figure 6-4: Viewing trace file information in SQL Profiler
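Another handy option is the built-in fn_trace_gettable function, which can load a saved trace file directly into a table for analysis (the file and table names here are illustrative):

-- Import a saved trace file into a table
SELECT * INTO MyTrace
FROM ::fn_trace_gettable(N'C:\Traces\MyTrace.trc', default)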
If you’re planning to use certain trace settings often, you can create
new trace templates. Trace templates include all of the settings
(including events, data columns, and filters) that you want to set when
you start a capture. This is very useful when, for example, you have an application that connects to many different database servers.
You can simply copy the trace template to each of the machines you
want to monitor, choose settings for the trace (including where the
trace data should be stored), and then start the trace.
It’s very useful to store trace information in database tables,
because this provides you with a way to easily analyze large
sets of results. For example, you can query against a trace
table and find the worst-performing queries. Or, you can
choose to delete all of the rows in the table that are not of
interest to you. By applying your bag of SQL tricks, you can turn trace information stored in a table into a very valuable asset.
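For example, assuming you have saved a trace to a table named MyTrace (the table name is just an illustration), a query such as the following returns the ten longest-running statements that were captured:

-- Find the ten slowest statements captured in the trace table
SELECT TOP 10 TextData, Duration, CPU, Reads, Writes
FROM MyTrace
ORDER BY Duration DESC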
In addition to using the SQL Profiler application to manually start the
monitoring of database performance characteristics, you can use
built-in functions and stored procedures to create traces that are defined within SQL Server 2000 itself. These traces can be
configured to automatically begin whenever SQL Server 2000 is
started, and they don’t require that a user be logged into the machine
to run. To get started with scripting traces, you can simply choose the
Script Trace option from the File menu in SQL Profiler. The resulting
SQL file can be used to automatically create and define a trace in
SQL Server. For more information about scripting and executing
traces in this way, see SQL Server Books Online.
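As a rough sketch of what such a script looks like, the following creates and starts a minimal server-side trace that captures completed batches (the file path is illustrative; the event and column ID values are documented in Books Online):

-- Create a server-side trace: event 12 = SQL:BatchCompleted;
-- columns 1 = TextData, 12 = SPID, 13 = Duration, 16 = Reads, 17 = Writes, 18 = CPU
DECLARE @TraceID int
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\MyTrace'
EXEC sp_trace_setevent @TraceID, 12, 1, 1
EXEC sp_trace_setevent @TraceID, 12, 12, 1
EXEC sp_trace_setevent @TraceID, 12, 13, 1
EXEC sp_trace_setevent @TraceID, 12, 16, 1
EXEC sp_trace_setevent @TraceID, 12, 17, 1
EXEC sp_trace_setevent @TraceID, 12, 18, 1
-- Set the trace status to 1 (start)
EXEC sp_trace_setstatus @TraceID, 1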
By now, you can probably think of a lot of different uses for SQL
Profiler trace information. For example, you can create a trace that
provides you with the details of which queries are being run on your
production servers. Using this information, you can pinpoint queries
that take a long time to run. Or, you could look for information
about the most active databases on a server that hosts many different
databases. And, the ability to replay trace files can be very helpful in
the performance optimization process since it allows you to record a
workload, make changes (such as adding indexes or changing server options), and then retest using the same, original workload. In this
way, you can be sure that your changes had an overall positive impact
on performance.
Overall, the information captured by SQL Profiler can provide some
extremely useful and relevant insight into the activity on your
production servers. Later in this chapter, we’ll look at how trace
information can be used to optimize indexing structure. With that in
mind, let’s move on to the important topic of indexing.
Creating Statistics
As we mentioned earlier in this chapter, one of Microsoft’s goals with
SQL Server 2000 was to greatly reduce the amount of administration
required to maintain database servers by creating a largely self-tuning
platform. An excellent example of this is the fact that SQL Server
2000 automatically creates, calculates, and maintains statistics for
database tables. You’ll probably find that the default database settings
will be best for most applications.
You can, however, override the default settings for the creation and
update of statistics in Enterprise Manager by viewing the properties
of a database. On the Options tab (see Figure 6-5), you can specify
whether you want to enable or disable the automatic creation and
update of statistics. As these operations require some server
resources, you might have special reasons for not performing them
automatically. For the vast majority of applications, however, the
automatic creation of statistics is highly recommended as it can
significantly increase performance, and it reduces administrative
overhead (that is, SQL Server won’t “forget” to keep statistics up-to-
date).
Figure 6-5: Viewing options for automatically creating and updating statistics
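If you prefer Transact-SQL to Enterprise Manager, the same database options can be set with ALTER DATABASE. A minimal sketch, using the sample Northwind database:

-- Enable automatic creation and updating of statistics for a database
ALTER DATABASE Northwind SET AUTO_CREATE_STATISTICS ON
ALTER DATABASE Northwind SET AUTO_UPDATE_STATISTICS ON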
Creating Indexes
As is the case for other types of database objects, there are several
different ways to create and manage indexes in SQL Server 2000. An
easy way to create indexes is by using the graphical tools in Enterprise
Manager. By right-clicking on a table within Enterprise Manager and
choosing Design Table, you’ll be able to access the properties of the
table. You can easily view, modify, and create indexes by selecting the
Indexes/Keys tab (see Figure 6-6). Note there are options for the
type of index (or key constraint), the columns to be contained in the
index, and details about the index characteristics.
Figure 6-6: Viewing index information in Enterprise Manager
Although Enterprise Manager is a useful tool for creating one or a
few indexes on specific tables, you will probably want to take
advantage of the flexibility and maintainability of creating indexes
within Transact-SQL code. The command syntax shown here can be
used to create indexes in Transact-SQL:
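(The listing below is an abridged form of the CREATE INDEX syntax; see Books Online for the complete set of WITH options.)

CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
    ON { table | view } ( column [ ASC | DESC ] [ ,...n ] )
    [ WITH [ PAD_INDEX ] [ [,] FILLFACTOR = fillfactor ] [ ,...n ] ]
    [ ON filegroup ]

For example, the following statement creates a nonclustered index on the City column of the Northwind Customers table:

-- Speed up lookups of customers by city
CREATE NONCLUSTERED INDEX IX_Customers_City
    ON Customers (City)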
Execution Plan
SQL Query Analyzer has the ability to return a graphical execution
plan of the steps that SQL Server had to perform to obtain the
results from a query (see Figure 6-8). Included are statistics related to
the resource usage for the query (such as CPU time and disk I/O),
along with the actual objects that were referenced and any indexes
that were used. By hovering the mouse over portions of the
execution plan, you will receive in-depth details about the specific
operation that was performed.
Figure 6-8: Viewing a graphical execution plan in Query Analyzer
If you’re new to SQL Server 2000 or to relational database systems,
the results from Query Analyzer might seem very complicated.
However, by gaining an understanding of the types of steps that are
performed (and how they can help you optimize queries), you’ll be
able to quickly identify inefficiencies within your queries. The
following is a partial list of the types of operations that you’ll find in
the execution plan, along with their usefulness:
· Table scans and clustered index scans: In general, table and index scans are inefficient operations. When the query optimizer chooses a scan, it is an indication that SQL Server had to read through all of the rows in a given table. This is much slower than using indexes to quickly pinpoint the data that you need. If significant time is spent on these steps, you should consider adding indexes to the objects that are being scanned.
· Index seeks and clustered index seeks: Seek operations are more efficient than scan operations. Generally, you'll find these steps to be much quicker than similar table scans or clustered index scans. If the objects that are being referenced in the queries contain covering indexes, the query optimizer is able to utilize these indexes using a seek operation.
· Parallelism: If you're running your queries on a machine that has multiple CPUs, you can see how SQL Server has decided to split up the job among different processors. This might be useful, for example, if you want to estimate the performance improvements from running on production servers (vs. those in the development environment).
· Number and size of rows processed: A handy feature of the execution plan is that it graphically displays arrows between steps, showing the data that's passed between various operations within a query. The wider the arrow, the more rows are being processed (and the greater the amount of memory that is used to perform the query). Very wide arrows might indicate that you have forgotten a JOIN condition, resulting in a very large list of results for specific steps.
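If you need plan information in text form (for example, to save it alongside a script), a text version of the estimated plan is available in any session through the SET SHOWPLAN_TEXT option. A quick sketch against the sample Northwind database:

-- Return the estimated plan as text instead of executing the query
SET SHOWPLAN_TEXT ON
GO
SELECT c.CompanyName, o.OrderDate
FROM Customers c JOIN Orders o ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '19980101'
GO
SET SHOWPLAN_TEXT OFF
GO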
A complete list of all of the types of information returned in the
execution plan is available in Books Online. Based on this
information, here are a few scenarios that you might encounter.
Scenario: You suspect that several queries are performing slowly because of missing indexes.
Solution: Use the execution plan to identify where table scans or clustered index scans are occurring.

Scenario: You want to ensure that the new indexes you created are being used by specific queries.
Solution: Use the execution plan to ensure that you see index seek and clustered index seek operations, instead of scans.

Scenario: A particular query takes a long time to execute. You suspect that one or more JOIN conditions are missing.
Solution: View the number of rows processed within the execution plan to find abnormally large numbers of rows that are being passed between steps.
Server Trace
A server trace provides a basic summary of the types of operations that SQL Server performed in order to execute a batch of SQL
statements. This feature can be very helpful in determining what
steps SQL Server is attempting to perform in order to complete the
commands that you are sending. Figure 6-9 provides an example of
the types of information that may be returned for a series of
Transact-SQL batch operations.
Figure 6-9: Viewing a server trace in Query Analyzer
Client Statistics
When you’re dealing with database performance, the most important
parameter to measure is what the end user sees. For example, it
doesn’t matter all that much what’s going on within SQL Server if
your users have to wait minutes for simple queries to execute. Query
Analyzer includes the ability to view details of the query results with
respect to the client. The statistics that are returned (as shown in
Figure 6-10) can be useful for determining the end-user experience
for certain types of workloads.
Figure 6-10: Viewing client statistics information in Query Analyzer
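Related timing and I/O numbers are also available to any session through the SET STATISTICS options. A small sketch (the query itself is just an example against the Northwind database):

-- Report elapsed/CPU time and logical/physical reads for each statement
SET STATISTICS TIME ON
SET STATISTICS IO ON
SELECT COUNT(*) FROM Orders WHERE ShippedDate IS NULL
SET STATISTICS IO OFF
SET STATISTICS TIME OFF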
Index and Statistics Management
In addition to the performance information returned by Query
Analyzer, database developers can click on actual database objects shown in the execution plan and correct performance problems without leaving Query Analyzer. For example, if a particular step in a query takes a long
time to execute because it is performing a table scan, a developer
could right-click on the icon of the table that is being scanned and
choose either Manage Indexes or Manage Statistics (see Figure 6-11).
Figure 6-11: Choosing options to manage indexes and statistics in Query Analyzer
From the resulting dialog box, the developer could quickly and easily
create indexes or statistics. Figure 6-12 provides an example of the
dialog box that is used to view and create indexes. The developer
could then rerun the query to measure changes in performance and
to ensure that the index is being properly used.
Figure 6-12: Viewing and creating indexes using Query Analyzer
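The equivalent work can also be scripted in Transact-SQL if you prefer scripts to dialog boxes. A brief sketch against the Northwind Orders table (the index and statistics names are illustrative):

-- Create an index, create column statistics, and then refresh the statistics
CREATE NONCLUSTERED INDEX IX_Orders_ShipCity ON Orders (ShipCity)
CREATE STATISTICS ST_Orders_Freight ON Orders (Freight)
UPDATE STATISTICS Orders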
In addition to the features available via the execution plan
functionality in Query Analyzer, you can choose to run the Index
Tuning Wizard for a specific SQL query (or group of queries) by
running the Index Tuning Wizard from within Query Analyzer.
Instead of using a Profiler-generated trace file or trace table, you’ll
have the option of choosing SQL Query Analyzer text as the source
of the workload to analyze.
To set these options for all databases on a server, you can use the
sp_configure stored procedure, as follows:
USE master
-- 'query governor cost limit' is an advanced option, so expose those first
EXEC sp_configure 'show advanced options', '1'
RECONFIGURE
EXEC sp_configure 'query governor cost limit', '1'
RECONFIGURE
-- List the current configuration values to verify the change
EXEC sp_configure
Before this setting will take effect, you need to stop and restart SQL
Server. Finally, the query governor cost limit can be set on a per-connection basis by using the following statement within a session:
SET QUERY_GOVERNOR_COST_LIMIT value
A value of 0 sets no limit on query cost. Any value greater than 0 specifies the maximum estimated cost, in seconds, that a query may have; queries that the optimizer estimates will exceed this limit are not allowed to start. Note that the same query might receive differing cost estimates, based on server hardware configurations.
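For example, the following prevents the current connection from starting anything the optimizer estimates at more than 300 seconds (the value here is arbitrary):

-- Refuse to start queries with an estimated cost above 300 seconds
SET QUERY_GOVERNOR_COST_LIMIT 300
SELECT COUNT(*) FROM Orders   -- subject to the limit from this point on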
You should be very careful when making system-wide changes, such as setting the query governor cost limit option. There may be legitimate reasons for long-running queries to execute, such as month-end reports or database maintenance tasks. When the query governor cost limit is set, these operations may not be allowed to run, causing other application problems. Be sure to thoroughly understand your application and workload before setting this option; doing so can often avoid a lot of angry phone calls.
People often refer to scalability when talking about the features of a database
platform. However, the term itself—let alone the actual measurements—is open
to interpretation. In general, we can define scalability as the ability of database
server software to take advantage of upgraded hardware. The law of diminishing
returns applies, though. The performance increase of adding a second CPU to a
server might not be the 100 percent that one would expect, due to the overhead
involved with splitting operations between processors. Many efforts have been
made to ensure that the architecture of SQL Server allows for scaling to large
databases and making use of advanced options, such as multiple gigabytes of
physical memory and parallel processing.
Based on this information, here are a few scenarios that you might
encounter.
Scenario: You want to measure the overall performance of your server, including CPU and memory utilization.
Solution: Use Performance Monitor and add the appropriate objects and counters. You can also create logs of performance activity for longer-term analysis.

Scenario: You want to find the worst-performing queries in your application.
Solution: Use SQL Profiler to monitor for the worst-performing queries.

Scenario: You want to find out why a specific query is taking a long time to execute.
Solution: Use the graphical execution plan in Query Analyzer to find out which steps are taking the most time to complete.
In general, it is recommended that you use all of the various tools and
features of SQL Server 2000 for improving overall system efficiency
and performance. After all, highly optimized queries alone won’t give
you the performance you want if the server settings are not
configured properly. It’s a constant juggling act to optimally manage
resources, but it’s one that can easily pay for itself in reduced
equipment costs and happier end users!
CERTIFICATION SUMMARY
In this chapter, we looked at the many different methods that can be
used for monitoring, analyzing, and optimizing the performance of
SQL Server 2000. Specifically, we examined the importance of
optimizing performance at the levels of the server, of the database,
and of specific queries. Through the use of tools such as SQL
Profiler and the Index Tuning Wizard, you can quickly and easily
identify and remove performance bottlenecks in your database
implementation. The end result should be an optimally performing
database server and applications.
LAB QUESTION
Your organization is an application service provider (ASP) that
provides a Web-based application to several hundred customers. Each of your 20 database servers hosts several customer
databases. For example, customers A, B, and C may be hosted on
DBServer1, and customers D, E, and F may be hosted on
DBServer2. Recently, several users have complained about slow
performance when performing specific operations within the
application. The problems seem to have occurred recently, and no
major changes (at least none that you’re aware of) have been
implemented on the production servers.
Several members of your organization (within and outside of the IT
team) have speculated about potential causes of the problems and
have recommended solutions. For example, the Executive team has
recommended that additional servers be added, although this decision
is based on a “hunch.” Those within the IT department claim that
the problem is application related (perhaps some extremely slow
queries or locking problems). Database and application developers, in
turn, are convinced that the problem lies in the configuration of the
servers since none of these issues have been seen in the development
environment. There’s only one thing that everyone agrees on: The
problem must be solved.
As the head of the database implementation team, it is ultimately
your responsibility to ensure that the application performs adequately
in production. You have performed some basic analysis of the
Web/application servers and have found that they are not the
bottleneck. Basic information seems to indicate that the performance
issues are occurring at the level of the database server. Therefore, you
need to isolate and fix the problem. How can you use the various
performance-monitoring tools and features in SQL Server 2000 to
determine the source of the problem?
LAB ANSWER
Isolating and troubleshooting performance problems can be difficult
and time-consuming. Based on the scenario presented, the actual
problem might lie in any of dozens of places. Worse yet, there could
be multiple problems or circumstances that could be contributing to
the overall problem. From a political angle, the finger-pointing and
guesses about the problem don’t help in resolving the issue.
So what’s the “correct” answer to this question? The single most
important aspect of troubleshooting any performance-related problem
is to take an organized and methodical approach towards isolating the
issues. Through an effective process of elimination, you should be
able to pinpoint the problem (or, at the very least, determine what the
problem is not). Based on that information, you can focus your efforts
on the more likely causes.
In this specific scenario, some suspect that the problem might exist at
the server level. You can prove/disprove that by using the Windows
NT/2000 Performance Monitor application. By measuring statistics
related to CPU, memory and network performance, you can
determine whether a hardware upgrade might help. You can also
determine whether the problems are occurring on all of the machines
or only a few of them. While you're working in Performance Monitor, you should be sure to measure SQL Server statistics. Using
SQL Server objects and counters, you can determine, for example,
the number of queries that are running on those machines, the
number of recompilations that are occurring, and how many user connections are actually in use.
If, after measuring server performance, you find that you need to dive
deeper into the problem, you can use SQL Profiler to measure actual
query load on specific servers. For example, if you know that certain
complex functionality is taking a long time, a SQL Profiler trace can
help you determine whether it’s a general problem, or if specific
stored procedures or views are the culprit.
Finally, assuming you isolate the problem to specific queries, stored
procedures, or views, you can use Query Analyzer to pinpoint the
slow steps in the process. For example, you might find that one or two missing or out-of-date indexes are causing drastic drops in performance.
Or, you might find that a few frequently called, inefficient views use significant server resources.
Through this organized approach, you can take a seemingly chaotic
situation and determine how to isolate and fix the issue. If all goes
well, you’ll quickly and easily resolve the issue. In the worst case, you
can rest assured that you’re taking an organized approach to
eliminating potential causes of the issue.