
Performance Tuning:
===================
** 1. What is an index and what are its types?
*** 2. What is the difference between a clustered and a non-clustered index?
3. What is a covering index and a heap table?
**** 4. What are fill factor and fragmentation, and how do you find fragmentation?
*** 5. What is blocking and how do you identify it?
** 6. What process will you follow when blocking occurs?
* 7. Which DMVs will you use to identify blocking?
** 8. If multiple blockings occur, what will you do?
*** 9. What is a deadlock? Which events will you use in SQL Profiler to trace deadlock information?
* 10. What are locks and their types? How do you identify locking details?
11. What are the isolation levels?
* 12. What is a dirty read?
*** 13. What is the difference between rebuilding and reorganizing an index? Which queries will you use?
*** 14. What is updating statistics? Which query will you use?
** 15. What do you do when CPU utilization is high?
16. What do you do when memory utilization is high?
17. What is Profiler and how will you trace long-running queries?
** 18. What is a DMV? Explain any 5 DMVs you have used.
*** 19. What is DBCC CHECKDB?
**** 20. If the application team complains that a query is running slowly, what will you do?
21. What is Perfmon? Name any 10 counters.
* 22. Which counters are used for replication and mirroring?
23. What is Resource Governor?
24. What is a Contained Database? In which version of SQL Server was it introduced?
** 25. What is shrinking? Have you ever shrunk a data file? Does shrinking have any side effects?
* 26. How would you find open transactions in SQL Server?
27. What is CPU affinity?
How do I set the transaction isolation level when connecting to a SQL Server database?
To set the isolation level, issue a SET TRANSACTION ISOLATION LEVEL statement after you connect to SQL Server. The isolation level applies to the rest of that session, unless you explicitly change the level again.
In the SET TRANSACTION ISOLATION LEVEL statement, you must specify one of the following five isolation levels:
READ UNCOMMITTED: A query in the current transaction can read data modified within another transaction but not yet committed. The database engine does not issue shared locks when Read Uncommitted is specified, making this the least restrictive of the isolation levels. As a result, it's possible that a statement will read rows that have been inserted, updated, or deleted, but never committed to the database, a condition known as dirty reads. It's also possible for data to be modified by another transaction between issuing statements within the current transaction, resulting in nonrepeatable reads or phantom reads.
READ COMMITTED: A query in the current transaction cannot read data modified by another transaction that has not yet committed, thus preventing dirty reads. However, data can still be modified by other transactions between issuing statements within the current transaction, so nonrepeatable reads and phantom reads are still possible. The isolation level uses shared locking or row versioning to prevent dirty reads, depending on whether the READ_COMMITTED_SNAPSHOT database option is enabled. Read Committed is the default isolation level for all SQL Server databases.
REPEATABLE READ: A query in the current transaction cannot read data modified by another transaction that has not yet committed, thus preventing dirty reads. In addition, no other transactions can modify data being read by the current transaction until it completes, eliminating nonrepeatable reads. However, if another transaction inserts new rows that match the search condition in the current transaction, in between the current transaction accessing the same data twice, phantom rows can appear in the second read.
SERIALIZABLE: A query in the current transaction cannot read data modified by another transaction that has not yet committed. No other transaction can modify data being read by the current transaction until it completes, and no other transaction can insert new rows that would match the search condition in the current transaction until it completes. As a result, the Serializable isolation level prevents dirty reads, nonrepeatable reads, and phantom reads. However, it can have the biggest impact on performance, compared to the other isolation levels.
SNAPSHOT: A statement can use data only if it will be in a consistent state throughout the transaction. If another transaction modifies data after the start of the current transaction, the data is not visible to the current transaction. The current transaction works with a snapshot of the data as it existed at the beginning of that transaction. Snapshot transactions do not request locks when reading data, nor do they block other transactions from writing data. In addition, other transactions writing data do not block the current transaction for reading data. As with the Serializable isolation level, the Snapshot level prevents dirty reads, nonrepeatable reads, and phantom reads. However, it is susceptible to concurrent update errors.
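All five levels are set the same way. As a minimal sketch (the dbo.Orders table is hypothetical, and SNAPSHOT additionally requires the ALLOW_SNAPSHOT_ISOLATION database option to be ON):

```sql
-- Minimal sketch: reading under SNAPSHOT isolation.
-- Assumes ALLOW_SNAPSHOT_ISOLATION is ON for the database and that a
-- hypothetical dbo.Orders table exists.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- This read sees data as it existed when the transaction first accessed
    -- data; later committed changes from other sessions are not visible.
    SELECT OrderID, Status FROM dbo.Orders WHERE OrderID = 42;
COMMIT TRANSACTION;
```

The level stays in effect for the rest of the session unless another SET TRANSACTION ISOLATION LEVEL statement is issued.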
Update Statistics:
------------------
1. What are statistics and why do we need to update them?
> SQL Server statistics are a collection of distinct values in a specific table column or columns, collected by SQL Server by sampling table data.
> SQL Server statistics are created automatically by the Query Optimizer for indexes on tables or views when the index is created. Usually, additional statistics are not needed, nor do the existing ones require modification to achieve the best performance.
2. What command will you use to update statistics?
USE <DB Name>
EXEC sp_updatestats
or, to update a particular table or index, use the following query:
UPDATE STATISTICS <Table Name> <Index Name>
3. How do you find when statistics were last updated?
DBCC SHOW_STATISTICS
4. What are FULLSCAN and SAMPLE in UPDATE STATISTICS?
FULLSCAN: new statistics are created by scanning all table/view rows, so the number of rows sampled equals the number of table/view rows. For tables with a small number of rows, all rows are sampled even when this parameter is not specified.
UPDATE STATISTICS Person.Address WITH FULLSCAN
SAMPLE: the new statistics are created by sampling a specific number of table/view rows. Using SAMPLE 100 PERCENT gives the same results as using the FULLSCAN parameter. SAMPLE and FULLSCAN cannot be used in the same UPDATE STATISTICS statement.
UPDATE STATISTICS Person.Address WITH SAMPLE 10 PERCENT
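Related to question 3 above: besides DBCC SHOW_STATISTICS, the last update time can also be pulled from metadata with the STATS_DATE() function. A sketch against the AdventureWorks sample table Person.Address:

```sql
-- Sketch: last update time for each statistics object on Person.Address.
-- Run in the AdventureWorks sample database; substitute your own table name.
SELECT  s.name AS stats_name,
        STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM    sys.stats AS s
WHERE   s.object_id = OBJECT_ID('Person.Address');
```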
Fragmentation:
=============
What is Fragmentation? How to detect fragmentation and how to eliminate it?
------------------------------------------------------------------------
A. Storing data non-contiguously on disk is known as fragmentation. Before learning to eliminate fragmentation, you should have a clear understanding of the types of fragmentation. We can classify fragmentation into two types:
Internal Fragmentation: When records are stored non-contiguously inside the page, it is called internal fragmentation. In other words, internal fragmentation is said to occur if there is unused space between records in a page. This fragmentation occurs through the process of data modifications (INSERT, UPDATE, and DELETE statements) made against the table and, therefore, against the indexes defined on the table. As these modifications are not equally distributed among the rows of the table and indexes, the fullness of each page can vary over time. This unused space causes poor cache utilization and more I/O, which ultimately leads to poor query performance.
External Fragmentation: This occurs when the physical storage of pages and extents is not contiguous on disk. When the extents of a table are not physically stored contiguously on disk, switching from one extent to another causes extra disk head movement; this is called extent fragmentation.
How to detect fragmentation: We can get both types of fragmentation using the DMV sys.dm_db_index_physical_stats. The query is as follows:
SELECT OBJECT_NAME(object_id), index_id, index_type_desc, index_level,
avg_fragmentation_in_percent, avg_page_space_used_in_percent, page_count
FROM sys.dm_db_index_physical_stats
(DB_ID(N'AdventureWorksLT'), NULL, NULL, NULL, 'SAMPLED')
ORDER BY avg_fragmentation_in_percent DESC
avg_fragmentation_in_percent: This percentage value represents external fragmentation. For a clustered table and the leaf level of index pages, this is logical fragmentation; for a heap, this is extent fragmentation. The lower this value, the better. If this value is higher than 10%, some corrective action should be taken.
avg_page_space_used_in_percent: This average percentage use of pages represents internal fragmentation. The higher the value, the better. If this value is lower than 75%, some corrective action should be taken.
Reducing fragmentation in an index: There are three choices for reducing fragmentation, and we can choose one according to the percentage of fragmentation:
If avg_fragmentation_in_percent > 5% and < 30%, use ALTER INDEX REORGANIZE: this statement is a replacement for DBCC INDEXDEFRAG and reorders the leaf-level pages of the index in logical order. As this is an online operation, the index remains available while the statement is running.
If avg_fragmentation_in_percent > 30%, use ALTER INDEX REBUILD: this is a replacement for DBCC DBREINDEX and rebuilds the index online or offline.
In that case, we can also use the drop-and-recreate-index method.
Index Rebuild: This process drops the existing index and recreates it.
USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REBUILD
GO
To rebuild only a specific index:
USE <Database_Name>
GO
ALTER INDEX <Index_Name> ON <Table_Name> REBUILD
GO
The legacy equivalent is:
DBCC DBREINDEX ('table_name', 'index_name', fillfactor)
Index Reorganize: This process physically reorganizes the leaf nodes of the index.
USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REORGANIZE
GO
The legacy equivalent is:
DBCC INDEXDEFRAG ('database_name', 'table_name', 'index_name')
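The 5%/30% thresholds above can be turned into a survey query. A sketch, run against the current database, that recommends an action per index (thresholds and the 'SAMPLED' scan mode follow the guidance above):

```sql
-- Sketch: choose REORGANIZE vs REBUILD per index based on fragmentation.
SELECT  OBJECT_NAME(ps.object_id) AS table_name,
        i.name                    AS index_name,
        ps.avg_fragmentation_in_percent,
        CASE
            WHEN ps.avg_fragmentation_in_percent > 30 THEN 'ALTER INDEX ... REBUILD'
            WHEN ps.avg_fragmentation_in_percent > 5  THEN 'ALTER INDEX ... REORGANIZE'
            ELSE 'No action needed'
        END AS recommended_action
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ps
JOIN    sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE   i.name IS NOT NULL          -- skip heaps
ORDER BY ps.avg_fragmentation_in_percent DESC;
```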
SQL Server's Buffer Manager Measurements
========================================
Page Life Expectancy:
---------------------
The number of seconds a page will stay in the buffer pool without references.
The relevance of monitoring this counter cannot be overstated, as a low value for it indicates SQL Server memory pressure. The general guidance is that the value for this counter should be at least 300 for OLTP applications. SQL Server's CAT also indicated that values for this counter should never quickly drop by 50% or more.
SELECT [cntr_value]
FROM sys.dm_os_performance_counters
WHERE
[object_name] LIKE '%Buffer Manager%'
AND [counter_name] = 'Page life expectancy'
Buffer Cache hit ratio:
-----------------------
The percentage of pages found in the buffer cache without having to read from disk. For OLTP applications it should equal or exceed 98%.
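Unlike Page Life Expectancy, this counter is a ratio and must be computed by dividing the raw counter by its "base" counter in sys.dm_os_performance_counters. A sketch:

```sql
-- Sketch: compute Buffer cache hit ratio (%) from the raw counter pair.
SELECT CAST(100.0 * a.cntr_value / b.cntr_value AS DECIMAL(5, 2)) AS hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
  ON b.[object_name] = a.[object_name]
WHERE a.[object_name] LIKE '%Buffer Manager%'
  AND a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
```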
Page reads/sec, Page writes/sec, and Lazy writes/sec Measurements:
------------------------------------------------------------------
Continuing with Buffer Cache counters, we have the number of database pages read and written per second, together with Lazy writes/sec. Taken together, these three measurements may give a good indication of memory contention (pressure) and possible indexing issues (specific SQL Server counters that may reflect database design issues will be detailed in a separate article).
SQL Server's Memory Manager Measurements:
=========================================
Total Server Memory and Target Server Memory:
---------------------------------------------
Total Server Memory is defined as the amount of memory the server has committed using the memory manager, while Target Server Memory is defined as the ideal amount of memory the server can consume.
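Both values can be read from the same DMV used earlier for Page Life Expectancy; when Total approaches Target, the server has acquired all the memory it wants. A sketch:

```sql
-- Sketch: compare Total vs Target Server Memory (both reported in KB).
SELECT counter_name, cntr_value AS kb
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');
```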
Operating System's Memory Measurements
======================================
Available Mbytes
----------------
The Available Mbytes counter (and its more granular kin, Available Bytes and Available KBytes) is a system-level counter that indicates how much free memory is available in the system. It is part of the Memory performance counter category.
