Ans. Use the system DMVs and built-in reports:
sys.dm_exec_requests DMV
sys.dm_os_waiting_tasks DMV
SQL Server Management Studio Reports – Standard Reports – Activity – All Blocking Transactions
Once you see the queries that have the longest elapsed time or the most I/O, you can then look at the execution plans of those
particular queries to identify inefficiencies, and look for possible places to improve indexes or even rewrite a query
using a different approach.
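One way to surface those queries is the sys.dm_exec_query_stats DMV joined to the query text and plan; a minimal sketch (the TOP count and the sort column are arbitrary choices, not prescribed by the text above):

```sql
-- Top 10 cached queries by total elapsed time, with text and plan.
SELECT TOP (10)
    qs.total_elapsed_time,
    qs.total_logical_reads,
    qs.execution_count,
    st.text       AS query_text,
    qp.query_plan -- open this column in SSMS to view the execution plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_elapsed_time DESC;
```

Sorting by total_logical_reads instead highlights the most I/O-heavy queries.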
1. Activity Monitor
SELECT
    SERVERPROPERTY('ProductVersion'),  -- e.g. 9.00.368.***
    SERVERPROPERTY('ProductLevel'),    -- e.g. SP2
    SERVERPROPERTY('Edition');         -- e.g. Enterprise Edition
1. Paste the following SQL script into your New Query page replacing [YourDatabase]
with the name of your database.
EXEC sp_resetstatus [YourDatabase];
DBCC CHECKDB ([YourDatabase]);
2. Click Execute
Primary Reasons why SQL database is marked as Suspect
When SQL Server starts up, it attempts to obtain an exclusive lock on the database's device file. If the
device file is being used by another process, or if it is found missing, SQL Server starts
raising errors. Possible reasons behind such errors are:
1. The system failed to open the device where the data or the log file resides.
2. Cannot find the file specified during the creation or opening of the physical device.
3. SQL Server crashed or restarted in the middle of a transaction, thus corrupting the transaction
log.
4. The data or log file cannot be accessed while the database is coming online, because installed antivirus software is holding it.
5. The database server was shut down improperly.
6. Lack of Disk Space.
7. SQL cannot complete a rollback or roll forward operation.
8. Database files are being held by the operating system, third-party backup software, etc.
Steps to Fix the SQL Server Database Suspect Mode Error
Here are the steps to change a SQL database from suspect mode back to normal mode:
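A commonly published sequence for this is sketched below; it is not an official procedure, restoring from a known-good backup is always preferable, and REPAIR_ALLOW_DATA_LOSS can discard data:

```sql
-- Replace [YourDatabase] with the name of the suspect database.
ALTER DATABASE [YourDatabase] SET EMERGENCY;            -- make the database readable for diagnosis
DBCC CHECKDB ([YourDatabase]);                          -- assess the corruption first
ALTER DATABASE [YourDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ([YourDatabase], REPAIR_ALLOW_DATA_LOSS);  -- last resort: may lose data
ALTER DATABASE [YourDatabase] SET MULTI_USER;           -- return to normal access
```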
Index Reorganize : This process physically reorganizes the leaf nodes of the index.
USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REORGANIZE
GO
Recommendation: an index should be rebuilt when index fragmentation is greater than 40%, and
reorganized when fragmentation is between 10% and 40%. The index rebuild process uses more CPU and it
locks the database resources. The Developer and Enterprise editions of SQL Server offer the ONLINE option, which
can be turned on when an index is rebuilt and keeps the index available during the rebuild.
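Fragmentation can be measured with sys.dm_db_index_physical_stats before choosing between the two operations; a sketch against the same table, using the thresholds above:

```sql
USE AdventureWorks;
GO
-- Suggest REBUILD or REORGANIZE per index on Production.Product.
SELECT
    i.name AS index_name,
    ps.avg_fragmentation_in_percent,
    CASE
        WHEN ps.avg_fragmentation_in_percent > 40 THEN 'REBUILD'
        WHEN ps.avg_fragmentation_in_percent >= 10 THEN 'REORGANIZE'
        ELSE 'No action'
    END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Production.Product'),
                                    NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
```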
Please follow the steps below to change the fixed TCP port for the SQL Server:
1. Stop "SafeCom Service", for which the SQL Server port should be changed.
2. On the SQL Server open "SQL Server Configuration Manager".
3. In the left pane expand "SQL Server Network Configuration" and click on "Protocols for SQL-
INSTANCE-NAME" the SafeCom databases are running on.
(Default instance name for SQL Server is MSSQLSERVER. For Slaves default is
SAFECOMEXPRESS)
4. In the right pane, right-click "TCP/IP" and select "Properties".
5. Click on the "IP Addresses" Tab and scroll down to section "IPAll".
6. Clear the "TCP Dynamic Ports" field.
7. Fill the "TCP Port" field with the TCP port which should be used for the SQL Connection.
(default TCP Port: 1433)
8. Click "OK". If settings have been changed, you will be prompted to restart the SQL Server in order to
apply the changed port settings.
9. Close "SQL Server Configuration Manager"
10. Start SafeCom Service.
Make sure to follow KB "How to connect with non-default SQL TCP/IP port in SafeCom" / KB 24852.
This will be needed if you are using a non-default SQL TCP port and if you are not running the SQL
Server Browser.
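After the restart, you can confirm that the instance is listening on the new fixed port before starting the SafeCom service. A minimal sketch; the host name and port below are placeholders for your environment:

```python
import socket

def sql_port_open(host: str, port: int = 1433, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace with your SQL Server host and the port set in step 7):
# sql_port_open("sqlserver01", 1433)
```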
10. Always On
11. Log Shipping process, DC-DR Drill, Reverse Log Shipping, TUF file, Work file
Problem
In SQL Server 2005, TempDB has taken on some additional responsibilities. As such, some of the
best practices have changed, and so has the necessity to follow these best practices on a wider
scale. In many of our SQL Server 2000 installations, TempDB has been left at its default
configuration. Unfortunately, these configurations are not necessarily ideal in many
environments. With some of the shifts in responsibilities in SQL Server 2005 from the user-defined
databases to TempDB, what steps should be taken to ensure the SQL Server TempDB database is
properly configured?
Solution
In an earlier tip (Properly Sizing the TempDB Database), we discussed sizing the TempDB database
properly. The intention of that tip was to determine the general growth and usage of the database in
order to determine the overall storage needs. In this tip we want to take a broader look at how
TempDB can be optimized to improve the overall SQL Server performance.
TempDB is used to store:
o Global (##temp) or local (#temp) temporary tables, temporary table indexes, temporary stored
procedures, table variables, tables returned in table-valued functions, and cursors.
o Database Engine objects needed to complete a query, such as work tables that store intermediate
results for spools or sorting from particular GROUP BY, ORDER BY, or UNION queries.
o Row versioning values for online index processes, Multiple Active Result Sets (MARS)
sessions, AFTER triggers, and index operations (SORT_IN_TEMPDB).
o DBCC CHECKDB work tables.
o Large object (varchar(max), nvarchar(max), varbinary(max), text, ntext, image, xml) data type
variables and parameters.
Next Steps
Based on this information, take a look at your TempDB configurations across your SQL Server
environment and determine what changes are needed.
As you spec out new machines, be sure to keep TempDB in mind. If you suspect TempDB
should have its own disks, be sure to account for those as you purchase and/or configure your
disk drives.
If you have a standard SQL Server deployment checklist, be sure that it includes the items
from this tip that make sense in your environment.
Check out these related tips:
o SQL Server System Databases
o Interview Questions - SQL Server System Databases
o Properly Sizing the TempDB Database
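As a starting point for that review, the current TempDB file layout can be queried directly (a sketch; the size arithmetic converts 8 KB pages to MB):

```sql
-- Current tempdb data and log files, sizes, and growth settings.
SELECT
    name,
    physical_name,
    type_desc,
    size * 8 / 1024 AS size_mb,
    CASE WHEN is_percent_growth = 1
         THEN CAST(growth AS varchar(10)) + '%'
         ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
    END AS growth_setting
FROM tempdb.sys.database_files;
```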
Every SQL Server instance has a shared database named tempdb that is for use by temporary objects.
Because there is only one tempdb database per instance, it often proves to be a bottleneck for those
systems that make heavy usage of tempdb. Typically, this happens because of PAGELATCH contention, in-
memory latch contention on the allocation bitmap pages inside of the data files. The allocation bitmap
pages are the page free space (PFS), global allocation map (GAM), and shared global allocation map
(SGAM) pages in the database. The first PFS page occupies PageID 1 of the database, the first GAM
page occupies PageID 2, and the first SGAM page occupies PageID 3 in the database. After the first
page, the PFS pages repeat every 8088 pages inside of the data file, the GAM pages repeat every
511,232 pages (every 3994MB known as a GAM interval), and the SGAM pages repeat every
511,232 + 1 pages in the database.
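The intervals above can be expressed as a small classification function (a Python illustration of the arithmetic only; the page IDs follow the PFS/GAM/SGAM spacing just described):

```python
def classify_page(page_id: int) -> str:
    """Classify a data-file page ID as PFS, GAM, or SGAM per the intervals above."""
    if page_id == 1 or page_id % 8088 == 0:
        return "PFS"
    if page_id == 2 or page_id % 511232 == 0:
        return "GAM"
    if page_id == 3 or (page_id - 1) % 511232 == 0:
        return "SGAM"
    return "data/other"
```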
When PAGELATCH contention exists on one of the allocation bitmap pages in the database, it is
possible to reduce the contention on the in-memory pages by adding additional data files, with the
same initial size and auto-growth configuration. This works because SQL Server uses a round-robin,
proportional fill algorithm to stripe the writes across the data files. When multiple data files exist for a
database, all of the writes to the files are striped to those files, with the writes to any particular file
based on the proportion of free space that the file has to the total free space across all of the files:
This means that writes are proportionally distributed to the files according to their free space, to
ensure that they fill at the same time, irrespective of their size. Each of the data files has its own set
of PFS, GAM, and SGAM pages, so as the writes move from file to file the page allocations occur
from different allocation bitmap pages, spreading the work out across the files and reducing the
contention on any individual page.
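Adding the extra files is a one-time ALTER DATABASE change; in this sketch, the file names, path, and sizes are placeholders and should match the size and growth of the existing tempdb data file:

```sql
-- Add three data files sized and grown identically to the existing tempdev file.
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf',
     SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdb3.ndf',
     SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdb4.ndf',
     SIZE = 1024MB, FILEGROWTH = 256MB);
```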
Note: It is possible to have PAGELATCH contention in tempdb that is not related to the allocation
bitmap pages, typically on one of the system tables such as syshobts, and in these cases adding
additional files will not help in reducing the in-memory latch contention. The system table contention
occurs when objects are created and destroyed so rapidly that they don't get cached by the tempdb
metadata cache. The system table exists once for the database, not once per file, so adding files
won't alleviate the contention.
Existing Recommendations
There are several different published suggestions for calculating the number of files used by tempdb
for the best performance. The SQL Server Customer Advisory Team (SQLCAT) team recommends
that tempdb should be created with one file per physical processor core, and this tends to be one of
the most commonly quoted configuration methods for tempdb. While this recommendation is founded
in practical experience, it is important to keep in mind the types of environments in which the SQLCAT
team typically works: typically the highest-volume, largest-throughput environments in the
world, and therefore atypical of the average SQL Server environment. So while this
recommendation might prevent allocation contention in tempdb, it is probably overkill for most new
server implementations today. Paul Randal has written about this in the past in his blog post A SQL
Server DBA myth a day: (12/30) tempdb should always have one data file per processor core where
he suggests a figure of ¼ to ½ the number of cores in the server as a good starting point. This has
typically been the configuration that I have followed for a number of years for setting up new servers,
and I made a point of then monitoring the allocation bitmap contention of tempdb on the actual
workload to figure out if it was necessary to increase the number of files further.
At PASS Summit 2011, Bob Ward, a Senior Escalation Engineer in Product Support, presented a
session on tempdb and some of the changes that were coming in SQL Server 2012. As a part of this
session Bob recommended that for servers with eight CPUs or less, start off with one file per CPU for
tempdb. For servers with more than eight CPUs Bob recommended to start off with eight tempdb data
files and then monitor the system to determine if PAGELATCH contention on the allocation bitmaps
was causing problems or not. If allocation contention continues to exist with the eight files, Bob’s
recommendation was to increase the number of files by four and then monitor the server again,
repeating the process as necessary until the PAGELATCH contention is no longer a problem for the
server. To date, these recommendations make the most sense from my own experience and they
have been what we’ve recommended at SQLskills since Bob’s session at PASS.
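Bob Ward's starting point can be sketched as a pair of helper functions (a Python illustration of the rule of thumb; SQL Server does not expose such an API):

```python
def initial_tempdb_data_files(cpu_count: int) -> int:
    """One tempdb data file per CPU for servers with eight CPUs or fewer;
    start with eight files on larger servers."""
    return cpu_count if cpu_count <= 8 else 8

def next_file_count(current_files: int) -> int:
    """If PAGELATCH allocation contention persists, add four more files
    and monitor again, repeating as necessary."""
    return current_files + 4
```

The T-SQL query that follows polls sys.dm_os_waiting_tasks to check whether allocation contention is actually occurring on a live system.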
SELECT
session_id,
wait_type,
wait_duration_ms,
blocking_session_id,
resource_description,
ResourceType = CASE
WHEN PageID = 1 OR PageID % 8088 = 0 THEN 'Is PFS Page'
WHEN PageID = 2 OR PageID % 511232 = 0 THEN 'Is GAM Page'
WHEN PageID = 3 OR (PageID - 1) % 511232 = 0 THEN 'Is SGAM Page'
ELSE 'Is Not PFS, GAM, or SGAM page'
END
FROM ( SELECT
session_id,
wait_type,
wait_duration_ms,
blocking_session_id,
resource_description,
CAST(RIGHT(resource_description, LEN(resource_description)
- CHARINDEX(':', resource_description, 3)) AS INT) AS PageID
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
AND resource_description LIKE '2:%'
) AS tab;
The problem with this solution is that you have to be constantly polling the sys.dm_os_waiting_tasks
DMV to catch the contention. If the contention is transient, then you may miss it altogether or only
capture some occurrences.
IF EXISTS (SELECT *
           FROM sys.server_event_sessions
           WHERE name = 'SQLskills_MonitorTempdbContention')
    DROP EVENT SESSION [SQLskills_MonitorTempdbContention] ON SERVER;
GO
CREATE EVENT SESSION SQLskills_MonitorTempdbContention
ON SERVER
ADD EVENT sqlserver.latch_suspend_end
    -- The predicate and target below are an assumed completion of this
    -- truncated snippet: filter to tempdb (database_id = 2) and bucket the
    -- contended page_ids in a histogram target.
    (WHERE (database_id = 2))
ADD TARGET package0.histogram
    (SET filtering_event_name = N'sqlserver.latch_suspend_end',
         source = N'page_id', source_type = 0);
GO
Replication doesn't require any particular recovery model on any of its databases (publisher, distributor, subscriber). With respect to
replication, the log is effectively handled as in simple recovery anyway, since the log is truncated without another backup running.
There is a myth that for replication to work properly the databases always have to be in Full recovery
mode. Well, that is not at all true.
A Snapshot Agent creates a snapshot of the Publisher, which is then taken up by the Distribution Agent to
apply any schema changes. The Log Reader Agent then replicates transactions to the distributor after reading
the log records which are marked for replication, and the Distribution Agent replicates them over to the
Subscriber.
So now when a checkpoint occurs, it will skip those records which are marked for replication. Once the
Distribution Agent delivers the records to the Subscriber, the transactions which were previously
"marked for replication" will be marked as "replicated" by the Log Reader Agent.
Now when the next checkpoint occurs, these transactions will also be truncated. So it is not necessary for
the recovery model to be Full for all this to happen, as logging is done
even in Simple recovery mode and the records are maintained until the next checkpoint occurs.
But you have to be careful when you are performing the following actions if your database is in the Simple
recovery model and is part of replication, as the output of these actions will not be replicated:
CREATE INDEX
TRUNCATE TABLE
BULK INSERT
BCP
SELECT. . .INTO
The reason? Well, the replication engine will not be able to pick up these changes, as these operations
log only page allocations and de-allocations. You won't have to worry about schema changes, as those
changes will be replicated even though the database is in the Simple recovery model.
So the ideal recovery-model strategy for setting up a replication environment, from my point of view,
would be: Simple recovery on the Publisher and Full recovery on the Subscriber.
Some people might argue that if the Publisher is set to Simple then t-log backups won't be possible.
Well, a t-log backup on the Publisher won't be of much use anyway, even in the Full recovery model, because log
backups cannot clear the records while they are still "marked for replication", i.e. records which still
haven't been replicated to the Subscribers.
So it is better to have the Subscriber in the Full recovery model and set up t-log backups there, which can save
a lot of log space on the Publisher.
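The strategy above, Simple recovery on the Publisher and Full on the Subscriber, can be applied with a short script; the database names here are placeholders:

```sql
-- Publisher database: Simple recovery (replication still retains
-- un-replicated log records until they are delivered).
ALTER DATABASE [PublisherDB] SET RECOVERY SIMPLE;

-- Subscriber database: Full recovery, with t-log backups scheduled here.
ALTER DATABASE [SubscriberDB] SET RECOVERY FULL;

-- Verify:
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name IN ('PublisherDB', 'SubscriberDB');
```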