
1. SQL Server 2016 installation steps


2. How to check whether a database is online

Ans. Query sys.databases; the state_desc column reports ONLINE, OFFLINE, RESTORING, RECOVERING, SUSPECT, or EMERGENCY.
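For example, a quick check (replace 'YourDatabase' with the database you want to inspect):

```sql
-- state_desc is ONLINE when the database is healthy and accessible.
SELECT name, state_desc, user_access_desc
FROM sys.databases
WHERE name = 'YourDatabase';
```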

3. How to check blocking

 Ans. sp_who2 system stored procedure

 sys.dm_exec_requests DMV

 sys.dm_os_waiting_tasks DMV

 SQL Server Management Studio Activity Monitor

 SQL Server Management Studio Reports – Standard Reports – Activity – All Blocking Transactions

 SQL Server Profiler, after enabling the blocked process report: sp_configure 'blocked process threshold', 20
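A minimal sketch of finding blocked sessions with sys.dm_exec_requests (blocking_session_id is non-zero for a blocked request):

```sql
-- Requests currently blocked, with the session that is blocking them.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       DB_NAME(r.database_id) AS database_name
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
```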

4. How to check for long-running queries in SQL

SQL Server 2005 keeps a lot of good information about this kind of thing in the dynamic management views. Below are the two main queries I use to find slow-running application queries in our systems.

Queries taking longest elapsed time:

SELECT TOP 100
    qs.total_elapsed_time / qs.execution_count / 1000000.0 AS average_seconds,
    qs.total_elapsed_time / 1000000.0 AS total_seconds,
    qs.execution_count,
    SUBSTRING(qt.text, qs.statement_start_offset / 2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
              ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS individual_query,
    o.name AS object_name,
    DB_NAME(qt.dbid) AS database_name
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
WHERE qt.dbid = DB_ID()
ORDER BY average_seconds DESC;

Queries doing most I/O:

SELECT TOP 100
    (total_logical_reads + total_logical_writes) / qs.execution_count AS average_IO,
    (total_logical_reads + total_logical_writes) AS total_IO,
    qs.execution_count AS execution_count,
    SUBSTRING(qt.text, qs.statement_start_offset / 2 + 1,
        (CASE WHEN qs.statement_end_offset = -1
              THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
              ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS individual_query,
    o.name AS object_name,
    DB_NAME(qt.dbid) AS database_name
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
WHERE qt.dbid = DB_ID()
ORDER BY average_IO DESC;

Once you see the queries that have the longest elapsed time or most I/O, you can then look at the execution plans of those
particular queries to see what inefficiencies are there, and look for possible places to improve indexes or even rewrite a query
using a different approach.

The easiest way:

1. Activity Monitor

2. Server Standard Reports

3. Database Standard Reports
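For what is running right now (rather than aggregated history), a quick sketch against sys.dm_exec_requests:

```sql
-- Currently executing requests, longest-running first.
SELECT r.session_id,
       r.start_time,
       r.status,
       r.total_elapsed_time / 1000.0 AS elapsed_seconds,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.total_elapsed_time DESC;
```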

5. Latest service pack in SQL Server Enterprise\Standard Edition

SELECT @@VERSION;

SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,  -- e.g. 9.00.368.***
    SERVERPROPERTY('ProductLevel')   AS ProductLevel,    -- e.g. SP2
    SERVERPROPERTY('Edition')        AS Edition;         -- e.g. Enterprise Edition

6. How to recover a suspect database. (Ans. DBCC CHECKDB)

1. Paste the following SQL script into a New Query page, replacing [YourDatabase]
with the name of your database:

EXEC sp_resetstatus [YourDatabase];

ALTER DATABASE [YourDatabase] SET EMERGENCY;

DBCC CHECKDB ([YourDatabase]);

ALTER DATABASE [YourDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKDB ([YourDatabase], REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE [YourDatabase] SET MULTI_USER;

2. Click Execute.
Primary reasons why a SQL database is marked as Suspect
When SQL Server starts up, it attempts to obtain an exclusive lock on the database's device file. If the
device file is being used by another process, or if it is found missing, SQL Server starts
displaying errors. Possible reasons behind such errors are:

1. The system failed to open the device where the data or the log file resides.
2. Cannot find the file specified during the creation or opening of the physical device.
3. SQL server crashed or restarted in the middle of a transaction thus corrupting the transactions
log.
4. Cannot access data or log file while coming online, because of the installed antivirus.
5. The database server was shut down improperly.
6. Lack of Disk Space.
7. SQL cannot complete a rollback or roll forward operation.
8. Database files are being held by the operating system, third-party backup software, etc.
Steps to fix the SQL Server database Suspect mode error
Here are the steps to change a SQL database from Suspect mode back to normal mode:

1. Open SQL Server Management Studio and connect to your database


2. Select the New Query option
3. Turn off the suspect flag on the database and set it to EMERGENCY
EXEC sp_resetstatus 'db_name';
ALTER DATABASE db_name SET EMERGENCY
4. Perform a consistency check on the suspect database
DBCC CHECKDB ('database_name')
5. Bring the database into Single User mode and roll back the previous transactions
ALTER DATABASE database_name SET SINGLE_USER WITH ROLLBACK IMMEDIATE
6. Take a complete backup of the database
7. Attempt the database repair, allowing some data loss
DBCC CHECKDB ('database_name', REPAIR_ALLOW_DATA_LOSS)
8. Bring the database back into Multi-User mode
ALTER DATABASE database_name SET MULTI_USER
9. Refresh the database server and verify the connectivity of the database
Ideally, after these steps have been executed, users should be able to connect to the database
smoothly. In the case of any data loss, you'll have the database backup to restore from (step 6).
7. Repair option, when to recover index. Repair rebuild option

8. Diff between Rebuild and Reorganize, and what thresholds apply


Index Rebuild : This process drops the existing index and recreates it. It also locks the table, though we
can now rebuild online. A rebuild also updates statistics.
USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REBUILD;
GO

Index Reorganize : This process physically reorganizes the leaf nodes of the index.
USE AdventureWorks;
GO
ALTER INDEX ALL ON Production.Product REORGANIZE;
GO

Recommendation: An index should be rebuilt when fragmentation is greater than 40%, and
reorganized when fragmentation is between 10% and 40%. The rebuild process uses more CPU and
locks database resources. The SQL Server Developer and Enterprise editions have the ONLINE option, which
can be turned on when an index is rebuilt. ONLINE keeps the index available during the rebuild.
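To decide between the two, check fragmentation first; a sketch using sys.dm_db_index_physical_stats, applying the thresholds above:

```sql
-- Fragmentation per index in the current database; LIMITED mode is cheapest.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 40 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent >= 10 THEN 'REORGANIZE'
            ELSE 'OK'
       END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.index_id > 0;  -- skip heaps
```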

9. Steps to change SQL Server Port Number

Please follow the steps below to change the fixed TCP port for the SQL Server:

1. Stop "SafeCom Service", for which the SQL Server port should be changed.
2. On the SQL Server open "SQL Server Configuration Manager".
3. In the left pane expand "SQL Server Network Configuration" and click on "Protocols for SQL-
INSTANCE-NAME" the SafeCom databases are running on.
(Default instance name for SQL Server is MSSQLSERVER. For Slaves default is
SAFECOMEXPRESS)
4. In the right pane right click on "TCP/IP > Properties".
5. Click on the "IP Addresses" Tab and scroll down to section "IPAll".
6. Clear the "TCP Dynamic Ports" field.
7. Fill the "TCP Port" field with the TCP port which should be used for the SQL Connection.
(default TCP Port: 1433)
8. Click "OK". If settings have been changed you will be prompted to restart the SQL Server in order to
apply the changed port settings.
9. Close "SQL Server Configuration Manager".
10. Start the SafeCom Service.
Make sure to follow KB "How to connect with non-default SQL TCP/IP port in SafeCom" (KB 24852).
This will be needed if you are using a non-default SQL TCP port and you are not running the SQL
Server Browser.
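To verify which port a connection is actually using, a quick check from T-SQL:

```sql
-- Shows the TCP port of the current connection (NULL for shared memory).
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```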

10. Always On
11. Log Shipping process, DC-DR drill, reverse log shipping, TUF file (transaction undo file), WRK file

12. TempDB optimization

Problem
In SQL Server 2005, TempDB has taken on some additional responsibilities. As such, some of the
best practices have changed, and so has the necessity to follow these best practices on a wider
scale. TempDB has been left at its default configuration in many of our SQL
Server 2000 installations. Unfortunately, these configurations are not necessarily ideal in many
environments. With some of the shifts in responsibilities in SQL Server 2005 from the user-defined
databases to TempDB, what steps should be taken to ensure the SQL Server TempDB database is
properly configured?

Solution
In an earlier tip, we discussed sizing (Properly Sizing the TempDB Database) the TempDB database
properly. The intention of that tip was to determine the general growth and usage of the database in
order to determine the overall storage needs. In this tip we want to take a broader look at how
TempDB can be optimized to improve the overall SQL Server performance.

What is TempDB responsible for in SQL Server 2005?

 Global (##temp) or local (#temp) temporary tables, temporary table indexes, temporary stored
procedures, table variables, tables returned in table-valued functions or cursors.
 Database Engine objects to complete a query such as work tables to store intermediate results
for spools or sorting from particular GROUP BY, ORDER BY, or UNION queries.
 Row versioning values for online index processes, Multiple Active Result Sets (MARS)
sessions, AFTER triggers and index operations (SORT_IN_TEMPDB).
 DBCC CHECKDB work tables.
 Large object (varchar(max), nvarchar(max), varbinary(max), text, ntext, image, xml) data type
variables and parameters.

What are some of the best practices for TempDB?

 Do not change collation from the SQL Server instance collation.


 Do not change the database owner from sa.
 Do not drop the TempDB database.
 Do not drop the guest user from the database.
 Do not change the recovery model from SIMPLE.
 Ensure the disk drives TempDB resides on have RAID protection i.e. 1, 1 + 0 or 5 in order to
prevent a single disk failure from shutting down SQL Server. Keep in mind that if TempDB is
not available then SQL Server cannot operate.
 If SQL Server system databases are installed on the system partition, at a minimum move the
TempDB database from the system partition to another set of disks.
 Size the TempDB database appropriately. For example, if you use the SORT_IN_TEMPDB
option when you rebuild indexes, be sure to have sufficient free space in TempDB to store
sorting operations. In addition, if you are running into insufficient space errors in TempDB, be
sure to determine the culprit and either expand TempDB or re-code the offending process.
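Moving or resizing TempDB files is done with ALTER DATABASE; a sketch, where the logical file name and target path are assumptions (check yours with sys.master_files; a file move takes effect after a restart):

```sql
-- Find the current logical names and paths first (tempdb is database_id 2).
SELECT name, physical_name, size
FROM sys.master_files
WHERE database_id = 2;

-- Move the primary data file to a dedicated drive (illustrative path).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf', SIZE = 4096MB);
```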
Where can I find additional information related to TempDB best practices?

 Check out these articles in SQL Server 2005 Books Online:


o Optimizing tempdb Performance
o Capacity Planning for tempdb
o Troubleshooting Insufficient Disk Space in tempdb
o tempdb Database
o tempdb and Index Creation

Next Steps

 Based on this information, take a look at your TempDB configurations across your SQL Server
environment and determine what changes are needed.
 As you spec out new machines, be sure to keep TempDB in mind. If you suspect TempDB
should have its own disks, be sure to account for those as you purchase and/or configure your
disk drives.
 If you have a standard SQL Server deployment checklist, be sure that it includes the items
from this tip that make sense in your environment.
 Check out these related tips:
o SQL Server System Databases
o Interview Questions - SQL Server System Databases
o Properly Sizing the TempDB Database

Every SQL Server instance has a shared database named tempdb that is for use by temporary objects.
Because there is only one tempdb database per instance, it often proves to be a bottleneck for those
systems that make heavy usage of tempdb. Typically, this happens because of PAGELATCH, in-
memory latch contention on the allocation bitmap pages inside of the data files. The allocation bitmap
pages are the page free space (PFS), global allocation map (GAM), and shared global allocation map
(SGAM) pages in the database. The first PFS page occupies PageID 1 of the database, the first GAM
page occupies PageID 2, and the first SGAM page occupies PageID 3 in the database. After the first
page, the PFS pages repeat every 8088 pages inside of the data file, the GAM pages repeat every
511,232 pages (every 3994MB known as a GAM interval), and the SGAM pages repeat every
511,232 + 1 pages in the database.

When PAGELATCH contention exists on one of the allocation bitmap pages in the database, it is
possible to reduce the contention on the in-memory pages by adding additional data files, with the
same initial size and auto-growth configuration. This works because SQL Server uses a round-robin,
proportional fill algorithm to stripe the writes across the data files. When multiple data files exist for a
database, all of the writes to the files are striped to those files, with the writes to any particular file
based on the proportion of free space that the file has to the total free space across all of the files:
This means that writes are proportionally distributed to the files according to their free space, to
ensure that they fill at the same time, irrespective of their size. Each of the data files has its own set
of PFS, GAM, and SGAM pages, so as the writes move from file to file the page allocations occur
from different allocation bitmap pages, spreading the work out across the files and reducing the
contention on any individual page.

Note: It is possible to have PAGELATCH contention in tempdb that is not related to the allocation
bitmap pages, typically on one of the system tables such as sysschobjs, and in these cases adding
additional files will not help in reducing the in-memory latch contention. The system table contention
occurs when you are creating and destroying objects so rapidly that they don't get cached by the tempdb
metadata cache. The system table exists once for the database, not once per file, so adding files
won't alleviate the contention.

Existing Recommendations
There are several different published suggestions for calculating the number of files used by tempdb
for the best performance. The SQL Server Customer Advisory Team (SQLCAT) team recommends
that tempdb should be created with one file per physical processor core, and this tends to be one of
the most commonly quoted configuration methods for tempdb. While this recommendation is founded
in practical experience, it is important to keep in mind the types of environments in which the SQLCAT
team typically works, which are typically the highest-volume, largest-throughput environments in the
world, and therefore atypical of the average SQL Server environment. So while this
recommendation might prevent allocation contention in tempdb, it is probably overkill for most new
server implementations today. Paul Randal has written about this in the past in his blog post A SQL
Server DBA myth a day: (12/30) tempdb should always have one data file per processor core where
he suggests a figure of ¼ to ½ the number of cores in the server as a good starting point. This has
typically been the configuration that I have followed for a number of years for setting up new servers,
and I made a point of then monitoring the allocation bitmap contention of tempdb on the actual
workload to figure out if it was necessary to increase the number of files further.

At PASS Summit 2011, Bob Ward, a Senior Escalation Engineer in Product Support, presented a
session on tempdb and some of the changes that were coming in SQL Server 2012. As a part of this
session Bob recommended that for servers with eight CPUs or less, start off with one file per CPU for
tempdb. For servers with more than eight CPUs Bob recommended to start off with eight tempdb data
files and then monitor the system to determine if PAGELATCH contention on the allocation bitmaps
was causing problems or not. If allocation contention continues to exist with the eight files, Bob’s
recommendation was to increase the number of files by four and then monitor the server again,
repeating the process as necessary until the PAGELATCH contention is no longer a problem for the
server. To date, these recommendations make the most sense from my own experience and they
have been what we’ve recommended at SQLskills since Bob’s session at PASS.

Tracking tempdb contention


Before SQL Server 2012, the best way to track allocation contention in tempdb was to query
sys.dm_os_waiting_tasks for PAGELATCH waits and then parse out the resource_description
column to identify the database_id, file_id, and page_id of the resource being waited on. Robert Davis
first showed an example of how to do this in his blog post Breaking Down TempDB Contention;
since then, the technique has evolved into the version shown in Listing 1.

SELECT
session_id,
wait_type,
wait_duration_ms,
blocking_session_id,
resource_description,
ResourceType = CASE
WHEN PageID = 1 OR PageID % 8088 = 0 THEN 'Is PFS Page'
WHEN PageID = 2 OR PageID % 511232 = 0 THEN 'Is GAM Page'
WHEN PageID = 3 OR (PageID - 1) % 511232 = 0 THEN 'Is SGAM Page'
ELSE 'Is Not PFS, GAM, or SGAM page'
END
FROM ( SELECT
session_id,
wait_type,
wait_duration_ms,
blocking_session_id,
resource_description,
CAST(RIGHT(resource_description, LEN(resource_description)
- CHARINDEX(':', resource_description, 3)) AS INT) AS PageID
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGE%LATCH_%'
AND resource_description LIKE '2:%'
) AS tab;

Listing 1: Tracking tempdb contention with sys.dm_os_waiting_tasks

The problem with this solution is that you have to be constantly polling the sys.dm_os_waiting_tasks
DMV to catch the contention. If the contention is transient, then you may miss it altogether or only
capture some occurrences.

Tracking allocation contention with Extended Events


Extended Events were introduced in SQL Server 2008. In SQL Server 2012 the number of events,
and the information that they produce, has expanded significantly. Two of the events that were
introduced in SQL Server 2008 were the sqlserver.latch_suspend_begin and
sqlserver.latch_suspend_end events. These events fire when a latch wait occurs inside the database
engine. However, in SQL Server 2008 they are of limited use because they don't provide the
duration, database_id, file_id, and page_id associated with the latch wait. In SQL Server 2012, these
columns were added to the sqlserver.latch_suspend_end event, making it possible to predicate the
event firing to track only allocation contention inside of tempdb. Listing 2 shows the beginning of the
Extended Event session that we are going to build on throughout this section to track allocation
contention inside of tempdb.

IF EXISTS(SELECT *
FROM sys.server_event_sessions
WHERE name = 'SQLskills_MonitorTempdbContention')
DROP EVENT SESSION [SQLskills_MonitorTempdbContention] ON SERVER;
GO
CREATE EVENT SESSION SQLskills_MonitorTempdbContention
ON SERVER
ADD EVENT sqlserver.latch_suspend_end
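The published session goes on to add a predicate and a target; a hedged sketch of how it might continue. The predicate and target here are illustrative simplifications: tempdb is database_id 2, and only the first PFS/GAM/SGAM pages (1, 2, 3) are filtered, not the repeating intervals deeper in the file.

```sql
CREATE EVENT SESSION SQLskills_MonitorTempdbContention
ON SERVER
ADD EVENT sqlserver.latch_suspend_end
    (WHERE (database_id = 2
            AND (page_id = 1 OR page_id = 2 OR page_id = 3)))
ADD TARGET package0.ring_buffer;
GO

-- Start the session so it begins collecting latch waits.
ALTER EVENT SESSION SQLskills_MonitorTempdbContention
ON SERVER STATE = START;
```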

13. Migration Activity

14. Replication concept. Can the Subscriber and Distributor be on the same instance?

15. Publisher and Subscriber recovery models can be different


Replication is not dependent on the recovery model to function. The publisher can be in Simple mode, because
SQL Server will not allow transactions to be truncated from the publisher's log until they have been transferred to the distributor.

Replication doesn't require any particular recovery model on any of its databases (publisher, distributor, subscriber).
Even in Simple recovery, log records marked for replication are retained until the log reader has processed them.
There is a myth that for replication to work properly the databases always have to be in Full recovery
mode. That is not at all true.

First, a short overview of how replication works.

The snapshot agent creates a snapshot of the Publisher, which is then taken up by the distribution agent to
apply the initial data and any schema changes. The log reader agent then reads the log records which are
marked for replication and writes the transactions to the distributor, and the distribution agent replicates
them over to the Subscriber.

So when a checkpoint occurs, it will skip those records which are marked for replication. Once the
distribution agent delivers the records to the Subscriber, the transactions which were previously
"Marked for replication" are marked as "Replicated" by the log reader agent.

When the next checkpoint occurs, these transactions will also be truncated. So it is not necessary for
the recovery model to be Full for all this to happen, as logging is done
even in Simple recovery mode and is maintained until the next checkpoint occurs.

But you have to be careful when performing the following actions if your database is in Simple
recovery model and is part of replication, as the output of these actions will not be replicated:

 CREATE INDEX
 TRUNCATE TABLE
 BULK INSERT
 BCP
 SELECT . . . INTO

The reason? Well, the replication engine will not be able to pick up these changes, as they will
only log page allocations and de-allocations. You won't have to worry about schema changes, as those
will be replicated even though the database is in Simple recovery model.

So the ideal strategy of Recovery model for setting up a replication environment from my point of view
would be

1) Publisher database be in Simple recovery mode.

2) Subscriber be in a Full recovery mode.

Some people might argue that if the Publisher is set to Simple then t-log backups won't be possible.

Well, t-log backups on the Publisher wouldn't be of much use anyway, even in Full recovery model, because log
backups won't truncate records still marked as "Marked for replication", i.e. records which still
haven't been replicated to the Subscribers.

So it is better to have the Subscriber in Full recovery model and set up t-log backups there, which can save
a lot of log space on the Publisher.
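Checking the recovery model of each database is straightforward:

```sql
-- recovery_model_desc is FULL, SIMPLE, or BULK_LOGGED.
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;
```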

16. Recovery Models


17. Server Slowness – how to check?

18. Performance Tuning. Index Scan – how will you remove it?

19. Clustered Index and Non-Clustered Index

20. How to find missing indexes

SELECT * FROM sys.dm_db_missing_index_details;
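The details view is more useful joined to the group stats, which estimate the benefit of each suggested index. A sketch follows; the improvement_measure formula is a common heuristic, not an official metric:

```sql
-- Missing-index suggestions ranked by an estimated-benefit heuristic.
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact,
       s.user_seeks * s.avg_total_user_cost * (s.avg_user_impact / 100.0)
           AS improvement_measure
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
  ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
  ON g.index_group_handle = s.group_handle
ORDER BY improvement_measure DESC;
```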

21. In-place \ Side-by-side Migration

22. Mirroring and its modes

23. Difference between Sync and Async

24. Witness from a different edition – how can we add it in mirroring?
