
1-Deterministic and non-Deterministic function

Deterministic – always return the same result any time they are called with a specific set of input values, given the same state of the database. Ex: all aggregate functions (SUM, AVG, COUNT) and POWER.

Non-Deterministic – may return different results each time they are called, even with the same set of input values and the same state of the database. Ex: GETDATE, CURRENT_TIMESTAMP.

The RAND function behaves as both a deterministic and a non-deterministic function:

SELECT RAND(1) with a seed value gives the same result every time it is executed, so it is deterministic.

SELECT RAND() without a seed value gives a different result every time it is executed, so it is non-deterministic.
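A quick sketch of the RAND behaviour described above:

```sql
-- Seeded: returns the same value on every execution (deterministic)
SELECT RAND(1) AS SeededRand;

-- Unseeded: returns a different value on every execution (non-deterministic)
SELECT RAND() AS UnseededRand;
```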

2-With Encryption and schema binding

By using the WITH ENCRYPTION keyword while creating or altering a function or procedure, we can encrypt it to prevent other users from viewing its definition.

SchemaBinding – if a function uses table employee and someone drops that table, the function will start failing.

To prevent this, while creating the function we can use WITH SCHEMABINDING, which prevents dropping the objects referenced by the function or changing the attributes (columns) referred to by the function.
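A minimal sketch combining both options (the Employee table and function name are hypothetical):

```sql
-- SCHEMABINDING protects dbo.Employee from being dropped or altered;
-- ENCRYPTION hides the function definition from other users
CREATE FUNCTION dbo.fn_GetEmployeeCount()
RETURNS INT
WITH SCHEMABINDING, ENCRYPTION
AS
BEGIN
    -- SCHEMABINDING requires two-part object names
    RETURN (SELECT COUNT(*) FROM dbo.Employee);
END;
```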

3- Temp Table

Temp tables are created in the TempDB database and are of two types:

a - Local temp table (single #) – available only in the same connection (i.e. window); automatically deleted once the connection is closed.

b - Global temp table (##) – available in all connections (i.e. windows); automatically deleted once the last referencing connection is closed.
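The two declarations differ only in the number of # prefixes:

```sql
-- Local temp table: visible only to the connection that created it
CREATE TABLE #LocalTemp (Id INT, Name VARCHAR(50));

-- Global temp table: visible to all connections
CREATE TABLE ##GlobalTemp (Id INT, Name VARCHAR(50));
```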

4 -indexes

An index is used to find data in a table quickly. Indexes can be created on tables and on views.

Without an index the engine performs a table scan, i.e. it searches each row from beginning to end (which is bad for performance).

5- Clustered index and Non-clustered index

Clustered – determines the physical order of data, so a table can have at most one clustered index.

But one clustered index can include multiple columns; this is known as a composite index.

A primary key automatically creates a clustered index on a table (unless a clustered index already exists).

Non-clustered – does not determine the physical order of data. The data is stored in one place and the index in another, so a table can have more than one non-clustered index.

6- Advantage and disadvantage of indexes

Advantage – searching is faster.

Disadvantage – indexes require additional disk space, as each non-clustered index creates a separate index structure.

Insert, update and delete can also be slower; for example, if we delete a value from a table, it must be deleted from the index structure as well.

7 – What is Covering Query

If the columns selected in the SELECT clause are present in the index, there is no need to look up the table again; the requested column data can be returned directly from the index. Such a query is said to be covered by the index.

Ex: an index is created on columns Firstname and Lastname, and the SELECT statement is: SELECT Firstname, Lastname FROM tblemployes.

A clustered index always covers any query on its table, since its leaf level contains all the data in the table, so no lookup is required.
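A sketch of the example above (table name taken from the text; the index name is hypothetical):

```sql
-- Index containing both columns used by the query
CREATE NONCLUSTERED INDEX IX_tblemployes_Name
ON tblemployes (Firstname, Lastname);

-- This query can be answered entirely from the index, with no table lookup
SELECT Firstname, Lastname FROM tblemployes;
```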
8-Updatable View

A view based on a single table can be used to update, insert and delete records in the base table.

If a view is based on multiple tables and we perform an update through that view, the data may be updated incorrectly.

To update correctly through a view that is based on multiple tables, we can use an INSTEAD OF trigger.

8. Indexed views

In general a view is a stored SQL query; it does not store data itself.

But by creating an index on a view, the view gets materialized and becomes capable of storing data.

In Oracle this is a materialized view; in SQL Server it is an indexed view.

The view must be created with the SCHEMABINDING option to allow creating an index on it.

Tables used in the view must be referenced with a two-part name, i.e. schema.tablename.

There are some limitations when you create an indexed view. You can't use EXISTS, NOT EXISTS, OUTER JOIN, COUNT(*), MIN, MAX, subqueries, table hints, TOP or UNION in the definition of your indexed view. Also, it is not allowed to refer to other views, or to tables in other databases, in the view definition. You can't use the text, ntext, image and XML data types in your indexed views. The float data type can be used in the indexed view but can't be used in the clustered index key. If the indexed view's definition contains a GROUP BY clause, you should add COUNT_BIG(*) to the view definition. The first index created on the view must be a unique clustered index.

After the index is created, data is returned from the index itself, not from the referenced tables.

Every time an entry is made in a referenced table, the index is updated with the result, so queries return results faster.

Benefits of clustered indexes created for an indexed view depend on the SQL Server edition. If you
are using SQL Server Enterprise edition, SQL Server Query Optimizer will automatically consider the
created clustered index as an option in the execution plan if it is the best index found. Otherwise, it
will use a better one. In the other SQL Server editions such as Standard edition, the SQL Server Query
Optimizer will access all the underlying source tables and use its indexes. In order to force the SQL
Server Query Optimizer to use the index view’s clustered index in the execution plan for the query,
you should use the WITH (NOEXPAND) table hint in the FROM clause.
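A minimal sketch following the rules above (the Sales table, its columns and the view name are hypothetical; Amount is assumed NOT NULL, which an indexed view requires for SUM):

```sql
-- View must use SCHEMABINDING and two-part table names;
-- COUNT_BIG(*) is required because of the GROUP BY
CREATE VIEW dbo.vw_SalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductId,
       SUM(Amount)  AS TotalAmount,
       COUNT_BIG(*) AS RowCnt
FROM dbo.Sales
GROUP BY ProductId;
GO

-- The first index must be a unique clustered index
CREATE UNIQUE CLUSTERED INDEX IX_vw_SalesByProduct
ON dbo.vw_SalesByProduct (ProductId);

-- On non-Enterprise editions, force use of the view's index
SELECT ProductId, TotalAmount
FROM dbo.vw_SalesByProduct WITH (NOEXPAND);
```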

9-VIEW Limitation

Can't pass parameters to a view.

An ORDER BY clause is invalid in a view unless TOP or FOR XML is specified.

A view cannot be based on a temp table.

10-Triggers

Triggers are a special type of stored procedure which are executed automatically in response to events.

a-DML trigger b- DDL triggers c-Logon Trigger

11- DML Triggers

Fire automatically in response to INSERT, UPDATE and DELETE.

They are of 2 types: 1. AFTER triggers (also called FOR triggers) 2. INSTEAD OF triggers.

A-After Trigger/FOR Trigger

FOR INSERT AND FOR DELETE

The inserted and deleted tables are special tables which are only available within the context of a trigger.

They are not available outside the trigger context.

FOR UPDATE

An update trigger creates both the inserted and deleted tables within the trigger context.
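A sketch of an AFTER INSERT trigger using the special inserted table (the Employee and EmployeeAudit tables are hypothetical):

```sql
-- Audit scenario: log every new Employee row as it is inserted
CREATE TRIGGER trg_Employee_Insert
ON dbo.Employee
AFTER INSERT
AS
BEGIN
    -- "inserted" is the special table available only inside the trigger
    INSERT INTO dbo.EmployeeAudit (EmployeeId, AuditDate)
    SELECT Id, GETDATE()
    FROM inserted;
END;
```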

B-Instead Of Trigger

Instead of insert trigger

Usually used to insert through a view which references multiple tables,

because with a simple view an insert can't be performed when the view is based on multiple tables.

Instead of update trigger

Usually used to update through a view which references multiple tables,

because with a simple view an update throws an error if it affects more than one base table. And even if the update affects only one base table, it may update incorrectly, as SQL Server can't decide which base table's data to update when the view spans multiple tables.

Instead of delete trigger

Same as above
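A sketch of the INSTEAD OF INSERT case described above (the view, tables and columns are hypothetical; the trigger routes each column to the table it belongs to):

```sql
-- vwEmployeeDepartment is assumed to join Employee and Department
CREATE TRIGGER trg_vwEmpDept_Insert
ON dbo.vwEmployeeDepartment
INSTEAD OF INSERT
AS
BEGIN
    -- Insert new departments first
    INSERT INTO dbo.Department (DeptName)
    SELECT DISTINCT i.DeptName
    FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Department d
                      WHERE d.DeptName = i.DeptName);

    -- Then insert employees, resolving the foreign key
    INSERT INTO dbo.Employee (Name, DeptId)
    SELECT i.Name, d.DeptId
    FROM inserted i
    JOIN dbo.Department d ON d.DeptName = i.DeptName;
END;
```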

12 Database Normalization

 Is the process of organizing data to minimize data redundancy, which in turn ensures data
consistency.
 There are three main reasons to normalize a database: to minimize duplicate data, to
minimize or avoid data modification issues, and to simplify queries.

There are six normal forms, 1NF to 6NF.

a->First Normal form- 1NF

Data in each column should be atomic (no multiple values separated by commas).

The table does not contain any repeating column groups (employee1, employee2, employee3).

Each record is identified uniquely using a primary key.

BSecond Normal form- 2NF

Meets all conditions of 1NF.

Move redundant data to separate table.

Create relationship between these two tables using foreign key.

BThird Normal form- 3NF

Meets all conditions of 1NF and 2NF.

Does not contain columns that are not fully dependent upon the primary key.
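A sketch of the 2NF step (moving redundant data to a separate table and linking with a foreign key); the Customer/Orders tables are hypothetical:

```sql
-- Before 2NF the customer name would be repeated on every order row.
-- After: customer data lives in its own table, linked by a foreign key.
CREATE TABLE dbo.Customer (
    CustomerId   INT PRIMARY KEY,
    CustomerName VARCHAR(50)
);

CREATE TABLE dbo.Orders (
    OrderId    INT PRIMARY KEY,
    CustomerId INT FOREIGN KEY REFERENCES dbo.Customer (CustomerId),
    OrderDate  DATE
);
```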

13- Error Handling in 2000

@@ERROR returns 0 if the previous statement succeeded; a nonzero value means an error occurred.

@@ERROR is cleared and reset after each statement executed, so check it immediately following the statement being verified, or save it in a variable so it can be checked later.

To raise a custom error, use RAISERROR('Error message', severity, state).

Severity = 16 indicates a general error that can be corrected by the user.

State = between 0 and 255; RAISERROR only generates errors with state from 1 through 127.
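A sketch of the SQL Server 2000-style pattern described above (the Employee table is hypothetical):

```sql
-- Capture @@ERROR immediately, before it is reset by the next statement
DECLARE @err INT;

UPDATE dbo.Employee SET Salary = Salary * 1.1 WHERE Id = 1;
SET @err = @@ERROR;

IF @err <> 0
    RAISERROR('Employee update failed.', 16, 1);
```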

14- Error Handling in 2005 or more version

Introduction of TRY...CATCH blocks for error handling.
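A minimal TRY...CATCH sketch with the standard error functions:

```sql
BEGIN TRY
    -- Division by zero raises an error that transfers control to CATCH
    SELECT 1 / 0;
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()  AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage,
           ERROR_LINE()    AS ErrorLine;
END CATCH;
```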

15 – To go to the line of an error, press Ctrl+G.

16- Transaction

Use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED to see uncommitted data.

The default isolation level in SQL Server is READ COMMITTED, so if you run a query after BEGIN TRANSACTION in one window, another window will only show the result after changing its isolation level with the above command.

17- ACID

The ACID test consists of 4 requirements that every transaction has to pass successfully:

 Atomicity – requires that a transaction that involves two or more discrete parts of
information must commit all parts or none

 Consistency – requires that a transaction must create a valid state of new data, or it must roll
back all data to the state that existed before the transaction was executed

 Isolation – requires that a transaction that is still running and did not commit all data yet,
must stay isolated from all other transactions

 Durability – requires that committed data must be stored using method that will preserve all
data in correct state and available to a user, even in case of a failure

18- What is Row level and Page level data compression
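The question above is left unanswered in the notes. Briefly: row compression stores fixed-length data types in a variable-length format, while page compression builds on row compression and adds prefix and dictionary compression within each page. A sketch of enabling each (the Sales table is hypothetical):

```sql
-- Row-level compression
ALTER TABLE dbo.Sales REBUILD WITH (DATA_COMPRESSION = ROW);

-- Page-level compression (row compression plus prefix/dictionary)
ALTER TABLE dbo.Sales REBUILD WITH (DATA_COMPRESSION = PAGE);
```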

19 – full text indexing

See https://www.simple-talk.com/sql/learn-sql-server/understanding-full-text-indexing-in-sql-server/

Syntax to create fulltext catalog


CREATE FULLTEXT CATALOG ProductFTS
WITH ACCENT_SENSITIVITY = OFF

Syntax to create fulltext index


CREATE FULLTEXT INDEX ON ProductDocs
(DocSummary, DocContent TYPE COLUMN FileExtension LANGUAGE 1033)
KEY INDEX PK_ProductDocs_DocID
ON ProductFTS
WITH STOPLIST = SYSTEM

20-What is a Phantom Read?


Ans: Phantom reads happen when a transaction executes a query twice and gets different result sets each time. This happens when a second transaction inserts a new row that matches the WHERE clause of the query run by the first transaction.
This can be avoided by setting the first transaction's isolation level to Serializable or Snapshot.
Ex: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

21-Difference between Repeatable Read & Serializable isolation level.


Ans: Repeatable Read ensures that data read by one transaction is prevented from being updated or deleted by another transaction. But it does not prevent new rows from being inserted, which results in phantom reads.
Serializable isolation, on the other hand, ensures that data read by one transaction is prevented from being updated or deleted by another transaction, and also prevents new rows from being inserted by another transaction (by applying range locks). So it prevents both the non-repeatable read and the phantom read problems.

22-Difference between Snapshot & Serializable isolation level.


Ans: Serializable isolation is implemented by acquiring locks, which means the resources are locked for the duration of the current transaction. This decreases the number of concurrent transactions.

Snapshot isolation does not acquire locks; it maintains row versioning in tempdb. So it significantly increases the number of concurrent transactions.

Note: To use the snapshot isolation level, it must first be enabled at the database level:
ALTER DATABASE DBName
SET ALLOW_SNAPSHOT_ISOLATION ON
Then set it at the transaction level: SET TRANSACTION ISOLATION LEVEL SNAPSHOT

23- What is Dirty Read?


Ans: Dirty reads occur when one transaction reads data that has been written but not yet committed by
another transaction. If the changes are later rolled back, the data obtained by the first transaction will
be invalid.
READ UNCOMMITTED is the only isolation level which reads dirty data; it is similar to using the WITH (NOLOCK) hint.

24-What is Nonrepeatable read?


Nonrepeatable reads happen when a transaction performs the same query two or more times and each
time the data is different. This is usually due to another concurrent transaction updating the data
between the queries.

25- What is a lost update problem?

Lost updates occur when two or more transactions select the same row and then update the row based
on the value originally selected. Each transaction is unaware of other transactions. The last update
overwrites updates made by the other transactions, which results in lost data.

26- Major isolation level in sql server ?

Read Uncommitted

Read Committed

Repeatable Read

Serializable

Snapshot

27-What is a User Defined Table type?

Table-Valued Parameters are a feature introduced in SQL SERVER 2008. In earlier versions of SQL SERVER it was not possible to pass a table variable to a stored procedure as a parameter, but from SQL SERVER 2008 we can use a Table-Valued Parameter to send multiple rows of data to a stored procedure or a function without creating a temporary table or passing many parameters.

Table-valued parameters are declared using user-defined table types. To use a Table-Valued Parameter we follow the steps shown below:

Create a table type and define the table structure:

CREATE TYPE DeptType AS TABLE
(
    DeptId INT,
    DeptName VARCHAR(30)
);

Declare a stored procedure that has a parameter of table type.

Declare a table type variable and reference the table type.

Using the INSERT statement and occupy the variable.

We can now pass the variable to the procedure.
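The steps above can be sketched end-to-end, using the DeptType created earlier (the procedure and Department table are hypothetical):

```sql
-- Step 2: a procedure with a table-type parameter (TVPs must be READONLY)
CREATE PROCEDURE dbo.usp_InsertDepartments
    @Depts DeptType READONLY
AS
BEGIN
    INSERT INTO dbo.Department (DeptId, DeptName)
    SELECT DeptId, DeptName FROM @Depts;
END;
GO

-- Steps 3-5: declare a variable of the table type, fill it, pass it
DECLARE @d DeptType;
INSERT INTO @d VALUES (1, 'HR'), (2, 'Finance');
EXEC dbo.usp_InsertDepartments @Depts = @d;
```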

28-What are memory-optimized tables and natively-compiled stored procedures in SQL Server 2014?

These new data structures are part of the In-Memory OLTP engine of SQL Server, which can be used to achieve significant performance gains over processing that uses "traditional" disk-based data structures.

CREATE TABLE [dbo].[Product]


(
ID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000),
Code VARCHAR(10) COLLATE Latin1_General_100_BIN2 NOT NULL,
Description VARCHAR(200) NOT NULL,
Price FLOAT NOT NULL
)WITH (MEMORY_OPTIMIZED = ON,
DURABILITY = SCHEMA_ONLY);
 Memory-optimized tables are durable by default (DURABILITY = SCHEMA_AND_DATA); the example above uses SCHEMA_ONLY, which persists only the schema.
 Memory-optimized tables do not support clustered indexes but do however support non-
clustered indexes (currently up to eight).
https://www.simple-talk.com/sql/learn-sql-server/introducing-sql-server-in-memory-oltp/

https://www.mssqltips.com/sqlservertip/3051/overcoming-storage-speed-limitations-with-memoryoptimized-tables-for-sql-server/

29:What is Collation in Sql server database

CREATE DATABASE CaseSensitive


COLLATE SQL_Latin1_General_CP1_CS_AS

CREATE DATABASE CaseInSensitive


COLLATE SQL_Latin1_General_CP1_CI_AS

Case insensitive is the default collation if nothing is specified during creation.


The above commands will create two databases with Case-Sensitive and Case-Insensitive collations respectively. When we retrieve the default data types from each database we can see that they have different collations. It is absolutely possible to create databases with different collations on the same SQL Server instance. It is also possible to create an individual column in a table with a collation different from the server instance and the database.

30. Apply operator in Sql server

1-The APPLY operator allows you to join two table expressions; the right table expression is processed once for each row from the left table expression.

2-The APPLY operator comes in two variants: CROSS APPLY and OUTER APPLY.

The CROSS APPLY operator returns only those rows from the left table expression (in its final output) that have a match in the right table expression.

The OUTER APPLY operator returns all rows from the left table expression, irrespective of whether they match the right table expression.

Real-time Scenario

 Calling a Table Valued Function for each row in the outer query

SELECT *
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle)

 Find Top N per group queries

SELECT pr.name,
pa.name
FROM sys.procedures pr
OUTER APPLY (SELECT TOP 2 *
FROM sys.parameters pa
WHERE pa.object_id = pr.object_id
ORDER BY pr.name) pa
ORDER BY pr.name,
pa.name

 One real life example would be if you had a scheduler and wanted to see what the
most recent log entry was for each scheduled task.

select t.taskName, lg.logResult, lg.lastUpdateDate


from task t
cross apply (select top 1 taskID, logResult, lastUpdateDate
from taskLog l
where l.taskID = t.taskID
order by lastUpdateDate desc) lg

31. Parameter Sniffing in Sql server

The first time a stored procedure is run it will compile into the plan cache. This allows the SQL
engine to reuse the plan in the cache multiple times without needing to spend time and CPU cycles
on recompilation.

This works great most of the time, however with stored procedures that may deal with different
amounts of data depending on the parameters passed in, then storing the plan in the cache can be
problematic.

The first parameter used to compile the plan into the plan cache may not be the best parameter to compile the plan with. Other parameters passed in on subsequent runs may not run well with the compiled plan. This is known as a parameter sniffing issue.

By using WITH RECOMPILE while creating/altering a procedure, or when calling a procedure, we tell the system to recompile it on each run.

ALTER PROCEDURE Proc_Name (@string varchar(150)) WITH RECOMPILE AS

OPTION (RECOMPILE)

OPTION (RECOMPILE) is a statement-level command that has some very distinct advantages over WITH RECOMPILE. It does not require the whole stored procedure to be recompiled, so in a large stored procedure, time and CPU cycles are not spent on unnecessary compiles. It uses constant folding, which can generate a far superior plan.
Example

ALTER PROCEDURE sptabdemoproc (@string varchar(150)) AS

SELECT COUNT(*)
FROM tabdemo1 AS A
INNER JOIN tabdemo2 AS B ON A.ID = B.ID2
WHERE (B.String = @string OR @string IS NULL)
OPTION (RECOMPILE)

Constant folding is a technique the optimizer uses to remove unnecessary code to help improve performance. It does this by removing unnecessary variables and simplifying the query before compiling the plan. In the example above, when the parameter comes into the plan it is either set to a string or to NULL, and constant folding removes whichever branch is not used. So if 'Marker' is passed in as the parameter, constant folding will simply remove the 'OR @string IS NULL' from the predicate.

One important side effect of using OPTION (RECOMPILE) is its impact on temporary tables. If a temporary table is created within a statement using OPTION (RECOMPILE), then the optimizer will need to recompile every statement that touches it.

32. Purpose Of Master database?

In SQL Server, system objects are no longer stored in the master database; instead, they are stored in the Resource database. Master is still the database that records the existence of all other databases, the location of those database files, and the initialization information for SQL Server. Therefore, SQL Server cannot start if the master database is unavailable.

The master database consists of 2 files:

1- master.mdf
2- mastlog.ldf

33. Purpose of Model database?

 Purpose - Template database for all user defined databases


 Prominent Functionality
o Objects
o Columns
o Users
 Additional Information
o User defined tables, stored procedures, user defined data
types, etc can be created in the Model database and will
exist in all future user defined databases
o The database configurations such as the recovery model for
the Model database are applied to future user defined
databases
o Some of the settings of model are also used for creating a new tempdb during
start up, so the model database must always exist on a SQL Server system.

34. Purpose of MSDB database


 Purpose - Primary database to manage the SQL Server Agent
configurations
 Prominent Functionality
o SQL Server Agent Jobs, Operators and Alerts
o DTS Package storage in SQL Server 7.0 and 2000
o SSIS Package storage in SQL Server 2005
 Additional Information
o Provides some of the configurations for the SQL Server Agent service
o For the SQL Server 2005 Express edition installations, even though the
SQL Server Agent service does not exist, the instance still has the
MSDB database
35: Purpose of the TempDB Database.

 The tempdb is a special system database used to store temporary objects and data, like temp tables, views, table variables, tables returned by functions, and temporary indexes.

 The tempdb is also used for internal operations like rebuilding indexes (when SORT_IN_TEMPDB is ON), queries using UNION, DBCC checks, GROUP BY, ORDER BY, hash join and hash aggregate operations.
 Its recovery model is SIMPLE because the information stored is temporary, and tempdb cannot be backed up.
36. Temporary table and table variable.
 Local temp objects are objects accessible ONLY in the session that created it. These objects
are also removed automatically when the session that created it ends (unless manually
dropped).
 Global temporary objects are objects that are accessible to ANYONE who can log in to your SQL Server. They only persist as long as the session that created them (unless manually dropped), but anyone who logs in during that time can directly query, modify or drop these temporary objects.
 I tend to like temp tables in scenarios where the object is used over a longer period of time –
I can create non-key indexes on it and it's more flexible to create to begin with (SELECT INTO
can be used to create the temp table). I also have the ability to use the temporary table in
nested subprocedures because it's not local to the procedure in which it was created.
However, if you don't need any of those things then a table variable might be better. When it
is likely to be better – when you have smaller objects that don't need to be accessed outside
of the procedure in which it was created.
 Table variables don't allow the explicit addition of indexes after declaration; the only indexes are the implicit ones added as a result of a PRIMARY KEY or UNIQUE KEY constraint defined during the table variable's declaration. A temporary table, on the other hand, supports adding indexes explicitly after creation, and can also have the implicit indexes which result from PRIMARY KEY and UNIQUE KEY constraints.
 No statistics are maintained on table variables, which means that changes in data impacting a table variable will not cause recompilation.
 The scope of a table variable is the batch or stored procedure in which it is declared. Table variables can't be dropped explicitly; they are dropped automatically when the batch or stored procedure execution completes. Within the same session a table variable is not available once a new batch starts (e.g. GO starts a new batch).

 Scope of the Local Temporary Table is the session in which it is created and they are dropped
automatically once the session ends and we can also drop them explicitly. If a Temporary
Table is created within a batch, then it can be accessed within the next batch of the same
session. Whereas if a Local Temporary Table is created within a stored procedure then it can
be accessed in its child stored procedures, but it can’t be accessed outside the stored
procedure.

 The scope of a Global Temporary Table is not limited to the session which created it; it is visible to all other sessions. It can be dropped explicitly, or it will be dropped automatically when the session which created it terminates and no other sessions are using it.
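The scope and indexing differences above can be sketched side by side (object names hypothetical):

```sql
-- Temp table: survives across batches in the same session; indexes can be added later
CREATE TABLE #Orders (OrderId INT);
CREATE NONCLUSTERED INDEX IX_Orders ON #Orders (OrderId);
GO
SELECT * FROM #Orders;   -- still visible after GO

-- Table variable: scoped to one batch; an index only via a constraint
DECLARE @Orders TABLE (OrderId INT PRIMARY KEY);
SELECT * FROM @Orders;   -- must be used within the same batch
```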

37. What is an Index?

 Indexes help the database engine find the requested data efficiently using minimal resources. Indexes also help with data integrity through uniqueness of a column, but it is not mandatory to define an index on a unique column.
 Indexes help us improve the performance of data retrieval but add overhead to DML operations.

38. Explaining the Clustered Table Structure

 A clustered index stores the actual data in the order of the clustering key using a B-tree structure. The data can be stored in only one order, so we can have only one clustered index.

 The pages in the root level and intermediate levels are called index pages. In the index pages, SQL Server stores the clustering key and an entry point (page pointer) to the next level of the B-tree.

 The bottom level is called the leaf level (index level 0). The pages in this level are called leaf pages or data pages. In these pages you can find the complete data (all columns) of each record in the salesorderdetails table. In other words, the leaf level of the clustered index is where the actual data is stored.
 DBCC IND

DBCC IND command provides the list of pages used by the table or index. The command
provides the page numbers used by the table along with previous page number, next page
number. The command takes three parameters.

Syntax is provided below.

DBCC IND (<database_name>, <table_name>, <index_id>)


The columns returned are described below.

IndexID : id of the index. 0 - heap, 1 - clustered index, non-clustered index ids >= 2.
PagePID : page number.
IAMFID : file id of the file containing the page (refer to sysfiles).
ObjectID : object id of the table used.
Iam_chain_type : type of data stored (in-row data, row-overflow, etc.).
PageType : 1 - data page, 2 - index page, 3 and 4 - text pages.
IndexLevel : 0 refers to the leaf; the highest value refers to the root of the index.
NextPagePID, PrevPagePID : next and previous page numbers.
 DBCC PAGE:
Next undocumented command we would be seeing is DBCC PAGE:

DBCC PAGE takes the page number as an input parameter and displays the content of the page. It is almost like opening a database page and viewing its contents directly.

Syntax:

DBCC PAGE(<database_name>, <fileid>, <pagenumber>, <viewing_type>)

DBCC PAGE takes 4 parameters: database_name, fileid, pagenumber and viewing_type. When the viewing_type parameter is passed the value 3, index pages are displayed in tabular format; data pages are always displayed in text format. The DBCC PAGE command requires trace flag 3604 to be turned on before its execution.

DBCC TRACEON(3604)
GO
DBCC PAGE(dbadb, 1, 8176, 3)
GO
 The leaf level of clustered index are made up of data pages which contain the actual data of table
where as the leaf level of non clustered index are made up of index pages.
 A non-clustered index can be defined on a heap table or a clustered table. In the leaf level of a non-clustered index, each index row contains the non-clustered key value and a row locator. This locator points to the data row in the clustered index or heap. The row locator in non-clustered index rows is either a pointer to a row or a clustered index key for a row. If the table is a heap, which means it does not have a clustered index, the row locator is a pointer to the row. The pointer is built from the file identifier, page number and slot number of the row on the page; the whole pointer is known as a Row ID (RID). If the table has a clustered index, the row locator is the clustered index key for the row.

39. Locks in Sql Server

a. Shared Lock:
Shared locks are held on data being read under the pessimistic concurrency model. While a shared lock is being held, other transactions can read but can't modify the locked data. After the locked data has been read, the shared lock is released, unless the transaction is being run with a locking hint (READCOMMITTED, READCOMMITTEDLOCK) or under an isolation level equal to or more restrictive than Repeatable Read. Shared locks are acquired by readers during read operations such as SELECT. There can be several shared locks on any resource (such as a row or a page) at any one time.

b. Exclusive Lock:
Exclusive locks are also referred to as write locks. Only one exclusive lock can exist on a resource at any time. Exclusive locks are not compatible with other locks, including shared locks.

40. Columnstore Index in SQL Server.

 The columnstore index in SQL Server 2012 stores columns instead of rows, and is designed to
speed up analytical processing and data-warehouse queries.

 A segment can contain values from one column only, which allows each column’s data to be
accessed independently. However, a column can span multiple segments, and each segment
can be made up of multiple data pages. Data is transferred from the disk to memory by
segment, not by page. A segment is a highly compressed Large Object (LOB) that can contain
up to one million rows.

 Most notably, in SQL Server 2012, there was no way to create clustered columnstore indexes,
and the nonclustered columnstore indexes could not be updated. To update the data, you
had to drop and rebuild the index or swap out partitions.

 Clustered Columnstore Index is a new feature in SQL Server 2014


 It is very useful for wide tables where you select only a limited set of columns. For example, with a ProductSalesFact table you normally select the count of sales for a product, or the sales for a quarter; even though the table has hundreds of columns, the query accesses only the two required columns.

CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactResellerSales


ON dbo.FactResellerSales
(ProductKey, UnitPrice, CustomerPONumber, OrderDate);

40. What is Dynamic Management View In sql Server?

 Dynamic management views and functions return server state information that can be
used to monitor the health of a server instance, diagnose problems, and tune
performance.
 Please note the name of the DMV/DMF starts with "dm_". They all reside in sys schema
 To access a DMV/DMF you need SELECT permission on the objects and the VIEW SERVER STATE or VIEW DATABASE STATE permission.
Example

 sys.dm_exec_query_stats
 sys.dm_exec_sql_text(sql_handle)
 sys.dm_exec_query_plan(plan_handle)
The above can be used to find the slow running queries and stored
procedures.
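A common sketch combining the three DMVs/DMFs listed above to surface slow queries:

```sql
-- Top 5 cached queries by average elapsed time (microseconds)
SELECT TOP 5
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
       st.text AS query_text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY avg_elapsed_time DESC;
```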
41. To get the list of all SQL Server Agent jobs with schedules, which database and tables will you use?

We will use system tables from the MSDB database:
dbo.sysjobs
dbo.sysschedules
dbo.sysjobschedules

41. Difference between Stored Procedure and User defined function?


 We cannot use dynamic SQL from user-defined functions written in T-SQL. This is because you are not permitted to do anything in a UDF that could change the database state.
1. A function must return a value; a stored procedure may or may not return values.

2. A function allows only SELECT statements and does not allow DML statements; a stored procedure can have SELECT statements as well as DML statements such as INSERT, UPDATE and DELETE.

3. A function allows only input parameters and doesn't support output parameters; a stored procedure can have both input and output parameters.

4. A function does not allow try-catch blocks; in a stored procedure we can use try-catch blocks for exception handling.

5. Transactions are not allowed within functions; transactions can be used within stored procedures.

6. A function can use only table variables and does not allow temporary tables; a stored procedure can use both table variables and temporary tables.

7. Stored procedures can't be called from a function, but stored procedures can call functions.

8. Functions can be called from a SELECT statement; procedures can't be called from SELECT/WHERE/HAVING and so on. The EXECUTE/EXEC statement is used to call/execute a stored procedure.

9. A UDF can be used in a JOIN clause as a result set; procedures can't be used in a JOIN clause.

42. What is SQL injection?


 A SQL injection attack is a form of attack that comes from user input that has not been
checked to see that it is valid. The objective is to fool the database system into running
malicious code that will reveal sensitive information or otherwise compromise the server.
 This happens mostly because of dynamic SQL queries built by concatenating user input.
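A minimal sketch of the difference (the Employees table and its column are hypothetical); the first form splices the user's text into the statement, the second binds it as a typed parameter:

```sql
DECLARE @input NVARCHAR(50) = N'Smith';   -- pretend this came from the user

-- Vulnerable: input like  x'; DROP TABLE Employees; --  changes the query itself
DECLARE @sql NVARCHAR(300) =
    N'SELECT * FROM Employees WHERE LastName = ''' + @input + N'''';
EXEC (@sql);

-- Safer: the value is passed as a parameter and is never parsed as SQL
EXEC sp_executesql
     N'SELECT * FROM Employees WHERE LastName = @p',
     N'@p NVARCHAR(50)',
     @p = @input;
```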

43.How to run Dynamic Sql Statements?

sp_executesql (also known as “Forced Statement Caching”)


I. Allows for statements to be parameterized.
II. Only allows parameters where SQL Server would normally allow parameters.

III. Has strongly typed variables/parameters – and this can reduce injection and offer
some performance benefits!
IV. Creates a plan on first execution (similar to stored procedures) and subsequent
executions reuse this plan
EXEC (also known as “Dynamic String Execution” or DSE)
I. Allows *any* construct to be built.
II. Treats the statement similarly to an adhoc statement. This means that the
statement goes through the same process that adHoc statements do – they are
parsed, probably parameterized and possibly deemed “safe” for subsequent
executions to re-use.
III. Does not have strongly typed parameters in the adhoc statement and therefore
can cause problems when the statements are executed.
IV. Does not force a plan to be cached.
 This can be a pro in that SQL Server can create a plan for each execution.
 This can be a con in that SQL Server needs to recompile/optimize for each
execution.
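Parameters can replace only values, never object names, so when an identifier must vary the usual pattern is DSE with the identifier wrapped in QUOTENAME (the table name here is hypothetical):

```sql
DECLARE @table SYSNAME = N'Sales';   -- hypothetical, supplied by the caller
DECLARE @sql   NVARCHAR(400);

-- QUOTENAME brackets the identifier so it cannot smuggle in extra statements
SET @sql = N'SELECT COUNT(*) FROM ' + QUOTENAME(@table) + N';';
EXEC (@sql);
```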
44. What is an analytic function in SQL Server?

 Analytic functions work much the same way as aggregate functions, except that
the ORDER BY subclause is required for those functions that support the clause.
The FIRST_VALUE function retrieves the first value from a sorted list, and
the LAST_VALUE function retrieves the last value.

 The result may not look right for the highest and lowest sales: the values behave like
running totals because the FIRST_VALUE and LAST_VALUE functions support
the ROWS/RANGE subclause, and the default frame (RANGE BETWEEN UNBOUNDED
PRECEDING AND CURRENT ROW) limits the window to the rows up to the current row.
 For example, if the highest amount of sales for the France row shows 32000 and the lowest
shows 19000, those calculations are based only on the first few rows in the partition as a result
of the ROWS/RANGE default settings being applied.
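A minimal sketch (the Sales table and its columns are hypothetical) contrasting the default frame with an explicit full-partition frame:

```sql
SELECT  Country,
        SalesAmount,
        FIRST_VALUE(SalesAmount) OVER (PARTITION BY Country
                                       ORDER BY SalesAmount) AS LowestSale,
        -- default frame ends at the current row, so this is only a running value
        LAST_VALUE(SalesAmount)  OVER (PARTITION BY Country
                                       ORDER BY SalesAmount) AS RunningLast,
        -- explicit frame spans the whole partition: the true highest sale
        LAST_VALUE(SalesAmount)  OVER (PARTITION BY Country
                                       ORDER BY SalesAmount
                                       ROWS BETWEEN UNBOUNDED PRECEDING
                                            AND UNBOUNDED FOLLOWING) AS HighestSale
FROM    Sales;
```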

 LAG and LEAD
 The LAG function retrieves a value from a row before the current one.
The LEAD function retrieves a value from a row after the current one. The
following SELECT statement demonstrates how to use these functions:
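A minimal sketch (the YearlySales table is hypothetical):

```sql
SELECT  OrderYear,
        SalesAmount,
        LAG(SalesAmount)  OVER (ORDER BY OrderYear) AS PrevYearSales,  -- row before
        LEAD(SalesAmount) OVER (ORDER BY OrderYear) AS NextYearSales   -- row after
FROM    YearlySales;
```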

45. How to get the table count without using COUNT(*)

SELECT OBJECT_NAME(id) AS TableName,
       rowcnt AS [RowCount]
FROM sysindexes s
INNER JOIN sys.tables t
    ON s.id = t.OBJECT_ID
WHERE s.indid IN (0, 1, 255)
  AND is_ms_shipped = 0

The indid column in sysindexes stores the below:
 0 if there is no clustered index on the table (a heap).
 1 if there is a clustered index on the table.
 255 if the data type is image.
 values greater than 1 (other than 255) for non-clustered indexes.

46. Which function(s) can we use to concatenate strings in SQL Server 2012?

We can use the CONCAT function to achieve the above.

Syntax: SELECT CONCAT('aaa', 'aaagg', 'rrr')

47. What is a computed column and a persisted value?


A computed column is computed from an expression that can use another column or
columns in the same table.
Suppose we are required to have a "Date of Retirement" for each employee, calculated as
(DOBirth + 60 years - 1 day). Instead of calculating it each time in a report, or updating the
[DORetirement] column through a trigger each time [DOBirth] is updated, a
better approach is to create [DORetirement] as a computed column.

Here are a few rules for the Persisted property of a computed column:


 If the Persisted property is off, the computed column is just a
virtual column. No data for this column is stored on disk and
values are calculated every time the column is referenced in a script.
If this property is set active, the data of the computed column is
stored on disk.
 Any update to a referenced column is synchronized
automatically in the computed column if it is persisted.
 Along with some other conditions, Persisted is required to create
an index on the computed column.
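A sketch of the retirement-date example as a persisted computed column (the table layout is hypothetical):

```sql
CREATE TABLE Employee
(
    EmpID        INT IDENTITY PRIMARY KEY,
    Name         NVARCHAR(100) NOT NULL,
    DOBirth      DATE NOT NULL,
    -- DOBirth + 60 years - 1 day; PERSISTED stores the value on disk
    DORetirement AS DATEADD(DAY, -1, DATEADD(YEAR, 60, DOBirth)) PERSISTED
);
```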

48. What is Lock Escalation?


 In a situation where more than 5,000 locks are acquired at a
single level, SQL Server will escalate those locks to a single
table-level lock. By default, SQL Server will always escalate to
the table level directly, which means that escalation to the page
level never occurs. Instead of acquiring numerous row and
page locks, SQL Server will escalate to the exclusive lock (X) on
the table level.

Lock escalation has three settings, which can be applied in the below way:

TABLE: the default escalation level, which also includes partitioned tables.

AUTO: in this case escalation happens to the partition level if the table is partitioned. The partition will
acquire an exclusive lock (X) while the table will acquire an intent exclusive lock (IX). This may lead to
deadlocks, so apply it carefully.

DISABLE: this will disable lock escalation to the table level.
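The setting is applied per table; a sketch (the table name is hypothetical):

```sql
-- LOCK_ESCALATION accepts TABLE (default), AUTO or DISABLE
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);
```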

49. Locks in SQL Server?


 Locking is essential to successful SQL Server transaction
processing, and it is designed to allow SQL Server to work
seamlessly in a multi-user environment.
 Types of locks in SQL Server:

Exclusive lock (X):

The exclusive lock is imposed by a transaction when it wants to modify page or row data;
the DML statements DELETE, INSERT and UPDATE cause this. An exclusive lock can be imposed on
a page or row only if there is no other shared or exclusive lock imposed already on the target. Only
one exclusive lock can be imposed on a page or row, and once imposed no other lock can be
imposed on the locked resource.

Shared lock (S) - this lock type, when imposed, will reserve a page or row to be available only for
reading, which means that any other transaction will be prevented from modifying the locked record as
long as the lock is active.

However, a shared lock can be imposed by several transactions at the same time over the same page
or row and in that way several transactions can share the ability for data reading since the reading
process will not affect the actual page or row data.

Update lock (U) – An update lock can be imposed on a record that already has a shared lock. In
such a case, the update lock will impose another shared lock on the target row. Once the transaction
that holds the update lock is ready to change the data, the update lock (U) will be transformed to an
exclusive lock (X). While the update lock can be imposed on a record that has the shared lock, the
shared lock cannot be imposed on the record that already has the update lock

Intent locks (I) - this lock is a means used by a transaction to inform other transactions about
its intention to acquire a lock. The purpose of such a lock is to ensure data modification is executed
properly by preventing another transaction from acquiring a lock on the next object up the hierarchy. In
practice, when a transaction wants to acquire a lock on a row, it will acquire an intent lock on the
table, which is a higher object in the hierarchy. By acquiring the intent lock, the transaction will not allow
other transactions to acquire the exclusive lock on that table (otherwise, an exclusive lock imposed by
some other transaction would cancel the row lock).

This is an important lock type from the performance aspect as the SQL Server database engine will
inspect intent locks only at the table level to check if it is possible for transaction to acquire a lock in
a safe manner in that table, and therefore intent lock eliminates need to inspect each row/page lock
in a table to make sure that transaction can acquire lock on entire table

Schema locks (Sch) - will be acquired when a DDL statement is executed, and will prevent access
to the locked object's data while the structure of the object is being changed. SQL Server allows a single
schema modification lock (Sch-M) on any locked object. In order to modify a table, a transaction
must wait to acquire a Sch-M lock on the target object. Once it acquires the schema modification
lock (Sch-M), the transaction can modify the object, and after the modification is completed the
lock is released. A typical example of the Sch-M lock is an index rebuild.

Bulk Update locks (BU) – this lock is designed to be used by bulk import operations when issued
with a TABLOCK argument/hint. When a bulk update lock is acquired, other processes will not be
able to access a table during the bulk load execution. However, a bulk update lock will not prevent
another bulk load to be processed in parallel. But keep in mind that using TABLOCK on a clustered
index table will not allow parallel bulk importing.

50. Exception handling using RAISERROR and THROW statements

 RAISERROR has both mandatory parameters (i.e. msg_id/msg_str, severity, state)
and an optional parameter (WITH option), but THROW can be used without any parameters.

 One of the known issues of raising an error using the RAISERROR statement is the often
incorrect error line number returned to the calling application; THROW, in contrast,
gives the proper error line number.

 THROW is available from the 2012 version onwards.

 Sometimes the RAISERROR statement returns an incorrect error number which does not match
the message_id in sys.messages, but THROW returns the correct error number.

 There are sometimes inconsistencies in terms of whether or not T-SQL commands are
executed after a RAISERROR statement. In the case of THROW, any commands after
it never get executed.

 The severity of exceptions raised by THROW is always set to 16, but RAISERROR gives the
flexibility of setting and resetting the severity level of an error.

 The THROW statement lacks support for the WITH argument. In RAISERROR,
there is a choice of three possible values that can be used with the WITH argument:

LOG

NOWAIT

SETERROR

 Error numbers above 50000 need to be added first to sys.messages by the sp_addmessage
procedure before being used in RAISERROR; yet the THROW statement is able to reference a
non-existent error number.

 Although we can raise both user-defined and system-defined exceptions in a RAISERROR
statement, system-defined exceptions can only be raised when the THROW statement is used
within a CATCH block.
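A small sketch of re-raising a system exception with THROW inside a CATCH block:

```sql
BEGIN TRY
    SELECT 1 / 0;   -- raises a divide-by-zero error
END TRY
BEGIN CATCH
    PRINT 'Error ' + CAST(ERROR_NUMBER() AS VARCHAR(10))
        + ' at line ' + CAST(ERROR_LINE() AS VARCHAR(10));
    THROW;          -- re-raises the original error with the correct number and line
END CATCH;
```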

51. What is a stored procedure?

 A stored procedure is a set of precompiled SQL statements which is used to perform a specific
task.

 A stored procedure can reduce network traffic: an operation requiring hundreds of lines of
Transact-SQL code can be performed through a single statement that executes the code in a
procedure, rather than by sending hundreds of lines of code over the network.
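A minimal sketch (the procedure, table and column names are hypothetical):

```sql
CREATE PROCEDURE dbo.usp_GetEmployeesByDept
    @DeptID INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT EmpID, Name
    FROM   dbo.Employee
    WHERE  DeptID = @DeptID;
END;
GO

-- One short statement crosses the network instead of the whole query text
EXEC dbo.usp_GetEmployeesByDept @DeptID = 10;
```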

52. What is a constraint?

Constraints are rules and restrictions applied to a column or a table so that unwanted data can't
be inserted into tables, maintaining the accuracy and integrity of the data inside the table.
Constraints can be divided into the following two types:

 Column level constraints: limit only column data.

 Table level constraints: limit whole table data.

53. What is view?

A VIEW, in essence, is a virtual table that does not physically exist in SQL
Server. Rather, it is created by a query joining one or more tables.

54. What is transaction?

A transaction is a single unit of work. If a transaction is successful, all of the


data modifications made during the transaction are committed and become a
permanent part of the database. If a transaction encounters errors and must
be canceled or rolled back, then all of the data modifications are erased.

If a run-time statement error (such as a constraint violation) occurs in a


batch, the default behavior in the Database Engine is to roll back only the
statement that generated the error.

A transaction will be rolled back if the connection closes (network error,


client disconnect, high-severity error) and the commit was not reached. A
transaction will be rolled back if the SQL Server terminates (shutdown, power
failure, unexpected termination) and the commit was not reached. Under
default settings, a non-fatal error thrown by a statement within a transaction
will not automatically cause a rollback. (fatal = severity 19 and above)

So what can we do if we do want a transaction to completely roll back if any


error is encountered during the execution?

There are two options:


1) Use the XACT_ABORT setting
2) Catch and handle the error, and specify a rollback within the error handling

When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time


error, the entire transaction is terminated and rolled back.

When SET XACT_ABORT is OFF, in some cases only the Transact-SQL


statement that raised the error is rolled back and the transaction continues
processing. Depending upon the severity of the error, the entire transaction
may be rolled back even when SET XACT_ABORT is OFF. OFF is the default
setting.
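A sketch of option 1 (the Orders table is hypothetical); with XACT_ABORT ON, the constraint violation rolls back both inserts:

```sql
SET XACT_ABORT ON;

BEGIN TRANSACTION;
    INSERT INTO dbo.Orders (OrderID) VALUES (1);
    INSERT INTO dbo.Orders (OrderID) VALUES (1);  -- PK violation aborts the batch
COMMIT TRANSACTION;                               -- never reached; all work rolled back
```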

55. How to find a character repeated in a string using CTE

DECLARE @MyTable TABLE (Input NVARCHAR(30))

--insert sample values


INSERT INTO @MyTable (Input)
VALUES('Abracadabra'), ('Hocus Pocus'), ('Korona Kielce Królem'), ('Chamba Wamba'),
('Vinietai'), ('Corozo')

--here CTE begins:


;WITH CTE AS
(
--initial query
SELECT Input, CONVERT(VARCHAR(1),LEFT(Input,1)) AS Letter, RIGHT(Input, LEN(Input)-1) AS
Remainder
FROM @MyTable
WHERE LEN(Input)>1
--recursive part
UNION ALL
--recursive query
SELECT Input, CONVERT(VARCHAR(1),LEFT(Remainder,1)) AS Letter,
RIGHT(Remainder, LEN(Remainder)-1) AS Remainder
FROM CTE
WHERE LEN(Remainder)>0
)
SELECT Input, Letter, ASCII(Letter) AS CharCode, COUNT(Letter) AS CountOfLetter
FROM CTE
GROUP BY Input, Letter, ASCII(Letter)
HAVING COUNT(Letter)>2

56. SQL Server differences of char, nchar, varchar and nvarchar data types?
 nchar and nvarchar can store Unicode characters.
 char and varchar cannot store Unicode characters.
 char and nchar are fixed-length which will reserve storage space for
number of characters you specify even if you don't use up all that space.
 varchar and nvarchar are variable-length which will only use up spaces for
the characters you store. It will not reserve storage like char or nchar.
N stands for National Language Character Set and is used to specify a
Unicode string. When using Unicode data types, a column can store any
character defined by the Unicode Standard. Note that Unicode data types
take twice as much storage space as non-Unicode data types.

nchar and nvarchar will take up twice as much storage space, so it may be wise to use them
only if you need Unicode support.
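The storage difference can be seen with DATALENGTH:

```sql
DECLARE @v  VARCHAR(10)  = 'abc';
DECLARE @nv NVARCHAR(10) = N'abc';

SELECT DATALENGTH(@v)  AS VarcharBytes,    -- 3 bytes, 1 byte per character
       DATALENGTH(@nv) AS NvarcharBytes;   -- 6 bytes, 2 bytes per character
```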

57. Decimal, Numeric and Float data types


Example: decimal[(p[, s])] where p is precision and s is scale.

 The result precision and scale have an absolute maximum of 38. When a
result precision is greater than 38, it is reduced to 38, and the
corresponding scale is reduced to try to prevent the integral part of a
result from being truncated.
 The decimal and numeric data types are the same and have (18, 0) as the default
(precision, scale) parameters in SQL Server.

DECIMAL(18,4) means a total of 18 digits, 4 of which are after the decimal point (and 14
before the decimal point).

Use the float or real data types only if the precision provided by decimal (up to 38 digits) is
insufficient.

58. Data types in Sql server.

A data type is an attribute that specifies the type of data that the object can hold: integer data, character
data, monetary data, date and time data, binary strings, and so on.

Large object data types: text, ntext, image, varchar(max), nvarchar(max), varbinary(max), and xml

Data type | Range | Storage

bigint | -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807) | 8 bytes

int | -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647) | 4 bytes

smallint | -2^15 (-32,768) to 2^15-1 (32,767) | 2 bytes

tinyint | 0 to 255 | 1 byte

bit - An integer data type that can take a value of 1, 0, or NULL. The string values TRUE and FALSE can be
converted to bit values: TRUE is converted to 1 and FALSE is converted to 0.

59. ISNUMERIC and TRY_PARSE()

ISNUMERIC does not always give the result expected; to avoid that we can use TRY_PARSE().

The TRY_PARSE() function, available from 2012 onwards, returns the result of an expression translated
to the requested data type, or NULL if the cast fails.
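A short illustration of the difference:

```sql
SELECT ISNUMERIC('$')          AS IsNumericDollar,  -- 1: misleadingly "numeric"
       TRY_PARSE('$' AS INT)   AS TryParseDollar,   -- NULL: the cast fails
       TRY_PARSE('123' AS INT) AS TryParseInt;      -- 123
```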

60. What are the differences between primary keys and foreign keys in SQL Server?

 A primary key provides a mechanism for ensuring that the rows in a table are unique.
 Because the primary key must be able to identify each row, no columns that participate in a
primary key can contain NULL values.
 When you create a primary key, SQL Server automatically creates an index based on the key
columns. If no clustered index is defined on the table, SQL Server creates a clustered index;
otherwise, a non-clustered index is created.
 Like a primary key, a foreign key is also a type of constraint placed on one or more columns in a
table. The foreign key establishes a link between the key columns and related columns in another
table. (You can also link the foreign key columns to columns within the same table.)
 The foreign key enforces referential integrity between the two tables. That means you can add
only permitted data to the foreign key columns in the child table
 Other ways that foreign keys differ from primary keys are that you can create more than one
foreign key on a table and you can define foreign keys on columns that permit NULL values. In
addition, SQL Server does not automatically index the foreign key columns like it does for primary
keys. If you want to index the foreign key columns, you must do so as a separate step.
 The foreign key must reference a primary key or unique constraint, although that reference can
be on the same table or on a different table.
 A foreign key must also have the same number of columns as the number of columns in the
referenced constraint, and the data types must match between corresponding columns, with
one notable exception: foreign key columns can contain NULL values.
 It is not mandatory that a foreign key reference a primary key; it may reference any key
column which has a unique constraint.

61. How do I create a primary key on a SQL Server table?


 You can add a primary key to a table when you create the table or after you’ve created it, as long
as a primary key doesn’t already exist. If nullability has not been defined on the columns that
participate in the primary key, the database engine will automatically configure those columns
as NOT NULL.
 You should base your clustered index on columns that will remain relatively stable and be
incremented in an orderly fashion, rather than grow randomly and change frequently
 In addition, you want your clustered index to support the queries that will most commonly access
the table’s data so you can take full advantage of how data is stored and the index structured.
 Clearly, deleting a primary key is an easy enough process. Be aware, though, that doing so also
deletes the index associated with the primary key

62.Cascading option with the foreign key column


 The foreign key cascading options determine what actions the database engine should take if you
try to delete or update data in the referenced columns in the parent table. (Adding data is not a
problem). For each action (delete or update), you can set one of the following four options.

1. NO ACTION: The database engine raises an error if you try to modify data in the parent table
that is being referenced in the child table. This is the default behavior.

2. CASCADE: The database engine updates or deletes the corresponding rows in the child data if
you update or delete that data in the parent table.

3. SET NULL: The database engine sets the foreign key columns to NULL in the child table if you
update or delete the corresponding values in the parent table.

4. SET DEFAULT: The database engine sets the foreign key columns to their default values in the
child table if you update or delete the corresponding values in the parent table.
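A sketch of the CASCADE option (both tables are hypothetical):

```sql
CREATE TABLE dbo.Customers
(
    CustomerID INT PRIMARY KEY
);

CREATE TABLE dbo.Orders
(
    OrderID    INT PRIMARY KEY,
    CustomerID INT NOT NULL
        REFERENCES dbo.Customers (CustomerID)
        ON DELETE CASCADE    -- deleting a customer also deletes their orders
        ON UPDATE NO ACTION  -- updating a referenced key raises an error
);
```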

63. Should I create an index on a foreign key column?


 If your foreign key is often referenced in your queries, particularly when joining the child and
parent tables that form the foreign key relationship, chances are you’ll want to create an index on
the foreign key column or columns

ALTER TABLE OurStuff
NOCHECK CONSTRAINT fk_StuffType;

This disables checking of the foreign key constraint, so new data is not validated against it until
the constraint is re-enabled with CHECK CONSTRAINT.
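SQL Server does not index foreign key columns automatically, so creating the index is a separate step (the column name here is hypothetical):

```sql
CREATE NONCLUSTERED INDEX ix_OurStuff_StuffTypeID
    ON OurStuff (StuffTypeID);
```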

64 . What is partitioned table..?


 Table partitioning is a way to divide a large table into smaller, more manageable parts without
having to create separate tables for each part. Data in a partitioned table is physically stored in
groups of rows called partitions and each partition can be accessed and maintained separately.
Partitioning is not visible to end users, a partitioned table behaves like one logical table when
queried.
 An alternative to partitioned tables (for those who don’t have Enterprise Edition) is to create
separate tables for each group of rows, union the tables in a view and then query the view instead
of the tables. This is called a partitioned view.

65. What is a Partition Column?

 Data in a partitioned table is partitioned based on a single column, the partition column, often
called the partition key. Only one column can be used as the partition column, but it is possible to
use a computed column.
 When the partition column is used as a filter in queries, SQL Server can access only the relevant
partitions. This is called partition elimination and can greatly improve performance when querying
large tables.

66. What is a Partition Function?

 The partition function defines how to partition data based on the partition column. The partition
function does not explicitly define the partitions and which rows are placed in each partition.
Instead, the partition function specifies boundary values, the points between partitions. The total
number of partitions is always the total number of boundary values + 1.
 Partition functions are created as either range left or range right to specify whether the boundary
values belong to their left or right partitions:

 Range left means that the actual boundary value belongs to its left partition, it is the last value in
the left partition.
 Range right means that the actual boundary value belongs to its right partition, it is the first value
in the right partition.
 Example: The first boundary value is between 2012 and 2013. This can be created in two ways,
either by specifying a range left partition function with December 31st 2012 as the boundary
value, or as a range right partition function with January 1st 2013 as the boundary value.
 Partition functions are created as either range left or range right, it is not possible to combine
both in the same partition function. In a range left partition function, all boundary values
are upper boundaries, they are the last values in the partitions. If you partition by year, you use
December 31st. If you partition by month, you use January 31st, February 28th / 29th, March 31st,
April 30th and so on. In a range right partition function, all boundary values are lower boundaries,
they are the first values in the partitions. If you partition by year, you use January 1st. If you
partition by month, you use January 1st, February 1st, March 1st, April 1st and so on:
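A sketch of a range right partition function partitioning by year, as described above (the function name is hypothetical):

```sql
-- Range right: each boundary value is the FIRST value in its partition.
-- Creates 4 partitions: before 2013, 2013, 2014, and 2015 onwards.
CREATE PARTITION FUNCTION pfYearly (DATE)
AS RANGE RIGHT FOR VALUES ('2013-01-01', '2014-01-01', '2015-01-01');
```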

67. What is a Partition Scheme?

 The partition scheme maps the logical partitions to physical filegroups. It is possible to map each
partition to its own filegroup or all partitions to one filegroup.
 A filegroup contains one or more data files that can be spread on one or more disks. Filegroups
can be set to read-only, and filegroups can be backed up and restored individually. There are
many benefits of mapping each partition to its own filegroup. Less frequently accessed data can
be placed on slower disks and more frequently accessed data can be placed on faster disks.
Historical, unchanging data can be set to read-only and then be excluded from regular backups. If
data needs to be restored, it is possible to restore the partitions with the most critical data.