Stored Procedures
and Triggers
Objectives
• Learn about the features and benefits of stored procedures.
• Create useful stored procedures.
• Understand input and output parameters.
• Learn how to validate input and handle errors in stored procedures.
• Use the Transact-SQL Debugger to debug stored procedures.
• Use temp tables in your stored procedures.
• Understand the uses for triggers and learn how to write one.
• Use an INSTEAD OF trigger with a view.
Unlike views, which are also saved as database objects, stored procedures
support the full capabilities of Transact-SQL. A single stored procedure can
contain up to 250 megabytes of text, a limit you’re not going to run up against
any time soon. Stored procedures can accept input parameters and can return
more than one result set, and they can also return data in the form of output
parameters and a return value.
TIP: SQL Server uses stored procedures in most of its internal operations. System
stored procedures starting with the dt_ prefix are located in each user
database, and system stored procedures starting with the sp_ prefix are
located in the master database. You can read the Transact-SQL of these stored
procedures by opening them in the Enterprise Manager or by running the
sp_helptext system stored procedure in the Query Analyzer and supplying the
stored procedure name. You’ll even find that most of them contain
explanatory comments.
The only tasks that cannot be completed with a stored procedure are the
creation of triggers, defaults, rules, other stored procedures, and views. A
stored procedure can do everything from executing a basic SELECT statement
to enforcing complex business logic and explicit transactions.
1. The command is parsed for syntax. Any commands that are syntactically
incorrect are rejected.
2. The names of tables, columns, and other referenced objects are resolved.
3. The query optimizer compiles an execution plan.
4. The execution plan is executed.
If you send direct SQL statements one at a time from your application, each
statement goes through all four stages every time it is submitted. Contrary to
common belief, the execution plan is not saved with the stored procedure when
it is created. For a stored procedure, stages 1 and 2 happen only once, when
you save the stored procedure. Stage 3 occurs only if the execution plan is not
already in the memory cache. Subsequent executions need only stage 4
(execution).
Stored procedure execution plans are held in a memory cache and are shared
among multiple connections. However, certain conditions will cause SQL
Server to compile a new execution plan. If the stored procedure is not used for
a while, the plan is flushed from the cache, and the next time the stored
procedure is called, a fresh plan is created based on the latest statistics. The
ability of SQL Server to recompile stored procedure plans is a good thing: an
obsolete or inefficient plan is worse than no plan at all.
CREATE PROC[EDURE] procedure_name [;number]
[{@parameter data_type} [= default] [OUTPUT]] [,...n]
[WITH
{RECOMPILE | ENCRYPTION | RECOMPILE , ENCRYPTION}]
[FOR REPLICATION]
AS sql_statement [...n]
Stored procedure names must conform to the rules for identifiers, and must be
unique within the database for each owner. You can optionally add a number
after a semicolon—this allows you to create a group of procedures with the
same name and different numbers. You would need to use name;number to
call those procedures, but you could drop the whole group by using just
the name without a number.
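For example, a numbered group might be created and dropped like this (the procedure name and bodies are illustrative):

```sql
-- Two procedures sharing one name, distinguished by number
CREATE PROC procReport;1
AS SELECT 'Summary report'
GO
CREATE PROC procReport;2
AS SELECT 'Detail report'
GO
-- Execute a specific member of the group
EXEC procReport;2
-- Drop the entire group at once by name
DROP PROC procReport
GO
```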
The @parameter argument specifies the parameters for the stored procedure
and must be declared with a data type. You’re allowed up to 2,100 parameters–
more than you’ll ever need. You’ll learn about input and output parameters
later in this chapter.
If you set the RECOMPILE option, SQL Server will not cache a plan. The
ENCRYPTION option will encrypt the definition of the stored procedure.
See StoredProcedures.SQL
One of the most important reasons to use stored procedures is their support for
both input and output parameters. This section covers creating stored
procedures and handling parameters.
The easiest way to get a stored procedure to return a value is to use a SELECT
statement. This stored procedure returns the results from the SELECT
statement, but it is very different from a view containing the same statement.
You cannot SELECT just certain rows or columns from the result set of a
stored procedure the way you can with a view. Also, you cannot INSERT,
UPDATE, or DELETE rows from the result set of a stored procedure. Those
actions must all take place within the stored procedure.
procEmployeeList
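The listing for procEmployeeList does not survive here; a minimal sketch, assuming the tblEmployee columns used in the later examples, would be:

```sql
CREATE PROC procEmployeeList
AS
SELECT EmployeeID, LastName, FirstName, Address,
  City, State, ZipCode, HomePhone
FROM tblEmployee
GO
```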
However, it’s probably a good idea to get into the habit of explicitly executing
your stored procedures. Either of the following two statements will execute the
stored procedure:
EXECUTE procEmployeeList
EXEC procEmployeeList
You can pass parameters to a stored procedure either by name or by position:
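The two calling styles were presumably of this form (the city value is illustrative):

```sql
-- By name
EXEC procEmployeeListByCity @City = 'Seattle'
-- By position
EXEC procEmployeeListByCity 'Seattle'
```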
If you have more than one parameter, separate them with commas. If you
execute the stored procedure by position, the parameters must be supplied in
the order in which they are declared. If you execute the stored procedure by
name, the order is unimportant. Passing parameters by name makes your code
easier to read and maintain. On the other hand, passing values by position is
slightly faster. In the following case where you forget to supply the parameter
entirely, you’ll get a run-time error:
EXEC procEmployeeListByCity
Code calling the stored procedure won’t necessarily bomb if the parameter is
not supplied, if you have specified a default value. The following example uses
the ALTER PROC syntax to revise the original stored procedure. In this case,
NULL is supplied as the default value for the @City parameter. The stored
procedure code then tests the parameter value, branching to execute a different
query based on whether the input parameter is NULL or has actually been
supplied:
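The revised procedure was presumably along these lines (a sketch; the TOP 3 clause is inferred from the three-row result shown in Figure 1, and the column list follows the earlier examples):

```sql
ALTER PROC procEmployeeListByCity
@City varchar(25) = NULL
AS
IF @City IS NULL
  -- No parameter supplied: return a sample of three rows
  SELECT TOP 3 EmployeeID, LastName, FirstName, City
  FROM tblEmployee
ELSE
  -- Parameter supplied: filter on the city
  SELECT EmployeeID, LastName, FirstName, City
  FROM tblEmployee
  WHERE City = @City
GO
```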
When you execute the stored procedure without supplying a parameter, only
three rows are returned, since the TOP clause has limited the result set, as
shown in Figure 1.
EXEC procEmployeeListByCity
Figure 1. The result set from executing the parameterized stored procedure
without passing a parameter truncates the output at three rows.
When you execute the stored procedure with a parameter, then the WHERE
clause is applied and the matching rows are returned, as shown in Figure 2.
Figure 2. The result set for employees who match the @City input parameter.
Sometimes it is convenient to be able to test for Null values without having to use the
special IS NULL syntax. You can do this in a stored procedure if you SET
ANSI_NULLS OFF before creating the procedure. Changing that setting allows Nulls
to be checked using the equal sign, just like other values, and the setting will be saved
with the stored procedure, even if it isn’t in effect for the rest of your database. The
following procedure would allow you to find all employees where City is Null by
passing a Null value to the @City parameter:
SET ANSI_NULLS OFF
GO
CREATE PROC procEmployeeListByCityNullsOK
@City varchar(25)
AS
SELECT EmployeeID, LastName, FirstName, Address,
City, State, ZipCode, HomePhone, url_EmailAddress
FROM tblEmployee
WHERE City = @City
GO
SET ANSI_NULLS ON
GO
Output Parameters
Output parameters are used when you want to return a value, such as the new
identity column value for an inserted record. Declare them normally, with the
OUTPUT keyword at the end of the declaration statement:
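A declaration might look like this (the name and type are assumptions based on the example that follows):

```sql
@EmployeeID int = 0 OUTPUT
```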
The following procedure inserts a new record in the tblEmployee table. It uses
input parameters to supply the new column values, and it retrieves the new
identity column value into an output parameter, using the @@IDENTITY
system function. A default value isn’t necessary for the output parameter, but
there is no harm in providing one. Output parameters in SQL Server are
actually input/output parameters that can also be used to pass values into the
procedure:
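The insert procedure itself is not reproduced here; a sketch consistent with the parameter list used later in the chapter would be:

```sql
CREATE PROC procEmployeeInsert
@LastName varchar(50),
@FirstName varchar(50),
@Address varchar(50) = NULL,
@City varchar(25) = NULL,
@State varchar(2) = NULL,
@ZipCode varchar(10) = NULL,
@HomePhone varchar(10) = NULL,
@EmployeeID int = 0 OUTPUT
AS
INSERT INTO tblEmployee(
  LastName, FirstName, Address,
  City, State, ZipCode, HomePhone)
VALUES(
  @LastName, @FirstName, @Address,
  @City, @State, @ZipCode, @HomePhone)
-- Capture the new identity value into the output parameter
SELECT @EmployeeID = @@IDENTITY
GO
```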
To get the new identity column value using @@IDENTITY when executing
the stored procedure from within a Transact-SQL statement, you need to
declare a variable to hold the new identity column value. In addition, you need
to add the word OUTPUT when you pass your variable to the output parameter
of the stored procedure:
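A sketch of that calling pattern (the name values are illustrative):

```sql
DECLARE @NewID int
EXEC procEmployeeInsert
  @LastName = 'Shark',
  @FirstName = 'Sherman',
  @EmployeeID = @NewID OUTPUT
SELECT @NewID AS NewEmployeeID
```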
An alternate way is to return the new identity column in your stored procedure
as a result set, and not as an output parameter. The advantage to simply
returning it as a result set is that you don’t have to declare and initialize an
output parameter, as shown in the following statement. However, output
parameters are slightly more efficient than creating a result set.
You can’t use the SELECT statement to both assign variable values and select
data from a table at the same time. The following code snippet will cause an
error:
You need to separate the operations and have two SELECT statements, one to
assign the value, and the other to create the result set:
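A sketch of the corrected pattern, as it would appear inside the procedure after the INSERT:

```sql
-- First SELECT: assign the variable
SELECT @EmployeeID = @@IDENTITY
-- Second SELECT: return it as a result set
SELECT @EmployeeID AS NewEmployeeID
```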
Most syntax errors result from spelling errors or from forgetting simple things such as
using CREATE PROC when you should have used ALTER PROC. The Query
Analyzer makes it easy for you to locate your bad syntax when you do hit a run-time
error. For example, let’s say you use incorrect syntax and the following message
appears in the Messages pane in the Query Analyzer:
Simply double-click on the message and you’ll jump to the offending line of code.
This comes in handy when you have written a lot of code and aren’t sure where the
problem line is located.
SET NOCOUNT ON
Use SET NOCOUNT ON as the first line after AS within your stored
procedure. This eliminates the printed message of (xx rows(s) affected) in the
Query Analyzer window. It also eliminates a message of DONE_IN_PROC
that is communicated from SQL Server to the client application, which causes
another round trip across the network. Setting this option will not have any
impact on the value of @@ROWCOUNT.
The following example uses SET NOCOUNT ON and also returns the number
of rows affected by the query as a second result set:
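A sketch of such a procedure, assuming the same employee columns as the earlier examples:

```sql
CREATE PROC procEmployeeListNoCount
AS
SET NOCOUNT ON
SELECT EmployeeID, LastName, FirstName
FROM tblEmployee
-- @@ROWCOUNT is unaffected by SET NOCOUNT ON,
-- so it can be returned as a second result set
SELECT @@ROWCOUNT AS RowsReturned
GO
```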
Execute the procedure, and note the second result set shown in Figure 3.
EXEC procEmployeeListNoCount
The rules for temp tables are different when the temp table is created in a
stored procedure. A temp table created within a stored procedure is only
available for the duration of the execution of that procedure, and is only visible
to the stored procedure in which it was created. If a stored procedure creates a
temp table, then calls another stored procedure, the child procedure is able to
reference this same table. However, if the child procedure creates a temp table,
the calling procedure will not be able to work with this table.
The following example uses a temp table to create a crosstab query showing a
summary of products sold over the last three years, broken down by product
and by year. The product sales numbers are selected from a view and added to
the temp table for one year at a time for each product. At the end of the
procedure, the temp table’s contents are returned by a SELECT statement:
-- Declare variables
DECLARE @Year int
DECLARE @Counter int
-- Initialize counter
SET @Counter = 1
In addition to being able to create local temp tables, you can also create global temp
tables. A global temp table has global visibility, and can be seen by all connections.
Global objects are named with a double pound sign prefix, as shown in this example
where a global temp table named ##GlobalTemp is created:
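A sketch of such a statement (the column definitions are assumptions for illustration):

```sql
CREATE TABLE ##GlobalTemp
  (TestID int, TestValue varchar(50))
```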
A global temp table is automatically dropped only when the connection that
created it disconnects and all other connections stop referencing it. Once the
creating connection disconnects, connections opened after that point cannot
use the global temp table, but connections that were already using it when its
creator disconnected can continue to do so.
You’ll also want to declare a couple of variables to hold error codes and the
number of rows returned:
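Those declarations, matching the variable names used in the code that follows, would be:

```sql
DECLARE @Err int
DECLARE @Rows int
```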
The next issue is validating the input parameters. What if you don’t want just
an EmployeeID with a bunch of nulls for LastName, FirstName, and all the
other fields in the table? That would be considered a garbage row by any
standards. In addition to creating NOT NULL constraints when you define
columns that should always contain data, you can also test for nulls in the
stored procedure before attempting an insert.
The first step is to validate that at least a FirstName and a LastName are
supplied. During validation, you can build up a message string and assign the
message code to return information if the parameters don’t contain acceptable
values.
At the end of the tests, check the return code. If there’s anything wrong, the
RETURN statement will unconditionally exit the stored procedure, passing
back as output parameters the return code of 0 and a return message indicating
the missing values:
IF @LastName IS NULL
SELECT @RetCode = 0,
@RetMsg = @RetMsg + 'Last Name Required. '
IF @FirstName IS NULL
SELECT @RetCode = 0,
@RetMsg = @RetMsg + 'First Name Required. '
IF @RetCode = 0
RETURN
The next part of the stored procedure does the actual insert:
INSERT INTO tblEmployee(
LastName, FirstName, Address,
City, State, ZipCode,
HomePhone)
VALUES(
@LastName, @FirstName, @Address,
@City, @State, @ZipCode,
@HomePhone)
There are three values you want to capture right after the INSERT statement:
@@ERROR, @@ROWCOUNT, and @@IDENTITY. Because @@ERROR
and @@ROWCOUNT are very fragile, they are best captured immediately
into local variables:
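The capture is a single SELECT placed immediately after the INSERT, so that no intervening statement resets the values:

```sql
SELECT @Err = @@ERROR, @Rows = @@ROWCOUNT
```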
The local @Err variable is then tested, and if an error has occurred, processing
jumps using the GOTO statement to the error handler.
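That test is presumably a one-liner:

```sql
IF @Err <> 0 GOTO HandleErr
```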
IF @Rows > 0
SELECT @EmployeeID = @@IDENTITY,
@RetCode = 1,
@RetMsg = 'New Employee Added'
ELSE
SELECT @EmployeeID = 0,
@RetCode = 0,
@RetMsg = 'New Employee Not Added'
RETURN
The error handler is at the end of the procedure and formulates the return code
and return message to pass back to the client:
HandleErr:
SELECT @EmployeeID = 0,
@RetCode = 0,
@RetMsg = 'Runtime Error: ' + CONVERT(VarChar, @Err)
RETURN
Figure 4 shows the result set displaying the return information from the stored
procedure.
Figure 4. The result set returning information from the stored procedure.
You can also use RAISERROR to create your own custom error messages. The
following stored procedure uses RETURN and RAISERROR, rather than
output parameters, to return error information to the client:
CREATE PROC procEmployeeInsertValidate2
@LastName varchar(50) = NULL,
@FirstName varchar(50) = NULL,
@Address varchar(50) = NULL,
@City varchar(25) = NULL,
@State varchar(2) = NULL,
@ZipCode varchar(10) = NULL,
@HomePhone varchar(10) = NULL,
@EmployeeID int = NULL OUTPUT
AS
DECLARE @Err int
DECLARE @Rows int
DECLARE @ErrMsg varchar (100)
INSERT INTO tblEmployee(
LastName, FirstName, Address,
City, State, ZipCode,
HomePhone)
VALUES(
@LastName, @FirstName, @Address,
@City, @State, @ZipCode,
@HomePhone)
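The rest of the procedure body is not shown; presumably it captured the error, raised it, and returned a status code, roughly like this sketch (variable names taken from the declarations above):

```sql
-- Capture status values immediately after the INSERT
SELECT @Err = @@ERROR, @Rows = @@ROWCOUNT
IF @Err <> 0
BEGIN
  SELECT @ErrMsg = 'Runtime Error: ' + CONVERT(varchar, @Err)
  RAISERROR (@ErrMsg, 16, 1)
  RETURN 0
END
-- Success: pass back the new identity value and a success code
SELECT @EmployeeID = @@IDENTITY
RETURN 1
```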
To retrieve the return value, declare a variable and use EXEC to assign the
return value of the procedure to that variable:
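A sketch of that call, using the variable names that appear in the SELECT statement below (the name values are illustrative):

```sql
DECLARE @RetC int, @EmpID int
EXEC @RetC = procEmployeeInsertValidate2
  @LastName = 'Shark',
  @FirstName = 'Sherman',
  @EmployeeID = @EmpID OUTPUT
```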
You can then use a SELECT statement to display the return value, along with
the new EmployeeID (if one was successfully created):
SELECT @RetC AS SuccessCode, @EmpID as NewEmployeeID
Rather than forcing you to use string concatenation to build your error
message, RAISERROR allows you to embed tokens in the message and to
supply values that will automatically be substituted for those tokens at runtime.
This follows the same pattern as the printf function in C or C++. In this
example, the first and last names of the employee are automatically included in
the error message.
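A sketch of that technique (the message text is illustrative; %s is the string substitution token):

```sql
RAISERROR ('Insert failed for employee %s %s.', 16, 1,
  @FirstName, @LastName)
```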
To test the error message, try opening tblEmployee in design view in the
Enterprise Manager, and remove the default value for the required column
url_EmailAddress. Then run the following code, which doesn't supply a value
for this required column:
The error message that is returned automatically includes the values passed in
for FirstName and LastName:
Debugging
Once launched, you’ll be prompted to supply any parameters, as shown in
Figure 5. The Auto roll back option rolls back any changes you make during
the debugging session, allowing you to run your procedure without actually
changing any data. The debugger will then open in its own window.
Try It Out!
Follow these steps to test the debugger on the stored procedure created in the
previous example:
2. Supply only the LastName parameter (as shown in Figure 5) and click
Execute. This launches the debugger in a new window, with the stored
procedure loaded and running.
3. Inspect the Locals pane in the middle of the window. You should see
all of your parameters displaying NULL, except for the LastName, as
shown in Figure 6.
5. Drag the mouse slowly over the toolbar at the top of the debugging
window. There are no menu items; you can choose between the
toolbar buttons and the right-click menu items shown in Figure 7.
6. Choose the Step Into toolbar button or press the F11 key to step
through the stored procedure. You’ll be able to see the values of your
variables and input parameters, as shown in Figure 8.
7. You can restart with the same input parameters by clicking the Go
(F5) button, but there doesn’t seem to be any way to supply fresh input
parameter or variable values.
8. When you’re done stepping through code, simply close the window
and the debugger will go away.
TIP: Although the Transact-SQL Debugger is only available for stored procedures,
you can get it to debug any code called by a stored procedure. Simply create a
wrapper stored procedure to call your trigger or function code, and start
stepping through it.
Building Triggers
Triggers are always associated with tables or views, and can’t be found as
independent objects in the Enterprise Manager. They can be found, however,
in the Object Browser, in a Triggers folder that appears for every table or view.
What Is a Trigger?
Triggers are procedures that run automatically in response to changes to your
data. The primary purpose of a trigger is to make a decision as to whether
these data changes should be committed to the database, but they can perform
any type of data manipulation action. There are three standard types of
triggers: INSERT, UPDATE, and DELETE. In SQL Server 2000, a new type
was added—INSTEAD OF. While earlier versions of SQL Server only
supported a single trigger of each type per table, SQL Server 7.0 supports
multiple triggers on the same table, and in SQL Server 2000 you can even
control which triggers fire first and last.
In SQL Server 4.x, triggers were the only method available to enforce primary
key/foreign key relationships, also known as referential integrity. When SQL
Server 6.x was introduced, so was the ability to create foreign key constraints
that enforced referential integrity without triggers, called declarative referential
integrity (DRI). The role of the trigger moved primarily from enforcing
relationships to enforcing business rules that were too complex to be enforced
in a CHECK constraint. Some developers continued to use triggers for
referential integrity, because they were the only means for implementing
cascading updates and deletes in older versions of SQL Server. Now that SQL
Server 2000 can enforce the primary/foreign key relationships as well as
cascading updates and deletes, there is no need to use triggers to enforce
referential integrity.
Many developers try to avoid using triggers at all, because their hidden actions
can make maintenance and debugging very difficult, especially if the triggers
make changes to tables other than the one being explicitly updated. By forcing
all data changes to be made using stored procedures, you can avoid the need to
use triggers at all. If, however, you allow users and client applications to
directly update, insert, or delete data using ad hoc queries, then triggers are one
way to maintain control over your data.
Like stored procedures, triggers can contain complex logic, variables, error
handling, and almost the full range of Transact-SQL programming. The only
limitation on your Transact-SQL code in triggers is that you cannot create
objects or modify their design, and you cannot perform administrative tasks
like backups.
NOTE SQL Server 2000 introduces a new type of trigger, the INSTEAD
OF trigger. INSTEAD OF triggers are executed instead of the
action that raised the trigger. INSTEAD OF triggers can also be
created on views. AFTER is synonymous with FOR, which is used
in earlier versions of SQL Server. An AFTER (FOR) trigger can
only be created on a table, not a view. AFTER triggers fire after
the INSERT, UPDATE, or DELETE triggering action, and after
any constraints and referential actions have been processed.
Triggers can also reference other objects, and you can join other tables to the
inserted and deleted tables. For example, to create an audit trail, you could
create a trigger that inserts the values contained in the deleted table into a
backup table and perhaps adds the name of the current user and the current
date and time. Since the trigger fires each time a row is deleted, no further
action is necessary.
You can create triggers for both tables and views. Triggers created for tables
will fire whenever the data in the table is modified. Triggers on views,
however, only fire if the data modification is done through that view, and the
only kinds of triggers you can create for views are INSTEAD OF triggers.
Trigger Syntax
The syntax for triggers is very much the same as for a stored procedure. The
primary difference between a stored procedure and a trigger is how the object
is executed. Stored procedures are explicitly called, while triggers are fired
automatically in response to a data modification.
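A simplified sketch of the CREATE TRIGGER syntax:

```sql
CREATE TRIGGER trigger_name
ON { table | view }
{ FOR | AFTER | INSTEAD OF }
{ [INSERT] [,] [UPDATE] [,] [DELETE] }
AS
sql_statement [ ...n ]
```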
Notice that there are no parameters that can be passed to a trigger. Triggers get
their source data from the inserted and deleted tables. The deleted table stores
copies of all rows that are to be impacted during a DELETE or UPDATE
operation. The inserted table holds new data for an INSERT or UPDATE.
Try It Out!
Follow these steps to create a trigger that disallows deleting a row of data from
a table:
1. Create a table named tblTest and insert two values into it, Stop and
Go.
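Step 1 might be carried out like this (the column name TestValue is an assumption; only the two values are given in the text):

```sql
CREATE TABLE tblTest
  (TestValue varchar(10))
GO
INSERT INTO tblTest (TestValue) VALUES ('Stop')
INSERT INTO tblTest (TestValue) VALUES ('Go')
GO
```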
2. The following trigger will show how you can rollback a transaction,
raise an error, and send a Net Send message over the network when
the row containing the value Stop is deleted. The ROLLBACK
statement cancels the delete, and the record remains unchanged.
-- Trigger header and variable declaration reconstructed;
-- the trigger and column names are assumptions
CREATE TRIGGER trigTestDelete ON tblTest
FOR DELETE
AS
DECLARE @Test varchar(10)
SELECT @Test = TestValue FROM deleted
IF @Test = 'Stop'
BEGIN
ROLLBACK TRAN
RAISERROR ('This record cannot be deleted.',16,1)
END
GO
The biggest risk involved with using triggers is that users and developers may be firing
data operations without even being aware of them.
Triggers are a relic of older versions of SQL Server. The need for triggers to enforce
referential integrity or cascading deletes has largely passed. Stored procedures now
contain all of the features necessary to do everything a trigger can do. Implementing
all rules in stored procedures and disallowing direct table access provides improved
data security and better performance.
Because triggers consist of additional Transact-SQL code, they add overhead to data
operations and negatively impact performance. In addition, trigger code is buried deep
within a table definition. Other than designating a first and last AFTER trigger, you
have no control over the firing of triggers. Complex business rules implemented in
stored procedures rather than triggers will improve performance and simplify code
maintenance.
Since triggers execute code that extends a transaction, they can lengthen the
duration of that transaction, especially when they access other tables. This
performance penalty can lower multi-user concurrency to unacceptable levels
by prolonging the time that locks are held.
For example, a view containing a join between the products and the categories
tables would normally only allow you to update either one or the other in a
single UPDATE statement, not both:
UPDATE vwProductByCategoryItrig
SET Product = 'Shark Thingys', Category = 'Thingys'
WHERE ProductID = 1
UPDATE tblCategory
SET tblCategory.Category =
(SELECT inserted.Category
FROM inserted)
WHERE tblCategory.Category =
(SELECT deleted.Category FROM deleted)
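Pieced together, the complete trigger was presumably something like this sketch (the trigger name and the product half are assumptions that mirror the category excerpt above; base table names are inferred from the view name):

```sql
CREATE TRIGGER trigProductCategoryUpdate
ON vwProductByCategoryItrig
INSTEAD OF UPDATE
AS
-- Write the product change back to its base table
UPDATE tblProduct
SET tblProduct.Product =
  (SELECT inserted.Product FROM inserted)
WHERE tblProduct.ProductID =
  (SELECT inserted.ProductID FROM inserted)
-- Write the category change back to its base table
UPDATE tblCategory
SET tblCategory.Category =
  (SELECT inserted.Category FROM inserted)
WHERE tblCategory.Category =
  (SELECT deleted.Category FROM deleted)
```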
Although a single UPDATE statement that tried to update both the Product and
the Category tables through a view would otherwise fail, the INSTEAD OF
trigger fires instead of the normal UPDATE statement and explicitly writes
changes back to both tables. The user or client application doesn’t even have to
know what the underlying tables are or how they are related. The update
statement will now succeed.
UPDATE vwProductByCategoryItrig
SET Product = 'Shark Thingys', Category = 'Thingys'
WHERE ProductID = 1
As you can see, INSTEAD OF triggers can make views very powerful indeed,
allowing actions that would not normally be permitted. You could also use
INSTEAD OF triggers to call stored procedures to perform the requested data
modification. This useful feature in SQL Server 2000 may tempt developers
who have been dead set against using triggers in the past to take a second look.
Summary
• Stored procedures deliver the best performance because a query plan is
cached on first execution.
• Stored procedures minimize network resources and can be used as a
security mechanism to prevent direct access to tables.
• Stored procedures support parameters, variables, control-of-flow, error
handling, and other Transact-SQL programming language features.
• SET NOCOUNT ON eliminates an unnecessary network round trip
and the done-in-proc message.
• Temporary tables provide a way to break up complex processing.
• The Transact-SQL Debugger is a useful new tool for debugging and
testing stored procedures.
• Triggers can be fired automatically on INSERT, UPDATE, and
DELETE statements.
• Triggers extend transactions and can negatively impact performance
and maintainability.
• An INSTEAD OF trigger can allow views that are not normally
updatable to be updated.
Questions
1. When does a stored procedure get compiled?
2. How can you prevent users from directly modifying data in tables?
3. How can you eliminate an extra network round-trip and the done-in-proc
message?
Answers
1. When does a stored procedure get compiled?
On first execution
2. How can you prevent users from directly modifying data in tables?
Remove all permissions from tables, and use stored procedures,
granting EXECUTE permissions on the stored procedures.
3. How can you eliminate an extra network round-trip and the done-in-proc
message?
Use SET NOCOUNT ON as the first statement in your stored
procedure.
Lab 11:
Stored Procedures
and Triggers
TIP: Because this lab includes a great deal of typed code, we’ve tried to make it
simpler for you. You’ll find all the code in StoredProceduresLab.SQL, in
the same directory as the sample project. To avoid typing the code, you can
cut/paste it from the text file instead, or open the file as a script in the Query
Analyzer.
Lab 11 Overview
In this lab you’ll learn how to create a stored procedure and how to debug it
using the Transact-SQL Debugger.
Objective
In this exercise, you’ll create a stored procedure named procCategoryAdd
that creates a new record in the tblCategory table. The procedure should not
create a new row if a category with this name already exists. The stored
procedure should accept input parameters for the values needed in the
tblCategory table, and return a success/failure code and a return message as
well as the new identity column value. It should also eliminate the extra round
trip the done-in-proc message causes.
Things to Consider
• How do you prevent duplicate values from being entered in the
Category field?
• How do you prevent errors?
• How do you handle errors?
• How do you return the new identity column value?
Step-by-Step Instructions
1. Start the SQL Query Analyzer, select the Shark database, and type the
procedure name followed by the input parameters:
3. Test to see if the input parameter is NULL; if it is, exit the procedure by
issuing the RETURN statement after assigning the appropriate return value
and message to the output parameters:
IF @Category IS NULL
BEGIN
SELECT @RetVal = 0,
@RetMsg = 'Category not optional'
RETURN
END
6. Now test to see if there was an error, and check to see if a row got inserted.
Send back the appropriate return code, message, and @@IDENTITY
value:
7. Check the syntax, and if everything’s okay, press F5 to create the stored
procedure.
Objective
In this exercise, you’ll work with the Transact-SQL Debugger to test the stored
procedure you wrote in the first exercise. You’ll add the global @@ERROR
function to see if any run-time errors occur. You’ll test the stored procedure
with a missing input parameter to see that the code is working properly.
Things to Consider
• How do you specify the stored procedure you want to debug?
• How do you launch the Transact-SQL Debugger?
• How do you step through the code?
Step-by-Step Instructions
1. Press F8 to load the Object Browser. Expand the Stored Procedure node
in the database.
3. For the first run, do not fill in any parameter values, as shown in Figure 9.
Click Execute to launch the debugger.
4. Type in @@ERROR in the Globals window. Click the Step Into (F11)
button to step to the next statement in the code.
Figure 10. You should hit the RETURN statement because the @Category input
parameter was not supplied.
6. Close the debugger window. Execute steps 1 and 2 again, this time typing
the parameter value Shark Wear for @Category, as shown in Figure 11.
This category already exists in the Shark database; you want to test to
ensure that a duplicate cannot be entered. Click Execute.
Figure 12. The variable values for the input and output parameters are displayed
after the debugger has finished.
8. Close the debugger and start again, this time with a unique value for
@Category. Step through the code. You’ll find that the record didn’t
actually get inserted into the tblCategory table because the Auto rollback
option was selected when the debugger started.
9. To test the code from the Query Analyzer and see the return values, type
the following statements:
Figure 13. The result set from executing the stored procedure.