
MSBI - SSIS

ETL Training
Microsoft SQL Server Integration Services 2008
Agenda

1. Introduction
2. SSIS Architecture
3. Control flow
4. Data flow
5. Labs
6. Emerging scenarios (Fuzzy, SCD, RSS)
7. More Hands-on Labs
8. SQL 2000 – SQL 2005 Migration
9. Resources
What You Learn

1. Use BIDS (Business Intelligence Development Studio)

2. Orchestrate control flow and data flow

3. Work with variables and configurations

4. Debug your flows and handle errors

5. Set up sophisticated logging

6. Extend SSIS through custom code

7. Deploy and manage SSIS packages


What is Expected

1. Understand the basics of ETL and the use of Microsoft SSIS

2. Understand data flow, control flow, logging, configurations, etc.

3. Build, deploy and manage SSIS packages


SQL Server 2008 BI Stack

[Diagram: the SQL Server 2008 BI stack]
• SQL Server Relational Engine
• Integration Services (ETL)
• Analysis Services (OLAP & Data Mining)
• Reporting Services
• Development Tools and Management Tools
BI Stack Usage (ETL – Cube – Reporting)

[Diagram: data from databases, flat files and Excel flows through ETL (SSIS) into a warehouse; SSAS builds a data cube on the warehouse; SSRS delivers standard and ad hoc reports from the cube, and Excel consumes it directly.]
Introduction to SSIS and Product History

1. Completely new product and architecture
2. Introduced with SQL Server 2005
3. Develop, debug and deploy environment in Microsoft Visual Studio
4. Redefined performance and scalability
5. Rich transforms and BI features out of the box reduce coding
6. Support for complex operations, reliability and configurations
7. Highly extensible platform
8. Integration Services replaces Data Transformation Services (DTS)
SSIS Package Architecture Overview

[Diagram: an SSIS package combines data source adapters, standard and custom transforms, and data destination adapters, orchestrated by tasks, loops & sequences, and event handlers. Packages are authored through wizards, the designer, or the command line, and deployed to the file system or to SQL Server.]
SSIS in Microsoft SQL Server 2008

• The Integration Services Run-time engine

• The Integration Services Data flow engine

[Diagram: a control flow (FTP task, loop, Execute SQL task, Send Mail task) invokes a Data Flow task; the data flow reads from Flat File and ADO.NET (Oracle) sources, then merges, de-duplicates and splits the rows before writing to SQL Server and flat file destinations.]
SSIS Package Architecture Overview
Microsoft SQL Server Integration Services (SSIS) consists of four key parts:

1. Integration Services service
2. Integration Services object model
3. Integration Services run-time engine and run-time executables
4. Data Flow task
SQL Server Import and Export Wizard

The SQL Server Import and Export Wizard provides the simplest method of copying data between data sources. This wizard creates a basic SSIS package with one or more data flow tasks.

To open the SQL Server Import and Export Wizard, click Start → All Programs → Microsoft SQL Server → Import and Export Data.

Configure the source, destination and package details, then finish the wizard to import or export the data.
A Guided tour of BI Development Studio

– Launching BIDS
– Project templates – creating and opening SSIS projects
A Guided tour of BI Development Studio (Cont..)

[Screenshot: the BIDS package designer, showing the Control Flow and Data Flow design surfaces, the Toolbox, Package Explorer, Solution Explorer, and the Properties window.]
SSIS Packages, Control Flow & Task

• Package
 - Unit of execution
 - One or more tasks

[Designer tabs: Control Flow, Data Flow, and Connection Managers.]
SSIS Packages, Control Flow & Task
(Cont..)

Control Flow Overview

A package consists of a control flow and, optionally, one or more data flows. SSIS provides three different types of control flow elements:

1. Precedence Constraints
2. Tasks
3. Containers
Control Flow – Loop Containers

• For Loop Container

• ForEach Loop Container
 - Several stock enumerators, e.g., all files in a directory
 - Custom enumerators

• Sequence Container
Control Flow – Sequence Containers

The Sequence container defines a control flow that is a subset of the package control flow.

Sequence containers group the package into multiple separate control flows, each containing one or more tasks and containers that run within the overall package control flow. As each Sequence container is treated as a single unit of operation, it enables the execution and tracking of parallel operations.
Lab: Creating a Simple SSIS Package

Hands on exercise to be completed on your machine

Exercise 1: Move data from one table to another using a simple SSIS package

Exercise 2: Using sample employee data from a table, calculate total salary and load the data into multiple tables, department-wise, using an SSIS package
Data Sources and Data Source Views

Data Sources - A data source in Integration Services (SSIS) represents a connection to a data source and contains the connection string that defines how Integration Services will connect to a physical data store.

Data Source Views - A data source view is a document that describes the schema of an underlying data source. The data source view can be used to understand the data source schema.
Connection Manager

Provider - Select the provider to use for the connection.

Property pane - Sets the provider-specific connection string properties necessary to create a connection.

Test connection - Uses the current settings in the property pane to try to establish a connection and displays whether the attempt succeeded.
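For illustration, a typical OLE DB connection string that a connection manager might store; the server and database names are hypothetical, and SQLNCLI10.1 is the SQL Server 2008 Native Client provider:

  Provider=SQLNCLI10.1;Data Source=MYSERVER;Initial Catalog=AdventureWorks;Integrated Security=SSPI;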
Data Flow
Introduction to Data Flow

Data Flow Overview

Integration Services (SSIS) provides three different types of data flow components: sources, transformations, and destinations. Sources extract data from data stores such as tables and views in relational databases, files, and Analysis Services databases. Transformations modify, summarize, and clean data. Destinations load data into data stores or create in-memory datasets.
Data Flow: Features

1. Data Extraction
2. Data Routing
3. Data Transformations
4. Data Cleansing
5. Data Warehouse Support
6. Business Intelligence Integration
Introduction to Data Flow - Sources

Sources are the data flow components that make data from different types of data sources available to a data flow. Sources have one regular output, and many sources also have one error output.

• DataReader Source - Consumes data from a .NET Framework data provider.
• Excel Source - Extracts data from an Excel file.
• Flat File Source - Extracts data from a flat file.
• OLE DB Source - Consumes data from an OLE DB provider.
• Raw File Source - Extracts raw data from a file.
• XML Source - Extracts data from an XML file.
Introduction to Data Flow – Sources (Cont..)

[Screenshots: OLE DB Source Editor, Flat File Source Editor, and XML Source Editor.]
Introduction to Data Flow – Destination

Destinations are the data flow components that load the data in a data
flow into different types of data sources or create an in-memory dataset.
Destinations have one input and one error output.
The following is the list of destinations that SQL Server
Integration Services (SSIS) provides.
Data Mining Model Training Destination
DataReader Destination
Dimension Processing Destination
Excel Destination
Flat File Destination
OLE DB Destination
Partition Processing Destination
Raw File Destination
Recordset Destination
Script Component
SQL Server Compact Edition Destination
SQL Server Destination
Introduction to Data Flow – Transformations
SQL Server 2008 Integration Services (SSIS) transformations are the components in the
data flow of a package that aggregate, merge, distribute, and modify data.

Row Transformations: Character Map, Copy Column, Data Conversion, Derived Column, Script Component, OLE DB Command

Rowset Transformations: Aggregate, Sort, Percentage Sampling, Row Sampling, Pivot, Unpivot

Split and Join Transformations: Conditional Split, Multicast, Union All, Merge, Merge Join, Lookup

Business Intelligence Transformations: Fuzzy Grouping, Fuzzy Lookup, Term Extraction, Term Lookup, Data Mining Query, Slowly Changing Dimension

Other Transformations: Export Column, Import Column, Audit, Row Count
Transformations - Copy Column Transformation

The Copy Column transformation creates new columns by copying input columns and adding the new columns to the transformation output. Later in the data flow, different transformations can be applied to the column copies.
Transformations - Data Conversion Transformation
The Data Conversion transformation converts the data in an input column to a different data type and then copies it to a new output column.
Transformations - Conditional Split Transformation
The Conditional Split transformation can route data rows to different outputs depending on the
content of the data.
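As a sketch, each Conditional Split output is defined by an SSIS expression evaluated per row; the column and output names here are hypothetical:

  Case 1, output "Sales": [Department] == "Sales"
  Case 2, output "HR": [Department] == "HR"
  Default output "Other": all remaining rows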
Transformations - Sort Transformation

The Sort transformation sorts input data in ascending or descending order and copies the sorted data to the transformation output. You can apply multiple sorts to an input.
Transformations – Data Viewers

A data viewer displays data that is moving between two data flow components. The data flow components can be sources, transformations and destinations.

Adding/Deleting/Configuring data viewers:

Add - Click to add a data viewer by using the Configure Data Viewer dialog box.
Delete - Click to delete the selected data viewer.
Configure - Click to configure a selected data viewer by using the Configure Data Viewer dialog box.
Character Encoding

To read data from flat files, SSIS needs to handle character encoding. Generally, flat files are either delimited or fixed width, and a text file also has to include things like line endings and carriage returns.

The ASCII character set has only 128 characters (e.g., 0-9, A-Z, a-z).

Many flat files contain characters outside this set (Greek characters, for example). The difficulty is that, in order to properly read a text file, the reading program needs to know what encoding was used when the file was created - otherwise it might display the wrong character or substitute some default "unknown character".

When you create a connection to a text file, SSIS therefore needs the character encoding of the file to be specified; it calls this the code page.
Package Encryption
ProtectionLevel is an SSIS package property that is used to specify how sensitive information is saved within the package and also whether to encrypt the package or the sensitive portions of the package.

DontSaveSensitive: Any sensitive information is simply not written out to the package XML file.

EncryptSensitiveWithUserKey: Encrypts sensitive information based on the credentials of the user who created the package.

EncryptSensitiveWithPassword: Encrypts sensitive information using the package password configured in the PackagePassword property.

EncryptAllWithPassword: Encrypts the entire contents of the SSIS package with the specified password.

EncryptAllWithUserKey: Encrypts the entire contents of the SSIS package by using the user key. This means that only the user who created the package will be able to open it, view and/or modify it, and run it.

ServerStorage: SSIS won't encrypt any part of the package; SQL Server security is used instead.
Synchronous and Asynchronous Transformations

The SSIS data flow contains two types of transformations.

Synchronous:
A synchronous transformation processes incoming rows and passes them on in the data flow one row at a time; output is synchronous with input.
Ex: Data Conversion

Asynchronous:
An asynchronous transformation cannot pass each row along in the data flow as it is processed.

Asynchronous transformations are divided into two groups:
i) Semi-blocking transformations. Ex: Merge, Merge Join
ii) Fully blocking transformations. Ex: Sort, Aggregate
Log Providers

SSIS 2008 provides a package execution logging feature. The following are the five types of log providers:

1) Text File
2) SQL Server
3) Windows Event Log
4) XML File
5) SQL Server Profiler

Enabling Log Providers

1) Go to the SSIS menu in the package and click Logging.
2) Select a log provider in the Provider type list, and then click Add.
3) In the Configuration column, select a connection manager or click <New connection> to create a new connection manager of the appropriate type for the log provider.
4) On the Details tab, select Events to log all log entries, or clear Events to select individual events.
5) Click Save and OK.
6) Save the SSIS package.
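When the SQL Server log provider is selected, log entries are written to a system table in the database that the chosen connection manager points to (dbo.sysssislog in SQL Server 2008). A quick way to inspect recent entries, as a sketch:

  -- Inspect the most recent SSIS log entries written by the SQL Server log provider.
  -- The table is dbo.sysssislog in SQL Server 2008 (sysdtslog90 in 2005).
  SELECT TOP (50) event, source, starttime, endtime, message
  FROM dbo.sysssislog
  ORDER BY starttime DESC;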
Disabling Log Providers

1) Go to SSIS menu in the package. Click Logging.


2) Select a log provider.
3) Click Delete.
4) Click OK.
Check Points

Checkpoints let SSIS restart package execution from the point of failure. To configure checkpoints in a package, the following properties need to be set at the package level:

CheckpointFileName: Specifies the full path and filename of your checkpoint file.

CheckpointUsage: Specifies when to use checkpoints. The property supports the following three options:
• Never: A checkpoint file is not used.
• IfExists: A checkpoint file is used if one exists. This is the option most commonly used when enabling checkpoints.
• Always: A checkpoint file must always be used. If the file doesn't exist, the package fails.

SaveCheckpoints: Specifies whether the package saves checkpoints. Set to True to enable checkpoints on the package.

In addition, set the FailPackageOnFailure property to True on each container or task that should participate in the checkpoint process.
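When running from the command line, dtexec exposes matching checkpoint options; a sketch, with a hypothetical package and checkpoint file path:

  rem Run a package with checkpoints on, restarting from the checkpoint file if one exists.
  dtexec /F "C:\Packages\LoadSales.dtsx" /CheckPointing on /CheckFile "C:\Checkpoints\LoadSales.chk" /Restart ifPossible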
Transaction

The TransactionOption property exists at the package level, container level and task level. TransactionOption can be set to one of the following:

Required - If a transaction exists, join it; else start a new one.
Supported - If a transaction exists, join it. This is the default value.
NotSupported - Do not join an existing transaction.

The built-in transaction support in SSIS makes use of the Distributed Transaction Coordinator (MSDTC) service, which must be running.
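As a quick sanity check before relying on SSIS transactions, you can confirm that the service (Windows service name MSDTC) is running, for example:

  rem Start the Distributed Transaction Coordinator service if it is not already running.
  net start msdtc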
Variables

Variables store values that an Integration Services (SSIS) package and its
containers, tasks, and event handlers can use at run time. The scripts in the
Script task and the Script component can also use variables.

You can use variables in Integration Services packages for the following
purposes:

• Including an in-memory lookup table.

• Loading variables with data values and then using them to specify a search condition in a WHERE clause (see the sketch after this list).

• Loading a variable with an integer and then using the value to control looping within a package control flow.

• Populating parameter values for Transact-SQL statements at run time.

• Building expressions that include variable values.
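As an example of the WHERE-clause usage, an Execute SQL Task over an OLE DB connection marks parameters with ? and maps each ? to a variable on the task's Parameter Mapping page. A minimal sketch; the table, columns and variable name are hypothetical:

  -- Statement configured in an Execute SQL Task (OLE DB connection).
  -- The ? placeholder is bound to the package variable User::DepartmentId
  -- on the task's Parameter Mapping page.
  SELECT EmployeeId, FirstName, LastName
  FROM dbo.Employee
  WHERE DepartmentId = ?;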
Variables (Cont..)
Variables Scope

• A variable defined on a container will "hide" a variable with the same name on that container's parent or grandparent (and all direct ancestors).

• A variable defined on a package is visible anywhere within the package, except for the above case.

• A variable defined on a container is visible to all the container's children. This is basically the same as the previous rule, since a package is a container.

• A variable defined on an Execute Package task is visible to the package executed by it; to put it another way, child packages may see the parent Execute Package task's variables.

• If a child package uses a variable from a parent package, that variable will not be available when the child package executes on its own; it will fail to find the parent's variable. It must execute as a child of the package that contains the variable to see the variable.
Variables (Cont..)

System Variables

SSIS provides a set of system variables that store information about the running package and its objects. These variables can be used in expressions and property expressions to customize packages, containers, tasks, and event handlers.
Ex: PackageName

User Defined Variables

SSIS also provides a facility to create user-defined variables.
Ex: Count
Package Configurations and Deployment

SSIS introduced a feature called "Package Configurations". Package configurations make SSIS packages portable and help us change SQL Server and file connectivity information dynamically. The "Package Configuration Wizard" allows us to create package configurations. Integration Services packages in SQL Server 2005 have no support for UDLs; you just cannot use UDLs in SSIS packages.

SSIS provides the following package configuration types (a sample XML configuration file follows this list):

• XML configuration file
• Environment variable
• Registry entry
• Parent package variable
• SQL Server
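For illustration, an XML configuration file (.dtsConfig) that overrides a connection string might look like the following sketch; the connection manager name, server and database are hypothetical, and the file header is trimmed for brevity:

  <?xml version="1.0"?>
  <DTSConfiguration>
    <!-- Overrides the ConnectionString property of the "SourceDB" connection manager -->
    <Configuration ConfiguredType="Property"
                   Path="\Package.Connections[SourceDB].Properties[ConnectionString]"
                   ValueType="String">
      <ConfiguredValue>Data Source=PRODSRV;Initial Catalog=Sales;Provider=SQLNCLI10.1;Integrated Security=SSPI;</ConfiguredValue>
    </Configuration>
  </DTSConfiguration>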
Package Configuration Wizard

• Makes it easier to deploy packages (development to production environment)

• CreateDeploymentUtility - True
• AllowConfigurationChanges - True
• DeploymentOutputPath - <Path>
Package Deployment Wizard
Advanced Data Flow
Advanced Data Flow - Lookup Transformation

The Lookup transformation performs lookups by joining data in input columns with columns in a reference dataset. The reference dataset can be:
• An existing table
• A view
• A new table
• The result of an SQL statement

It uses an OLE DB connection manager to connect to the database.
Advanced Data Flow - Lookup Transformation
(Cont..)
Advanced Data Flow - Lookup Caching

When you connect to the database to perform a Lookup transformation, you can limit the amount of memory that can be used by the transformation. The Lookup transformation provides the MaxMemoryUsage property for specifying the maximum amount of memory the transform can use to cache lookup data.

Caching Types

Full caching - The complete reference dataset is read before the input is processed.
Partial caching - The transformation specifies the size of the cache that is loaded with reference data. This option is available only with connections that support keyed access.
No caching - The reference dataset is accessed by each row in the rowset.
Advanced Data Flow – Fuzzy Lookup Transformation

Fuzzy Lookup

The Fuzzy Lookup transformation performs data cleaning tasks such as standardizing data, correcting data, and providing missing values.

This transformation differs from the Lookup transformation in its use of fuzzy matching: it uses fuzzy matching to return one or more close matches from the reference table.

The Fuzzy Lookup transformation first tries to find an exact match. If that fails, it provides close matches from the reference table.
Advanced Data Flow – Fuzzy Lookup Transformation

[Diagram: Fuzzy Lookup example]

Reference Table:
Part# | Description | Category
D65_00026 | Intellimouse Web 1.0 OEI Trigem | Hardware
D64_00027 | Intellimouse Web 1.0 Hdwr Custom | Hardware
1-57231-875-9 | Introducing Microsoft Windows 2000 Server | Books

Input Record:
Part# = NULL, Description = Intelmouse OEI, Category = Hardware

The exact Lookup fails, so the record is passed to Fuzzy Lookup, which matches it to D65_00026 (Intellimouse Web 1.0 OEI Trigem, Hardware) with Score 0.73. If Score > 0.70, the record is treated as clean data and processed further; otherwise it goes to manual review.

If no exact match to a given input record is found in the reference table, try to identify a fuzzy match. If the resulting fuzzy match has high similarity to the input, consider it clean and route it for further processing or loading.
Advanced Data Flow – Fuzzy Grouping Transformation

Fuzzy Grouping

The Fuzzy Grouping transformation performs data cleaning tasks by identifying rows of data that are likely to be duplicates and selecting a canonical row of data to use in standardizing the data.

To configure the transformation, you must select the input columns to use when identifying duplicates, and you must select the type of match - fuzzy or exact - for each column.

The transformation requires a connection to an instance of SQL Server to create the temporary SQL Server tables that the transformation algorithm requires to do its work.
Advanced Data Flow – Fuzzy Grouping Transformation

[Diagram: Fuzzy Grouping example]

Input Relation:
RID | Organization Name | Address
1 | Msft Inc | One Microsoft Way
2 | Microsoft Corporation | One Microsoft Way

Output from Fuzzy Self-Join (fuzzy lookup operation):
Input RID | Org. Name | Address | Matched RID | Score
1 | Msft Inc | One Microsoft Way | 1 | 1.0
1 | Msft Inc | One Microsoft Way | 2 | 0.91
2 | Microsoft Corporation | One Microsoft Way | 2 | 1.0
2 | Microsoft Corporation | One Microsoft Way | 1 | 0.91

Fuzzy Grouping Output (after post-processing: identifying duplicates, grouping, canonical value selection):
Input RID | Organization Name | Address | Canonical RID | Score
1 | Msft Inc | One Microsoft Way | 2 | 0.91
2 | Microsoft Corporation | One Microsoft Way | 2 | 1.0
Advanced Data Flow – Multicast Transformation

The Multicast transformation distributes its input to one or more outputs.

This transformation is similar to the Conditional Split transformation: both direct an input to multiple outputs. However, the Multicast transformation directs every row to every output, whereas the Conditional Split directs a row to a single output.
Advanced Data Flow – Merge and Merge Join Transformation

Merge transformation

The Merge transformation combines two sorted datasets into a single dataset. The rows from each dataset are inserted into the output based on values in their key columns.

With the Merge transformation you can perform the following tasks:
– Merge data from two data sources, such as tables and files.
– Create complex datasets by nesting Merge transformations.
– Remerge rows after correcting errors in the data.
Advanced Data Flow – Merge and Merge Join Transformation

Merge Join transformation

The Merge Join transformation provides an output that is generated by joining two sorted datasets using a FULL, LEFT, or INNER join.

You can configure the Merge Join transformation in the following ways:
– Specify whether the join is a FULL, LEFT, or INNER join.
– Specify the columns the join uses.
– Specify whether the transformation handles null values as equal to other nulls.
SCD (Slowly Changing Dimension) Types

The SCD is used to track dimensional changes. There are six commonly described types of SCDs; of these, the first three are the most widely used.

Type 0: Data never gets modified.
Type 1: Overwrites old data with new data, and therefore does not track historical data.
Type 2: Tracks historical data by creating multiple records for a given natural key in the dimension tables, with separate surrogate keys and/or different version numbers.
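To make Type 2 concrete, here is a minimal T-SQL sketch of the expire-and-insert pattern; the DimCustomer table and its columns are hypothetical:

  -- Type 2 SCD: expire the current row for the natural key, then insert the new version.
  UPDATE dbo.DimCustomer
  SET ExpiryDate = GETDATE(), IsCurrent = 0
  WHERE CustomerId = @CustomerId AND IsCurrent = 1;

  INSERT INTO dbo.DimCustomer (CustomerId, City, EffectiveDate, ExpiryDate, IsCurrent)
  VALUES (@CustomerId, @NewCity, GETDATE(), NULL, 1);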
SCD (Slowly Changing Dimension)

Type 3: Tracks changes using separate columns.

Type 4: Tracks changes using history tables.

Type 6: Combines the approaches of types 1, 2 and 3.
Advanced Data Flow – Slowly Changing Dimension Transformation

The Slowly Changing Dimension transformation coordinates the updating and inserting of records in data warehouse dimension tables. It provides the following functionality for managing slowly changing dimensions:

– Matching incoming rows with rows in the lookup table to identify new and existing rows.
– Identifying incoming rows that contain changes when changes are not permitted.
– Identifying inferred member records that require updating.
– Identifying incoming rows that contain historical changes that require insertion of new records and the updating of expired records.
– Detecting incoming rows that contain changes that require the updating of existing records, including expired ones.
Advanced Data Flow – Slowly Changing Dimension Transformation

The Slowly Changing Dimension transformation supports four types of changes: changing attribute, historical attribute, fixed attribute, and inferred member.

Changing attribute changes overwrite existing records (a Type 1 change). The SCD transformation directs these rows to the Changing Attribute Updates Output.

Historical attribute changes create new records instead of updating existing ones (a Type 2 change). The SCD transformation directs these rows to two outputs: Historical Attribute Inserts Output and New Output.

Fixed attribute changes indicate the column value must not change. The SCD transformation detects changes and can direct the rows with changes to the Fixed Attribute Output.

Inferred member indicates that the row is an inferred member record in the dimension table. An inferred member exists when a fact table references a dimension member that is not yet loaded. The Slowly Changing Dimension transformation directs these rows to the Inferred Member Updates Output.
Advanced Data Flow – Slowly Changing Dimension Transformation (Cont..)

Steps to create Slowly Changing Dimension transformation outputs:

1. Choose the connection manager to access the data source that contains the dimension table that you want to update. You can select from a list of connection managers that the package includes.

2. Choose the dimension table or view you want to update. After you select the connection manager, you can select the table or view from the data source.

3. Set key attributes on columns and map input columns to columns in the dimension table. You must choose at least one business key column in the dimension table and map it to an input column.

4. Choose the change type for each column:
– Changing attribute overwrites existing values in records.
– Historical attribute creates new records instead of updating existing records.
– Fixed attribute indicates that the column value must not change.
Advanced Data Flow – Slowly Changing Dimension Transformation (Cont..)

Steps to create Slowly Changing Dimension transformation outputs (continued):

5. Set fixed and changing attribute options.

6. Set historical attribute options.

7. Specify support for inferred members and choose the columns that the inferred member record contains. An inferred member can be indicated by either:
– A record in which all columns with change types are null.
– A record in which a Boolean column indicates the record is an inferred member.

8. Review the configurations that the Slowly Changing Dimension Wizard builds.
Extending SSIS Through Custom Code
Programmability - Script Task

The Script task provides code to perform functions that are not available in the built-in tasks and transformations that SSIS provides.
Script task Usage

• You can access data by using technologies that are not supported by built-in connection types.

• You can create a package-specific performance counter.

• You can modify the data in the source with different data from the destination.

• You can validate important columns in the source data and skip records that contain invalid data to prevent them from being copied to the destination. (A minimal Script task sketch follows.)
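The following is a minimal sketch of the Script task programming model, not a definitive implementation. SSIS 2008 hosts the code in VSTA (C# or VB.NET); the variable name User::RowCount is hypothetical and must be listed in the task's ReadWriteVariables property.

  // Inside the ScriptMain class that the Script task generates (C#, SSIS 2008).
  public void Main()
  {
      // Read and update a package variable (listed in ReadWriteVariables).
      int rows = (int)Dts.Variables["User::RowCount"].Value;  // hypothetical variable
      Dts.Variables["User::RowCount"].Value = rows + 1;

      // Write an informational entry to any configured log providers.
      Dts.Log("RowCount incremented by Script task", 0, null);

      // Report success back to the control flow.
      Dts.TaskResult = (int)ScriptResults.Success;
  }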
SSIS Package Deployment

Integration Services (SSIS) includes tools and wizards for deploying packages.

You deploy Integration Services packages by building a deployment utility for the project that contains the packages you want to deploy, and then running the Package Installation Wizard to install the packages to the file system or to an instance of SQL Server 2005 or above.

Before you build a deployment utility for the packages, you can create package configurations that update properties of package elements at run time. The configurations are automatically included when you deploy the packages.
SSIS Package Deployment
How to: Create an Integration Services Package Deployment Utility

1. In Business Intelligence Development Studio, open the solution that contains the Integration Services project for which you want to create a package deployment utility.
2. Right-click the project and click Properties.
3. In the <project name> Property Pages dialog box, click Deployment Utility.
4. To update package configurations when packages are deployed, set AllowConfigurationChanges to True.
5. Set CreateDeploymentUtility to True.
6. Optionally, update the location of the deployment utility by modifying the DeploymentOutputPath property.
7. Click OK.
8. In Solution Explorer, right-click the project, and then click Build.
9. View the build progress and build errors in the Output window.
SSIS Package Deployment

How to: Deploy Packages to the File System

1. Open the deployment folder on the target computer.
2. Double-click the manifest file, <project name>.SSISDeploymentManifest, to start the Package Installation Wizard.
3. On the Deploy SSIS Packages page, select the File system deployment option.
4. Optionally, select Validate packages after installation to validate the packages after they are installed on the target server.
5. On the Select Installation Folder page, specify the folder in which to install packages and package dependencies.
6. If the package includes configurations, you can edit updatable configurations by updating values in the Value list on the Configure Packages page.
7. If you elected to validate packages after installation, view the validation results of the deployed packages.
SSIS Package Deployment
How to: Deploy Packages to SQL Server

1. Open the deployment folder on the target computer.
2. Double-click the manifest file, <project name>.SSISDeploymentManifest, to start the Package Installation Wizard.
3. On the Deploy SSIS Packages page, select the SQL Server deployment option.
4. Optionally, select Validate packages after installation to validate packages after they are installed on the target server.
5. On the Specify Target SQL Server page, specify the instance of SQL Server to install the packages to and select an authentication mode.
6. On the Select Installation Folder page, specify the folder in the file system for the package dependencies that will be installed.
7. If the package includes configurations, you can edit configurations by updating values in the Value list on the Configure Packages page.
8. If you elected to validate packages after installation, view the validation results of the deployed packages.
Package Execution

• Business Intelligence Development Studio

• DTExec Utility

• DTExecUI Utility
SSIS Package Execution

Run a Package Using the DTExec Utility

1. Open a Command Prompt window.

2. Type dtexec / followed by the DTS, SQL, or File option and the package path, including the package name.

3. If the package encryption level is EncryptSensitiveWithPassword or EncryptAllWithPassword, use the Decrypt option to provide the password. If no password is included, dtexec will prompt for the password.

4. Optionally, provide additional command-line options.

5. Press the Enter key.

6. Optionally, view logging and reporting information before closing the Command Prompt window.
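For illustration, two sketches with hypothetical paths and names:

  rem Run a package stored in the file system.
  dtexec /F "C:\Packages\LoadSales.dtsx"

  rem Run a package stored in msdb on a server, supplying the password for an encrypted package.
  dtexec /SQL "LoadSales" /Server "PRODSRV" /De "MyPackagePassword"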
SSIS Package Execution

Run a Package Using the DTExecUI Utility

1. Open Management Studio.

2. On the View menu, click Object Explorer.

3. In Object Explorer, click Connect, and then click Integration Services.

4. Expand the Stored Packages folder and its subfolders to locate the package to run, right-click the package, and then click Run Package.

5. In the Execute Package Utility dialog box, optionally, specify a different package to run.

6. Optionally, click Configurations, Command Files, Connection Managers, Execution Options, Reporting, Logging, Set Values, or Verification to update run-time options.

7. To review the command line that the utility uses, click Command Line.
SSIS Package Execution

8. Click Execute.
9. To stop the running package, click Stop in the Package Execution
Progress dialog box.
10. When the package finishes, click Close to exit the Package Execution
Progress dialog box.
11. Click Close.
Scheduling Packages

1. In Object Explorer, connect to an instance of the SQL Server Database Engine, and then expand that instance.
2. Expand SQL Server Agent, expand Jobs, right-click the job you want to schedule, and click Properties.
3. Select the Schedules page, and then click New.
4. In the Name box, type a name for the new schedule.
5. Clear the Enabled check box if you do not want the schedule to take effect immediately following its creation.
6. For Schedule Type, select one of the following:
Scheduling Packages

• Click Start automatically when SQL Server Agent starts to start the job when the SQL Server Agent service is started.

• Click Start whenever the CPUs become idle to start the job when the CPUs reach an idle condition.

• Click Recurring if you want the schedule to run repeatedly. To set the recurring schedule, complete the Frequency, Daily Frequency, and Duration groups on the dialog.

• Click One time if you want the schedule to run only once. To set the One time schedule, complete the One-time occurrence group on the dialog.
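Schedules can also be created in T-SQL through SQL Server Agent's msdb procedures; a sketch, with hypothetical job and schedule names:

  -- Attach a daily 2:00 AM schedule to an existing Agent job that runs the package.
  EXEC msdb.dbo.sp_add_jobschedule
       @job_name = N'Run LoadSales Package',  -- hypothetical job
       @name = N'Daily 2 AM',
       @enabled = 1,
       @freq_type = 4,                        -- daily
       @freq_interval = 1,                    -- every day
       @active_start_time = 20000;            -- 02:00:00 (HHMMSS)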
Conclusion

You can extend the ETL functionality of SSIS by implementing RSS feed aggregation:
• Read multiple RSS feeds.
• Filter the content based on user preferences.
• Create a single RSS feed that contains the combined feed.

Non-Traditional Data Sources
• RSS feeds
• Web services
• WMI events

By using the .NET Framework, SSIS also supports additional, non-traditional data formats, such as BLOBs. This is done by providing APIs to extend the SSIS environment.

SSIS also opens up the programming environment to target data sources, such as Active Directory or any other custom data source.