
Siemens Teamcenter Oracle-to-SQL Server 2008 Migration Guide

Microsoft Corporation
Published: June 2010
Author: Randy Dyess, Solid Quality Mentors
Technical Reviewers: Christopher Gill, Teamcenter Centers of Excellence, Siemens PLM Software; Hewat Kennedy, Siemens

Abstract
This white paper outlines a best practices methodology to use when migrating from a Siemens Teamcenter Oracle installation to Siemens Teamcenter running on the Microsoft SQL Server 2008 database platform.

Copyright Information
The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication. This white paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in, or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation. Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property. 2010 Microsoft Corporation. All rights reserved. Microsoft, SQL Server, and Windows are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners.

Table of Contents
Siemens Teamcenter Oracle-to-SQL Server 2008 Migration Guide
About This Guide
Overview of the Teamcenter Oracle-to-SQL Server Migration Process
Creating the Migration Plan for a Teamcenter Migration
Installing the Teamcenter SQL Server Database
Installing the Oracle Client
Creating a Linked Server
Installing SQL Server Migration Assistant for Oracle V4.0
Determining Teamcenter Customizations
Migrating the Data into the Teamcenter Database
Performing Post-Migration Tasks
Validating the Migration Through Teamcenter Use Cases
Appendix A: Basic Data Type Conversions
Appendix B: Sample Teamcenter Table Creation Scripts
Appendix C: Links for Further Information

About This Guide


The Siemens Teamcenter Oracle-to-SQL Server 2008 Migration Guide describes how to migrate Teamcenter data from an Oracle database to a Microsoft SQL Server 2008 database. This guide is intended for technical staff members who are experienced in application installations and familiar with the installation hardware system. The Oracle-to-SQL Server migration process assumes an entry-level database administration skill level for both Oracle and SQL Server databases.

Related Documentation
The following documentation may be helpful in learning more about implementing Teamcenter on SQL Server:
Siemens PLM-Related Documents
- Installation on Windows Servers Guide
- Business Modeler IDE Guide
- Best Practices for Running Siemens Teamcenter on SQL Server

Overview of the Teamcenter Oracle-to-SQL Server Migration Process


The overall objective of the migration process is to move the Teamcenter data from an Oracle database to a SQL Server 2008 database. The migration of an existing Teamcenter Oracle database to SQL Server involves multiple steps using both supplied PL/SQL and Transact-SQL (T-SQL) scripts and the downloadable SQL Server Migration Assistant (SSMA) 2008 for Oracle V4.0. You use the supplied scripts to determine and adjust the schema of an empty SQL Server Teamcenter database to match that of the existing Oracle database. After the schema is adjusted, you use the SSMA tool to migrate the data. The primary steps involved in the migration process are as follows:
1. Create a migration plan that includes a recovery strategy in case of failure and use-case scenarios to test data correctness.
2. Install a default production installation of SQL Server that includes all components currently installed on your Oracle instance on a new server, and then create a Teamcenter SQL Server database.
3. Back up both the production SQL Server and the Oracle databases.
4. Create an empty SQL Server database called INFODBA for data staging.
5. Create copies of the PPOM_ATTRIBUTES and PPOM_CLASS tables from the production database in the staging database.
6. Delete any existing data in the production SQL Server database.
7. Install the Oracle Client.
8. Create a SQL Server linked server to Oracle from the staging database.
9. Install the SQL Server Migration Assistant for Oracle and SSMA Extension Pack on your SQL Server server.
10. Migrate the schema from Oracle to the SQL Server staging database using SSMA.
11. Adjust the schema of the SQL Server staging database to allow for data movement.

12. Migrate the data from the Oracle database to the staging database using T-SQL scripts through the linked server.
13. Create a backup of the staging database.
14. Create a database snapshot of the staging database.
15. Execute object and column rename scripts against the staging database.
16. Migrate the data from the staging database to the production database using T-SQL scripts.
17. Perform post-migration tasks using the supplied scripts and SQL Server Management Studio.
18. Validate the success of the migration with use-case scenarios.
19. Back up the production SQL Server database.

Creating the Migration Plan for a Teamcenter Migration


A good migration plan is simple yet encompasses the appropriate steps to help ensure a successful migration. The migration plan should list dates, times, staff, testing procedures, recovery options, and validation procedures to be used during and after the migration of the database. A migration plan should answer the following questions:
- Who is going to build the new server and install Windows and SQL Server? When is the due date for the new server build-out?
- Who is going to install Teamcenter and create the new SQL Server Teamcenter database? When is the due date for the database?
- Who will verify the SQL Server installation and Teamcenter database against the Teamcenter on SQL Server installation guidelines?
- Does the migration process require an Oracle DBA as well as a SQL Server DBA? Will you need to hire an outside firm to help with the migration?
- What DBA and development resources do you have to test and perform the migration?
- What are the migration testing options: a new server or an existing server?
- When should all migration testing be finished?
- What is the date for the actual migration? Will all required staff be available on the migration date?
- How much downtime will be required for the migration?
- What is the plan if the new database is not available due to issues during the migration?
During the creation of your migration plan, you will also need to determine several use cases to use in testing against your migrated database. As you can see from these bullet points, a migration plan does not have to be overly complex. However, you will need to answer some essential questions, document those answers, and then combine those answers with the steps contained in this document.

Installing the Teamcenter SQL Server Database


There are no special requirements for installing SQL Server on your new server or creating the SQL Server database for the migration. However, you should follow all guidelines for installing and using Teamcenter with a SQL Server database as described in the Siemens Teamcenter 8 Installation on Windows Server Guide. Any additional requirements are listed in the following sections of this white paper. If you know the customizations you have made to your Oracle database and can recreate those customizations in your SQL Server database, you should do so before the migration process to speed the process and reduce its complexity. After installing SQL Server and creating the database for the migration, you need to change the recovery model of the SQL Server Teamcenter database to Simple for the migration process. After the migration process, you will need to set the recovery model back to Full. To change the recovery model in SQL Server Management Studio (a T-SQL alternative follows these steps):
1. Open SQL Server Management Studio (SSMS) by going to Start, All Programs, Microsoft SQL Server 2008, SQL Server Management Studio.
2. Connect to your instance.

3. In the Object Explorer, expand the instance and then expand the Databases folder.

4. Right-click your database and choose Properties.

5. On the Options page, use the Recovery model drop-down box to change the recovery model to Simple.
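If you prefer to script this change, the same step can be done with T-SQL. A minimal sketch, assuming the production Teamcenter database is named tc as in the examples later in this guide:

ALTER DATABASE tc SET RECOVERY SIMPLE
GO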

After changing the database to the Simple recovery model, right-click the database again, select Tasks, then Back Up, and create a backup of your database. You should also export your Oracle database at this time. Next, you should create a SQL Server staging database called INFODBA, which will allow you to make any needed schema changes and test those changes before migrating the changed schema and data to your actual database:
CREATE DATABASE INFODBA

Note: The above CREATE DATABASE script uses the model database to create a simple database. You will need to adjust the script to create the database in different directories from those used by model for your own installation, or you can use SSMS to create the database.
You also need to set the recovery model of the staging database to the Simple recovery model:
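-- A minimal equivalent statement (assumes the staging database is named INFODBA, as above)
ALTER DATABASE INFODBA SET RECOVERY SIMPLE
GO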

After you create your SQL Server staging database, you need to capture the PPOM_CLASS and PPOM_ATTRIBUTES tables from your production database into your staging database in order to rename tables and columns later in the migration process. Simply execute the following script against your SQL Server database to save a copy of these tables in your staging database:
USE INFODBA -- Change to your staging database
GO
SELECT * INTO PPOM_CLASS_saved FROM tc..PPOM_CLASS
SELECT * INTO PPOM_ATTRIBUTE_saved FROM tc..PPOM_ATTRIBUTE

After capturing the two needed tables, you need to empty any pre-populated tables in the production SQL Server database. You can accomplish this by truncating all tables that contain data. Simply execute the following script against your SQL Server database; it builds the TRUNCATE TABLE statements and then executes them:
USE tc -- Change to your production database
GO
DECLARE @loop INT
DECLARE @command NVARCHAR(4000)
BEGIN TRY
DROP TABLE #commands
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #commands
(
 colid INT IDENTITY(1,1)
,command NVARCHAR(4000)
)
INSERT INTO #commands (command)
SELECT 'TRUNCATE TABLE dbo.' + so.name
FROM sys.partitions sp
INNER JOIN sys.objects so ON sp.object_id = so.object_id
WHERE sp.rows > 0
AND so.is_ms_shipped = 0
AND so.type = 'u'
AND sp.index_id IN (0,1)
ORDER BY so.name
SET @loop = @@ROWCOUNT
WHILE @loop > 0
BEGIN
SELECT @command = command FROM #commands WHERE colid = @loop
EXEC dbo.sp_executesql @command
SET @loop -= 1
END

(Note: DO NOT delete any data in your existing Oracle production database.)

Installing the Oracle Client


To migrate the data between Oracle and SQL Server 2008, you need to install the latest Oracle Client software on your SQL Server 2008 server, as follows.
1. Obtain the latest Oracle Client (the version found at the download site is an example and may be different from the one you will need to obtain and install), and then accept the License Agreement.

2. Make sure you download the correct version of the client. Note: This white paper refers to the x64 10g release; your version may be different.


3. Unzip the saved file and execute Setup.exe to start the installation.


4. Choose to install the InstantClient.


5. Enter the Home Destination details, which specify the name for the installation and the path where you want the client installed, and click Next.


6. Correct any issues that the Installer finds.


7. Click Install to install the client.


8. When the installation process is complete, simply exit the Installer.


Creating a Linked Server


One of the easiest methods to compare two different databases is through the use of a linked server. Using a SQL Server linked server allows you to directly compare the objects in your Oracle database to the objects in your SQL Server database. This will enable you to capture all missing objects and recreate them in SQL Server. Here are the steps for creating a linked server (a scripted alternative follows these steps):
1. Open SSMS by going to Start, All Programs, Microsoft SQL Server 2008, SQL Server Management Studio.
2. Open the Server Objects folder in the object tree.
3. Right-click Linked Servers and select New Linked Server.
4. Create a new linked server by providing a name and data source details (note that your connection details may be different from this example).


5. Click the Security tab and input the login and password you will use to connect to the Oracle database. (This example recreates the infodba login on SQL Server to use for the linked server: infodba with a password of infodba. This login should be made a member of the db_owner database role.)

6. Click OK, and then test your new linked server by expanding its tables.
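If you prefer to script the linked server rather than use the dialog, a minimal sketch follows. It assumes the Oracle OLE DB provider installed with the Oracle Client and a hypothetical TNS alias named ORATC; the linked server name ORACLETC matches the name used by the comparison scripts later in this guide.

EXEC master.dbo.sp_addlinkedserver
 @server = N'ORACLETC'            -- name referenced by later scripts
,@srvproduct = N'Oracle'
,@provider = N'OraOLEDB.Oracle'   -- provider installed with the Oracle Client
,@datasrc = N'ORATC'              -- hypothetical TNS alias; substitute your own
EXEC master.dbo.sp_addlinkedsrvlogin
 @rmtsrvname = N'ORACLETC'
,@useself = N'False'
,@locallogin = NULL
,@rmtuser = N'infodba'            -- Oracle login from the example above
,@rmtpassword = N'infodba'
GO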


Installing SQL Server Migration Assistant for Oracle V4.0


SQL Server Migration Assistant (SSMA) 2008 for Oracle is a suite of tools that reduces the effort, cost, and risk of migrating from Oracle to SQL Server 2008. Follow these instructions for installing SSMA:
1. Download SSMA V4.0 and the Extension Pack to your SQL Server server from the Microsoft download site.
2. Unzip the downloaded file, double-click the SSMA installer, SSMA 2008 for Oracle.4.0.Install.exe, and at the Welcome screen click Next.


3. Accept the License Agreement and click Next.

4. Determine if you want to send a Usage Report to Microsoft and then click Next.


5. Choose the Typical installation.

6. Click Install.


7. When the installation is complete, click Finish.

8. You're now ready to install the Extension Pack by unzipping the downloaded file and double-clicking the SSMA installer called SSMA 2008 for Oracle Extension Pack.4.0.Install.exe.


9. At the Welcome screen, click Next.

10. Accept the License Agreement and click Next.


11. Again, choose the Typical installation option.


12. Click Install to begin the installation.


13. When the first step of the installation is complete, click Next to install SQL scripts on the instances you specify.


14. Choose the SQL Server database instance (your instance name will be different from the example) and click Next.


15. Enter the connection information required to connect to your SQL Server database instance and click Next.


16. At the screen that asks whether you would like to install the Extension Pack Utilities Database on your instance, leave the default settings to install it and click Next.

17. The installation will run and execute several scripts.


18. After the installation is finished, click No; you do not want to install the Utilities Database on another instance.


Configuring SQL Server to Use SQLCLR


After installing SSMA and the SSMA extension pack, you need to configure the SQL Server instance to utilize the SQL Server Common Language Runtime (SQLCLR). 1. Open a new query window using SSMS and execute the following script to enable CLR in the SQL Server instance:
EXEC sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO

Installing the SSMA License Key


You are now ready to install the SSMA license key. 1. Start SSMA from the Start menu.


2. Obtain the License Key by using the provided license registration page link. Note: You will need to register with Microsoft to obtain the license key. This registration is used to help provide support for SSMA and notify you of SSMA updates and fixes.

3. Save the license file.


4. Change the directory to the location where you placed the downloaded license key and refresh the license.


Determining Teamcenter Customizations


Teamcenter allows for the addition of features and functionalities beyond the base installation. These new features and functionalities often create new database objects in the Teamcenter database. To maintain your existing data in the migration, you need to add these new components to your SQL Server installation of Teamcenter after the base installation of Teamcenter is complete. Your SQL Server database schema will then be as close to your Oracle database as possible.
If you have objects in your Oracle database that have not been created in the SQL Server installation, it is important to capture these new objects and recreate them in the destination SQL Server database. Capturing the new objects created through customizations is achieved by reading the Oracle metadata tables and comparing the existing Oracle tables, constraints, and indexes with those found in the SQL Server database. After the comparisons are made, you need to script out the objects and create them in your SQL Server 2008 production database. To ease the migration effort, SSMA is used to create the schema in a staging database for the initial migration to SQL Server. This converted schema will then be used to compare to the production database to identify missing objects.
Note: Teamcenter provides the functionality to automatically create needed indexes on Teamcenter database objects. If you have custom objects not created through the Teamcenter application, you need to manually recreate those objects in your Teamcenter SQL Server database. For objects created through Teamcenter, Teamcenter offers functionality that will recreate the indexes on those objects.

Migrating the Teamcenter Schema to the Staging Database


1. Start SSMA by going to Start, All Programs, Microsoft SQL Server Migration Assistant for Oracle, Microsoft SQL Server Migration Assistant for Oracle.
2. Start a new project in SSMA by clicking New Project and then giving the project a name and location.


3. Connect to the Oracle database.

5. Expand the INFODBA schema and select Tables.

6. Choose the Type Mapping tab to edit the default mappings.


7. Edit the following mappings as noted by highlighting the data type and clicking Edit.
Source Type    Target Type
Float          Float
Number         Float
Numeric        Float
Real           Float


8. Click OK and repeat for each data type in the above table.
9. Now connect to the SQL Server database.

10. Expand the SQL Server database.


11. Highlight the Oracle schema to be converted.

12. Click the Convert Schema tab.

13. Reconnect to the Oracle database.
14. The SSMA will analyze the metadata and create the conversion files.

15. When the analysis is finished, you will see that the output shows both errors and warnings. The 23 errors are for function-based indexes, and the warnings are for loss of precision. The indexes have been accounted for in the production schema, and we are not using SSMA to migrate the data. You can ignore these errors and warnings. Note: Your exact output may be different depending on the customizations made to your Oracle Teamcenter installation.

16. Once the conversion is finished, you need to synchronize the SQL Server database with the Oracle database by right-clicking the SQL Server database and selecting Synchronize with Database.

17. Click OK.


18. Because the INFODBA database is a staging database, the indexes are not needed. To speed the data migration process, you should drop all indexes in the staging database. Execute the following script in SSMS; it builds the index drop statements and then executes them:
USE INFODBA
GO
DECLARE @commandtable TABLE (colid INT IDENTITY(1,1), commandstr VARCHAR(2000))
DECLARE @command NVARCHAR(2000)
DECLARE @loop_count INT
INSERT INTO @commandtable (commandstr)
SELECT 'DROP INDEX ' + si.name + ' ON ' + so.name
FROM sys.indexes si
INNER JOIN sys.objects so ON si.object_id = so.object_id
WHERE so.is_ms_shipped = 0
AND si.index_id > 0
SET @loop_count = @@ROWCOUNT
WHILE @loop_count > 0
BEGIN
SELECT @command = commandstr FROM @commandtable WHERE colid = @loop_count
BEGIN TRY
EXEC dbo.sp_executesql @command
END TRY
BEGIN CATCH
END CATCH
SET @loop_count -= 1
END

Determining Table Object Differences


To determine schema object differences, use the following script to compare the schema created by SSMA in the INFODBA database with the schema from the production SQL Server database:
USE INFODBA
GO
SELECT TABLE_NAME
FROM ORACLETC..SYS.USER_TABLES
WHERE TABLE_NAME NOT IN (SELECT name COLLATE Latin1_General_BIN
                         FROM sys.objects
                         WHERE type = 'u' AND is_ms_shipped = 0)
AND TABLE_NAME <> 'PLAN_TABLE'
ORDER BY TABLE_NAME

Execute the above script to identify tables that you will need to create. Once you have a list of tables that are missing from the SQL Server database, you need to connect to Oracle to describe the tables so that you can recreate them in SQL Server. Do not use the schema created by SSMA to recreate the missing tables; use the Oracle schema.
In Oracle
For each table in the output from the above script, describe the table and create a T-SQL table creation script for the table. For example, let's say that the PCOUNTER_TAGS_1 table was not found in SQL Server. To get the description, execute the following:
desc PCOUNTER_TAGS_1;

The output is as follows:

Name       Null?      Type
PUID       NOT NULL   VARCHAR2(15)
PSEQ       NOT NULL   NUMBER(38)
PVALU_0               VARCHAR2(15)
PVALC_0               NUMBER(38)

This output of the table description will translate into the SQL Server CREATE TABLE statement below (using lowercase for column names):

Note: Use the SQL Server-to-Oracle data type conversion table in Appendix A to correctly map the data types from Oracle to SQL Server.
BEGIN TRY
DROP TABLE PCOUNTER_TAGS_1
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE PCOUNTER_TAGS_1
(
 puid NVARCHAR(15) NOT NULL
,pseq FLOAT NOT NULL
,pvalu_0 NVARCHAR(15)
,pvalc_0 FLOAT
)
GO

Note: You can find sample CREATE TABLE scripts in Appendix B for creating tables missing from base installations.
In SQL Server
Execute the CREATE TABLE statements you created for each missing Oracle table in your production database.

Determining Index Object Differences


After identifying and creating missing table objects, you need to create any indexes for those missing table objects. You can use the Teamcenter index tool (index_verifier) to recreate any missing indexes in your production database. You do not need to create indexes in your staging database.
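If you want a quick cross-check before running index_verifier, a comparison query in the same style as the table and constraint queries in this guide can list Oracle index names that do not exist in the production SQL Server database. This is a sketch only; it assumes the ORACLETC linked server and the production database name tc used elsewhere in this document.

-- Change to your production database
USE tc
GO
SELECT INDEX_NAME, TABLE_NAME
FROM ORACLETC..SYS.USER_INDEXES
WHERE INDEX_NAME NOT IN (SELECT name COLLATE Latin1_General_BIN
                         FROM sys.indexes
                         WHERE name IS NOT NULL)
ORDER BY TABLE_NAME, INDEX_NAME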

Determining Constraint Object Differences


After determining and creating missing table objects, you need to create any constraints for those missing table objects.
--This script will determine differences in constraints
SELECT *
FROM ORACLETC..SYS.USER_CONSTRAINTS
WHERE TABLE_NAME IN ('PCOUNTER_TAGS_1'
,'SYS_IMPORT_SCHEMA_01'
,'PFIXED_COST_TAGLIST_0'
,'POM_UID_SCRATCH'
,'PREFAUNIQUEEXPRID2'
,'PVISSTRUCTURERECEIPE'
,'PVISMANAGEDDOCUMENT'
,'PPATTERN_0'
,'POCCURRENCE_LIST'
,'PVISSTRUCTURECONTEXT'
,'PJTCONTENTFORMSTORAGE'
,'EIM_UID_GENERATOR_ROOT')
ORDER BY TABLE_NAME;

Now you can create those missing constraints on your tables in your production SQL Server database:
-- Change to your production database
USE tc
GO
CREATE UNIQUE NONCLUSTERED INDEX SYS_C009756
ON SYS_IMPORT_SCHEMA_01 (PROCESS_ORDER, DUPLICATE)

Note: You do not need to recreate NOT NULL constraints.

Migrating the Data into the Teamcenter Database


After you have recreated all missing tables, indexes, and constraints, you are ready to migrate the data from the Oracle database into your staging database (INFODBA) in SQL Server. To migrate the data, you need to create a series of INSERT INTO statements to read the data from the Oracle tables into the SQL Server staging tables. Once you have moved the data into the staging tables, you will then move the data into the production SQL Server tables. Note: Appendix B provides some common table creation scripts. 1. Execute the following script to move the data into the staging database:
USE INFODBA
GO
SET NOCOUNT ON
DECLARE @table_count BIGINT
DECLARE @table_name VARCHAR(255)
DECLARE @strsql NVARCHAR(4000)
DECLARE @strsql2 VARCHAR(4000)
BEGIN TRY
DROP TABLE #tables
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #tables
(
 colid INT IDENTITY(1,1)
,table_name VARCHAR(255)
)
BEGIN TRY
DROP TABLE #columns
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #columns
(
 colid INT IDENTITY(1,1)
,column_name VARCHAR(255)
,system_type_id INT
)
INSERT INTO #tables (table_name)
SELECT TABLE_NAME FROM ORACLETC..SYS.USER_TABLES WHERE TABLE_NAME <> 'PLAN_TABLE'
SET @table_count = @@ROWCOUNT
WHILE @table_count > 0
BEGIN
TRUNCATE TABLE #columns
SELECT @table_name = table_name FROM #tables WHERE colid = @table_count
SET @strsql = 'INSERT INTO #columns (column_name,system_type_id) SELECT name, system_type_id FROM sys.columns WHERE object_id = OBJECT_ID(''' + @table_name + ''')'
EXEC dbo.sp_executesql @strsql
UPDATE #columns SET column_name = 'CAST(' + column_name + ' AS VARCHAR(1000))' WHERE system_type_id IN (108)
SET @strsql2 = ''
SELECT @strsql2 = @strsql2 + column_name + ',' FROM #columns
SET @strsql2 = @strsql2 + '|'
--take care of trailing comma
SET @strsql2 = REPLACE (@strsql2,',|',' ')
SET @strsql = 'INSERT INTO ' + @table_name + ' SELECT * FROM OPENQUERY(ORACLETC,''SELECT ' + @strsql2 + ' FROM INFODBA.' + @table_name + ''') AS ROWSET_1'
BEGIN TRY
EXEC dbo.sp_executesql @strsql
END TRY
BEGIN CATCH
SELECT @strsql
END CATCH
SET @table_count -= 1
END

2. Verify that the rows in the Oracle database and the SQL Server staging database are consistent:

USE INFODBA
GO
SET NOCOUNT ON
BEGIN TRY
DROP TABLE #ora_tables
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #ora_tables
(
 colid INT IDENTITY(1,1)
,table_name VARCHAR(255)
,row_count BIGINT
)
BEGIN TRY
DROP TABLE #sql_tables
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #sql_tables
(
 colid INT IDENTITY(1,1)
,table_name VARCHAR(255)
,row_count BIGINT
)
DECLARE @ora_table_count INT
DECLARE @sql_table_count INT
DECLARE @table_name VARCHAR(255)
DECLARE @strSQL NVARCHAR(4000)
INSERT INTO #sql_tables(table_name)
SELECT name FROM sys.objects WHERE is_ms_shipped = 0 AND type = 'U'
SET @sql_table_count = @@ROWCOUNT
WHILE @sql_table_count > 0
BEGIN
SELECT @table_name = table_name FROM #sql_tables WHERE colid = @sql_table_count
SET @strSQL = 'UPDATE #sql_tables SET row_count = (SELECT COUNT(*) FROM dbo.' + @table_name + ' WITH (NOLOCK)) WHERE colid = ' + CAST(@sql_table_count AS VARCHAR(20))
EXEC sp_executesql @strSQL
SET @sql_table_count -= 1
END
INSERT INTO #ora_tables(table_name)
SELECT TABLE_NAME FROM ORACLETC..SYS.USER_TABLES
SET @ora_table_count = @@ROWCOUNT
WHILE @ora_table_count > 0
BEGIN
SELECT @table_name = table_name FROM #ora_tables WHERE colid = @ora_table_count
--Row counts for the Oracle tables are taken across the linked server
SET @strSQL = 'UPDATE #ora_tables SET row_count = (SELECT COUNT(*) FROM ORACLETC..INFODBA.' + @table_name + ' WITH (NOLOCK)) WHERE colid = ' + CAST(@ora_table_count AS VARCHAR(20))
EXEC sp_executesql @strSQL
SET @ora_table_count -= 1
END
--Tables showing up have different row counts
SELECT sc.table_name AS 'Staging Table'
,sc.row_count AS 'Staging row count'
,oc.table_name AS 'Oracle Table'
,oc.row_count AS 'Oracle row count'
FROM #sql_tables sc
INNER JOIN #ora_tables oc ON sc.table_name = oc.table_name AND sc.row_count <> oc.row_count

3. Verify that the data precision is the same:


USE INFODBA
GO
SET NOCOUNT ON
BEGIN TRY
DROP TABLE #ora_tables
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #ora_tables
(
 colid INT IDENTITY(1,1)
,table_name VARCHAR(255)
,column_name VARCHAR(255)
,ora_data_length INT
,sql_data_length INT
)
DECLARE @ora_table_count INT
DECLARE @loop INT
DECLARE @table_name VARCHAR(255)
DECLARE @column_name VARCHAR(255)
DECLARE @strSQL NVARCHAR(4000)
SET @loop = 1
INSERT INTO #ora_tables(table_name,column_name)
SELECT TABLE_NAME, COLUMN_NAME FROM ORACLETC..SYS.USER_TAB_COLUMNS WHERE DATA_TYPE = 'FLOAT' AND DATA_PRECISION = 126
SET @ora_table_count = @@ROWCOUNT
SET @loop = @ora_table_count
WHILE @loop > 0
BEGIN
SELECT @table_name = table_name, @column_name = column_name FROM #ora_tables WHERE colid = @loop
SET @strSQL = 'UPDATE #ora_tables SET ora_data_length = (SELECT ISNULL(MAX(LEN(' + @column_name + ')),0) FROM ORACLETC..INFODBA.' + @table_name + ' WITH (NOLOCK)) WHERE colid = ' + CAST(@loop AS VARCHAR(20))
EXEC sp_executesql @strSQL
SET @loop -= 1
END
SET @loop = @ora_table_count
WHILE @loop > 0
BEGIN
SELECT @table_name = table_name, @column_name = column_name FROM #ora_tables WHERE colid = @loop
SET @strSQL = 'UPDATE #ora_tables SET sql_data_length = (SELECT ISNULL(MAX(LEN(' + @column_name + ')),0) FROM dbo.' + @table_name + ' WITH (NOLOCK)) WHERE colid = ' + CAST(@loop AS VARCHAR(20))
EXEC sp_executesql @strSQL
SET @loop -= 1
END
--Should be no result set
SELECT * FROM #ora_tables WHERE ora_data_length <> sql_data_length

Note: You may have to manually correct any data precision issues by changing the data types of the production SQL Server database.
4. After migrating the data from the Oracle database into your staging database, create a backup of the staging database.
5. After the backup completes, create a database snapshot of the staging database for easier rollback to the migrated staging database in case of errors or issues during the migration to the production database.
USE master
GO
--Determine file names
SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('INFODBA')
--You will need to change the path to your SQL Server directory.
CREATE DATABASE INFODBA_snapshot ON
( NAME = INFODBA
, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Data\INFODBA.ss' )
AS SNAPSHOT OF INFODBA;
GO

6. After creating a snapshot of the staging database for rollback purposes, you will now need to rename several tables in the staging database schema to match the production database schema.
--Rename tables in staging to match production
USE INFODBA
GO
SET NOCOUNT ON
DECLARE @loop INT
DECLARE @loop_1 INT
DECLARE @rowcount INT
DECLARE @command_1 NVARCHAR(4000)
DECLARE @command_2 NVARCHAR(4000)
--There will be one set of commands for temporary names
--There will be another set of commands for final names
SET @loop_1 = 2
--Table to hold rename commands
BEGIN TRY
DROP TABLE #PLOVP
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #PLOVP
(
 colid INT IDENTITY(1,1)
,command_1 NVARCHAR(4000) COLLATE Latin1_General_BIN
,command_2 NVARCHAR(4000) COLLATE Latin1_General_BIN
)
--Determine the difference between SQL Server version and Oracle version
--Rename to SQL Server version
--Need to set tables to temp names to avoid renaming conflicts
INSERT INTO #PLOVP (command_1, command_2)
SELECT 'exec sp_rename ''' + infodba.pdbname COLLATE Latin1_General_BIN + ''',''' + tc.pdbname + '_new'''
,'exec sp_rename ''' + tc.pdbname + '_new''' + ',''' + tc.pdbname + ''''
FROM (SELECT a.pdbname
 ,c.pname AS pname_class
 ,a.pname AS pname_attribute
 FROM INFODBA..PPOM_CLASS_saved c
 INNER JOIN INFODBA..PPOM_ATTRIBUTE_saved a ON a.rdefining_classu = c.puid AND a.plength = -1
 WHERE a.pdbname LIKE 'PLOV_VALUES%') tc
INNER JOIN (SELECT a.pdbname
 ,c.pname AS pname_class
 ,a.pname AS pname_attribute
 FROM INFODBA..PPOM_CLASS c
 INNER JOIN INFODBA..PPOM_ATTRIBUTE a ON a.rdefining_classu = c.puid AND a.plength = -1
 WHERE a.pdbname LIKE 'PLOV_VALUES%' AND c.pname LIKE 'ListOf%') infodba
 ON tc.pname_class = infodba.pname_class COLLATE Latin1_General_BIN
 AND tc.pname_attribute = infodba.pname_attribute COLLATE Latin1_General_BIN
SET @rowcount = @@ROWCOUNT
WHILE @loop_1 > 0
BEGIN
SET @loop = @rowcount
--Execute rename statements
WHILE @loop > 0
BEGIN
SELECT @command_1 = command_1, @command_2 = command_2 FROM #PLOVP WHERE colid = @loop
IF @loop_1 = 2
BEGIN
EXEC dbo.sp_executesql @command_1
END
IF @loop_1 = 1
BEGIN
EXEC dbo.sp_executesql @command_2
END
SET @loop -= 1
END
SET @loop_1 -= 1
END
GO

7. Next, you need to rename several columns in the staging database schema to match the production database schema.
--Rename columns in staging to match production
USE INFODBA
GO
SET NOCOUNT ON
DECLARE @rowcount INT
DECLARE @command_1 NVARCHAR(4000)
--Table to hold rename commands
BEGIN TRY
DROP TABLE #PLOVP
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #PLOVP
(
 colid INT IDENTITY(1,1)
,command_1 NVARCHAR(4000) COLLATE Latin1_General_BIN
)
--Determine the column naming differences between SQL Server version and Oracle version
--Rename to SQL Server version
INSERT INTO #PLOVP (command_1)
SELECT 'EXEC dbo.sp_rename ''' + info.tabname_class + '.' + info.vla + ''',''' + tc.vla + ''',''COLUMN'''
FROM (SELECT 'VLA_'+ CONVERT(VARCHAR, pc.pcpid) + '_' + CONVERT(VARCHAR, pa.papid) AS vla
 ,pc.PNAME AS oname
 ,pa.PNAME AS descp
 ,pc.PTNAME AS tabname_class
 FROM INFODBA..PPOM_CLASS pc
 INNER JOIN INFODBA..PPOM_ATTRIBUTE pa ON pa.rdefining_classu = pc.puid AND pa.plength = -1) info
INNER JOIN (SELECT 'VLA_'+ CONVERT(VARCHAR, pc.pcpid) + '_' + CONVERT(VARCHAR, pa.papid) AS VLA
 ,pc.PNAME AS oname
 ,pa.PNAME AS descp
 ,pc.PTNAME AS tabname_class
 FROM INFODBA..PPOM_CLASS_saved pc
 INNER JOIN INFODBA..PPOM_ATTRIBUTE_saved pa ON pa.rdefining_classu = pc.puid AND pa.plength = -1) tc
 ON info.oname COLLATE Latin1_General_BIN = tc.oname
 AND info.descp COLLATE Latin1_General_BIN = tc.descp
 AND info.tabname_class COLLATE Latin1_General_BIN = tc.tabname_class
SET @rowcount = @@ROWCOUNT
WHILE @rowcount > 0
BEGIN
SELECT @command_1 = command_1 FROM #PLOVP WHERE colid = @rowcount
BEGIN TRY
EXEC dbo.sp_executesql @command_1
END TRY
BEGIN CATCH
END CATCH
SET @rowcount -= 1
END
GO


8. Update columns in the POM_INDEXES table.


--Determine and rename duplicate entries
USE INFODBA
GO
UPDATE TOP (1) d
SET d.dbname = d.dbname + '_' + CAST(d.youneek AS VARCHAR(11))
FROM POM_INDEXES d
INNER JOIN (
 SELECT dbname,cpid,apid
 FROM POM_INDEXES
 GROUP BY dbname,cpid,apid
 HAVING COUNT(*) > 1) db ON db.dbname = d.dbname AND db.cpid = d.cpid AND db.apid = d.apid

9. Drop the temporary PPOM_CLASS_saved and PPOM_ATTRIBUTE_saved tables.


USE INFODBA
GO
DROP TABLE PPOM_CLASS_saved
DROP TABLE PPOM_ATTRIBUTE_saved
GO

10. Once you are satisfied that everything is correct and have performed all use-case testing from your migration plan, migrate the data into the production database:
USE tc
GO
SET NOCOUNT ON
DECLARE @table_count BIGINT
DECLARE @table_name VARCHAR(255)
DECLARE @strsql NVARCHAR(4000)
DECLARE @strsql2 VARCHAR(4000)
BEGIN TRY
DROP TABLE #row_count
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #row_count
(
 colid INT IDENTITY(1,1)
,table_name VARCHAR(255)
,row_count BIGINT
)
BEGIN TRY
DROP TABLE #tables
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #tables
(
 colid INT IDENTITY(1,1)
,table_name VARCHAR(255)
)
BEGIN TRY
DROP TABLE #columns
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #columns
(
 colid INT IDENTITY(1,1)
,column_name VARCHAR(255)
,system_type_id INT
)
DECLARE @row_count INT
INSERT INTO #row_count(table_name)
SELECT name FROM INFODBA.sys.objects WHERE is_ms_shipped = 0 AND type = 'U'
SET @row_count = @@ROWCOUNT
WHILE @row_count > 0
BEGIN
SELECT @table_name = table_name FROM #row_count WHERE colid = @row_count
SET @strSQL = 'UPDATE #row_count SET row_count = (SELECT COUNT(*) FROM INFODBA.dbo.' + @table_name + ' WITH (NOLOCK)) WHERE colid = ' + CAST(@row_count AS VARCHAR(20))
EXEC sp_executesql @strSQL
SET @row_count -= 1
END
INSERT INTO #tables (table_name)
SELECT so.name FROM tc.sys.objects so WHERE so.is_ms_shipped = 0 AND so.type = 'U'
SET @table_count = @@ROWCOUNT
WHILE @table_count > 0
BEGIN
TRUNCATE TABLE #columns
SET @row_count = 0
SELECT @table_name = table_name FROM #tables WHERE colid = @table_count
SELECT @row_count = row_count FROM #row_count WHERE table_name = @table_name
SET @row_count = ISNULL(@row_count,0)
IF @row_count > 0
BEGIN
SET @strsql = 'INSERT INTO #columns (column_name) SELECT name FROM tc.sys.columns WHERE object_id = OBJECT_ID(''' + @table_name + ''')'
EXEC dbo.sp_executesql @strsql
SET @strsql2 = ''
SELECT @strsql2 = @strsql2 + column_name + ',' FROM #columns
SET @strsql2 = @strsql2 + '|'
--take care of trailing comma
SET @strsql2 = REPLACE (@strsql2,',|',' ')
SET @strsql = 'INSERT INTO ' + @table_name + '(' + @strsql2 + ')' + ' SELECT ' + @strsql2 + ' FROM INFODBA.dbo.' + @table_name
BEGIN TRY
EXEC dbo.sp_executesql @strsql
END TRY
BEGIN CATCH
SELECT @strsql AS 'Table did not load investigate manually'
END CATCH
END
SET @table_count -= 1
END

11. The script above will produce a listing of tables that did not have data inserted into them due to column differences between the staging and production databases. You may have to manually create some table insert statements to overcome these column differences, as in these examples:
USE tc
GO
--Column names different
INSERT INTO PPOM_SITE_CONFIG (puid,LS_13_2,pindex)
SELECT puid,LS_388_1,pindex FROM INFODBA.dbo.PPOM_SITE_CONFIG
--Column names different
INSERT INTO PTC_PREFERENCES (puid,LS_330_2,robject_tagu,robject_tagc)
SELECT puid,LS_473_1,robject_tagu,robject_tagc FROM INFODBA.dbo.PTC_PREFERENCES
--Column names are different
INSERT INTO PFULLTEXT (puid,LS_739_1,pbody_cleartext,pcontent_type)
SELECT puid,LS_492_1,pbody_cleartext,pcontent_type FROM INFODBA.dbo.PFULLTEXT
--Column PIPLISTOFVALUES_1_0 not in SQL Server table
INSERT INTO PLISTOFVALUES (puid,plov_name,plov_desc,plov_type,plov_attached_class,plov_attached_attr,
 VLA_427_9,VLA_427_10,VLA_427_11,VLA_427_12,VLA_427_13,VLA_427_14,VLA_427_15,VLA_427_16,VLA_427_17,
 VLA_427_18,VLA_427_19,VLA_427_20,plov_attached_type,plov_value_type,plov_usage)
SELECT puid,plov_name,plov_desc,plov_type,plov_attached_class,plov_attached_attr,
 VLA_427_9,VLA_427_10,VLA_427_11,VLA_427_12,VLA_427_13,VLA_427_14,VLA_427_15,VLA_427_16,VLA_427_17,
 VLA_427_18,VLA_427_19,VLA_427_20,plov_attached_type,plov_value_type,plov_usage
FROM INFODBA.dbo.PLISTOFVALUES
--Column PIPIMANQUERY_1_0 not in SQL Server table
INSERT INTO PIMANQUERY (puid,pquery_name,pquery_desc,pquery_class,VLA_184_4,pquery_uid_name,pquery_flag,presults_type)
SELECT puid,pquery_name,pquery_desc,pquery_class,VLA_184_4,pquery_uid_name,pquery_flag,presults_type
FROM INFODBA.dbo.PIMANQUERY
--Column PIPITEMREVISION_2_0 not in SQL Server table
INSERT INTO PITEMREVISION (puid,pitem_revision_id,VLA_631_8,VLA_631_10,VLA_631_11,VLA_631_12,
 rsequence_anchoru,rsequence_anchorc,rvariant_expression_blocku,rvariant_expression_blockc,
 ritems_tagu,ritems_tagc,psequence_id,psequence_limit,phas_variant_module)
SELECT puid,pitem_revision_id,VLA_631_8,VLA_631_10,VLA_631_11,VLA_631_12,
 rsequence_anchoru,rsequence_anchorc,rvariant_expression_blocku,rvariant_expression_blockc,
 ritems_tagu,ritems_tagc,psequence_id,psequence_limit,phas_variant_module
FROM INFODBA.dbo.PITEMREVISION
--Columns PIPOM_KEY_0 and PIPOM_KEY_1 not in SQL Server table
INSERT INTO POM_KEY (puid,domain,key_value)
SELECT puid,domain,key_value FROM INFODBA.dbo.POM_KEY
--Column PIPPOM_USER_1_0 not in SQL Server table
INSERT INTO PPOM_USER (puid,puser_id,ppassword,puser_name,pstatus,pdef_acl,pdefault_group,puser_data_source,
 plicense_level,plast_login_time,puser_last_sync_date)
SELECT puid,puser_id,ppassword,puser_name,pstatus,pdef_acl,pdefault_group,puser_data_source,
 plicense_level,plast_login_time,puser_last_sync_date
FROM INFODBA.dbo.PPOM_USER
--Column difference
INSERT INTO POM_ROOT (site_id,[schema],[path],site_state,cpid,[version],scan_state,scan_name,scan_class,info,can_edit_schema)
SELECT site_id,[schema],[path],site_state,cpid,[version],scan_state,scan_name,scan_class,info,can_edit_schema
FROM INFODBA.dbo.POM_ROOT
--Columns PIPWORKSPACEOBJ_0_0 and PIPWORKSPACEOBJ_1_0 not in SQL Server table
INSERT INTO PWORKSPACEOBJECT (puid,pobject_name,pobject_desc,pobject_type,pobject_application,VLA_484_7,
 pip_classification,VLA_484_10,pgov_classification,VLA_484_13,VLA_484_16,pactive_seq,prevision_number,
 rwso_threadu,rwso_threadc,rowning_organizationu,rowning_organizationc,prevision_limit,
 rowning_projectu,rowning_projectc,pdate_released)
SELECT puid,pobject_name,pobject_desc,pobject_type,pobject_application,VLA_484_7,
 pip_classification,VLA_484_10,pgov_classification,VLA_484_13,VLA_484_16,pactive_seq,prevision_number,
 rwso_threadu,rwso_threadc,rowning_organizationu,rowning_organizationc,prevision_limit,
 rowning_projectu,rowning_projectc,pdate_released
FROM INFODBA.dbo.PWORKSPACEOBJECT

12. Verify the row counts between the staging and production SQL Server databases:
USE INFODBA
GO
SET NOCOUNT ON
BEGIN TRY
DROP TABLE #sql_tables_staging
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #sql_tables_staging
(
 colid INT IDENTITY(1,1)
,table_name VARCHAR(255)
,row_count BIGINT
)
BEGIN TRY
DROP TABLE #sql_tables_final
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #sql_tables_final
(
 colid INT IDENTITY(1,1)
,table_name VARCHAR(255)
,row_count BIGINT
)
DECLARE @stage_table_count INT
DECLARE @final_table_count INT
DECLARE @table_name VARCHAR(255)
DECLARE @strSQL NVARCHAR(4000)
INSERT INTO #sql_tables_staging(table_name)
SELECT name FROM sys.objects WHERE is_ms_shipped = 0 AND type = 'U'
SET @stage_table_count = @@ROWCOUNT
WHILE @stage_table_count > 0
BEGIN
SELECT @table_name = table_name FROM #sql_tables_staging WHERE colid = @stage_table_count
SET @strSQL = 'UPDATE #sql_tables_staging SET row_count = (SELECT COUNT(*) FROM dbo.' + @table_name + ' WITH (NOLOCK)) WHERE colid = ' + CAST(@stage_table_count AS VARCHAR(20))
EXEC sp_executesql @strSQL
SET @stage_table_count -= 1
END
INSERT INTO #sql_tables_final(table_name)
SELECT name FROM tc.sys.objects --you will have to change to your database name
WHERE is_ms_shipped = 0 AND type = 'U'
SET @final_table_count = @@ROWCOUNT
WHILE @final_table_count > 0
BEGIN
SELECT @table_name = table_name FROM #sql_tables_final WHERE colid = @final_table_count
SET @strSQL = 'UPDATE #sql_tables_final SET row_count = (SELECT COUNT(*) FROM tc.dbo.' + @table_name + ' WITH (NOLOCK)) WHERE colid = ' + CAST(@final_table_count AS VARCHAR(20))
EXEC sp_executesql @strSQL
SET @final_table_count -= 1
END
--Tables showing up have different row counts
SELECT sc.table_name AS 'Staging Table'
,sc.row_count AS 'Staging Row Count'
,fc.table_name AS 'Production Table'
,fc.row_count AS 'Production Table Row Count'
FROM #sql_tables_staging sc
INNER JOIN #sql_tables_final fc ON sc.table_name = fc.table_name AND sc.row_count <> fc.row_count

13. Verify that the data precision is the same:


USE INFODBA
GO
SET NOCOUNT ON
BEGIN TRY
DROP TABLE #staging_tables
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE #staging_tables
(
 colid INT IDENTITY(1,1)
,table_name VARCHAR(255)
,column_name VARCHAR(255)
,staging_data_length INT
,final_data_length INT
)
DECLARE @staging_table_count INT
DECLARE @loop INT
DECLARE @table_name VARCHAR(255)
DECLARE @column_name VARCHAR(255)
DECLARE @strSQL NVARCHAR(4000)
SET @loop = 1
INSERT INTO #staging_tables(table_name,column_name)
SELECT TABLE_NAME, COLUMN_NAME FROM ORACLETC..SYS.USER_TAB_COLUMNS WHERE DATA_TYPE = 'FLOAT' AND DATA_PRECISION = 126
SET @staging_table_count = @@ROWCOUNT
SET @loop = @staging_table_count
WHILE @loop > 0
BEGIN
SELECT @table_name = table_name, @column_name = column_name FROM #staging_tables WHERE colid = @loop
SET @strSQL = 'UPDATE #staging_tables SET staging_data_length = (SELECT ISNULL(MAX(LEN(' + @column_name + ')),0) FROM INFODBA.dbo.' + @table_name + ' WITH (NOLOCK)) WHERE colid = ' + CAST(@loop AS VARCHAR(20))
EXEC sp_executesql @strSQL
SET @loop -= 1
END
SET @loop = @staging_table_count
WHILE @loop > 0
BEGIN
SELECT @table_name = table_name, @column_name = LOWER(column_name) FROM #staging_tables WHERE colid = @loop
--You will have to change production database name
BEGIN TRY
SET @strSQL = 'UPDATE #staging_tables SET final_data_length = (SELECT ISNULL(MAX(LEN(' + @column_name COLLATE Latin1_General_BIN + ')),0) FROM tc.dbo.' + @table_name + ' WITH (NOLOCK)) WHERE colid = ' + CAST(@loop AS VARCHAR(20))
EXEC sp_executesql @strSQL
END TRY
BEGIN CATCH
SELECT @table_name AS 'Table Name', @column_name AS 'Column Name'
END CATCH
SET @loop -= 1
END
--Should be no result set
SELECT * FROM #staging_tables WHERE staging_data_length <> final_data_length

Note: You may have to manually correct any data precision issues.
14. Drop the staging database snapshot:
USE master
GO
DROP DATABASE INFODBA_snapshot
GO

Performing Post-Migration Tasks


After you have migrated and verified all your data, you need to perform the following post-migration tasks:
1. Index maintenance
2. Database checks
3. Database backups

Monitoring Index Fragmentation and Defragmenting


SQL Server uses indexes to provide fast access to information when users or applications request it. These indexes are maintained by the Database Engine as the table data grows and/or changes. Over time, the indexes can become fragmented, especially in databases that handle heavy insert, update, and delete activity. An index is fragmented when the physical ordering on disk does not match the logical order of the data (as defined by the index key) or when data pages that contain the index are dispersed across non-adjacent sections of the disk. Fragmentation of an index can reduce the speed of data access and result in slower application performance. It can also cause more disk space to be used than is actually necessary. Index fragmentation can be corrected by reorganizing or rebuilding the index. You can tell which indexes, if any, have fragmentation problems by using the sys.dm_db_index_physical_stats() system function. This function provides a lot of detail about the physical layout of the index. However, the most important result column for tracking fragmentation is avg_fragmentation_in_percent. This column indicates how fragmented the index is on disk. A low number means low fragmentation (good); a high number means high fragmentation (bad). For example, the following query returns index physical stats for all the indexes in the current database:

SELECT OBJECT_NAME(object_id), index_id, page_count, index_type_desc, avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED')

To identify the indexes by name, you can join against the sys.indexes system view, as sketched below. Similar information is also available in the Standard Reports in SSMS. To view this report, right-click the Teamcenter database, select Reports, Standard Reports, and then select the Index Physical Statistics report on the fly-out menu. The rule of thumb in the following table indicates how to interpret the avg_fragmentation_in_percent value.

Fragmentation   Recommended Action
< 5%            Do nothing
5% to 30%       Reorganize with ALTER INDEX REORGANIZE
> 30%           Rebuild with ALTER INDEX REBUILD WITH (ONLINE = ON) or CREATE INDEX with DROP_EXISTING = ON
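A minimal sketch of joining the fragmentation query to sys.indexes to get index names (no assumptions beyond the current database):

SELECT OBJECT_NAME(ps.object_id) AS table_name, i.name AS index_name,
       ps.avg_fragmentation_in_percent, ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
INNER JOIN sys.indexes AS i ON ps.object_id = i.object_id AND ps.index_id = i.index_id
ORDER BY ps.avg_fragmentation_in_percent DESC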

Reorganizing an index does not block user access to the index while underway. Rebuilding or recreating the index, however, does prevent user access to the index. The exception to this is if the ALTER INDEX REBUILD statement is used with the ONLINE = ON option. Note that online index rebuild requires the Enterprise Edition of SQL Server 2008. Periodically checking index fragmentation and taking any necessary corrective action is important to maintaining the performance of your Teamcenter deployment. The rate at which fragmentation may occur depends on user activity, but as a general rule Siemens recommends checking index fragmentation at least monthly.
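A short sketch of both maintenance forms (the index and table names below are placeholders, not actual Teamcenter object names):

-- Reorganize a lightly fragmented index (5% to 30%)
ALTER INDEX IX_Example ON dbo.ExampleTable REORGANIZE
-- Rebuild a heavily fragmented index; ONLINE = ON requires Enterprise Edition
ALTER INDEX IX_Example ON dbo.ExampleTable REBUILD WITH (ONLINE = ON)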

Running the Teamcenter Index-Verifier Tool


Siemens provides an index_verifier utility, packaged with all versions of Teamcenter, to analyze your Teamcenter database for missing indexes. If the index_verifier utility finds missing indexes, it will output SQL statements that you can use to create the indexes. Siemens best practice is to run this utility monthly at a minimum. Siemens also recommends adding additional indexes you may require by using the install command-line utility provided with Teamcenter. Doing so will register the index as part of the Teamcenter schema, at which point index_verifier can check for its existence. If you manually add indexes to the Teamcenter database, index_verifier will have no knowledge of them, and it will be your responsibility to ensure that those indexes are carried forward and maintained. You can find full documentation of these utilities in the Teamcenter Utilities Reference Manual available on the Siemens Teamcenter GTAC Support Site.

Checking Database Integrity


For all the sophisticated data management techniques embedded in the SQL Server database engine, there is still the possibility of some corruption occurring in a database, most notably as the result of a hardware glitch. To head off the potential impact of such problems, you should regularly check the integrity of the database. The statement for doing this in T-SQL is DBCC CHECKDB. It is best to include database integrity checks in a scheduled job that executes DBCC CHECKDB or through a scheduled Maintenance Plan that includes the Check Database Integrity task. If any problems are detected, you can restore from backups (usually the best option) or use one of the REPAIR options on DBCC CHECKDB.
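A minimal example of such a check, assuming the production database is named tc as elsewhere in this guide:

DBCC CHECKDB (N'tc') WITH NO_INFOMSGS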

Changing the Recovery Model


After verifying the health of your new SQL Server production database, you need to change the recovery model back to Full.
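For example, again assuming the production database name tc used throughout this guide:

ALTER DATABASE tc SET RECOVERY FULL
GO

Take a full database backup immediately after switching so that the transaction log backup chain can begin.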

Creating Database Backups


It is recommended that you create full database backups, differential database backups, and transaction log backups in order to protect your database. You should use SSMS and create custom T-SQL backup scripts and SQL Server Agent jobs or use a Maintenance Plan to back up your database. You should also use a Maintenance Plan to back up your system databases on a weekly basis.
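A minimal T-SQL sketch of the three backup types (the file paths are placeholders; in practice these commands are usually scheduled through SQL Server Agent jobs or a Maintenance Plan):

-- Full backup
BACKUP DATABASE tc TO DISK = N'D:\Backups\tc_full.bak' WITH INIT
-- Differential backup
BACKUP DATABASE tc TO DISK = N'D:\Backups\tc_diff.bak' WITH DIFFERENTIAL
-- Transaction log backup (requires the Full recovery model)
BACKUP LOG tc TO DISK = N'D:\Backups\tc_log.trn'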

Validating the Migration Through Teamcenter Use Cases


The final step in your migration process is to test the use-case scenarios you determined at the start of your migration planning to verify your new production database. These scenarios should include the addition of new data and the validation of existing data between the Oracle database and the SQL Server database. Some common use cases affected by the migration process are customized form attributes in Teamcenter, as well as custom items, objects, and data types. Testing their functionality is of utmost importance. Further testing should be done on assemblies and viewer data to ensure correctness; however, issues here should be rare or related to custom attributes.


Appendix A: Basic Data Type Conversions


Numeric Data Types
Oracle                                     SQL Server
Number(19,0)                               BigInt
Int or Number(10,0)                        Int
SmallInt or Number(6,0)                    SmallInt
Number(3,0)                                TinyInt
Number(p,0)                                Decimal(p,s)
Number(p,0)                                Numeric(p,s)
Float, Double Precision, or Number(38)     Float
Number(1)                                  Bit
Number(19,4)                               Money
Number(10,4)                               SmallMoney

Character Data Types


Oracle Data Type   Size in bytes               SQL Server Data Type   Number of Chars   Size in bytes
Char               1 to 2000                   Char                   1 to 8000         1 to 8000 fixed
NChar              1 to 2000 (fewer chars)     NChar                  1 to 4000         2 to 8000 fixed
Varchar            1 to 4000                   Varchar                1 to 8000         0 to 8000
NVarchar           1 to 4000 (fewer chars)     NVarchar               1 to 4000         0 to 8000
Varchar2           1 to 4000                   Varchar(max)           1 to 2^31-1       0 to 2GB
NVarchar2          1 to 4000 (fewer chars)     NVarchar(max)          1 to 2^30-1       0 to 2GB
LONG               1 to 2^31                   Text, Varchar(max)     1 to 2^31-1       0 to 2GB
CLOB               1 to 2^32                   Text, Varchar(max)     1 to 2^31-1       0 to 2GB
NCLOB              1 to 2^32                   Ntext, NVarchar(max)   1 to 2^30-1       0 to 2GB

Date and Time Data Types


Oracle Data Type                          Values                                              SQL Server Data Type           Values
Date                                      Date and time to seconds                            SmallDateTime                  Date and time to seconds
                                                                                              DateTime                       Date and time with fractional seconds to 1/300 or 3.33 milliseconds; 01/01/1753 AD to 01/06/9999 AD (DMY)
Timestamp (TS)                            Date and time with fractional seconds (9 digits)   DateTime2 (DT2)                Date and time with fractional seconds (7 digits)
Timestamp with time zone (TSTZ)           Like TS with zones                                  DateTimeOffset                 Like DT2 with time zone offsets
Timestamp with local time zone (TSLTZ)    Like TS with relative zones to users
                                                                                              DATE                           Date only; 01/01/0001 AD to 31/12/9999 AD (DMY)
                                                                                              TIME                           Time only with fractional seconds (7 digits)
Calendar                                  Julian; 01/01/4712 BCE to 12/31/4712 CE             Calendar                       Gregorian
Daylight Saving Time Support              Yes                                                 Daylight Saving Time Support   No

Binary Data Types


Oracle          SQL Server
BLOB            Image, Varbinary(max)
Raw             Image, Varbinary(max)
Long Raw        Image, Varbinary(max)
BFile           Similar to Varbinary(max) Filestream
BLOB/Raw(n)     Binary(n)
BLOB/Raw(n)     Varbinary(n), Varbinary(max)


Appendix B: Sample Teamcenter Table Creation Scripts


--Change to your production database name
USE tc
GO
BEGIN TRY
DROP TABLE PCOUNTER_TAGS_1
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE PCOUNTER_TAGS_1
(
 puid VARCHAR(30) NOT NULL
,pseq FLOAT NOT NULL
,p_valu_0 VARCHAR(30) NULL
,p_valc_0 FLOAT NULL
)
GO
BEGIN TRY
DROP TABLE SYS_IMPORT_SCHEMA_01
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE SYS_IMPORT_SCHEMA_01
(
 PROCESS_ORDER FLOAT NULL
,DUPLICATE FLOAT NULL
,DUMP_FILEID FLOAT NULL
,DUMP_POSITION FLOAT NULL
,DUMP_LENGTH FLOAT NULL
,DUMP_ALLOCATION FLOAT NULL
,COMPLETED_ROWS FLOAT NULL
,ERROR_COUNT FLOAT NULL
,ELAPSED_TIME FLOAT NULL
,OBJECT_TYPE_PATH VARCHAR(200) NULL
,OBJECT_PATH_SEQNO FLOAT NULL
,OBJECT_TYPE VARCHAR(30) NULL
,IN_PROGRESS CHAR(1) NULL
,OBJECT_NAME VARCHAR(500) NULL
,OBJECT_LONG_NAME VARCHAR(4000) NULL
,OBJECT_SCHEMA VARCHAR(30) NULL
,ORIGINAL_OBJECT_SCHEMA VARCHAR(30) NULL
,PARTITION_NAME VARCHAR(30) NULL
,SUBPARTITION_NAME VARCHAR(30) NULL
,FLAGS FLOAT NULL
,PROPERTY FLOAT NULL
,COMPLETION_TIME DATE NULL
,OBJECT_TABLESPACE VARCHAR(30) NULL
,SIZE_ESTIMATE FLOAT NULL
,OBJECT_ROW FLOAT NULL
,PROCESSING_STATE CHAR(1) NULL
,PROCESSING_STATUS CHAR(1) NULL
,BASE_PROCESS_ORDER FLOAT NULL
,BASE_OBJECT_TYPE VARCHAR(30) NULL
,BASE_OBJECT_NAME VARCHAR(30) NULL
,BASE_OBJECT_SCHEMA VARCHAR(30) NULL
,ANCESTOR_PROCESS_ORDER FLOAT NULL
,DOMAIN_PROCESS_ORDER FLOAT NULL
,PARALLELIZATION FLOAT NULL
,UNLOAD_METHOD FLOAT NULL
,GRANULES FLOAT NULL
,SCN FLOAT NULL
,GRANTOR VARCHAR(30) NULL
,XML_CLOB VARCHAR(MAX) NULL
,NAME VARCHAR(30) NULL
,VALUE_T VARCHAR(4000) NULL
,VALUE_N FLOAT NULL
,IS_DEFAULT FLOAT NULL
,FILE_TYPE FLOAT NULL
,USER_DIRECTORY VARCHAR(4000) NULL
,USER_FILE_NAME VARCHAR(4000) NULL
,FILE_NAME VARCHAR(4000) NULL
,EXTEND_SIZE FLOAT NULL
,FILE_MAX_SIZE FLOAT NULL
,PROCESS_NAME VARCHAR(30) NULL
,LAST_UPDATE DATE NULL
,WORK_ITEM VARCHAR(30) NULL
,OBJECT_NUMBER FLOAT NULL
,COMPLETED_BYTES FLOAT NULL
,TOTAL_BYTES FLOAT NULL
,METADATA_IO FLOAT NULL
,DATA_IO FLOAT NULL
,CUMULATIVE_TIME FLOAT NULL
,PACKET_NUMBER FLOAT NULL
,OLD_VALUE VARCHAR(4000) NULL
,SEED FLOAT NULL
,LAST_FILE FLOAT NULL
,USER_NAME VARCHAR(30) NULL
,OPERATION VARCHAR(30) NULL
,JOB_MODE VARCHAR(30) NULL
,CONTROL_QUEUE VARCHAR(30) NULL
,STATUS_QUEUE VARCHAR(30) NULL
,REMOTE_LINK VARCHAR(4000) NULL
,VERSION FLOAT NULL
,DB_VERSION VARCHAR(30) NULL
,TIMEZONE VARCHAR(64) NULL
,STATE VARCHAR(30) NULL
,PHASE FLOAT NULL
,GUID VARBINARY(16) NULL
,START_TIME DATE NULL
,BLOCK_SIZE FLOAT NULL
,METADATA_BUFFER_SIZE FLOAT NULL
,DATA_BUFFER_SIZE FLOAT NULL
,DEGREE FLOAT NULL
,PLATFORM VARCHAR(101) NULL
,ABORT_STEP FLOAT NULL
,INSTANCE VARCHAR(60) NULL
)
GO
CREATE UNIQUE NONCLUSTERED INDEX SYS_C009756 ON SYS_IMPORT_SCHEMA_01(PROCESS_ORDER,DUPLICATE)
GO
BEGIN TRY
DROP TABLE PFIXED_COST_TAGLIST_0
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE PFIXED_COST_TAGLIST_0
(
 puid VARCHAR(30) NOT NULL
,pseq FLOAT NOT NULL
,p_valu_0 VARCHAR(30) NULL
,p_valc_0 FLOAT NULL
)
GO
BEGIN TRY
DROP TABLE POM_UID_SCRATCH
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE POM_UID_SCRATCH
(
 puid VARCHAR(30) NOT NULL
)
GO
BEGIN TRY
DROP TABLE PREFAUNIQUEEXPRID2
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE PREFAUNIQUEEXPRID2
(
 puid VARCHAR(30) NOT NULL
)
GO
BEGIN TRY
DROP TABLE PVISSTRUCTURERECEIPE
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE PVISSTRUCTURERECEIPE
(
 puid VARCHAR(30) NOT NULL
,pvis_id VARCHAR(64) NULL
)
GO
BEGIN TRY
DROP TABLE PVISMANAGEDDOCUMENT
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE PVISMANAGEDDOCUMENT
(
 puid VARCHAR(30) NOT NULL
,pvis_id VARCHAR(64) NULL
)
GO
BEGIN TRY
DROP TABLE PPATTERN_0
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE PPATTERN_0
(
 puid VARCHAR(30) NOT NULL
,pseq FLOAT NOT NULL
,pval_0 VARCHAR(480) NULL
)
GO
BEGIN TRY
DROP TABLE POCCURRENCE_LIST
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE POCCURRENCE_LIST
(
 puid VARCHAR(30) NOT NULL
,pseq FLOAT NOT NULL
,pvall FLOAT NULL
,pval VARCHAR(MAX) NULL
)
GO
BEGIN TRY
DROP TABLE PVISSTRUCTURECONTEXT
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE PVISSTRUCTURECONTEXT
(
 puid VARCHAR(30) NOT NULL
,vla_847_2 FLOAT NULL
,rstatic_structure_fileu VARCHAR(30) NULL
,rstatic_structure_filec FLOAT NULL
)
GO
BEGIN TRY
DROP TABLE PJTCONTENTFORMSTORAGE
END TRY
BEGIN CATCH
END CATCH
CREATE TABLE PJTCONTENTFORMSTORAGE
(
 puid VARCHAR(30) NOT NULL
,pjt_id VARCHAR(480) NOT NULL
,pgeometry FLOAT NOT NULL
,pmaterial_overrides FLOAT NOT NULL
,pcad_attributes FLOAT NOT NULL
,ppmi FLOAT NOT NULL
)
GO
BEGIN TRY
DROP TABLE EIM_UID_GENERATOR_ROOT
END TRY
BEGIN CATCH
END CATCH
GO
CREATE TABLE EIM_UID_GENERATOR_ROOT
(
 MOST_RECENT_TIME FLOAT NULL
,SITE_ID FLOAT NOT NULL
)
GO


Appendix C: Links for Further Information


For general information, visit the Siemens PLM Software home page. SQL Server information can be found in Books Online:
- SQL Server 2008 Books Online
- SQL Server 2005 Books Online
See the SQL Server Best Practices portal for technical white papers, the SQL Server Best Practices Toolbox, Top 10 Lists, and other resources. For Siemens and Microsoft news, events, and further information, see the Siemens/Microsoft Alliance page.
Following is a list of technical white papers that were tested and validated by the SQL Server development team. These can help you learn more about specific SQL Server topics.
- Guide to Migrating from Oracle to SQL Server 2008
- Practical SQL Server 2008 for Oracle Professionals
- Motorola Migrates Mission Critical Oracle Database in One Day with 100 Percent Accuracy
- Whitepaper: Best Practices for Running Siemens Teamcenter on SQL Server
- SQL Server Oracle Migration Site
- Siemens Teamcenter on SQL Server blog
- SQL Server Web site
- SQL Server TechCenter
- SQL Server DevCenter
- Storage Top 10 Best Practices (for SQL Server)
- Predeployment I/O Best Practices
- Tuning the Performance of Change Data Capture in SQL Server 2008
- The Data Loading Performance Guide
- Best Practices for Migrating Non-Unicode Data Types to Unicode
- The Impact of Changing Collations and of Changing Data Types from Non-Unicode to Unicode
- XML Best Practices for Microsoft SQL Server 2005
- SQL Server 2005 Security Best Practices - Operational and Administrative Tasks
- Partial Database Availability
- Comparing Tables Organized with Clustered Indexes versus Heaps
- SQL Server 2005 Deployment Guidance for Web Hosting Environments
- Implementing Application Failover with Database Mirroring
- TEMPDB Capacity Planning and Concurrency Considerations for Index Create and Rebuild

- Getting Started with SQL Server 2008 Failover Clustering
- SQL Server 2008 Failover Clustering
- Database Mirroring Best Practices and Performance Considerations
- SQL Server 2005 Waits and Queues
- Troubleshooting Performance Problems in SQL Server 2005

