
GE FANUC AUTOMATION

Professional Services SQL Programming Guidelines

Prepared by Matthew Wells


Professional Services

GE Fanuc Automation

SQL Programming Guidelines

Page 1 of 120

All rights reserved. No part of this publication may be reproduced in any form or by any electronic or mechanical means, including photocopying and recording, without permission in writing from GE Fanuc Automation.

Disclaimer of Warranties and Liability
The information contained in this manual is believed to be accurate and reliable. However, GE Fanuc Automation assumes no responsibility for any errors, omissions or inaccuracies whatsoever. Without limiting the foregoing, GE Fanuc Automation disclaims any and all warranties, expressed or implied, including the warranty of merchantability and fitness for a particular purpose, with respect to the information contained in this manual and the equipment or software described herein. The entire risk as to the quality and performance of such information, equipment and software is upon the buyer or user. GE Fanuc Automation shall not be liable for any damages, including special or consequential damages, arising out of the use of such information, equipment and software, even if GE Fanuc Automation has been advised in advance of the possibility of such damages. The use of the information contained in the manual and the software described herein is subject to the GE Fanuc Automation standard license agreement, which must be executed by the buyer or user before the use of such information, equipment or software.

Notice
GE Fanuc Automation reserves the right to make improvements to the products described in this publication at any time and without notice.

© 2007 GE Fanuc Automation. All rights reserved. Microsoft is a registered trademark of Microsoft Corporation. Any other trademarks herein are used solely for purposes of identifying compatibility with the products of GE Fanuc Automation. Proficy is a trademark of GE Fanuc Automation.


Table of Contents

1.0 Introduction .......... 7
1.1 Purpose .......... 7
1.2 Terminology .......... 7
1.3 Contact .......... 7
2.0 General Guidelines .......... 8
2.1 Permissions .......... 8
2.2 Database Updates .......... 8
2.3 Deadlocks .......... 8
3.0 SQL Syntax Formatting .......... 9
3.1 General .......... 9
3.2 Stored Procedure Names .......... 10
3.3 Comments .......... 11
3.4 Queries .......... 11
3.5 Program Flow .......... 12
4.0 Programming Tips .......... 13
4.1 General .......... 13
4.1.01 SET vs SELECT .......... 13
4.1.02 Matching Data Types .......... 13
4.1.03 Checking for Table Records By EXISTS() vs COUNT() .......... 14
4.1.04 SET ROWCOUNT n vs SELECT TOP n .......... 14
4.2 NOLOCK .......... 14
4.3 Indexes .......... 15
4.4 Joins .......... 17
4.5 Transactions .......... 17
4.6 Temporary Tables .......... 18
4.7 Cursors .......... 18
4.8 Dynamic SQL .......... 19
4.8.01 Scope .......... 20
4.8.02 Building Strings Within Strings .......... 20
4.8.03 Output Values .......... 21
4.9 SP Recompiles .......... 21
4.10 NOLOCK .......... 24
5.0 General Plant Applications Methods .......... 25
5.1 Data Types and Conversions .......... 25
5.1.01 DateTime .......... 25
5.1.02 Numeric .......... 25
5.2 Custom Parameters .......... 26
5.2.01 Site and User Parameters .......... 26
5.2.02 Event Model User-Defined Properties .......... 26
5.2.03 Production Unit User-Defined Properties .......... 29
5.2.04 Execution Path User-Defined Properties .......... 30
5.3 Result Sets .......... 31
5.3.01 Debugging Result Sets .......... 33
5.3.02 Result Sets DateTime Formats .......... 33
5.4 Debug Messages .......... 34
5.5 Multilingual Support .......... 34
5.6 History Tables .......... 35
5.7 Column_Updated_Bitmask Field .......... 35
6.0 SQL Historian .......... 38
6.1 BrowseSQL .......... 38
6.2 DeleteSQL .......... 39
6.3 InsertSQL .......... 39
6.4 ReadAfterSQL .......... 39
6.5 ReadBeforeSQL .......... 40
6.6 ReadBetweenSQL .......... 41
7.0 Calculation Stored Procedures .......... 43
8.0 Event Model Stored Procedures .......... 45
8.1 Models and Historian Tags .......... 45
8.1.01 Historian Data Query .......... 45
8.1.02 Multiple Trigger Tags .......... 46
8.2 Model Execution Multithreading and Order .......... 46
8.3 Error Messages .......... 47
8.4 SQL Historian .......... 48
9.0 Report Stored Procedures .......... 49
9.1 Report Parameters .......... 49
9.2 Debug Messages .......... 50
10.0 Testing, Debugging and Troubleshooting .......... 51
10.1 Execution Plan .......... 51
10.2 Table Performance .......... 51
10.2.01 Index Fragmentation .......... 51
10.2.02 Table Statistics .......... 52
10.3 Parallelism .......... 53
11.0 Database Structure .......... 55
11.1 Product/Grade Changes .......... 55
11.1.01 Querying An Event's Product .......... 55
11.2 Production Tracking .......... 55
11.2.01 Production Event Quantity .......... 56
11.2.02 Production Event Status .......... 57
11.2.03 Available Inventory .......... 58
11.2.04 Net Production .......... 59
11.2.05 Production Event Product .......... 60
11.2.06 Inventory Locations .......... 61
11.2.07 Event History .......... 62
11.3 Production Schedule Execution (Process Orders) .......... 63
11.3.01 Schedule Execution .......... 64
11.3.02 Process Order Quantity .......... 64
11.4 Genealogy Links (Event_Components) .......... 65
11.4.01 Raw Material Consumption .......... 66
11.4.02 Multiple Parent/Child Links .......... 67
11.4.03 Circular Parent/Child Links .......... 68
11.4.04 How to Identify What Input Was Load, Unload or Complete .......... 68
11.5 Downtime .......... 69
11.5.01 Querying Downtime Duration .......... 70
11.5.02 Calculating Uptime .......... 70
11.5.03 Determining Primary and Split Records .......... 71
11.5.04 Querying Fault Selection .......... 71
11.5.05 Querying Reason Selection .......... 72
11.5.06 Querying Category Selection .......... 72
11.6 Waste .......... 72
11.7 Quality .......... 73
11.7.01 Comparing Values To Specification Limits .......... 73
11.7.02 Creating Specifications and Transactions .......... 74
11.8 Crew Schedule .......... 75
11.9 Interfaces To External Systems .......... 76
11.10 User-Defined Properties (UDP) .......... 76
11.11 Language .......... 79
11.11.01 Querying a User's Language .......... 80
11.11.02 Querying Language Prompts and Overrides .......... 81
11.11.03 Querying Global and Local Descriptions .......... 82
12.0 Revision History .......... 83
13.0 References .......... 84
14.0 Appendix A: Result Sets .......... 85
14.1 Production Events .......... 86
14.1.01 Example .......... 86
14.2 Variable Values .......... 88
14.2.01 Example .......... 88
14.3 Grade Changes .......... 90
14.3.01 Example .......... 90
14.4 Downtime Events .......... 92
14.4.01 Example .......... 92
14.5 Alarms .......... 95
14.5.01 Example .......... 96
14.6 Sheet Columns .......... 101
14.6.01 Example .......... 101
14.7 User Defined Events .......... 102
14.7.01 Example .......... 103
14.8 Waste Event .......... 104
14.8.01 Example .......... 105
14.9 Production Event Details .......... 106
14.9.01 Example .......... 107
14.10 Genealogy Event Components .......... 108
14.10.01 Example .......... 108
14.11 Genealogy Input Events .......... 109
14.12 Defects .......... 110
14.13 Output File .......... 112
14.13.01 Example .......... 112
15.0 Appendix B: Defragmenting Indexes .......... 114
16.0 Appendix C: Monitor Blocking/Parallelism .......... 119


1.0 Introduction
1.1 Purpose
This document is intended for users who will be writing SQL code, in any form or application, against the Plant Applications database. The principles in this document are important to follow: they reflect the experience of many users and are stated with the goals of avoiding poor performance and establishing a common framework for support. The contents of this document are for informational reference only and are not supported by GE Fanuc. GE Fanuc reserves the right to change the contents of this document at any time.

1.2 Terminology
Term   Definition
====   ==========
MES    Manufacturing Execution System
PA     Plant Applications
ERP    Enterprise Resource Planning
BOM    Bill of Material

1.3 Contact
Any questions, comments or desired additions to this document should be forwarded to:

Matthew Wells
Project Team Leader
GE Fanuc
(905) 858-6555
Matthew.Wells@ge.com


2.0 General Guidelines


Failure to follow these guidelines could result in deadlocks, which may cause lost data, or, worse, blocking, which can cause the Proficy Server to stop functioning.

2.1 Permissions
The SQL comxclient user must be granted Execute permission on all stored procedures run by Proficy. This applies to all custom event and calculation stored procedures.
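For example, the permission can be granted with a standard GRANT statement (the procedure name below is the sample name from section 3.2; substitute your own procedure names):

```sql
-- Grant the comxclient user execute rights on a custom stored procedure.
-- Repeat for every custom event and calculation stored procedure.
GRANT EXECUTE ON dbo.spLocal_GEFTOCalcAvailability TO comxclient
```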

2.2 Database Updates


Always use result sets to make modifications to the database. Most blocking tends to occur when a custom stored procedure (i.e. one called through a calculation or event model) attempts to directly update or delete records in the database and conflicts with the DatabaseMgr service. The Proficy DatabaseMgr service is responsible for making updates to the database, so by using result sets the DatabaseMgr service will make the changes for you, avoiding situations where both processes attempt to lock the same record.

2.3 Deadlocks
Deadlocks can be avoided by using result sets and not modifying the database directly. An example of a situation where a deadlock could occur is a stored procedure that inserts a record into the Events table and then immediately attempts to update it. A trigger defined on the Events table also attempts to update the record upon insertion. The result is two statements simultaneously attempting to lock the same record for update, which ends in a deadlock.
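A sketch of the anti-pattern described above (the column names are illustrative only and do not reflect the actual Events table schema):

```sql
-- ANTI-PATTERN: do not do this in a custom stored procedure.
-- The INSERT fires the trigger defined on dbo.Events, which also
-- updates the new row; the immediate UPDATE below then competes with
-- the trigger for a lock on the same record and can deadlock.
INSERT INTO dbo.Events (Event_Num, PU_Id)   -- columns illustrative
VALUES ('EV-1001', 1)

UPDATE dbo.Events                           -- immediate update: deadlock risk
SET Event_Status = 2                        -- column illustrative
WHERE Event_Num = 'EV-1001'
```

The safe alternative is to return the change as a result set and let the DatabaseMgr service apply it, as described in section 2.2.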


3.0 SQL Syntax Formatting


For support purposes it is wise to maintain a consistent coding style so everyone can easily read the same code. The recommended style is as follows:

3.1 General
The following should be taken into account when writing stored procedures:

- Stored procedure names should always begin with spLocal_. This identifies them to the Proficy Administrator as local custom stored procedures. By default, the Administrator searches for stored procedures starting with the spLocal_ prefix when the stored procedure button is clicked in the calculation or event model properties configuration dialogs.
- Never start a stored procedure name with sp_, as this tells SQL Server to look for it in the master database first, before the local database, so there is a slight performance hit.
- All Transact-SQL reserved words (i.e. SELECT, FROM, WHERE, DECLARE, IF, ELSE, BEGIN, END, etc.) should be upper case.
- All SQL data types (i.e. int, float, datetime, etc.) should be lower case, and declared variables and their data types should be listed vertically and tab indented. For example:
DECLARE @Condition    int,
        @Action       int,
        @Value1       float,
        @Value2       datetime,
        @Value3       varchar(25)

- All SQL functions (i.e. datediff, ltrim, nullif, etc.) should be lower case.
- Variables should not contain any underscores (i.e. @MyNewVariable vs @My_New_Variable).
- Every permanent object referenced in a stored procedure should have dbo. in front of it, including the declaration of the stored procedure itself. This is essential to prevent unnecessary recompiles of the stored procedure. For example:
CREATE PROCEDURE dbo.spLocal_MyStoredProcedure

- All temporary tables should be created together at the beginning of the stored procedure and then collectively dropped at the end of the stored procedure. This will prevent multiple recompiles within the stored procedure.
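A minimal sketch of this pattern (the procedure, table and column names are illustrative):

```sql
CREATE PROCEDURE dbo.spLocal_GEFCmnSDKExample    -- name illustrative
AS
SET NOCOUNT ON

-- Create all temporary tables together at the top of the procedure
CREATE TABLE #Results (
    EventId     int,
    Quantity    float)
CREATE TABLE #Work (
    UnitId      int)

-- ... body of the procedure works with #Results and #Work ...

-- Drop all temporary tables together at the end
DROP TABLE #Work
DROP TABLE #Results

SET NOCOUNT OFF
RETURN
```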


When simultaneously assigning multiple values to multiple variables, a single SELECT statement should be used instead of multiple SET or SELECT statements as there is a relatively significant performance advantage. For example,
SELECT @MyVariable1    = 5,
       @MyVariable2    = 6,
       @MyVariable3    = 7

However, when assigning a single value to a single variable there is a marginal performance advantage to using the SET statement, and it is currently the approach recommended by Microsoft. For example,
SET @MyVariable1 = 5

The default tab size in the SQL editor should be set to 4 characters.

3.2 Stored Procedure Names


The name of the stored procedure should follow this format:

spLocal_CCCSSSTTTDDDD

Where,
    CCC  = Corporation (i.e. GEF for GE Fanuc)
    SSS  = Site (i.e. TO for Toronto) or Cmn for globally reused stored procedures
    TTT  = Type of stored procedure (i.e. Calc for variable calculations or Rpt for reports)
    DDDD = Details of the stored procedure (i.e. Availability for an OEE calculation)

There is no restriction on the size of the abbreviation or the number of characters to use for each clause. The following is a list of common types that can be reused:
Abbreviation  Description
============  ===========
PE            Production event model
ME            Movement event model
UDE           User-defined event model
Calc          Variable calculation
Rpt           Report
SDK           Stored procedure called from SDK via ExecuteCommand or ExecuteSQL
WEBS          Stored procedure called from a web service
WEBD          Stored procedure called from a web dialog (i.e. asp page)
IF            Interface stored procedure

For example, spLocal_GEFTOCalcAvailability


3.3 Comments
Every stored procedure should have a commented header which describes the basic functionality, author, calling applications, and change history. For example:
/*
Stored Procedure:    spLocal_RptMfgDaily
Author:              Matt Wells (MSI)
Date Created:        04/23/02
SP Type:             Model 603
Editor Tab Spacing:  4

Description:
============
This procedure generates the data for a daily manufacturing report.

CALLED BY: RptMfgDaily.xlt (Excel/VBA Template)

Revision  Date     Who  What
========  =====    ===  =====
0.1       5/17/02  MKW  Added new production counter
*/

There should be plenty of comments describing the purpose of each section of code, and major sections should be divided by sub-headers. For example:
/********************************************************************************* * Section 1 * *********************************************************************************/

3.4 Queries
The following should be taken into account when writing queries:

- Primary query clauses should all be at the same indentation level.
- Columns in any SELECT, INSERT, FETCH or VALUES statement should be listed vertically and indented. Any value assignments should also be indented together.
- Multiple conditions in a WHERE clause should be listed vertically and indented, with the condition leading the line.
- Joins should be explicitly referenced using the JOIN clause, as opposed to querying from multiple tables and joining in the WHERE clause.
- Joins should be indented, and multiple join conditions should be listed vertically and indented with the condition leading the line, in the same manner as a WHERE clause.

For example,
SELECT @Value1    = Column1,
       @Value2    = Column2,
       @Value3    = Column3,
       @Value4    = Column4
FROM dbo.Table t
INNER JOIN dbo.Table2 t2 ON t.Column1 = t2.Column1
                        AND t.Column2 = t2.Column2
WHERE Column1 = 5
  AND Column2 IS NOT NULL
   OR Column3 LIKE 'Bob%'
ORDER BY Column1 ASC

3.5 Program Flow


SQL statements following a condition or loop (i.e. IF, ELSE, WHILE, etc.) should be tab indented (preferably by 5 characters), and the BEGIN and END statements should be indented at the same level. Even when there is only one statement (the only time they aren't strictly necessary), it should still be surrounded by BEGIN and END. For example:
IF @Condition = 1
BEGIN
     SELECT @Action = 2
END


4.0 Programming Tips


4.1 General
The following are some general tips for writing efficient SQL code:
- Using SET NOCOUNT ON inside the stored procedure (after the CREATE statement) and SET NOCOUNT OFF before the last RETURN considerably reduces the messages sent back to the client on large stored procedures.
- All the objects referenced within the same stored procedure should be owned by the same object owner (preferably dbo).
- Avoid using NOT IN, which offers poor performance because the SQL Server optimizer has to use a nested table scan to perform this activity. Instead, try one of the following options: use EXISTS or NOT EXISTS; use IN; perform a LEFT OUTER JOIN and check for a NULL condition; or (the preferred option) use a calculated field so an "=" can be used in the WHERE clause.
- Avoid using the SUBSTRING function; use the LIKE condition instead.
- When there is a choice between the IN and BETWEEN clauses in your Transact-SQL, BETWEEN is generally more efficient.
- When there is a choice between the IN and EXISTS clauses in your Transact-SQL, EXISTS is generally more efficient.
- The GROUP BY clause can be used with or without an aggregate function, but in situations without an aggregate function, SELECT DISTINCT should be used instead.
- The table hint NOLOCK (i.e. WITH (NOLOCK)) should be used as much as possible.

4.1.01 SET vs SELECT


For a single variable value assignment, SET is the preferred method, as it has a marginal performance advantage and is recommended by Microsoft. However, for multiple simultaneous variable assignments, a single SELECT statement is more efficient than multiple SET and/or SELECT statements.
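A minimal sketch of both patterns (the variable, table and column names here are illustrative):

```sql
DECLARE @Status int, @Value1 int, @Value2 int

-- Single assignment: SET is preferred
SET @Status = 1

-- Multiple simultaneous assignments: one SELECT beats several SETs
SELECT  @Value1 = Column1,
        @Value2 = Column2
FROM dbo.Table1
WHERE Id = 5
```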

4.1.02 Matching Data Types


The data type of any parameters referenced in a query should always match exactly the data type of the table field, otherwise SQL may choose to perform a scan instead of a seek in the execution plan, thereby affecting the performance of the query. For example, in the query below, the data type of @EventNum should be varchar(50) to match the data type of the Event_Num field in the Events table.
SELECT * FROM Events
WHERE Event_Num = @EventNum

4.1.03 Checking for Table Records By EXISTS() vs COUNT()


When just checking to see if a table, temp table or table variable has any records in it, the EXISTS() function offers better performance than a straight COUNT() function and provides the same result. For example,
IF EXISTS(SELECT * FROM Events)
    PRINT 'yes'
ELSE
    PRINT 'no'

The use of EXISTS() in the above query is more efficient than using a COUNT() function:
IF (SELECT COUNT(*) FROM Events) > 0
    PRINT 'yes'
ELSE
    PRINT 'no'

4.1.04 SET ROWCOUNT n vs SELECT TOP n


Both the SET ROWCOUNT and the SELECT TOP statements allow the number of rows returned to be limited to a specific number. The SET ROWCOUNT command persists for the entire connection session, so it has to be reset in every case (i.e. SET ROWCOUNT 0), while the TOP option is valid only for the particular query in which it's referenced. One particular advantage of SET ROWCOUNT is that the argument can be a variable (i.e. SET ROWCOUNT @NumberOfRows), whereas the TOP option only accepts literal values (i.e. SELECT TOP 10). While in most cases either option works equally efficiently, there are some instances (such as rows returned from an unsorted heap) where the TOP operator is more efficient than SET ROWCOUNT. Because of this, using the TOP operator is preferable to using SET ROWCOUNT to limit the number of rows returned by a query. On SQL 2005 this is straightforward in static SQL, as with a slight syntax change, TOP accepts expressions for the argument:
SELECT TOP(@n) col1, col2 FROM tbl

4.2 NOLOCK
NOLOCK is a table hint that should be used as much as possible, as it improves the performance of queries and eliminates blocking through lock contention. Essentially, NOLOCK performs uncommitted reads, which can be detrimental in certain applications but typically not in Plant Applications. An uncommitted read means that the query is taking the data as is, even though the transaction creating/updating the data may not have completed. In a banking application this could have a serious impact, as most reporting is done on the current balances. In Plant Applications, however, most reporting is done afterwards on existing data that is not modified to a large degree (typically only manually).


For example, if a report summarizing downtime was run at 8:00:00 AM and the operator changed a fault assignment at exactly the same time, an uncommitted read would miss the fault change. However, since the fault change transaction only takes a few milliseconds, the window for this situation to occur is extremely small. And if the operator changed the fault at 8:00:01 instead, the report would have to be rerun anyway. The syntax for NOLOCK is as follows:
SELECT *
FROM dbo.Timed_Event_Details ted WITH (NOLOCK)
JOIN dbo.Timed_Event_Faults tef WITH (NOLOCK)
    ON ted.TEFault_Id = tef.TEFault_Id

The following website further describes the functionality and performance benefits of NOLOCK http://www.sqlservercentral.com/columnists/WFillis/2764.asp

4.3 Indexes
Use the table indexes! They are designed to facilitate fast retrieval of a subset of data, so use them as much as possible. Generally, you should try to design your overall data retrieval strategy to take advantage of indexed queries, but this also often means that adding seemingly useless conditions to your WHERE clause can greatly speed up the execution of your query. The MSSQL query optimizer generally does the best optimization, but it works within a set of parameters defined by the query itself. As such, it may seem like a black art, but by adding extra clauses or fields in the result set, you can give the query optimizer the opportunity to utilize an index in a situation where it normally wouldn't have.

Each table's indexes can be viewed through the MSSQL Server Enterprise Manager, and the Execution Plan displayed in Query Analyzer shows the actual usage of indexes in a particular query. The key is to look for the operation with the highest Query Percentage cost within a given execution plan, and then try to figure out how to improve that performance. A high Query Percentage cost often occurs when the table indexes aren't properly utilized and too many rows are initially selected from a large table before they are filtered out by other means (i.e. a Join). This shows up in the Execution Plan as a high number of rows selected. Modification of the query can cause the query optimizer to use a different index and reduce the number of initial rows selected.

In any query, it is also important to keep the WHERE clause as definite as possible and to avoid too many OR statements that bring multiple unconnected columns into play. This may cause the Query Optimizer to become confused and not use any indexes at all. The key to identifying this situation is to look at your WHERE clause and determine whether there are multiple conditions that could apply to the same record.
For example: The following queries the Variables table looking for a specific string in the Extended_Info field, which is a non-indexed field.


SELECT @Unload_Date_Var_Id = Var_Id
FROM Variables
WHERE PU_Id = @PU_Id
    AND Extended_Info LIKE '%' + @Flag + '%'

This query can actually be made faster by including a search against the Var_Desc field, which is an indexed field. Peculiarly, to achieve the best performance, this query needs the search string to be in a variable as opposed to a constant.
SELECT @Wildcard = '%'

SELECT @Unload_Date_Var_Id = Var_Id
FROM Variables
WHERE PU_Id = @PU_Id
    AND Var_Desc LIKE @Wildcard
    AND Extended_Info LIKE '%' + @Flag + '%'

Basically, you need to pick the indexes based on what you're putting into your WHERE clause or your JOIN statements. To properly design a database, you need to write the queries at the same time. In theory, you could create an index for every column or column combination, but every index you create adds space to the database and ultimately affects performance. So there's a balance, but unfortunately there are no hard rules about it. SQL Server provides an 'Index Tuning Wizard' which may be of assistance. Fundamentally, there are 3 choices you have to make:

1) Primary Key - You should always have a unique primary key. This is the unique column or column combination within the table. In Plant Applications, this is almost always an identity field and, because of this, we usually make it a non-clustered index (as there's no point in having a clustered index on a single column with unique values).

2) Clustered Index - You can only have 1 clustered index, and it should always be a multi-column index that is the most commonly selected key. For example, in the Plant Applications Events table, while Event_Id is the unique primary key, the clustered index is on PU_Id and TimeStamp because that is the most commonly selected combination. Clustered indexes act as a tree, so they're very fast at retrieving data (i.e. they search for all PU_Id records first before drilling down to TimeStamp). The order of the columns in the clustered index is important (as it is for any index).

3) Non-clustered indexes - You can have many of these. Typically they should be for commonly selected columns or column combinations other than the Clustered Index or Primary Key. The best way to choose your indexes is by writing the queries you need and figuring out what the most commonly selected columns are. As you test the queries, look at the Execution Plan in Query Analyzer and see which indexes the query is using.
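The three choices can be sketched as follows. These statements are illustrative only, modeled on the Events table described above; the actual Plant Applications schema already defines its own keys and indexes.

```sql
-- 1) Unique primary key on the identity column, made non-clustered
ALTER TABLE dbo.Events
    ADD CONSTRAINT PK_Events PRIMARY KEY NONCLUSTERED (Event_Id)

-- 2) The single clustered index on the most commonly selected combination
CREATE CLUSTERED INDEX IX_Events_PUId_TimeStamp
    ON dbo.Events (PU_Id, TimeStamp)

-- 3) Additional non-clustered indexes for other common lookups
CREATE NONCLUSTERED INDEX IX_Events_EventNum
    ON dbo.Events (Event_Num)
```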
You will see things like:

a) Table Scan - this means the query is not using any index and is checking every single row for a match. In small tables this may not be a bad thing, but since most tables are large, it's generally a very bad thing. As the table grows larger, the query will take longer and performance will degrade.

b) Index Scan - this means that the query is scanning the full index for the rows it wants. This is better than a table scan but still means the query has to check every single element. Because indexes are smaller than the actual table, Index Scans are much faster than Table Scans. As the table grows larger, the scan will take longer and performance will degrade.


c) Index Seek - this means the index is being fully utilized in the search. This is what you're aiming for: query performance should remain stable as the table grows larger. Obviously, larger tables mean slower performance in all cases, but the effect will be much less pronounced with Index Seeks.

4.4 Joins
Avoid the use of unnecessary Joins, as they will impact the performance of a query. One of the obvious advantages of stored procedures is that you can break down your query into modular components and use variables instead of Joins. Don't exceed a maximum of 15 simultaneous joins, as query performance is significantly impacted beyond that. When writing an inner join, try to make the exclusions as one-sided as possible; this will vastly improve the performance of your query. Exclusions on both sides of an inner join can prove to be expensive. Also, never substitute a variable for a joinable field. In the following example, the first join uses the variable @PUId instead of joining to the field in the Events table. This will cause the query to run much slower, because it takes longer for the query engine to merge the rows together. For example,
FROM Events e
JOIN Variables v
    ON  v.PU_Id = @PUId
    AND v.Var_Desc = 'MyVariable'

vs
FROM Events e
JOIN Variables v
    ON  v.PU_Id = e.PU_Id
    AND v.Var_Desc = 'MyVariable'

When writing a query with Joins, put the table with the smallest number of rows last in the list and the table with the largest number of rows first.

4.5 Transactions
Avoid the use of SQL transactions; if you have to use one, make the transaction as short as possible. SQL transactions help to ensure database integrity by performing all the actions at once. As such, if a required lock cannot be acquired, the whole process stops until it is released. This is a leading cause of blocking.
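To keep a transaction short, do any reads and preparation outside it and wrap only the writes. A sketch (the table and column names are illustrative):

```sql
DECLARE @NewValue int

-- Do the read and any calculation OUTSIDE the transaction
SELECT @NewValue = MAX(Value) + 1 FROM dbo.Counters

-- Hold locks only for the duration of the write itself
BEGIN TRANSACTION
    UPDATE dbo.Counters SET Value = @NewValue WHERE CounterId = 1
COMMIT TRANSACTION
```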


4.6 Temporary Tables


Temporary tables should be avoided if possible. The reason for this is that they are created and then dropped in the tempdb database, and for this to happen, the SQL process needs to acquire an exclusive lock on the tempdb database each time. If too many temporary tables are used, performance can degrade as the various processes have to wait on each other for access. A good alternative to temporary tables is table variables. Table variables reside in memory only and are a much more efficient alternative. However, because they reside only in memory, table variables should only be used for small data sets (< 1000 records). For example,
DECLARE @MyTable TABLE (
    Id int IDENTITY UNIQUE,
    PU_Id int,
    TimeStamp datetime,
    PRIMARY KEY (PU_Id, TimeStamp))

Single or multiple column clustered indexes can be created on a table variable by declaring a PRIMARY KEY. Additional, non-clustered indexes can be created using the UNIQUE constraint keyword.

For large datasets, temporary tables should be used, and for good performance, temporary tables should always have a clustered index on them. One thing to remember about temporary tables is that they are declared globally and are available to any stored procedures called by the stored procedure that created the table. Any duplicate create statements in the called stored procedures will not generate any error messages; the original table will be used which, if unintended, can generate some unexpected results. Multiple temporary tables should always be created together at the beginning of a stored procedure to reduce recompiles. Temporary tables are automatically dropped at the end of the stored procedure that created them, but they should be explicitly dropped (using the DROP TABLE statement) as soon as they are no longer needed in order to free up system resources. When using temporary tables and/or table variables, it is very important to ensure the table has an index; lack of proper indexes is a leading cause of poor performance.
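The temporary table practices above can be sketched as follows (the table and column names are illustrative):

```sql
-- Create the temp table (grouped with any others at the top of the procedure)
CREATE TABLE #Results (
    PU_Id int,
    TimeStamp datetime)

-- Create the clustered index right after the CREATE TABLE statement,
-- before the table is referenced
CREATE CLUSTERED INDEX IX_Results ON #Results (PU_Id, TimeStamp)

-- ... populate and use #Results ...

-- Drop the table explicitly as soon as it is no longer needed
DROP TABLE #Results
```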

4.7 Cursors
Cursors have terrible performance and should never be used. They are expensive in terms of processing and also lock the entire dataset when in use. Furthermore, using a temporary table in a cursor is extremely bad because no other process will be able to create or drop temporary tables for the duration of the cursor as it prevents them from acquiring the necessary exclusive locks on the tempdb database. The processes will be forced to wait and will result in poor server performance. Also, referencing a temporary table in a cursor will force the stored procedure to recompile every time.


Most cursors are just used for looping through a dataset and performing other actions. A simple loop can easily be accomplished using the automatic increment functionality (i.e. IDENTITY) of a temp table or table variable instead. For example, instead of using the following cursor:
DECLARE MyCursor CURSOR FOR
SELECT Field1 FROM DataTable ORDER BY Field1 ASC

OPEN MyCursor
FETCH NEXT FROM MyCursor INTO @Field1
WHILE @@FETCH_STATUS = 0
BEGIN
    -- process your data
    FETCH NEXT FROM MyCursor INTO @Field1
END
CLOSE MyCursor
DEALLOCATE MyCursor

Use a table variable instead in the following manner:


DECLARE @MyTable TABLE (
    RowId int IDENTITY,
    Field1 int)

DECLARE @Rows int,
        @Row int,
        @Field1 int

-- Insert data here in the order desired
INSERT INTO @MyTable (Field1)
SELECT Field1 FROM DataTable ORDER BY Field1 ASC

-- Get the total number of rows
SELECT @Rows = @@ROWCOUNT, @Row = 0

-- Loop through the rows in the table
WHILE @Row < @Rows
BEGIN
    SELECT @Row = @Row + 1
    SELECT @Field1 = Field1 FROM @MyTable WHERE RowId = @Row
    -- Process your data
END

4.8 Dynamic SQL


Dynamic SQL consists of building SQL statements in strings and then executing them with either the EXECUTE() command or the system stored procedure sp_executesql. Generally, it is recommended not to use dynamic SQL, as it can easily impact performance if not done properly. However, there may be applications where it cannot be avoided, and a few cases where it is actually recommended (see the section on stored procedure recompiles). The following website is an excellent resource on the pros and cons of dynamic SQL:
http://www.sommarskog.se/dynamic_sql.html


From a performance perspective, one of the main issues with dynamic SQL relates to how SQL Server manages its execution plans. Every query and stored procedure run in SQL Server requires an execution plan, which basically represents the strategy SQL Server is using to search for and retrieve data. When the code is run for the first time, SQL Server builds an execution plan for it (i.e. it compiles the query) and the plan is saved in cache. The plan is reused until it's aged out or invalidated for some other reason, such as the query or stored procedure code being changed, and this is where problems start to occur with dynamic SQL. Since dynamic SQL typically involves changing the structure of a query, the execution plan is not reused and must be recompiled each time. The time SQL Server takes to generate an execution plan can be significant, so this can result in significant performance degradation. This performance issue can be partially alleviated through the use of sp_executesql, as it allows the definition of parameters in the dynamic SQL, which can reduce the amount of query modification and allow the plans to be reused. As such, when using dynamic SQL, it is especially important to use sp_executesql in place of the EXECUTE() statement. However, modifying the columns returned and/or the WHERE clause itself may still result in recompiles. It's a best practice to always use a variable to hold the SQL statement. For example,
DECLARE @MyString nvarchar(4000)
SELECT @MyString = N'SELECT * FROM MyTable'
EXEC sp_executesql @MyString

4.8.01 Scope
When running EXECUTE() or sp_executesql, the SQL is executed within its own scope and doesn't inherit the scope of the calling stored procedure. This results in the following behaviour:
- Permissions are not inherited, so the calling user must have direct permissions for all the objects involved. There are some options to address this in SQL 2005 (i.e. certificates and/or impersonation) but not in SQL 2000.
- There is no direct access to local variables or parameters of the calling stored procedure without passing them.
- Any USE statement in the dynamic SQL will not affect the calling stored procedure.
- Temp tables created in the dynamic SQL will not be accessible from the calling procedure, since they are dropped when the dynamic SQL exits. The block of dynamic SQL can, however, access temp tables created by the calling procedure.
- If you issue a SET command in the dynamic SQL, the effect of the SET command lasts for the duration of the block of dynamic SQL only and does not affect the caller.
- The query plan for the stored procedure does not include the dynamic SQL. The block of dynamic SQL has a query plan of its own.
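The temp table behaviour can be sketched as follows (the table names are illustrative):

```sql
-- A temp table created by the caller IS visible inside the dynamic SQL
CREATE TABLE #Outer (Id int)
EXEC sp_executesql N'INSERT INTO #Outer (Id) VALUES (1)'

-- A temp table created INSIDE the dynamic SQL is dropped when the
-- block exits, so referencing it afterwards fails
EXEC sp_executesql N'CREATE TABLE #Inner (Id int)'
-- SELECT * FROM #Inner   -- error: object '#Inner' does not exist

DROP TABLE #Outer
```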

4.8.02 Building Strings Within Strings


When building strings for the EXECUTE() or sp_executesql functions, it's commonly required to put a literal string inside a string. This can be accomplished in a number of different ways.


A common method is to use double quotes. If the SET QUOTED_IDENTIFIER option is set to OFF, double quotes can be used as a string delimiter. The default for this setting depends on context, but the preferred setting is ON, and it must be ON in order to use XQuery, indexed views and indexes on computed columns. However, SET commands within dynamic SQL last only for the block of dynamic SQL, so SET QUOTED_IDENTIFIER can be changed within the dynamic SQL itself. For example,
DECLARE @MyString varchar(4000)
SELECT @MyString = 'SET QUOTED_IDENTIFIER OFF SELECT * FROM MyTable WHERE Name = "Jim"'
EXEC(@MyString)

It is not recommended to use double quotes outside of the dynamic SQL statement (i.e. in the calling stored procedure), as they are not supported by default in many SQL editors, which can lead to difficulties in supporting existing stored procedures. Another option is to use direct character references for the quote. While double quotes may look easier to understand, the char(39) function provides the literal reference for the single quote and is supported by all editors. For example,
DECLARE @MyString varchar(4000)
SELECT @MyString = 'SELECT * FROM MyTable WHERE Name = ' + char(39) + 'Jim' + char(39)
EXEC(@MyString)

Lastly, the QUOTENAME() function can be used to return a string with quotes. For example,
DECLARE @MyString varchar(4000)
SELECT @MyString = 'SELECT * FROM MyTable WHERE Name = ' + QUOTENAME('Jim', '''')
SELECT @MyString

4.8.03 Output Values


One advantage that the system stored procedure sp_executesql() has over the standard EXECUTE() statement is that it allows a variable declared in the string statement to be returned to the calling routine. For example,
DECLARE @count as INT
DECLARE @SQLx as nvarchar(200)
SET @SQLx = N'SELECT @count = COUNT(*) FROM Variables'
EXEC sp_executesql @SQLx, N'@count INT OUTPUT', @count = @count OUTPUT
SELECT @count

4.9 SP Recompiles
SQL Server performs recompiles as it executes stored procedures in order to optimize them and, to a certain extent, they are a normal part of every database's ongoing operation. Recompiles are evaluated on a statement-by-statement basis as SQL Server executes a stored procedure, so the number of recompiles performed can easily, and unnecessarily, grow beyond the normal limit if attention is not paid to the way stored procedures are written.


When a stored procedure recompiles, it consumes significant system resources for the compilation process and, if done excessively, can impact server performance. In SQL 7.0 and 2000, the entire stored procedure is recompiled, regardless of which part of it caused the recompile, so the larger the procedure, the greater the impact of recompilation. Furthermore, while recompiling, SQL Server places a compile lock on all the objects referenced by the stored procedure, and when there are excessive recompiles, the database may experience blocking. Notably, in SQL 2005, only the statement in question is recompiled. This statement-level recompile functionality significantly reduces the performance impact of recompilation. Generally, if a stored procedure is recompiling every time it is executed, it should be rewritten to reduce the likelihood of it recompiling. In extreme cases, poor coding can result in a stored procedure recompiling multiple times within a single run. The following are things that will cause a stored procedure to recompile:

1. Dropping and recreating the stored procedure.
2. Using the WITH RECOMPILE clause in the CREATE PROCEDURE or the EXECUTE statement.
3. Running the sp_recompile system procedure against a table referenced by the stored procedure.
4. The stored procedure execution plan is dropped from the cache. Infrequently used procedures are aged by SQL Server and will be dropped from the cache if the cache memory is needed for other operations.
5. All copies of the execution plan in the cache are in use.
6. The procedure alternates between executing Data Definition Language (DDL) and Data Manipulation Language (DML) operations. When DDL operations (i.e. CREATE statements) are interleaved with DML operations (i.e. SELECT statements), the stored procedure will be recompiled every time it encounters a new DDL operation.
7. Changing the schema of a referenced object (i.e. using ALTER TABLE or CREATE INDEX). This applies to both permanent and temporary tables.
8. When a stored procedure is compiled and optimized, it is done based on the statistics of the referenced tables at the time it is compiled. Each table in SQL Server (permanent or temporary) has a calculated recompilation threshold; if the number of row modifications exceeds that threshold, SQL Server will recompile the stored procedure to acquire the new statistics and optimize the procedure again. With respect to permanent tables, this is a normal part of database operations.
9. The following SET options are ON by default in SQL Server, and changing the state of these options will cause the stored procedure to recompile:
SET ANSI_DEFAULTS
SET ANSI_NULLS
SET ANSI_PADDING


SET ANSI_WARNINGS
SET CONCAT_NULL_YIELDS_NULL

While there are no good workarounds for the first four SET options, the last one can be avoided by using the ISNULL function. Using ISNULL to set any data that might contain a NULL to an empty string will accomplish the same functionality.

10. The stored procedure performs certain operations on temporary tables, such as:
a. Declaration of temporary tables causes recompiles during the initial compilation. When a stored procedure is initially compiled, temporary tables do not exist, so SQL Server will recompile after each temporary object is referenced for the first time. SQL Server will cache and reuse this execution plan the next time the procedure is called, and the recompiles for this particular part of the stored procedure will go to zero. However, execution plans can be aged and dropped from the cache, so periodically this may reoccur.
b. Any DECLARE CURSOR statement whose SELECT statement references a temporary table will cause a recompile.
c. Any time a temporary table is created within a control-of-flow statement (i.e. IF..ELSE or WHILE), a recompile will occur.
d. Temporary tables have a global scope, so they can be created in one stored procedure and then referenced in another. They can also be created using the EXECUTE() statement or with the sp_executesql routine. However, if the temporary table has been created in a stored procedure (or EXECUTE() statement) other than the one currently referencing it, a recompile will occur every time the temporary table is referenced.
e. Any statement containing the name of the temporary table that appears syntactically before the temporary table is created in the stored procedure will cause a recompile.
f. Any statement that contains the name of a temporary table and appears syntactically after a DROP TABLE against that temporary table will cause the stored procedure to recompile.
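A sketch of the ISNULL workaround for CONCAT_NULL_YIELDS_NULL (the variable names are illustrative):

```sql
DECLARE @FirstName varchar(50), @LastName varchar(50)
SELECT @FirstName = 'Jim', @LastName = NULL

-- With CONCAT_NULL_YIELDS_NULL ON (the default), concatenating a NULL
-- yields NULL; ISNULL substitutes an empty string so the result is kept,
-- without toggling the SET option and forcing a recompile.
SELECT @FirstName + ' ' + ISNULL(@LastName, '') AS FullName
```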

The following practices should be followed to avoid and reduce the impact of recompiles:

1. Put dbo. in front of every permanent object referenced in the stored procedure. While this doesn't prevent recompiles, it will minimize the impact of a recompile by stopping SQL Server from placing a COMPILE lock on the procedure while it determines whether all objects referenced in the code have the same owners as the objects in the current cached procedure plan.
2. Most recompile issues involve the use of temporary tables. As such, using table variables instead of temporary tables is the best way to avoid them. Table variables do not have recompilation threshold values, so recompilations do not occur because of changes in the number of rows.


3. Place all of the temporary table creation statements together. As mentioned above, during the initial compilation of a stored procedure, SQL Server will recompile each time a temporary table is referenced for the first time throughout the code. By placing them all together, SQL Server will create execution plans for all of them at the same time in just one recompile.
4. Make all schema changes (such as index creation) right after your create table statements and before you reference any of the temporary tables.
5. Do not use a temporary table in the SELECT statement for a cursor. Furthermore, cursors should not be used at all.
6. Do not create a temporary table within a control-of-flow statement (i.e. IF..ELSE or WHILE).
7. Do not create a temporary table in an EXECUTE statement or using the system procedure sp_executesql.
8. Do not use a temporary table in a stored procedure other than the one the table was created in.
9. Do not reference a temporary table before it is created.
10. Do not reference a temporary table after it is dropped.
11. Execute SQL statements that are causing recompilation with sp_executesql. Statements using this method are not compiled as part of the stored procedure plan but have their own plan created. When the stored procedure encounters a statement using sp_executesql, it is free to use one of the statement plans or create a new plan for that statement. Using sp_executesql is preferred to using EXECUTE because it allows parameterization of the query. This should only be used for specific SQL queries that have been determined to be causing excessive recompiles. Alternatively, a sub-procedure could be used to execute specific statements that are causing recompilation. While the sub-procedure would be recompiled, its size and scope are much smaller and, as such, the impact is greatly reduced. This mimics the statement-level recompile functionality implemented in SQL Server 2005.
12. Do not use query hints such as KEEPFIXEDPLAN. Query hints are only effective in very specific situations where the same dataset is being returned. Using query hints without an in-depth analysis of their effectiveness will generally result in slower performance.



5.0 General Plant Applications Methods


5.1 Data Types and Conversions
5.1.01 DateTime
The SQL datetime data type is easily compatible with the varchar data type. However, there is one thing to be aware of: the implicit (default) conversion of datetime to varchar chops off the seconds. As such, you should always use the convert function with the ODBC canonical style option to convert from datetime to varchar (i.e. convert(varchar(25), @TimeStamp, 120)). For datetime data, you must also use caution with explicitly supplied millisecond values. Datetime data does not guarantee precision to the millisecond; instead, it is precise to within 3.33 milliseconds. For example, the exact time tick 23:59:59.999 is not available; the value is instead rounded to the nearest available time tick, which is 00:00:00.000 of the following day. As such, a value of 23:59:59.997 should be used instead.
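A sketch of both points (the timestamp values are illustrative):

```sql
DECLARE @TimeStamp datetime
SELECT @TimeStamp = '2003-01-01 08:15:30'

-- ODBC canonical style (120) keeps the seconds: '2003-01-01 08:15:30'
SELECT convert(varchar(25), @TimeStamp, 120)

-- End-of-day boundary: use .997, since .999 is rounded up to the
-- first tick of the following day
SELECT convert(datetime, '2003-01-01 23:59:59.997')
```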

5.1.02 Numeric
Proficy variable values are stored in a varchar format. When selecting data from the tests table, care must be taken to ensure that the values are in the right format before converting them. Using the isnumeric() function will allow you to verify the data's validity before converting. For example:
DECLARE @Result varchar(25),
        @Value real

SELECT @Result = Result
FROM tests
WHERE Test_Id = 123456

IF isnumeric(@Result) = 1
BEGIN
    SELECT @Value = convert(real, @Result)
END

When dealing with integer values, it is often wise to convert the value to a real value before converting to an integer. Conversion of a string representation of a real value directly to an integer will produce an error. However, conversion of that value to a real first will not. This is especially important when summarizing integer values.
DECLARE @Value int

SELECT @Value = convert(int, sum(convert(real, Result)))
FROM tests
WHERE Var_Id = 12345
    AND Result_On > '2003-01-01 00:00:00'
    AND Result_On < '2003-01-02 00:00:00'

This can be especially relevant when writing certain model stored procedures (i.e. Model 603) as data stored in the historian may be retrieved as a real value when it's actually an integer value (i.e. 1.0). That particular string value cannot be directly converted to an integer value. However, converting it first to a real or float will allow subsequent conversion to an integer.
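A minimal illustration of the failing and working conversions:

```sql
-- Direct conversion of a real-formatted string to int raises a conversion error:
-- SELECT convert(int, '1.0')   -- fails

-- Converting to real first succeeds:
SELECT convert(int, convert(real, '1.0'))   -- returns 1
```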


5.2 Custom Parameters


A common need when writing stored procedures for Plant Applications is to have custom configurable parameters. It is also preferable that these parameters be accessible through the Plant Applications Administrator for ease of management. The following options are available:
- Site Parameters
- User Parameters
- Model User-Defined Properties
- Production Unit User-Defined Properties
- Execution Path User-Defined Properties

5.2.01 Site and User Parameters


Site and User Parameters are both sourced from the Parameters table. If you want to add your own custom Parameters, you'll need to ensure the Parm_Id is greater than 10000. To enter a value into an identity column you'll have to set IDENTITY_INSERT on before doing so. Make sure you turn it back off afterwards. For example:
SET IDENTITY_INSERT Parameters ON

INSERT INTO Parameters (Parm_Id, Field_Type_Id, Parm_Type_Id, Parm_Name)
VALUES (10001, 1, 0, 'Whatever')

SET IDENTITY_INSERT Parameters OFF

You can then use that parameter in either Site_Parameters or as a User parameter. The Field_Type_Id comes from ED_FieldTypes, and the Parm_Type_Id allows you to restrict where the parameter is available (i.e. 0=All, 1=Site, 3=Site Users).
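Reading a custom site parameter back might look like the following sketch. The assumption that Site_Parameters keys the value by Parm_Id in a Value column should be verified against your server's schema:

```sql
-- Hypothetical sketch: read a custom site parameter value by name.
DECLARE @Value varchar(255)

SELECT @Value = sp.Value
FROM Site_Parameters sp
JOIN Parameters p ON p.Parm_Id = sp.Parm_Id
WHERE p.Parm_Name = 'Whatever'
```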

5.2.02 Event Model User-Defined Properties


A common need when writing custom stored-procedure based models is for user-defined model properties. These can now be easily added to custom model templates.


When a new model template has been created, the last tab allows for the definition of user-defined properties.


The properties are stored in the Event_Configuration_Properties table and can be accessed from the model's stored procedure using the standard stored procedure spCmn_ModelParameterLookup provided in the Plant Applications database. The stored procedure takes the following arguments:
Argument        Description
@Value          Value of the property returned from the procedure.
@ECId           The identity (EC_Id) of the event configuration record in the Event_Configuration table. Each time an event is added to a production unit, a record gets created in this table. For most generic models (i.e. 601, 603, 1052, etc.), the EC_Id is passed into the model stored procedure by the EventMgr.
@PropertyName   Name of the user-defined property (i.e. MyCustomProperty).
@DefaultValue   A default value that will be passed into the @Value field if the model property does not exist or has a value of NULL.

For example,
DECLARE @Value varchar(1000),
        @ECId int,
        @PropertyName varchar(255),
        @DefaultValue varchar(1000)

SELECT  @ECId = 23,
        @PropertyName = 'MyCustomProperty',
        @DefaultValue = 'Tinkerbell'

EXEC dbo.spCmn_ModelParameterLookup @Value OUTPUT, @ECId, @PropertyName, @DefaultValue

5.2.03 Production Unit User-Defined Properties


User-Defined properties can be created on a Production Unit. These are configured through the Plant Applications Administrator in the Production Unit Properties window.

The properties can then be accessed from a stored procedure using the standard stored procedure spCmn_PUParameterLookup provided in the Plant Applications database. The stored procedure takes the following arguments:
Argument        Description
@Value          Value of the property returned from the procedure.
@PUId           The Production Unit Id (PU_Id) of the record in the Prod_Units table. This id is unique for every Production Unit and is visible in the Administrator by right-clicking on the Production Line in the Plant Model and listing the units in the right-hand window pane.
@PropertyName   Name of the user-defined property (i.e. MyCustomProperty).
@DefaultValue   A default value that will be passed into the @Value field if the model property does not exist or has a value of NULL.

For example,
DECLARE @Value varchar(1000),
        @PUId int,
        @PropertyName varchar(255),
        @DefaultValue varchar(1000)

SELECT  @PUId = 12,
        @PropertyName = 'MyCustomProperty',
        @DefaultValue = 'Tinkerbell'

EXEC dbo.spCmn_PUParameterLookup @Value OUTPUT, @PUId, @PropertyName, @DefaultValue

5.2.04 Execution Path User-Defined Properties


User-Defined properties can be created on an Execution Path. The properties would commonly be used to define translation keys for interfacing to ERP systems and are configured through the Plant Applications Administrator in the Execution Path configuration window.

The Execution Path properties are stored in the same place as the Production Unit properties but there is no standard stored procedure to access them. As such, they must be queried directly from the tables.
Table                 Description
Table_Fields          Definition of the properties including name and data type.
Table_Fields_Values   The value of the property.

The Table_Fields_Values table has the following fields that must be referenced:
Field            Description
Value            Value of the property.
Table_Field_Id   Id of the corresponding record in Table_Fields.
TableId          Id of the corresponding record in the Tables table. The value will always be 13 when accessing Execution Path properties as this corresponds to the PrdExec_Path table.
KeyId            For Execution Paths this will be the Path_Id in the PrdExec_Path table.


For example,
DECLARE @FieldId int,
        @FieldDesc varchar(50),
        @PathId int,
        @Value varchar(7000)

SELECT  @FieldDesc = 'SAP Route',
        @PathId = 1

SELECT @FieldId = Table_Field_Id
FROM Table_Fields
WHERE Table_Field_Desc = @FieldDesc

SELECT @Value = Value
FROM Table_Fields_Values
WHERE Table_Field_Id = @FieldId
  AND TableId = 13
  AND KeyId = @PathId

5.3 Result Sets


In the Plant Applications architecture, the Database Manager service is responsible for making all updates to the SQL database. Clients and other services such as the Calculation Manager, Reader, Summary Manager and the Event Manager do not directly update the database. Instead, they send messages to the Database Manager, via the Message Bus service, to make the updates for them. This allows for a single point of database contact and reduces the likelihood of blocking or deadlocking issues.

Messages are also the way Plant Applications clients are notified of database updates. Rather than periodically polling the database for changes, clients subscribe to updates and are automatically notified when changes are made. As such, in order for the clients to be notified of any database changes, a message must be generated and sent.

These messages can be generated directly using result sets returned from calculation and/or custom event model stored procedures. Whenever the Calculation Manager or Event Manager calls a stored procedure for a calculation or model, it watches for any returned recordsets and, if found, the service will attempt to send the returned result set as a message on the Message Bus. For example, in a stored procedure the code for a variable result would be as follows:
DECLARE @Var_Id int,
        @PU_Id int,
        @Var_Precision int

-- Create the table variable to hold the result set data
DECLARE @VariableResults TABLE (
    ResultSetType int DEFAULT 2,
    Var_Id int NULL,
    PU_Id int NULL,
    User_Id int NULL,
    Cancelled int DEFAULT 0,
    Result varchar(25) NULL,
    Result_On varchar(25) NULL,
    TransType int DEFAULT 1,
    Post_Update int DEFAULT 0)

-- Gather the data
SELECT  @Var_Id = Var_Id,
        @PU_Id = PU_Id,
        @Var_Precision = Var_Precision
FROM Variables
WHERE Var_Desc = 'MyVariable'

-- Create the result set in the table variable
INSERT INTO @VariableResults (Var_Id, PU_Id, Result, Result_On)
VALUES (@Var_Id,
        @PU_Id,
        ltrim(str(1.234, 50, @Var_Precision)),
        convert(varchar(50), getdate(), 120))

-- Output the result set to the calling service
SELECT ResultSetType, Var_Id, PU_Id, User_Id, Cancelled,
       Result, Result_On, TransType, Post_Update
FROM @VariableResults

When the calling service (i.e. the Calculation Manager or the Event Manager) runs this stored procedure, it will see the following result set, translate it and then send it out on the Message Bus:

2, 53, 12, NULL, 0, 1.23, 2005-07-05 12:02:30, 1, 0

Alternatively, if the stored procedure were executed from Query Analyzer, the above results would appear in the Results Pane but, since it wasn't run by the Event Manager or the Calculation Manager, no message would be sent.

It is very important that the structure of the result set is correct, as improperly formatted result sets could crash the service while it attempts to translate them into messages. The services will attempt to translate the returned results of every SELECT statement in the stored procedure. To keep this under control, table variables should be created for each result set type in the stored procedure to enforce the column structure. There are 20 different result set types:
Type   Description
1      Production Events
2      Variable Test Values
3      Grade Change
5      Downtime Events
6      Alarms
7      Sheet Columns
8      User Defined Events
9      Waste Events
10     Event Details
11     Genealogy Event Components
12     Genealogy Input Events
13     Defect Details
14     Historian Read
15     Production Plan
16     Production Setup
17     Production Plan Starts
18     Production Path Unit Starts
19     Production Statistics
20     Historian Write
50     Output File

The standard stored procedure spServer_CmnShowResultSets provides the current list and formats of the result sets available. A more detailed listing and description of the result sets is contained in the Appendix.

The standard methodology for working with result sets is as follows:
1. Create a table variable for the result set
2. Gather the data for the result set
3. Check to see if the record already exists
4. Create the result set in the table variable
5. Output the results to the calling service

Generally, you should always leave the User_Id as NULL in a result set. If it's NULL it will be filled out by the service the stored procedure was called from (i.e. the EventMgr or the CalcMgr). However, the CalcMgr cannot overwrite a value entered by ComXClient or a Site User. If you need to overwrite a value entered by a user from a calculation, you can use a result set but the User_Id must be set to a specific Site User. This user must be a non-system custom user (i.e. User_Id > 50) and cannot be ComXClient either.

Each result set has a pre-update and post-update option. When the result set has the pre-update option set, the message will be sent to both the Database Manager and any clients. Alternatively, when the post-update option is set, the message will only be sent to the clients and not the Database Manager. The post-update option would be used if the data has been inserted directly into the database and there is no reason to send it to the Database Manager, but the clients still need to be notified.
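Step 3 of the methodology (checking whether the record already exists) is often just an EXISTS test before populating the table variable. A sketch, assuming the @VariableResults table variable from the earlier example and a tests table keyed by Var_Id and Result_On (verify the column names on your server):

```sql
-- Hypothetical sketch of step 3: only build the result set if no test record
-- already exists for this variable and timestamp.
IF NOT EXISTS ( SELECT 1
                FROM tests
                WHERE Var_Id = @Var_Id
                  AND Result_On = @Result_On)
BEGIN
    INSERT INTO @VariableResults (Var_Id, PU_Id, Result, Result_On)
    VALUES (@Var_Id, @PU_Id, @Result, @Result_On)
END
```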

5.3.01 Debugging Result Sets


The following things can be done to debug result sets:
- Run the stored procedure in Query Analyzer and check the result sets returned in the Results Pane.
- Put the Database Manager service in debug mode and check the log file for the messages. The Database Manager will also log any errors it encounters while trying to process the result sets.
- Insert the result sets into a local table at the end of the stored procedure. This will allow you to see exactly what was sent.
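The last option might look like the following sketch. The debug table name is hypothetical and would be created once beforehand; the columns mirror the @VariableResults table variable from the earlier example:

```sql
-- One-time setup (hypothetical debug table):
-- CREATE TABLE dbo.tblLocal_ResultSetDebug (
--     Logged_On datetime DEFAULT getdate(),
--     Var_Id int NULL, PU_Id int NULL,
--     Result varchar(25) NULL, Result_On varchar(25) NULL)

-- At the end of the stored procedure, capture what is about to be returned:
INSERT INTO dbo.tblLocal_ResultSetDebug (Var_Id, PU_Id, Result, Result_On)
SELECT Var_Id, PU_Id, Result, Result_On
FROM @VariableResults
```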

5.3.02 Result Sets DateTime Formats


The Proficy services accept all returned result set data in varchar format. As such, to prevent unexpected conversions, it is wise to ensure that datetime data is pre-formatted as varchar. This primarily affects datetime to varchar conversions: the default SQL conversion doesn't include the seconds, so unless you format your timestamps using the ODBC canonical datetime format (i.e. convert(varchar(25), @TimeStamp, 120)), you will lose the seconds in your timestamps and your data will likely not show up where you expect it.

5.4 Debug Messages


Error messages or warning messages can be reported to the Proficy Message_Log_Header and Message_Log_Detail tables using the Proficy stored procedure spCmn_AddMessage. These messages can then be queried from those tables. spCmn_AddMessage has the following arguments:

Field                  Optional   Type            Description
Message                No         varchar(4000)   Message to be stored.
Object Name            No         varchar(255)    Usually the name of the calling stored procedure.
Reference Id           Yes        int             Custom reference id that could correspond to a line number in the calling stored procedure.
TimeStamp              Yes        int             Timestamp of the message (defaults to the current server time).
Client Connection Id   Yes        int             The associated connection out of the Client_Connections table. This is mainly used by external applications and is generally not useful for error messages coming from the CalcMgr.
Type                   Yes        int             A value corresponding to the data in the Message_Types table. It defaults to 0 (Undefined) but 2 (Generic) would suffice as well.

For example:
IF @Value <= 0
BEGIN
    SELECT @Message = 'Error: Invalid value; Inputs:'
        + convert(varchar(25), @Argument1, 120) + ','
        + convert(varchar(25), @Argument2, 120)
    SELECT @Stored_Procedure_Name = 'spLocal_CalcValue'
    EXEC spCmn_AddMessage @Message, @Stored_Procedure_Name
END

5.5 Multilingual Support


Most tables with a description field have multilingual support, which means they have a _Desc field, a _Desc_Local field and a _Desc_Global field. The _Desc field is a calculated field whose value depends on the language settings for the Plant Applications server and the contents of the other two fields. The local description is considered the constant one, so, where possible, search queries that need to use a description field should reference the _Desc_Local field. This is also important because the table indexes are set up to reference the local description. For example,
SELECT Var_Id
FROM Variables
WHERE PU_Id = 5
  AND Var_Desc_Local = 'My Variable'


This applies to the following tables:


Table                      Description Field
Characteristics            Char_Desc
Characteristic_Groups      Characteristic_Grp_Desc
Departments                Dept_Desc
Event_Reason_Catagories    ERC_Desc
Prod_Lines                 PL_Desc
Products                   Prod_Desc
Production_Status          ProdStatus_Desc
Product_Family             Product_Family_Desc
Product_Groups             Product_Grp_Desc
Product_Properties         Prop_Desc
Prod_Units                 PU_Desc
PU_Groups                  PUG_Desc
Sheets                     Sheet_Desc
Sheet_Groups               Sheet_Group_Desc
Specifications             Spec_Desc
Variables                  Var_Desc
Views                      View_Desc
View_Groups                View_Group_Desc

5.6 History Tables


History tables are the tables whose names end with _History in the Plant Applications database. For example, the Events table has the history table Event_History, while the Event_Details table has Event_Detail_History. History tables store all information, both past and present, for their non-history counterparts. For example, the Events table holds only the most current information for each event, while the Event_History table holds all of it. In other words, history tables usually contain a large amount of data.

Because of the nature of the stored data, history tables usually do not have a primary key, and their indexes are very limited, so querying them can be very slow. For example, both the Event_History and Event_Detail_History tables have only one clustered index, on the columns Event_Id and Modified_On. When querying these tables in a custom stored procedure, that index key has to be utilized, preferably via an Index Seek; please refer to section 4.3 for details.

If the same set of data queried from a history table is used more than once in a stored procedure, it is best to put the data in a temp table and set up an index on that temp table in order to speed up the subsequent queries. Otherwise, querying the temp table would take as long as querying the history table itself.
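A sketch of that pattern (the temp table name and the selected columns are illustrative):

```sql
-- Pull the history rows once, using the clustered index key (Event_Id, Modified_On)
SELECT Event_Id, Modified_On, Column_Updated_Bitmask
INTO #EventHist
FROM Event_History
WHERE Event_Id = @EventId

-- Index the temp table so repeated lookups against it are cheap
CREATE CLUSTERED INDEX IX_EventHist ON #EventHist (Event_Id, Modified_On)

-- ...subsequent queries reference #EventHist instead of Event_History...
```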

5.7 Column_Updated_Bitmask Field


Each of the history tables (i.e. Event_History) has a Column_Updated_Bitmask field. This field records, in a varchar string, which field(s) were updated in the associated history record. The field actually contains the result of the SQL trigger function COLUMNS_UPDATED().


The COLUMNS_UPDATED function returns a varbinary bit pattern with the bits ordered from left to right, the least significant bit being the leftmost. The leftmost bit represents the first column in the table, the next bit to the right represents the second column, and so on. COLUMNS_UPDATED returns multiple bytes if the table on which the trigger is created contains more than 8 columns, with the least significant byte being the leftmost. COLUMNS_UPDATED will return TRUE for all columns on INSERT actions because the columns have either explicit values or implicit (NULL) values inserted.

The following function, fnLocal_ColumnUpdated, can be used to check whether a particular column in the record was updated. It accepts 2 arguments, the first being the bit pattern contained in the Column_Updated_Bitmask field and the second being the column number to check, and returns 1 if the checked column was modified, and 0 if it wasn't.
CREATE FUNCTION dbo.fnLocal_ColumnUpdated (
    @COLUMNS_UPDATED binary(8),
    @OP int)
RETURNS int
AS
BEGIN
    DECLARE @POS int,
            @PRE int,
            @RESULT int

    -- Byte offset of the column's bit, and the bit's position within that byte
    SET @PRE = (@OP-1)/8
    SET @POS = POWER(2, (@OP-1) - @PRE*8)

    IF (SUBSTRING(@COLUMNS_UPDATED, @PRE+1, 1) & @POS <> 0)
    BEGIN
        SET @RESULT = 1
    END
    ELSE
    BEGIN
        SET @RESULT = 0
    END

    RETURN @RESULT
END

It is important to note that the Column_Updated_Bitmask is stored in a varchar format. The field needs to be converted back to a binary format before the bitwise comparison can be performed. In the function above this is implicitly done by receiving the data in a binary(8) argument (@COLUMNS_UPDATED). The column order number is also not always constant from server to server as it depends on the order of the columns in the original SQL statement used to create the table. This has not remained static through the various versions of Plant Applications so a server that was originally installed with a previous version of the software (and then upgraded) may not have the same column order as a server that was installed with a later version. As such, the column order number must be dynamically determined by checking the table schema in the SQL Server system tables. For example,
DECLARE @EDDimXOP smallint

SELECT @EDDimXOP = ORDINAL_POSITION
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Event_Details'
  AND COLUMN_NAME = 'Initial_Dimension_X'

SELECT dbo.fnLocal_ColumnUpdated(convert(binary(8), ech.Column_Updated_Bitmask), @EDDimXOP)
FROM Event_Detail_History ech
WHERE Event_Id = @EventId
ORDER BY Modified_On DESC


6.0 SQL Historian


The SQL Historian has the following entry fields:
Field            Used By
BrowseSQL        Administrator, Event Manager, Reader, Summary Manager, Writer
DeleteSQL        Writer
InsertSQL        Writer
ReadAfterSQL     Event Manager
ReadBeforeSQL    Event Manager
ReadBetweenSQL   Reader, Summary Manager
TimeSQL

It is recommended that a stored procedure be referenced in the SQL historian fields instead of the SQL queries themselves, as it is easier to manage than copying an entire SQL statement into a single line.
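For instance, the BrowseSQL field might then contain a single EXEC line. The stored procedure name here is illustrative, and the exact quoting of the ?TagId token (which the service substitutes before execution) may vary:

```sql
EXEC dbo.spLocal_HistorianBrowse '?TagId'
```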

6.1 BrowseSQL
In addition to the Administrator, the BrowseSQL statement is used by all the historian services. The Administrator uses the BrowseSQL statement when querying for tags in the Tag Search function, while the services use it to validate the tag configured in the variable sheet or the model. BrowseSQL can accept the following arguments in the string:
Argument   Description
?TagId     If called from the Administrator, contains the search string entered by the user. If called from a service, contains the tag configured in the variable sheet or model.

BrowseSQL must return data in the following format:


Order   Field               Description
1       Tag Name            The tag name that will be stored in the variable sheet or model configuration.
2       Tag Description     An associated description displayed in the Administrator Tag Search dialog.
3       Engineering Units   Engineering units of the tag. This is not currently used so can be set to a constant.
4       Data Type           Data type of the tag. This is not currently used so can be set to a constant.
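A sketch of a BrowseSQL stored procedure in the style of the read examples in the following sections; MyHistorianTagTable and its columns are hypothetical:

```sql
CREATE PROCEDURE dbo.spServer_HistorianBrowse
    @TagId varchar(1000)
AS
-- Return matching tags: name, description, engineering units, data type
SELECT Tag_Name,
       Tag_Desc,
       'None' AS Engineering_Units,   -- not currently used; a constant is fine
       'Float' AS Data_Type           -- not currently used; a constant is fine
FROM MyHistorianTagTable
WHERE Tag_Name LIKE '%' + @TagId + '%'
```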


6.2 DeleteSQL
I don't know what this is used for, so if you have any thoughts on the matter, please let me know. DeleteSQL can accept the following arguments in the string:
Argument     Description
?TagId       The tag name.
?TimeStamp   Timestamp of the data point to delete.

6.3 InsertSQL
This is used by the Writer service to insert data into a historian. InsertSQL can accept the following arguments in the string:

Argument   Description
?TagId     The tag name configured in the variable sheet or model.

6.4 ReadAfterSQL
ReadAfterSQL can accept the following arguments in the string:
Argument            Data Type       Description
?TagId              varchar(1000)   If called from the Administrator, contains the search string entered by the user. If called from a service, contains the tag configured in the variable sheet or model.
?Timestamp          datetime        Timestamp from which to query.
?NumValues          int             The number of values to be returned.
?IncludeExactTime   int             An option that specifies whether to return a data point that is equal to the timestamp (i.e. data points >= ?Timestamp), as opposed to data that is only greater than the timestamp.
?HonorRejects       int

ReadAfterSQL must return data in the following format:


Order   Field       Data Type     Description
1       Timestamp   datetime      Timestamp of the data point.
2       Value       varchar(10)   Value of the data point.
3       Status                    Quality of the data point; must be one of two values, Good or Bad.

An example stored procedure for the ReadAfterSQL is as follows:


CREATE PROCEDURE dbo.spServer_HistorianReadAfter
    @TagId varchar(1000),
    @Timestamp datetime,
    @NumValues int,
    @IncludeExact int,
    @HonorRejects int
AS
SET ROWCOUNT @NumValues

SELECT TimeStamp, Value, Good
FROM MyHistorianTable
WHERE Tag_Id = @TagId
  AND ( TimeStamp > @Timestamp
     OR ( TimeStamp = @Timestamp AND @IncludeExact = 1))
ORDER BY TimeStamp ASC

6.5 ReadBeforeSQL
ReadBeforeSQL can accept the following arguments in the string:
Argument            Data Type       Description
?TagId              varchar(1000)   If called from the Administrator, contains the search string entered by the user. If called from a service, contains the tag configured in the variable sheet or model.
?Timestamp          datetime        Timestamp from which to query.
?NumValues          int             The number of values to be returned.
?IncludeExactTime   int             An option that specifies whether to return a data point that is equal to the timestamp (i.e. data points >= ?Timestamp), as opposed to data that is only greater than the timestamp.
?HonorRejects       int

ReadBeforeSQL must return data in the following format:


Order   Field       Data Type     Description
1       Timestamp   datetime      Timestamp of the data point.
2       Value       varchar(10)   Value of the data point.
3       Status                    Quality of the data point; must be one of two values, Good or Bad.

An example stored procedure for the ReadBeforeSQL is as follows:


CREATE PROCEDURE dbo.spServer_HistorianReadBefore
    @TagId varchar(1000),
    @Timestamp datetime,
    @NumValues int,
    @IncludeExact int,
    @HonorRejects int
AS
SET ROWCOUNT @NumValues

SELECT TimeStamp, Value, Good
FROM MyHistorianTable
WHERE Tag_Id = @TagId
  AND ( TimeStamp < @Timestamp
     OR ( TimeStamp = @Timestamp AND @IncludeExact = 1))
ORDER BY TimeStamp DESC

6.6 ReadBetweenSQL
ReadBetweenSQL can accept the following arguments in the string:
Argument             Data Type       Description
?TagId               varchar(1000)   If called from the Administrator, contains the search string entered by the user. If called from a service, contains the tag configured in the variable sheet or model.
?StartTime           datetime        Start time from which to query.
?EndTime             datetime        End time up to which to query.
?NumValues           int             The number of values to be returned.
?IncludeExactStart   int             An option that specifies whether to return a data point that is equal to the start time (i.e. data points >= ?StartTime), as opposed to data that is only greater than the start time.
?IncludeExactEnd     int             An option that specifies whether to return a data point that is equal to the end time (i.e. data points <= ?EndTime), as opposed to data that is only less than the end time.
?HonorRejects        int

ReadBetweenSQL must return data in the following format:

Order   Field       Data Type     Description
1       Timestamp   datetime      Timestamp of the data point.
2       Value       varchar(10)   Value of the data point.
3       Status                    Quality of the data point; must be one of two values, Good or Bad.

An example stored procedure for the ReadBetweenSQL is as follows:


CREATE PROCEDURE dbo.spServer_HistorianReadBetween
    @TagId varchar(1000),
    @StartTime datetime,
    @EndTime datetime,
    @NumValues int,
    @IncludeExactStart int,
    @IncludeExactEnd int,
    @HonorRejects int
AS
SET ROWCOUNT @NumValues

SELECT TimeStamp, Value, Good
FROM MyHistorianTable
WHERE Tag_Id = @TagId
  AND ( TimeStamp > @StartTime
     OR ( TimeStamp = @StartTime AND @IncludeExactStart = 1))
  AND ( TimeStamp < @EndTime
     OR ( TimeStamp = @EndTime AND @IncludeExactEnd = 1))
ORDER BY TimeStamp ASC


7.0 Calculation Stored Procedures


Calculation stored procedures must always have @OutputValue as the first argument, as that is how the result of the calculation is returned back to the database. After that, the number of arguments must match the number of inputs configured for the calculation. For example, for a calculation defined with the following inputs:

The stored procedure arguments should look like:


CREATE PROCEDURE dbo.spLocal_ExportProductionEvent
    @OutputValue varchar(25) OUTPUT,
    @EventId int,
    @FileName varchar(100),
    @TempPath varchar(100),
    @FilePath varchar(100)
AS

If no value is explicitly assigned to the return value, a value of NULL will be returned. If the return value is NULL, the CalculationMgr service will still create/modify a record in the Tests table and set the result to NULL. However, when the output value for a calculation is set to the text string DONOTHING, the CalculationMgr will not create a record or modify an existing record's value. This is an important consideration when implementing high volume calculations, as it is desirable to prevent needless records from being inserted into the Tests table.

The @OutputValue should always be set to a varchar(25) to match the format of the Result field in the Tests table. The values returned from a calculation go directly into the Tests table and are not post-formatted. As such, formatting fields in the Variable sheet (i.e. Precision) must be explicitly utilized in the stored procedure if necessary. For example,
DECLARE @Value float,
        @VarId int,
        @Precision int

SELECT  @Value = 5.12345,
        @VarId = 2

SELECT @Precision = Var_Precision
FROM Variables
WHERE Var_Id = @VarId

SELECT @OutputValue = ltrim(str(@Value, 25, @Precision))
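The DONOTHING behaviour described above can be used to suppress needless inserts; a sketch, where the change-detection logic and variable names are illustrative:

```sql
-- Only write a result when the computed value actually changed;
-- otherwise tell the CalculationMgr to leave the Tests table alone.
IF @NewValue = @OldValue
BEGIN
    SELECT @OutputValue = 'DONOTHING'
END
ELSE
BEGIN
    SELECT @OutputValue = ltrim(str(@NewValue, 25, @Precision))
END
```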


8.0 Event Model Stored Procedures


8.1 Models and Historian Tags
The key to writing good model stored procedures is to fully understand how the Event Manager service queries data from a historian and then executes the models. The Event Manager service wakes up and queries for new data every 10 seconds; this frequency is not configurable.

8.1.01 Historian Data Query


For each trigger tag defined in a model, the service queries for all historical data where the data timestamp is greater than the cached Event Manager timestamp. When data is returned from the historian, the cached timestamp is updated to the timestamp of the most recent data point returned; otherwise it keeps its original value. This results in the following behaviour:
- Historian data timestamped in the future will be processed immediately.
- Historian data can be timestamped in the past, as long as the points are entered in time sequence and the timestamp is greater than the StartupSetBack/ReloadSetBack parameters.

Initially, when the Event Manager service is started, the cached timestamp is calculated by subtracting the value of the StartupSetBack user parameter for the EventMgr system user from the current time. Similarly, if the Event Manager service is reloaded, the cached timestamp is set to the timestamp specified by the user in the Administrator Control Panel or, if no timestamp was specified, by subtracting the value of the ReloadSetBack user parameter for the EventMgr system user from the current time. For example, if the current time was 4:05 and the Event Manager service was reloaded to 3:00, the service queries would look like the following:
Actual Time   Cached TimeStamp   Data Points Returned
4:05:00       3:00:00            None
4:05:10       3:00:00            None
4:05:20       3:00:00            Value=5.37, Timestamp=4:05:12
4:05:30       4:05:12            None
4:05:40       4:05:12            None
4:05:50       4:05:12            Value=6.34, Timestamp=4:05:43
                                 Value=6.94, Timestamp=4:05:44
                                 Value=7.66, Timestamp=4:05:48
4:06:00       4:05:48            None

For each trigger tag data point retrieved, the Event Manager will then get the last good data point for all the other tags configured in the model based on the timestamp of the trigger tag data point. This set of data is then passed to the model for interpretation (i.e. a stored procedure attached to model 603).


8.1.02 Multiple Trigger Tags


When a model is configured with multiple trigger tags, the Event Manager will query for data for each trigger tag, regardless of whether the points have the same timestamp in the historian. There is no consolidation of the data returned from the historian. For example, if you have a model with 4 trigger tags and there are 4 data points in the historian with the same timestamp, the model will fire 4 times.

While data will always be returned sequentially for individual tags, the data for multiple trigger tags may not be collated together in sequential order. As such, you may see all the data for one tag before you see the data for the next tag. This can cause problems if the states of different trigger tags are dependent on each other. For example, if you wrote a custom downtime model using 2 trigger tags, one to open events and the second to close events, you may see all the open event signals before seeing any of the close event signals. This would probably result in the majority of the open event signals being ignored because there is already an open event (i.e. the first one).

In cases like this it may be better to write the stored procedure in terms of the transitions themselves, as opposed to the current state of the system (i.e. up or down). It would also be helpful to utilize a local table to store the transitions so that the open transitions will still be available when the close transition is finally passed to the stored procedure. Then it is possible to create the entire event at once. However, since the Event Manager processes data on a 10 second frequency, this is not really an issue unless you expect to have a lot of events that are under 10 seconds. It is generally only apparent if you reload the Event Manager, because the time range can be much larger and a larger number of data points are returned.
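A sketch of the transition-buffering idea; the table and column names are hypothetical:

```sql
-- One-time setup: a local table buffering open/close transitions per unit
-- CREATE TABLE dbo.tblLocal_DowntimeTransitions (
--     PU_Id int NOT NULL,
--     Transition varchar(10) NOT NULL,   -- 'Open' or 'Close'
--     Transition_On datetime NOT NULL)

-- In the model stored procedure: record each transition as it arrives
INSERT INTO dbo.tblLocal_DowntimeTransitions (PU_Id, Transition, Transition_On)
VALUES (@PUId, @Transition, @TransitionOn)

-- When a Close arrives, pair it with the earliest unmatched Open
-- and create the whole downtime event at once
SELECT @StartTime = min(Transition_On)
FROM dbo.tblLocal_DowntimeTransitions
WHERE PU_Id = @PUId
  AND Transition = 'Open'
  AND Transition_On < @TransitionOn
```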

8.2 Model Execution Multithreading and Order


The EventMgr is multi-threaded, but it starts a fixed number of threads and will run certain classes of models on each thread. For example, one thread will run all the ODBC models while another will run all the standard Historian models. It is important to recognize this, as one long-running stored procedure model can still impact a number of other models because they run in sequence on one thread.

For a particular production unit, the order in which the models are executed can be controlled by setting the event priority. This is only applicable for active models and is also dependent on the models being on the same thread. This feature can be used in a situation where the logic of one model is dependent on the logic of another model. For example, if you don't want production events to be created during a downtime, you would need to ensure that the downtime was created before the production event model executed by setting the priority of the downtime model to 1 and the production event model to 2. Both models use historian tags so they will be executed on the same thread. The Up and Down buttons in the Event Configuration dialog allow the model priorities to be set for each active event.


8.3 Error Messages


Error messages or warning messages in an event model stored procedure can be reported to the Proficy Message_Log_Header and Message_Log_Detail tables using the same method described for calculation stored procedures. However, messages in an event model stored procedure can also be reported back to the EventMgr log file. Most custom event manager models (i.e. 601, 603, etc.) require the arguments @Success and @ErrorMsg as OUTPUT parameters. Any value left in @ErrorMsg will be reported to the EventMgr log file upon completion of the stored procedure. For example,
CREATE PROCEDURE dbo.spLocal_CreateEvent
    @Success           int          OUTPUT,
    @ErrorMsg          varchar(255) OUTPUT,
    @JumpToTime        varchar(30)  OUTPUT,
    @ECId              int,
    @Reserved1         varchar(30),
    @Reserved2         varchar(30),
    @Reserved3         varchar(30),
    @ChangedTagNum     int,
    @ChangedPrevValue  varchar(30),
    @ChangedNewValue   varchar(30),
    @ChangedPrevTime   varchar(30),
    @ChangedNewTime    varchar(30),
    @EventPrevValue    varchar(30),
    @EventNewValue     varchar(30),
    @EventPrevTime     varchar(30),
    @EventNewTime      varchar(30)
AS
DECLARE @Signal int

SELECT @Signal = convert(int, convert(float, @EventNewValue))

IF @Signal > 0
  BEGIN
    -- Create event
  END
ELSE
  BEGIN
    SELECT @Success = 0
    SELECT @ErrorMsg = 'Bad value (' + @EventNewValue + ') for EC_Id '
                     + convert(varchar(10), @ECId)
  END

8.4 SQL Historian


The Event Manager only uses the ReadAfterSQL and ReadBeforeSQL scripts. For trigger tags, the Event Manager will call the ReadAfterSQL every 10 seconds, passing the cached timestamp value. For each data point returned, it will call the ReadBeforeSQL script for all other tags configured in the model to get the associated data.


9.0 Report Stored Procedures


9.1 Report Parameters
Report parameters can be added and modified in the report template. Custom parameters can be added via the Web Administrator and then added to the report template.

These parameters can then be extracted within the report stored procedure. The arguments for the report stored procedure should be the report name and then any user-selected parameters from the parameter web pages (i.e. report start time and end time). The other constant parameters defined in the report template are then extracted directly from the database using the standard stored procedure spCmn_GetReportParameterValue. For example,
CREATE PROCEDURE dbo.spLocal_RptData
    @RptName          varchar(255),
    @RptStartDATETIME varchar(25),
    @RptEndDATETIME   varchar(25)
AS
DECLARE @RptTitle varchar(4000)

-- Get parameter report title
EXEC spCmn_GetReportParameterValue
    @RptName,           -- Report name
    'strRptTitle',      -- Parameter name
    'My Report Title',  -- Default value
    @RptTitle OUTPUT    -- Actual value

9.2 Debug Messages


Error messages or warning messages in a report stored procedure can be reported to the Proficy Message_Log_Header and Message_Log_Detail tables using the same method described for calculation stored procedures. A better way to report errors in a report, though, is to always include an Errors page in your report template and return any messages directly to the report so they can be viewed by the user.
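One way to implement this (a sketch; the table variable and message text are assumptions) is to accumulate messages in a table variable during the procedure and return them as a dedicated result set that the Errors page displays:

```sql
DECLARE @Errors TABLE (Error_Time datetime, Error_Text varchar(255))

-- ... report logic; whenever a problem is detected, log it ...
INSERT @Errors (Error_Time, Error_Text)
VALUES (getdate(), 'No production events found in the report period')

-- Final result set consumed by the Errors page of the report template
SELECT Error_Time, Error_Text
FROM @Errors
ORDER BY Error_Time
```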


10.0 Testing, Debugging and Troubleshooting


10.1 Execution Plan
The first time SQL Server executes a particular stored procedure in a running session, it generates what is known as an execution plan. The execution plan represents the strategy SQL Server is using to search for and retrieve data. This execution plan is then cached and reused as long as the stored procedure doesn't change. The Show Execution Plan and Display Estimated Execution Plan options show the true impact of your query. Note that if you have used the method described above, where you record a starting query time and an ending query time, you may not see the true processing impact, as SQL Server may take longer to do the job in calendar time but not in processing time.
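To measure the processing cost directly rather than relying on elapsed calendar time, the SET STATISTICS options in Query Analyzer can be used alongside the execution plan. For example:

```sql
SET STATISTICS TIME ON   -- reports CPU time and elapsed time per statement
SET STATISTICS IO ON     -- reports logical and physical reads per table

SELECT count(*) FROM Tests WHERE Result_On > '2006-02-21'

SET STATISTICS TIME OFF
SET STATISTICS IO OFF
```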

10.2 Table Performance


10.2.01 Index Fragmentation
Just like a hard drive gets fragmented, the data and indexes within SQL Server also get fragmented, which can affect query performance. As such, regular index rebuilding or defragmentation should be part of any database maintenance plan. The command DBCC SHOWCONTIG (&lt;table name&gt;) WITH ALL_INDEXES will show the fragmentation level for all the indexes within a table. For example, executing DBCC SHOWCONTIG (User_Defined_Events) WITH ALL_INDEXES in Query Analyzer will show the following results for each index on the table User_Defined_Events.
DBCC SHOWCONTIG scanning 'User_Defined_Events' table...
Table: 'User_Defined_Events' (779149821); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 66458
- Extents Scanned..............................: 8378
- Extent Switches..............................: 8400
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 98.89% [8308:8401]
- Logical Scan Fragmentation ..................: 0.04%
- Extent Scan Fragmentation ...................: 96.60%
- Avg. Bytes Free per Page.....................: 2307.0
- Avg. Page Density (full).....................: 71.50%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

The primary fields to look at are Scan Density and Logical Scan Fragmentation. Scan Density should be greater than 90% and Logical Scan Fragmentation should be less than 10%. If either is outside those ranges, then defragmenting or rebuilding the index should be considered. Rebuilding an index using the command DBCC DBREINDEX tends to produce better results but requires a full lock on the table, while defragmenting an index using the command DBCC INDEXDEFRAG can run in the background without affecting ongoing activity in the table. As such, it is better to start with defragmentation and only consider rebuilding the index if the defragmentation is ineffective.


DBCC INDEXDEFRAG accepts 3 arguments, database name, table name, and index id. The index id can be retrieved from the DBCC SHOWCONTIG command described above and it must be run individually for each index in a table. For example,
DBCC INDEXDEFRAG (GBDB, User_Defined_Events, 1)
DBCC INDEXDEFRAG (GBDB, User_Defined_Events, 2)
etc.

You should always start with the clustered index (which always has an index id of 1) as that affects the other indexes as well.

10.2.02 Table Statistics


Whenever an individual table query is experiencing significantly poor performance or picking the wrong index, the first thing to try is updating the table statistics. The query optimizer in SQL Server bases its choice of indexes and execution plans on the current table statistics. If the table statistics are not representative of the actual data in the table, the query optimizer may end up making bad decisions.

By default, SQL Server automatically updates table statistics periodically, but the automatic update is not always accurate. Furthermore, the default statistics are calculated based on a small subset of the rows in a table, which may not always give the best result. SQL Server chooses a small subset for performance reasons, as calculating statistics can be a time consuming and resource intensive task depending on the size of the table. Manually updating the statistics based on the full set of rows in the table, or even based on a larger percentage of rows (i.e. 50%), may improve query performance. However, care should be taken to ensure that the statistics update is done during low utilization periods.

The DBCC SHOW_STATISTICS command will show the date of the last update, the number of rows sampled for the calculations, as well as the current statistics themselves. For example, executing the following in Query Analyzer,
DBCC SHOW_STATISTICS (User_Defined_Events, UserDefinedEvents_IDX_EventId)

will return the following,


Statistics for INDEX 'UserDefinedEvents_IDX_EventId'.
Updated              Rows      Rows Sampled  Steps  Density       Average key length
-------------------- --------- ------------- ------ ------------- ------------------
Feb 21 2006 11:50PM  2106647   2106647       174    4.7497781E-7  16.069065

The sample percentage used in the last update can be calculated from the Rows Sampled and Rows fields. This percentage can be used to gauge whether the table should be updated with a larger sample size. The Density number can also be useful, but really only when comparing it to the density of another index or to the density of the same index after updating the statistics. A lower density number is better, so a successful update of the statistics should result in a lower density number. The query optimizer has a very complex selection process but, generally, when it's faced with a choice of indexes it will pick the one with the lowest density number. The UPDATE STATISTICS command can be used to manually update table statistics. For example,
UPDATE STATISTICS User_Defined_Events WITH FULLSCAN


Executing the above statement in Query Analyzer will recalculate the statistics based on all the rows in the table (i.e. FULLSCAN). Once complete, the SHOW_STATISTICS command will show the current date for Updated, the Rows and Rows Sampled columns should have the same value, and the Density number may be lower. However, this is a resource intensive operation and, on a table with 2 million rows, it could take up to 5 minutes to complete depending on table width, SQL configuration and hardware. It would be wise to test the update with lower percentages initially (i.e. the 25% in the example below) to gauge the impact before committing to the FULLSCAN option. For example,
UPDATE STATISTICS User_Defined_Events WITH SAMPLE 25 PERCENT

10.3 Parallelism
Parallelism is when SQL Server utilizes multiple processors to execute a query. Normally, SQL Server will execute different parts of a query serially, but if the query is costly enough, it will split it into different streams and execute them in parallel.

Plant Applications has a dual role, which is affected differently by parallelism. It is both an Online Transaction Processing (OLTP) application and a reporting application at the same time. OLTP applications involve lots of small transactions where data is created, updated and deleted, while reporting applications typically involve lots of large complex queries to extract and analyze data. For reporting applications, parallelism is typically a good thing as it improves the performance of large complex data queries. However, parallelism can be bad for Plant Applications as it consumes multiple processors for a single query, thereby impacting the performance of the rest of the server. This is generally a bad thing because the responsiveness of Plant Applications to operators and other systems is more important than the execution time of a report. Parallelism can also cause performance blocking of a server, where a complex query consumes all the resources on the server and the normal transactional operations cannot complete.

Parallelism can be seen in the master..sysprocesses table. For a given spid, you will see multiple records, each with an incremental ecid and a different kpid (WinNT thread). The record with an ecid of 0 is the master thread and contains key information about the query (i.e. sql_handle, stmt_start and stmt_end), which will help with diagnosis. The following query will show any current processes that are utilizing parallelism.
SELECT sp.spid, sp.ecid, sp.kpid, sp.*
FROM master..sysprocesses sp WITH (NOLOCK)
JOIN (SELECT spid
      FROM master..sysprocesses WITH (NOLOCK)
      GROUP BY spid
      HAVING count(kpid) > 1) p
  ON sp.spid = p.spid

Parallelism can be addressed in 3 different ways:

1. Optimize the query to run faster so it doesn't meet the minimum cost threshold for parallelism. This is the best way to address parallelism.


2. Use the MAXDOP option to restrict SQL Server to only use 1 processor for the query. This is generally a safe option but will likely result in the query running longer than without it, which is often an acceptable tradeoff (i.e. if it's part of a report). For example,
SELECT * FROM Tests WHERE Result_On > @RptStartTime OPTION (MAXDOP 1)

3. Modify the SQL Server settings to either restrict the number of processors available to parallelism, raise the minimum query cost threshold or disable parallelism altogether. This is the same idea as the MAXDOP query option but on a server-wide scale. These settings should be thoroughly tested before being implemented in a production environment.
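For example, the server-wide settings can be changed with sp_configure (shown here limiting parallelism to 1 processor and raising the cost threshold; the threshold value is only an example):

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Limit parallelism to a single processor (effectively disables it)
EXEC sp_configure 'max degree of parallelism', 1
RECONFIGURE

-- Raise the minimum estimated cost before a parallel plan is considered
EXEC sp_configure 'cost threshold for parallelism', 25
RECONFIGURE
```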


11.0 Database Structure


11.1 Product/Grade Changes
The main data tables for product/grade changes are:

Table Name         Description
Production_Starts  Product/grade changes

The main configuration tables for product/grade changes are:

Table Name  Description
Prod_Units  Production units
Products    Products

Product changes are stored in the Production_Starts table and the logical key is on PU_Id, Start_Time and End_Time. Product change records form a continuous sequence and cannot overlap (i.e. the Start_Time of a record must be the same as the End_Time of the previous record). There must also be a record for every PU_Id, and the sequence has to start at 1970-01-01 00:00:00.000 (i.e. the first record in the sequence must have a Start_Time of 1970-01-01 00:00:00.000). The End_Time of the current product change record will be NULL.

Product changes are related to other data through the PU_Id and time (i.e. to determine what product a production event is associated with, look for the product change record that occurred within the same time frame and on the same unit).
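For example, because the End_Time of the current product change record is NULL, the product currently running on a unit can be retrieved with a query like the following:

```sql
-- Current product on a unit: the open-ended Production_Starts record
SELECT ps.PU_Id,
       p.Prod_Code,
       ps.Start_Time
FROM Production_Starts ps
JOIN Products p ON p.Prod_Id = ps.Prod_Id
WHERE ps.PU_Id = @PUId
  AND ps.End_Time IS NULL
```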

11.1.01 Querying An Event's Product


Product relationships are defined by PU_Id and time. For example, to find the product associated with a production event, a join on PU_Id and time must be made.
SELECT e.Event_Num,
       e.TimeStamp,
       p.Prod_Code
FROM Events e
INNER JOIN Production_Starts ps
  ON e.PU_Id = ps.PU_Id
  AND e.TimeStamp > ps.Start_Time
  AND (e.TimeStamp <= ps.End_Time OR ps.End_Time IS NULL)
INNER JOIN Products p ON ps.Prod_Id = p.Prod_Id
WHERE e.PU_Id = @PUId
  AND e.TimeStamp > @ReportStartTime
  AND e.TimeStamp <= @ReportEndTime

11.2 Production Tracking


Production tracking refers to the creation and reporting of production events. The main data tables for production tracking are:


Table Name                Description
Events                    Production events
Event_Details             Production event details
Event_Status_Transitions  Start and end time for each production event status transition

The main configuration tables for production tracking are:


Table Name         Description
Prod_Units         Production units
Event_Subtypes     Production event subtypes (i.e. batch, roll, etc.)
Production_Status  Defines the production event statuses, whether each status is Good or Bad, and whether it should be counted for production/inventory.

11.2.01 Production Event Quantity


The base quantity for a production event is stored in the Initial_Dimension_X field in the Event_Details table. Initial_Dimension_X represents the quantity of the production event when it's created, while Final_Dimension_X represents the quantity of the production event after it's been consumed. The standard reports and web parts will all reference the Initial_Dimension_X field. Generally, the Final_Dimension_X field should be decremented by the quantity in the Dimension_X of each attached Event_Component record until the Final_Dimension_X field is 0. For example,
Event_Details.Final_Dimension_X = Event_Details.Initial_Dimension_X
                                - SUM(Event_Components.Dimension_X)
                                - SUM(Waste_Event_Details.Amount)

The following diagram illustrates the relationship between parent production events (A) and child production events (B,C and D) and what the quantities should be:
[Diagram] Production Event A is created with Initial_Dimension_X = 100 and Final_Dimension_X = 100. Three Event_Component records, each with Dimension_X = 20, link A to child Production Events B, C and D (each with Initial_Dimension_X = 20), and a Waste Event with Amount = 15 is recorded against A. After consumption, A has Initial_Dimension_X = 100 and Final_Dimension_X = 100 - (3 x 20) - 15 = 25.

There is no standard functionality within Plant Applications to execute this quantity calculation. As such, the recommended way of implementing this calculation is to use a set of calculations that are triggered by the production events, event components and waste events. Refer to the Consumption Calculation Best Practice document for more details on how to implement these calculations.
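As a simplified sketch of such a calculation (the real implementation should follow the Best Practice document, and updates in a production system should normally go through the standard Plant Applications update mechanisms rather than direct table writes):

```sql
-- Illustrative only: recalculate Final_Dimension_X for one event
-- from its attached component and waste records
UPDATE ed
SET Final_Dimension_X = ISNULL(ed.Initial_Dimension_X, 0)
    - ISNULL((SELECT SUM(ec.Dimension_X)
              FROM Event_Components ec
              WHERE ec.Source_Event_Id = ed.Event_Id), 0)
    - ISNULL((SELECT SUM(wed.Amount)
              FROM Waste_Event_Details wed
              WHERE wed.Event_Id = ed.Event_Id), 0)
FROM Event_Details ed
WHERE ed.Event_Id = @EventId
```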

11.2.02 Production Event Status


A key consideration is the classification of each production event status as Good or Bad. For example, the production event status of Complete has a status of Good while the production event status of Hold has a status of Bad. The classification of each production status is defined in the Production Status Editor:

- The production status Complete has a status of Good, so any event with a status of Complete will be included in the Net Production calculation.
- The Count for Production setting determines whether to count a production event with this status in production calculations.
- The Count for Inventory setting determines whether to count a production event with this status in inventory calculations.

Production event statuses are stored in the Production_Status table. The key fields are as follows:
Field                   Description
ProdStatus_Id           Unique identifier linked to the other tables (i.e. Events.Event_Status)
ProdStatus_Desc         Text of the status (i.e. Complete, Consumed, Hold)
Status_Valid_For_Input  Boolean value defining whether the status is Good/Bad (i.e. 1/0). Correspondingly, this defines whether the production event is Good or Bad.
Count_For_Production    Determines whether to count the event in net production.
Count_For_Inventory     Determines whether to count the event as inventory.

There are a number of default reserved statuses. Of particular importance is the Consumed status, which defines whether a production event (i.e. WIP material) has been consumed in the creation of finished goods.


Production Event Status Transitions

The Event_Status_Transitions table provides an easy way to query the timestamps of each status change for a production event. Whenever the production event status changes, a record will be created in the Event_Status_Transitions table that records the start and end of the status. This simplifies queries for status changes that would previously have been made against the Event_History table.

For the Start_Time and End_Time fields in the Event_Status_Transitions record, the value is retrieved from the Entry_On field in the Events table and not the TimeStamp field. This allows for the capture of manual status changes by the operators, where the TimeStamp of the event record doesn't change. However, for certain applications, such as automatically tracking the status of a batch by changing the status, it can cause some unexpected behaviour because Events.Entry_On is automatically set when the change is committed to the database, not when the change was effected. As such, there will typically be a slight lag between the Start_Time in the Event_Status_Transitions record and the TimeStamp in the Events record.
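For example, the time spent in each status can be queried directly from Event_Status_Transitions (the status id column name used in the join, Event_Status, is an assumption):

```sql
SELECT est.Event_Id,
       ps.ProdStatus_Desc,
       est.Start_Time,
       est.End_Time,
       -- An open transition has a NULL End_Time, so measure up to now
       datediff(s, est.Start_Time, ISNULL(est.End_Time, getdate())) AS Seconds_In_Status
FROM Event_Status_Transitions est
JOIN Production_Status ps ON ps.ProdStatus_Id = est.Event_Status
WHERE est.Event_Id = @EventId
ORDER BY est.Start_Time
```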

11.2.03 Available Inventory


Available inventory queries are typically executed for a particular production unit and represent the amount of material that is available to be consumed on that production unit. The query below is an example of a query for the number of parts that are available, based on the assumption that each production event represents a single part. Since each production event is an individual part, we don't need to incorporate the quantity of the production event itself.
SELECT COUNT(e.Event_Id)
FROM Events e
JOIN Production_Status ps ON ps.ProdStatus_Id = e.Event_Status
WHERE e.PU_Id = 6
  AND ps.Count_For_Inventory = 1

For production events that can have partial quantities (i.e. roll of paper, batch of liquid, basket of parts, etc), we need to incorporate the dimension of the event itself, along with any waste and consumed quantities (via genealogy). The query below is an example of a query for available production events and their respective available quantities.
SELECT e.Event_Num,
       e.TimeStamp,
       ISNULL(ed.Initial_Dimension_X, 0)
         - SUM(ISNULL(ec.Dimension_X, 0))
         - SUM(ISNULL(wed.Amount, 0)) AS Final_Dimension_X
FROM Events e
JOIN Production_Status ps ON ps.ProdStatus_Id = e.Event_Status
LEFT JOIN Event_Details ed ON ed.Event_Id = e.Event_Id
LEFT JOIN Event_Components ec ON e.Event_Id = ec.Source_Event_Id
LEFT JOIN Waste_Event_Details wed
  ON wed.PU_Id = e.PU_Id          -- utilizes clustered index
  AND wed.Event_Id = e.Event_Id
WHERE e.PU_Id = 6
  AND ps.Count_For_Inventory = 1  -- filters out non-inventory production events
GROUP BY e.Event_Num,             -- allows the summarization of waste and component dimensions
         ed.Initial_Dimension_X,
         e.TimeStamp
HAVING (ISNULL(ed.Initial_Dimension_X, 0)  -- filters out events that have 0 available quantity
         - SUM(ISNULL(ec.Dimension_X, 0))
         - SUM(ISNULL(wed.Amount, 0))) > 0

The above query recalculates Final_Dimension_X for each of the queried production events. If Final_Dimension_X is dynamically calculated as each waste and/or component record is created the query could be simplified to the following:
SELECT e.Event_Num,
       e.TimeStamp,
       ed.Final_Dimension_X
FROM Events e
JOIN Production_Status ps ON ps.ProdStatus_Id = e.Event_Status
LEFT JOIN Event_Details ed ON ed.Event_Id = e.Event_Id
WHERE e.PU_Id = 6
  AND ps.Count_For_Inventory = 1
  AND ed.Final_Dimension_X > 0

However, as previously mentioned, there is no default functionality to dynamically calculate Final_Dimension_X so it would require custom calculations and/or consumption models to determine the correct number.

11.2.04 Net Production


Net production is based on the Production Metrics configuration on the Production tab of the Unit Properties dialog. There are two options:

- Production is Accumulated From Event Dimensions: production is calculated from the production event's Initial Dimension X.
- Production is Accumulated From a Variable: production is calculated from a variable value.

If Production is Accumulated From Event Dimensions is selected, then Net Production is the sum of the production events' Initial Dimension X field (i.e. Event_Details.Initial_Dimension_X) where the production event timestamp (i.e. Events.TimeStamp) falls within the report range and the status of the production event is defined as Count for Production. The production quantity is pro-rated over the report time frame: the quantity of any event that crosses the report start and/or end time will be multiplied by the ratio of the portion of the event duration that falls within the report period to the total event duration. For example, an event of quantity 100 that runs from 10:00 to 12:00 contributes 50 to a report that starts at 11:00. It is assumed that the rate at which material was added to the production event was constant over the duration of the event.


[Diagram: a production event overlapping the start of the report time frame]

The following is an example of a query that is used to calculate Net Production. The query uses a standard View of the Events table available in 4.3+.
SELECT Net_Production = SUM(ed.Initial_Dimension_X
         * datediff(s, CASE WHEN e.Actual_Start_Time < @ReportStartTime
                            THEN @ReportStartTime
                            ELSE e.Actual_Start_Time END,
                       CASE WHEN e.TimeStamp > @ReportEndTime
                            THEN @ReportEndTime
                            ELSE e.TimeStamp END)
         / datediff(s, e.Actual_Start_Time, e.TimeStamp))
FROM dbo.Events_With_StartTime e
JOIN dbo.Event_Details ed ON ed.Event_Id = e.Event_Id
JOIN dbo.Production_Status ps ON e.Event_Status = ps.ProdStatus_Id
WHERE e.PU_Id = @ReportUnit
  AND e.TimeStamp > @ReportStartTime
  AND e.Actual_Start_Time < @ReportEndTime
  AND ps.Count_For_Production = 1

If Production is Accumulated From a Variable is selected, then Net Production is the sum of the variable values where the variable timestamp (i.e. Tests.Result_On) falls within the report range.

SELECT Net_Production = SUM(convert(real, t.Result))
FROM dbo.Tests t
WHERE t.Var_Id = @ProductionVariableId
  AND t.Result_On > @ReportStartTime
  AND t.Result_On <= @ReportEndTime

11.2.05 Production Event Product


Normally, the product a production event is associated with is the product change record that corresponds to the event's timestamp (described in more detail in section 11.1.01). However, a production event can also be individually allocated to a particular product. The event-specific product is stored in the Applied_Product field in the Events table.

For example, a line is running Grade A and the operators make a product change to Grade A. However, quality tests reveal that one of the production events has failed the specifications for Grade A but meets the specifications for Grade B. Rather than rejecting the event or creating a waste record, the operator can choose to apply the Grade B product to the particular event in question.

To find the product associated with an event, the query should include a reference to the Applied_Product for the event. For example,
SELECT e.Event_Num,
       e.TimeStamp,
       p.Prod_Code
FROM Events e
INNER JOIN Production_Starts ps
  ON e.PU_Id = ps.PU_Id
  AND e.TimeStamp > ps.Start_Time
  AND (e.TimeStamp <= ps.End_Time OR ps.End_Time IS NULL)
-- NOTE: the COALESCE() function makes the join reference the Applied_Product
-- field if it exists; otherwise it references the normal product
INNER JOIN Products p ON p.Prod_Id = coalesce(e.Applied_Product, ps.Prod_Id)
WHERE e.PU_Id = @PUId
  AND e.TimeStamp > @ReportStartTime
  AND e.TimeStamp <= @ReportEndTime

11.2.06 Inventory Locations


Inventory locations provide some basic warehouse management functionality within Plant Applications. Rather than creating a different production unit for each location in a warehouse, a single Inventory Point production unit can be created along with multiple inventory locations. Thus, when a production event is created on an inventory point, it can be assigned to one of those inventory locations while still allowing the standard reports to report on total inventory. Furthermore, if the material represented by the production event is moved around the warehouse, the inventory location can simply be updated in the production event record instead of creating new production events on different production units and the genealogy links between them.

A common application of inventory points in Plant Applications is for in-process inventory. In manufacturing, a lot of production lines have inventory locations on the plant floor (as opposed to the warehouse). Often, raw material delivered to a line is allocated to one of these inventory locations. Furthermore, there may be Work-In-Progress (WIP) buffers on the plant floor that can be represented as inventory points. These intermediate products are stored in a variety of locations on the plant floor before being consumed by the next step in the process. When a production unit is made an Inventory Point or a Staging Point, the Inventory Locations are enabled for the production unit.

Inventory locations are unique by production unit and location code. All the other fields are optional.

The main configuration table for inventory locations is:

Table Name      Description
Unit_Locations  Inventory locations

The main data tables for inventory locations are:


Table Name                         Description
Event_Details                      A production event can be allocated to an inventory location by filling out the Location_Id field with one from the Unit_Locations table.
Bill_Of_Material_Formulation_Item  An inventory location can be designated as the source for a particular item in the BOM.

The database structure is as follows: Unit_Locations (primary key Location_Id; also PU_Id, Location_Code, etc.) is referenced by Event_Details (primary key Event_Id; foreign key Location_Id) and by Bill_Of_Material_Formulation_Item (primary key BOM_Formulation_Item_Id; foreign key Location_Id).
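Based on this structure, current inventory can be summarized by location by joining Event_Details to Unit_Locations. For example (a sketch, counting one part per event as in section 11.2.03):

```sql
SELECT ul.Location_Code,
       COUNT(e.Event_Id) AS Events_In_Location
FROM Events e
JOIN Production_Status ps ON ps.ProdStatus_Id = e.Event_Status
JOIN Event_Details ed ON ed.Event_Id = e.Event_Id
JOIN Unit_Locations ul ON ul.Location_Id = ed.Location_Id
WHERE e.PU_Id = @PUId
  AND ps.Count_For_Inventory = 1  -- only count events still in inventory
GROUP BY ul.Location_Code
```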

11.2.07 Event History


When retrieving inventory values from the past, the Event_History and Event_Detail_History tables are used, as values in the Event and Event_Details tables only reflect current inventory values. There is no enforced relation between the Event and Event_History tables. The Event table stores the most current information for events, while the Event_History table stores the history of all events. If an event is deleted, there will be no record in the Event table, but the history of that deleted event can still be obtained from the Event_History table.

In order to look up the history of a particular event, Event_Id can be used, as this is the only column that exists in all of the Event, Event_Details, Event_History and Event_Detail_History tables. Based on the table structures below, the Event_History and Event_Detail_History tables do not have any primary or unique key, while Event_Id is the primary key in both the Event and Event_Details tables. It is possible to have more than one record in the Event_History table for a specific Event_Id. As mentioned before, Event_History stores all past records for a particular event. In order to look for a particular event in a past timeframe, the Modified_On column is often used together with the Event_Id column in the query, as this provides the time order.
Event (PK Event_Id; PU_Id, Event_Num, TimeStamp, etc.) and Event_Details (PK/FK Event_Id; PU_Id, Initial_Dimension_X, Final_Dimension_X, etc.) hold the current values, while Event_History (Event_Id, PU_Id, Event_Num, Modified_On, etc.) and Event_Detail_History (Event_Id, PU_Id, Event_Num, Modified_On, etc.) hold the historical records; the history tables have no primary key.

The following is a sample query that gets all available events for PU_Id 1 that are at least 5 days old but not older than 30 days:

SELECT eh.*
FROM Event_History eh WITH (NOLOCK),
     (SELECT Event_Id, MAX(Modified_On) AS ModifiedOn
      FROM Event_History WITH (NOLOCK)
      WHERE Start_Time < DATEADD(day, -5, GETDATE())
        AND (Timestamp > DATEADD(day, -30, GETDATE()) OR Timestamp IS NULL)
        AND PU_Id = 1
      GROUP BY Event_Id) r
WHERE eh.Event_Id = r.Event_Id
  AND eh.Modified_On = r.ModifiedOn

The key is to get the last updated record for the event using MAX(Modified_On) within a given timeframe for every event.

11.3 Production Schedule Execution (Process Orders)


The main data tables for production schedule execution are:
Production_Plan: Production schedule (process order)
Production_Plan_Starts: The start and end times that a process order was made active on a particular production unit. The units the process order runs on are generally governed by the execution path.
Production_Setup: Process order sequence
Production_Setup_Details: Process order sequence patterns

The main configuration tables for production tracking are:


Prod_Units: Production units
PrdExec_Paths: Execution paths
PrdExec_Path_Units: Production units assigned to an execution path


11.3.01 Schedule Execution


The execution of a production schedule is tracked in the Production_Plan_Starts table. The Schedule Manager service uses the records in this table as the basis for the production statistics (i.e. Actual Quantity, Forecast End Time, Production Efficiency, Time Efficiency, etc). Depending on the setting of Path Controlled By A Schedule in the Execution Path configuration, records will be created in this table to record the start and end times a process order was run on a particular unit.

Within the standard functionality of Plant Applications, the units a process order can be run on are limited to the units included in the Execution Path bound to the process order at that time. However, a process order can be unbound from one path and bound to another, and the existing history in Production_Plan_Starts will be retained. As such, you can run a process order on multiple paths and the standard process order statistics calculations will report data based on all of them.

Records in the Production_Plan_Starts table can be created programmatically using either result set 17 or the Plant Applications SDK. Any Production_Plan_Starts messages sent on the Message Bus will be picked up by the Schedule Manager service, which will in turn react by recalculating the process order statistics.

For process orders, the status Active has special functionality. When the status of a process order is changed to Active, a record is immediately created in the Production_Plan_Starts table for the production unit (PU_Id) that corresponds to the Schedule Point unit in the Execution Path. Furthermore, depending on the settings of Path Controlled By A Schedule, additional records may also be created in Production_Plan_Starts for the other units in the path.
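As a concrete illustration, the run history of a single process order can be read straight out of Production_Plan_Starts. This is only a sketch based on the columns described above; the process order number 'PO-1234' is a placeholder:

```sql
-- List every unit run (start/end) recorded for one process order.
-- An open-ended run will show End_Time = NULL.
SELECT pp.Process_Order,
       pps.PU_Id,
       pps.Start_Time,
       pps.End_Time
FROM Production_Plan_Starts pps
JOIN Production_Plan pp ON pps.PP_Id = pp.PP_Id
WHERE pp.Process_Order = 'PO-1234'  -- Placeholder process order number
ORDER BY pps.Start_Time
```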

11.3.02 Process Order Quantity


By default, process order quantities are calculated by the Schedule Manager service using the Production_Plan_Starts table. Using the start and end times for each unit the process order was run on, the Schedule Manager will summarize the relevant production data for all units in the Execution Path where the Production Point has been set to True. The type of production data summarized depends on the Production Metrics properties for each unit. The Production Metrics settings are as follows:

The selection is stored in the Prod_Units table in the Production_Type field. The functionality of the two options is as follows:
Option: Production is Accumulated From Event Dimensions (Production_Type = 0 or NULL)
Summarizes all the Event_Details.Initial_Dimension_X fields for units where the Execution Path Production Point has been set to True and the timestamp of the event falls within the start and end time of the Production_Plan_Starts record. In addition, Schedule Manager also includes all events where the PP_Id is filled out in the Event_Details table.

Option: Production is Accumulated From a Variable (Production_Type = 1)
For all units where the Execution Path Production Point has been set to True, summarizes all the result values for the defined variable where the result timestamp falls within the start and end time of the Production_Plan_Starts record. The variable is stored in the Production_Variable field.

For accumulating quantity from the event details the query would be as follows:
SELECT pp.Process_Order, SUM(ed.Initial_Dimension_X)
FROM Production_Plan_Starts pps
JOIN Production_Plan pp ON pps.PP_Id = pp.PP_Id
JOIN dbo.Prod_Units pu ON pps.PU_Id = pu.PU_Id
    AND (Production_Type IS NULL OR Production_Type = 0)
JOIN dbo.PrdExec_Path_Units ppu ON ppu.PU_Id = pu.PU_Id
    AND ppu.Is_Production_Point = 1
JOIN dbo.Events e ON e.PU_Id = ppu.PU_Id
    AND e.TimeStamp >= pps.Start_Time
    AND (e.TimeStamp < pps.End_Time OR pps.End_Time IS NULL)
JOIN dbo.Event_Details ed ON e.Event_Id = ed.Event_Id
WHERE pps.PP_Id = @PPId
GROUP BY pp.Process_Order

If the PP_Id is filled out in the Event_Details table, the Schedule Manager will exclude that event from its standard query using the Production_Plan_Starts timestamps. However, if the event is on a unit that is a production point in the path and that unit has been active (i.e. it has at least one record in the Production_Plan_Starts table), the event will be included in the quantity total regardless of the timestamps. This functionality is primarily for allocating production events to different patterns associated with a process order. For accumulating quantity from a variable the query would be as follows:
SELECT pp.Process_Order, SUM(isnull(convert(real, t.Result), 0))
FROM dbo.Production_Plan_Starts pps
JOIN dbo.Production_Plan pp ON pps.PP_Id = pp.PP_Id
JOIN dbo.Prod_Units pu ON pps.PU_Id = pu.PU_Id AND Production_Type = 1
JOIN dbo.PrdExec_Path_Units ppu ON ppu.PU_Id = pu.PU_Id
    AND ppu.Is_Production_Point = 1
JOIN dbo.Tests t ON t.Var_Id = pu.Production_Variable
    AND t.Result_On >= pps.Start_Time
    AND (t.Result_On < pps.End_Time OR pps.End_Time IS NULL)
WHERE pps.PP_Id = 1
GROUP BY pp.Process_Order
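As noted above, events whose Event_Details.PP_Id is filled out are credited to the process order directly, regardless of the Production_Plan_Starts timestamps. A minimal sketch of summarizing that directly-allocated quantity, using only columns already described in this section:

```sql
-- Sum production allocated to a process order directly via Event_Details.PP_Id
SELECT pp.Process_Order,
       SUM(ed.Initial_Dimension_X) AS Allocated_Quantity
FROM Event_Details ed
JOIN Production_Plan pp ON ed.PP_Id = pp.PP_Id
WHERE ed.PP_Id = @PPId
GROUP BY pp.Process_Order
```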

11.4 Genealogy Links (Event_Components)


Genealogy links are stored as branches of a family tree. At a minimum, there will be a record for every link between a parent event and child event. For example, if a parent event is linked to 3 child events, there will be 3 records in the Event_Components table. Alternatively, if there are 2 parent events each linked to 3 child events, there will be 6 records in the Event_Components table.


Events: Event_Id = 5 (parent)
    Event_Components: Source_Event_Id = 5, Event_Id = 10  ->  Events: Event_Id = 10
    Event_Components: Source_Event_Id = 5, Event_Id = 11  ->  Events: Event_Id = 11
    Event_Components: Source_Event_Id = 5, Event_Id = 12  ->  Events: Event_Id = 12

Genealogy records can be created either by a genealogy model, based on the configuration and use of the Raw Material inputs, or they can simply be created on their own. The main data tables for genealogy are:
Event_Components: Genealogy links
PrdExec_Input_Event: Current state of the Raw Material Inputs (i.e. what production event is in the Running or Staged position).
PrdExec_Input_Event_History: Historical and current state of the Raw Material Inputs (i.e. what production event is in the Running or Staged position).

The main configuration tables for genealogy are:


PrdExec_Inputs: Primary Raw Material Input configuration.

The logical unique key for the Event_Components table is:


Event_Id: Event_Id of the child event
Source_Event_Id: Event_Id of the parent event
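Using this key, navigating one level of the family tree in either direction is a simple lookup. The following sketch assumes only the Event_Id and Source_Event_Id fields described above:

```sql
-- All children created from a given parent event
SELECT ec.Event_Id AS Child_Event_Id
FROM Event_Components ec
WHERE ec.Source_Event_Id = @ParentEventId

-- All parents that fed a given child event
SELECT ec.Source_Event_Id AS Parent_Event_Id
FROM Event_Components ec
WHERE ec.Event_Id = @ChildEventId
```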

11.4.01 Raw Material Consumption


One of the major reasons for tracking genealogy is consumption reporting (i.e. reporting the amounts and types of raw materials consumed to make a product). As such, quantity is an important measure for genealogy. The amount of material used from the parent production event to create the child production event should be recorded in the Dimension_X field of the Event_Component record. In addition, most processes have several steps between the raw materials and the final product, creating and consuming various intermediate products along the way. As such, to indicate which record actually refers to the raw material to report on, the Event_Components.Report_As_Consumption field should be set to 1. For example,

Raw Material
    |  Event_Components: Report_As_Consumption = 1
    v
Work In Progress (WIP) or Intermediate Product
    |  Event_Components: Report_As_Consumption = 0
    v
Final Product

The Enterprise Connector service within Plant Applications will utilize this field to calculate raw material consumption for a particular process order.
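A consumption summary for a single child event can then be sketched as follows, summing the Dimension_X quantities of only those links flagged as raw material consumption (this relies only on the Event_Components fields described above):

```sql
-- Total raw material consumed, by parent (source) event,
-- counting only links flagged as consumption
SELECT ec.Source_Event_Id,
       SUM(ec.Dimension_X) AS Consumed_Quantity
FROM Event_Components ec
WHERE ec.Event_Id = @ChildEventId
  AND ec.Report_As_Consumption = 1
GROUP BY ec.Source_Event_Id
```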

11.4.02 Multiple Parent/Child Links


It is possible to have multiple genealogy links between parent and child events. A common application of this is a batch feeding another downstream batch in incremental amounts over the course of the batch run, as opposed to a single transfer. Rather than updating the Dimension_X field of the original link (and losing the history of the transfers), it is better to create multiple genealogy links, each with its own timestamp and quantity. The Genealogy View will correctly display all of the genealogy links when there is more than one. Furthermore, it is possible to have multiple genealogy links in a multi-level family tree. In these cases, the Event_Components table has a field called Parent_Component_Id that can be used to record which material transfer into the intermediate event is related to the material transfer out of the intermediate event. For example,

Component_Id = 5  ->  Component_Id = 10 (Parent_Component_Id = 5)
Component_Id = 6  ->  Component_Id = 11 (Parent_Component_Id = 6)
Component_Id = 7  ->  Component_Id = 12 (Parent_Component_Id = 7)


11.4.03 Circular Parent/Child Links


Circular parent/child links are a very bad thing and should be avoided at all costs. Creating circular genealogy links will result in the following:
- The Genealogy View will crash.
- Processing of tasks out of the pending tasks table will cease.
- Calculations that reference genealogy variables throughout the circular chain will execute in an infinite loop as they constantly retrigger themselves.
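A simple self-join can at least flag direct (two-event) circular links before they cause the problems above. This is only a sketch; cycles spanning more than two events would require a recursive check, which is not shown here:

```sql
-- Detect direct circular links: event A feeds B and B feeds A
SELECT ec1.Source_Event_Id, ec1.Event_Id
FROM Event_Components ec1
JOIN Event_Components ec2
    ON ec1.Event_Id = ec2.Source_Event_Id
   AND ec1.Source_Event_Id = ec2.Event_Id
```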

11.4.04 How to Identify Whether an Input was Loaded, Unloaded or Completed


As the PrdExec_Input_Event table only contains information about the current events loaded in the Running and Staged positions, we need to look at the PrdExec_Input_Event_History table to identify the last event that was loaded, unloaded or completed. This table has key fields that help identify the unit and whether the event was loaded, unloaded or completed.

When an event is loaded, the PrdExec_Input_Event_History table gets one record where:
- Event_Id indicates which event is being loaded.
- Event_Id_Updated is set to 1.
- Timestamp_Updated is set to 1.
- Unloaded is set to 0.
- Unloaded_Updated is set to 0.

Event_Id    Unloaded_Updated Event_Id_Updated Timestamp_Updated Unloaded
----------- ---------------- ---------------- ----------------- --------
30069       0                1                1                 0

When an event is unloaded, the PrdExec_Input_Event_History table gets two records where:
- On the first record, Event_Id indicates which event is being unloaded; on the second it is NULL.
- Event_Id_Updated is set to 0 on the first record and 1 on the second.
- Timestamp_Updated is set to 1 on the first record and 0 on the second.
- Unloaded is set to 1 on the first record and 0 on the second.
- Unloaded_Updated is set to 1 on both records.

Event_Id    Unloaded_Updated Event_Id_Updated Timestamp_Updated Unloaded
----------- ---------------- ---------------- ----------------- --------
30069       1                0                1                 1
NULL        1                1                0                 0


When an event is completed, the PrdExec_Input_Event_History table gets one record where:
- Event_Id is set to NULL.
- Event_Id_Updated is set to 1.
- Timestamp_Updated is set to 1.
- Unloaded is set to 0.
- Unloaded_Updated is set to 0.

Event_Id    Unloaded_Updated Event_Id_Updated Timestamp_Updated Unloaded
----------- ---------------- ---------------- ----------------- --------
NULL        0                1                1                 0

The following query example helps identify which event was running, depending on a line, unit and input position:

SELECT Prod_Lines.PL_Desc_Local AS Line,
       Prod_Units.PU_Desc_Local AS Unit,
       PrdExec_Inputs.Input_Name,
       PrdExec_Input_Positions.PEIP_Desc AS Position,
       Events.Event_Id AS EventId,
       Events.Event_Num AS [Event Number],
       PrdExec_Input_Event_History.Unloaded AS Unload
FROM PrdExec_Inputs
INNER JOIN PrdExec_Input_Event_History
    ON PrdExec_Inputs.PEI_Id = PrdExec_Input_Event_History.PEI_Id
INNER JOIN Prod_Units ON PrdExec_Inputs.PU_Id = Prod_Units.PU_Id
INNER JOIN Prod_Lines ON Prod_Units.PL_Id = Prod_Lines.PL_Id
INNER JOIN Events ON Prod_Units.PU_Id = Events.PU_Id
INNER JOIN PrdExec_Input_Positions
    ON PrdExec_Input_Event_History.PEIP_Id = PrdExec_Input_Positions.PEIP_Id
WHERE (Prod_Lines.PL_Desc_Local = 'XXX')                 -- Line name
  AND (Prod_Units.PU_Desc_Local = 'YYY')                 -- Unit name
  AND (PrdExec_Inputs.Input_Name = 'ZZZ')                -- Input name
  AND (PrdExec_Input_Positions.PEIP_Desc = 'Running')    -- Position
  AND (PrdExec_Input_Event_History.Unloaded_Updated = 1) -- Indicates whether the input was changed
  AND (PrdExec_Input_Event_History.Unloaded = 1)         -- The change is an unload

11.5 Downtime
The main data tables for downtime are:
Timed_Event_Details: Downtime records

The main configuration tables for downtime are:


Prod_Units: Production units
Timed_Event_Faults: Downtime faults
Event_Reasons: Reasons
Event_Reason_Tree_Data: Reason tree
Event_Reason_Category_Data: Reason tree category assignments
Event_Reason_Catagories: Categories

Records in the Timed_Event_Details table are unique by PU_Id and Start_Time. They must also always be in sequence and cannot overlap (i.e. the Start_Time of a record cannot be less than the End_Time of the previous record). In the Timed_Event_Details table the End_Time will be NULL for an open downtime record. Duration should always be calculated as the difference between the Start_Time and End_Time fields as the Duration field is not consistently updated.
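Because the Duration field is not consistently updated, duration should be derived from the timestamps at query time. A minimal sketch, with GETDATE() standing in for the end of an open record:

```sql
-- Derive downtime duration (seconds) from the timestamps,
-- treating an open record (End_Time IS NULL) as still running
SELECT Start_Time,
       End_Time,
       datediff(s, Start_Time, isnull(End_Time, GETDATE())) AS Duration_Seconds
FROM Timed_Event_Details
WHERE PU_Id = @PUId
```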

11.5.01 Querying Downtime Duration


A common need when querying downtime is to select all downtime for a given period of time but only report the duration of the downtime that falls within the reporting period. In these situations, the start of the reporting period may fall between the start and end time of a downtime record.
[Diagram: a downtime record overlapping the start of the report time frame]

The following query will return all downtime records that fall within a given reporting period, even if the Start_Time or End_Time is outside of it, and, by using a CASE statement, restricts the calculation of duration to within the reporting period.
SELECT datediff(s,
       CASE WHEN Start_Time < @ReportStartTime
            THEN @ReportStartTime ELSE Start_Time END,
       CASE WHEN End_Time > @ReportEndTime OR End_Time IS NULL
            THEN @ReportEndTime ELSE End_Time END) AS Duration
FROM Timed_Event_Details
WHERE PU_Id = @PUId
  AND Start_Time < @ReportEndTime
  AND (End_Time > @ReportStartTime OR End_Time IS NULL)

11.5.02 Calculating Uptime


Uptime is the time between the Start_Time of one event and the End_Time of the previous event. There is no way to join directly to the previous event in the Timed_Event_Details table, so a temporary table must be used. The following code demonstrates the best way to calculate uptime.
DECLARE @Downtime TABLE (
    DowntimeId int IDENTITY PRIMARY KEY,
    StartTime  datetime,
    EndTime    datetime,
    Downtime   int,
    UpTime     int)

INSERT @Downtime (StartTime, EndTime, Downtime)
SELECT Start_Time, End_Time, datediff(s, Start_Time, End_Time)
FROM Timed_Event_Details
WHERE PU_Id = @PUId
  AND Start_Time < @ReportEndTime
  AND (End_Time > @ReportStartTime OR End_Time IS NULL)
ORDER BY Start_Time ASC

UPDATE d1
SET UpTime = datediff(s, d2.EndTime, d1.StartTime)
FROM @Downtime d1
INNER JOIN @Downtime d2 ON d2.DowntimeId = (d1.DowntimeId - 1)
WHERE d1.DowntimeId > 1

SELECT StartTime, EndTime, Downtime, UpTime
FROM @Downtime

11.5.03 Determining Primary and Split Records


Split records are not differentiated in the table from non-split records, except that the Start_Time of a split record will equal the End_Time of the previous record. To determine whether a record is a split, a non-split or the first record in a sequence of split records (i.e. the primary downtime), a join can be made to the same table to check for the existence of a split.
SELECT d1.Start_Time, d1.End_Time,
       CASE WHEN d2.TEDet_Id IS NULL THEN 'Primary' ELSE 'Split' END
FROM Timed_Event_Details d1
LEFT JOIN Timed_Event_Details d2 ON d1.PU_Id = d2.PU_Id
    AND d1.Start_Time = d2.End_Time
WHERE d1.PU_Id = @PUId
  AND d1.Start_Time < @ReportEndTime
  AND (d1.End_Time > @ReportStartTime OR d1.End_Time IS NULL)

11.5.04 Querying Fault Selection


Fault configurations are stored in the Timed_Event_Faults table.
SELECT ted.PU_Id, ted.Start_Time, ted.End_Time, tef.Fault_Desc
FROM Timed_Event_Details ted
LEFT JOIN Timed_Event_Faults tef ON ted.TEFault_Id = tef.TEFault_Id
WHERE ted.PU_Id = @PUId
  AND ted.Start_Time < @ReportEndTime
  AND (ted.End_Time > @ReportStartTime OR ted.End_Time IS NULL)


11.5.05 Querying Reason Selection


Reason selections are stored directly in the table and are not tied to the reason tree. This allows for the reason trees to be changed without affecting the history of the reason selections.
SELECT PU_Id, Start_Time, End_Time,
       r1.Event_Reason_Name, r2.Event_Reason_Name,
       r3.Event_Reason_Name, r4.Event_Reason_Name
FROM Timed_Event_Details ted
LEFT JOIN Event_Reasons r1 ON ted.Reason_Level1 = r1.Event_Reason_Id
LEFT JOIN Event_Reasons r2 ON ted.Reason_Level2 = r2.Event_Reason_Id
LEFT JOIN Event_Reasons r3 ON ted.Reason_Level3 = r3.Event_Reason_Id
LEFT JOIN Event_Reasons r4 ON ted.Reason_Level4 = r4.Event_Reason_Id
WHERE PU_Id = @PUId
  AND Start_Time < @ReportEndTime
  AND (End_Time > @ReportStartTime OR End_Time IS NULL)

11.5.06 Querying Category Selection


Categories are tied to the reason tree and are not stored directly within the Timed_Event_Details table. To retrieve the categories associated with a particular downtime record or filter downtime by a particular category, the query must check the reason tree structure in the Event_Reason_Category_Data table.
SELECT Start_Time, End_Time, ERC_Desc
FROM Timed_Event_Details ted
LEFT JOIN Event_Reason_Category_Data ercd
    ON ted.Event_Reason_Tree_Data_Id = ercd.Event_Reason_Tree_Data_Id
LEFT JOIN Event_Reason_Catagories erc ON ercd.ERC_Id = erc.ERC_Id
WHERE PU_Id = @PUId
  AND Start_Time < @ReportEndTime
  AND (End_Time > @ReportStartTime OR End_Time IS NULL)

11.6 Waste
The main data tables for waste are:
Waste_Event_Details: Waste event records

The main configuration tables for waste are:

Prod_Units: Production units
Waste_Event_Faults: Waste event faults
Waste_Event_Meas: Waste event measurements
Waste_Event_Type: Waste event types
Event_Reasons: Reasons
Event_Reason_Tree_Data: Reason tree
Event_Reason_Category_Data: Reason tree category assignments
Event_Reason_Catagories: Categories

Waste event records are very similar in concept to downtime records, except that instead of quantifying faults by duration, they quantify them by the amount of material lost.
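By analogy with the downtime fault query in section 11.5.04, waste amounts can be summarized by fault. Note this is only a sketch: the Amount and WEFault_Id column names are assumptions by analogy with the downtime tables and are not documented in this guide:

```sql
-- Summarize waste by fault
-- (the Amount and WEFault_Id column names are assumed, not documented here)
SELECT wef.Fault_Desc,
       SUM(wed.Amount) AS Total_Waste
FROM Waste_Event_Details wed
LEFT JOIN Waste_Event_Faults wef ON wed.WEFault_Id = wef.WEFault_Id
WHERE wed.PU_Id = @PUId
GROUP BY wef.Fault_Desc
```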

11.7 Quality
The main data tables for quality are:
Events: Production events
Tests: Variable results
Var_Specs: Variable specifications
Production_Starts: Product/Grade changes

The main configuration tables for quality are:


Variables: Variable configuration
Products: Product configuration

Variable specifications are stored in the Var_Specs table. The logical key for the table is as follows:
Var_Id: The variable the specifications were entered on.
Prod_Id: The product the specifications were entered on.
Effective_Date: The timestamp of when the specification transaction was approved.
Expiration_Date: The timestamp of when the specification transaction expired. For the current specification, this is normally NULL and is set when a new transaction is created. However, if a time-limited transaction is created it will be preset.

The behaviour of the Var_Specs table is similar to that of the Production_Starts table in that the specification records have to be in sequence and cannot overlap (i.e. the Effective_Date of a record must be greater than or equal to the Expiration_Date of the previous record). This behaviour is enforced by Plant Applications when creating transactions through the Administrator. Central specifications are stored in the Active_Specs table and are automatically copied to the Var_Specs table when changes are made. As such, Var_Specs is the preferred table to report on.
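Given the sequencing rules above, the specification in effect for a given variable, product and point in time can be sketched as a simple range lookup (limit column names as used in section 11.7.01):

```sql
-- Specification limits in effect at @AtTime for one variable/product
SELECT vs.L_Reject, vs.L_Warning, vs.Target, vs.U_Warning, vs.U_Reject
FROM Var_Specs vs
WHERE vs.Var_Id = @VarId
  AND vs.Prod_Id = @ProdId
  AND vs.Effective_Date <= @AtTime
  AND (vs.Expiration_Date > @AtTime OR vs.Expiration_Date IS NULL)
```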

11.7.01 Comparing Values To Specification Limits


Comparing variable values to specification limits involves a multi-level JOIN involving the variable, the product and the timestamp of the value. Variable values in the Tests table and limits in the Var_Specs table are both stored in varchar(25) format, so a proper explicit conversion of all fields should be made to ensure the correct comparison. Otherwise, SQL may implicitly make the wrong conversion (i.e. varchar to integer instead of varchar to real) and cause unexpected behaviour.


Consideration should also be given to the SpecificationSetting site parameter (Parm_Id = 13). This site parameter controls whether a value equal to a limit is out-of-spec or not. It primarily affects the way specification deviations are displayed in the Plant Applications clients. SpecificationSetting has two possible values:

Value 1: The value is considered out-of-spec if Value < Lower Limit or Value > Upper Limit.
Value 2: The value is considered out-of-spec if Value <= Lower Limit or Value >= Upper Limit.

The following is an example query against specification limits:


DECLARE @SpecificationSetting int

SELECT @SpecificationSetting = Value
FROM Site_Parameters
WHERE Parm_Id = 13

SELECT v.Var_Desc, t.Result,
       vs.L_Reject, vs.L_Warning, vs.L_User, vs.Target,
       vs.U_User, vs.U_Warning, vs.U_Reject,
       CASE @SpecificationSetting
           WHEN 1 THEN CASE WHEN convert(float, t.Result) > convert(float, vs.U_Warning)
                            THEN 'WARNING' ELSE '' END
           WHEN 2 THEN CASE WHEN convert(float, t.Result) >= convert(float, vs.U_Warning)
                            THEN 'WARNING' ELSE '' END
       END
FROM Tests t
JOIN Variables v ON t.Var_Id = v.Var_Id
JOIN Production_Starts ps ON v.PU_Id = ps.PU_Id
    AND t.Result_On >= ps.Start_Time
    AND (t.Result_On < ps.End_Time OR ps.End_Time IS NULL)
LEFT JOIN Var_Specs vs ON t.Var_Id = vs.Var_Id
    AND ps.Prod_Id = vs.Prod_Id
    AND t.Result_On >= vs.Effective_Date
    AND (t.Result_On < vs.Expiration_Date OR vs.Expiration_Date IS NULL)
WHERE t.Var_Id = @ReportVarId
  AND t.Result_On > @ReportStartTime
  AND t.Result_On < @ReportEndTime

11.7.02 Creating Specifications and Transactions


When creating variable specifications programmatically using SQL, data should not be inserted directly into the Var_Specs table. Instead, transactions should be created and the Plant Applications transaction approval routine should then be called. This is especially true when dealing with central specifications, as the transaction approval routine will handle all the inheritance that is based on the characteristic tree and the central-to-variable specification relationships. The main data tables for transactions are:


Transactions: Transactions
Transaction_Groups: Transaction groups
Trans_Variables: Variable specification changes
Trans_Products: Product to unit assignments
Trans_Properties: Central specification changes
Trans_Characteristics: Characteristic to product/unit assignments
Trans_Char_Links: Characteristic tree

An example of how to create and approve a transaction is as follows:


DECLARE @Trans_Id int, @ApprovedDate datetime, @Effective_Date datetime

-- Create the transaction
EXEC spEM_CreateTransaction
    'My Transaction',   -- Transaction description
    NULL,               -- Corporate transaction id
    1,                  -- Transaction type; corresponds to the Transaction_Type table
    NULL,               -- Corporate transaction description
    1,                  -- User id
    @Trans_Id OUTPUT

-- Fill out the transaction data tables

-- Approve the transaction
EXEC spEM_ApproveTrans
    @Trans_Id,
    1,                      -- User id
    1,                      -- Transaction group id
    NULL,                   -- Deviation date
    @ApprovedDate OUTPUT,   -- Approved date
    @Effective_Date OUTPUT  -- Effective date

11.8 Crew Schedule


The main data tables for the crew and shift schedule are:
Crew_Schedule: Crew and shift changes

The main configuration tables for the crew and shift schedule are:
Prod_Units: Production units

The Crew_Schedule table holds detailed records for each shift change. Instead of containing a pattern or formula for calculating crew and shift, it contains a time-stamped record for every shift change. This table is normally filled out by hand in Excel and imported for a defined amount of time, determined by the plant's actual crew schedule forecast. Shift/Crew changes are related to other data through the PU_Id and time (i.e. to determine what shift a production event is associated with, look for the shift change record that occurred within the same time frame and on the same unit).
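A shift lookup for a given unit and point in time can therefore be sketched as below. The Start_Time, End_Time, Crew_Desc and Shift_Desc column names are assumptions, as the Crew_Schedule columns are not listed in this guide:

```sql
-- Find the crew/shift record covering a given time on a unit
-- (column names are assumed, not documented here)
SELECT cs.Crew_Desc, cs.Shift_Desc, cs.Start_Time, cs.End_Time
FROM Crew_Schedule cs
WHERE cs.PU_Id = @PUId
  AND cs.Start_Time <= @EventTime
  AND (cs.End_Time > @EventTime OR cs.End_Time IS NULL)
```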


11.9 Interfaces To External Systems


Some common tables used in interfacing to external systems are:
Data_Sources: Data sources (i.e. Historian, Acquidata, SAP, etc)
Data_Source_Xref: A cross reference table that allows Plant Applications objects to be assigned different text values for use in identifying them to external systems.

11.10 User-Defined Properties (UDP)


The main data tables for user-defined properties are:
Table_Fields_Values: Main data table containing user-defined property values.
Tables: Fixed content table that contains the definition of each table within the database that can have UDPs associated with it.
Table_Fields: Contains the UDP definitions (name and data type).
ED_FieldTypes: Fixed content table that contains the data types of event model parameters and user-defined properties.

The database structure is as follows:


ED_FieldTypes (PK: ED_Field_Type_Id): ED_Field_Type_Id, Field_Type_Desc
Tables (PK: TableId): TableId, TableName
Table_Fields (PK: Table_Field_Id; FK: ED_Field_Type_Id): Table_Field_Id, ED_Field_Type_Id, Table_Field_Desc
Table_Fields_Values (PK: TableId + Table_Field_Id + KeyId; FKs: TableId, Table_Field_Id): TableId, Table_Field_Id, KeyId, Value

If TableId = 1 (Events) then KeyId = Event_Id
If TableId = 13 (PrdExec_Paths) then KeyId = Path_Id
If TableId = 43 (Prod_Units) then KeyId = PU_Id

Events (PK: Event_Id): Event_Id, Event_Num, ...
PrdExec_Paths (PK: Path_Id): Path_Id, Path_Desc, ...
Prod_Units (PK: PU_Id): PU_Id, PU_Desc, ...

The main data table for UDPs is Table_Fields_Values. The logical key for the table is as follows:

TableId: The table the UDP is associated with (i.e. Prod_Units, Departments, etc).
Table_Field_Id: The UDP itself. A particular UDP can be associated with multiple different tables.
KeyId: The unique primary key identifier for the table defined by TableId.

The UDP value is always stored as a string, so the field type is generally not necessary when retrieving the UDP value, as the data type is already known. As such, the field type is only important when creating a new UDP. The KeyId value depends on which table the UDP is for. For example, if the TableId corresponds to the Prod_Units table (i.e. Production Units), then the KeyId will be equal to a particular PU_Id. If the TableId corresponds to the PrdExec_Paths table (i.e. Execution Paths), then the KeyId will be equal to a particular Path_Id. The following is an example of the configuration of a UDP for a Production Unit in the Administrator:


The following is an example query of how to retrieve the UDP value for the above configuration:
SELECT tfv.Value
FROM Table_Fields_Values tfv
JOIN Tables t ON tfv.TableId = t.TableId
JOIN Table_Fields tf ON tf.Table_Field_Id = tfv.Table_Field_Id
JOIN Prod_Units pu ON tfv.KeyId = pu.PU_Id  -- This depends on the TableId being referenced
WHERE t.TableName = 'Prod_Units'
  AND tf.Table_Field_Desc = 'MyUDP'
  AND pu.PU_Desc = 'Machine 1'

The following is an example configuration of a UDP for an Execution Path in the Administrator:

The following is an example query of how to retrieve the UDP value for the above configuration:
SELECT tfv.Value
FROM Table_Fields_Values tfv
JOIN Tables t ON tfv.TableId = t.TableId
JOIN Table_Fields tf ON tf.Table_Field_Id = tfv.Table_Field_Id
JOIN PrdExec_Paths p ON tfv.KeyId = p.Path_Id  -- This depends on the TableId being referenced
WHERE t.TableName = 'PrdExec_Paths'
  AND tf.Table_Field_Desc = 'MyUDP'
  AND p.Path_Code = 'M1'


While only a few of the tables have an interface (either in the Client or Administrator), there are many tables currently defined in Tables, and more are continually being added. Note that while UDPs are typically created for custom configuration, they can also be used to track additional information in data tables (i.e. Events). The following table lists a subset of the available SQL tables and the corresponding keys that are referenced in Table_Fields_Values.
Table Id  Table                              Key
1         Events                             Event_Id
2         Production_Starts                  Start_Id
3         Timed_Event_Details                TEDet_Id
4         Waste_Event_Details                WED_Id
5         PrdExec_Input_Event                Input_Event_Id
6         PrdExec_Input_Event_History        Input_Event_History_Id
7         Production_Plan                    PP_Id
8         Production_Setup                   PP_Setup_Id
9         Production_Setup_Detail            PP_Setup_Detail_Id
10        Event_Components                   Component_Id
11        User_Defined_Events                UDE_Id
12        Production_Plan_Starts             PP_Start_Id
13        PrdExec_Paths                      Path_Id
14        Event_Details                      Event_Id
17        Departments                        Dept_Id
18        Prod_Lines                         PL_Id
19        PU_Groups                          PUG_Id
20        Variables                          Var_Id
21        Product_Family                     Product_Family_Id
22        Product_Groups                     Product_Grp_Id
23        Products                           Prod_Id
24        Event_Reasons                      Reason_Id
25        Event_Reason_Catagories            ERC_Id
26        Bill_Of_Material_Formulation       BOM_Formulation_Id
27        Subscription                       Subscription_Id
28        Bill_Of_Material_Formulation_Item  BOM_Formulation_Item_Id
29        Subscription_Group                 Subscription_Group_Id
30        PrdExec_Path_Units                 PEPU_Id
31        Report_Types                       Report_Type_Id
32        Report_Definitions                 Report_Id
33        Report_Runs                        Run_Id
34        Production_Plan_Statuses           PP_Status_Id
43        Prod_Units                         PU_Id

11.11 Language
Language support in Plant Applications has two flavours:
1) The standard client components have a defined list of translations that can be installed during setup. These will be referenced depending on the language setting of the user.
2) Most of the configuration tables support both a local and a global language, which allows users to see their configuration in one of two languages.


The above two options are described in more detail in the Administrator documentation under Multi-Lingual Support. The main data tables for multi-lingual support are:

Languages: Fixed content table that lists the available languages.
Language_Data: Contains the prompts and translations for the client application components (i.e. Plant Applications Client, Web Server and Excel Add-In). This table contains data for the installed languages.
Site_Parameters: Contains the default language reference for all users. The LanguageNumber parameter (Parm_Id = 8) contains the Language_Id from the Languages table.
User_Parameters: Contains the language reference for a particular user. The LanguageNumber parameter (Parm_Id = 8) contains the Language_Id from the Languages table.
All configuration tables: Each configuration table (i.e. Reasons, Prod_Units, Variables, etc) contains a _Local and a _Global description field, which allows two translation options for created configuration items.

The database structure is as follows:


Languages
    PK   Language_Id
         Language_Desc

Language_Data
    PK   Language_Data_Id
    FK1  Language_Id
         Prompt_Id
         Prompt_String

Site_Parameters
    PK   Parm_Id
    FK1  Value

User_Parameters
    PK   User_Id
    PK   Parm_Id
    FK1  Value

For Parm_Id = 8 (LanguageNumber), Value = Language_Id.

11.11.01 Querying a User's Language


The default language setting for all users is defined in the Site_Parameters table in the LanguageNumber parameter. The value of the LanguageNumber parameter corresponds to the Language_Id in the Languages table. In addition, the language can be configured for individual users in the User_Parameters table with the same LanguageNumber parameter. This setting defines the language that is shown in the Plant Applications Client, Web Server and Excel Add-In. For example,
DECLARE @LanguageId int

SELECT @LanguageId = convert(int, Value)
FROM Site_Parameters
WHERE Parm_Id = 8

SELECT @LanguageId = coalesce(convert(int, Value), @LanguageId)
FROM User_Parameters
WHERE Parm_Id = 8
AND User_Id = 1

11.11.02 Querying Language Prompts and Overrides


The default language values stored in the Language_Data table can be overridden via the Plant Applications Administrator.

The override value is stored in the Language_Data table in an additional record but the Language_Id is set to a negative value instead (i.e. Language_Id = 2 becomes Language_Id = -2). As such, to retrieve the value for a particular prompt, 2 records must be selected. For example,
SELECT coalesce(ldo.Prompt_String, lds.Prompt_String)
FROM Language_Data lds
LEFT JOIN Language_Data ldo
    ON ldo.Language_Id = (-lds.Language_Id)
    AND ldo.Prompt_Number = lds.Prompt_Number
WHERE lds.Language_Id = 2 -- French
AND lds.Prompt_Number = 30001

Additional records can be added to both the Languages table and Language_Data tables to support new languages and/or custom reports. For the Languages table, new records should start with a Language_Id > 5000 while new records in the Language_Data table should start with a Prompt_Number > 500000.
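Following these conventions, adding a custom language and prompt might look like the sketch below. This is a hypothetical example: the descriptions and id values are placeholders chosen to respect the recommended ranges (Language_Id > 5000, Prompt_Number > 500000), and the Language_Data key column is assumed to be Prompt_Number, as used in the override query above.

```sql
-- Hypothetical sketch: register a custom language and one custom prompt
INSERT INTO Languages (Language_Id, Language_Desc)
VALUES (5001, 'MyCustomLanguage')

INSERT INTO Language_Data (Language_Id, Prompt_Number, Prompt_String)
VALUES (5001, 500001, 'My Custom Report Heading')
```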


11.11.03 Querying Global and Local Description


In the standard client functionality, the LanguageNumber parameter determines whether the global or local descriptions will be used, but the value of the LanguageNumber itself (i.e. which language is configured for the user) has no effect on the selection. If the site's LanguageNumber parameter is different from the user's LanguageNumber parameter, then the user will see the global descriptions, if there are any. If there are no global descriptions, then the user will see the local descriptions. If the site's LanguageNumber parameter is the same as the user's LanguageNumber parameter, then the user will see the local descriptions. For example,
DECLARE @SiteLanguageId int,
        @UserLanguageId int

SELECT @SiteLanguageId = convert(int, Value)
FROM Site_Parameters
WHERE Parm_Id = 8

SELECT @UserLanguageId = coalesce(convert(int, Value), @SiteLanguageId)
FROM User_Parameters
WHERE Parm_Id = 8
AND User_Id = 1

SELECT CASE WHEN @UserLanguageId <> @SiteLanguageId OR @UserLanguageId IS NULL
            THEN coalesce(Var_Desc_Global, Var_Desc_Local)
            ELSE Var_Desc_Local
       END
FROM Variables
WHERE Var_Id = 50


12.0 Revision History


Date        Version  By
2005-07-12  1.0      Matthew Wells
2005-12-11  1.1      Matthew Wells
2006-02-21  1.2      Matthew Wells
2006-05-04  1.3      Matthew Wells
2006-08-17  1.4      Matthew Wells
2006-08-24  1.5      Matthew Wells
2006-10-12  1.6      Matthew Wells
2007-01-24  1.7      Matthew Wells
2008-04-08  1.9      Donna Lui

Recorded actions include: Added languages; Added inventory and net production; Added NOLOCK, Monitor Blocking and more information on index choices; Added Event_History section.

13.0 References

Dynamic SQL:
http://www.sommarskog.se/dynamic_sql.html
http://www.windowsitpro.com/Articles/Index.cfm?ArticleID=38039&DisplayTab=Article

Use of EXISTS vs COUNT(*):
http://www.sqlservercentral.com/columnists/WFillis/2764.asp

SET vs SELECT:
http://vyaskn.tripod.com/differences_between_set_and_select.htm

Stored procedure recompiles:
http://www.idisoft.com/products/sample/sprecompile/sprecompile_overview.htm
http://www.sql-server-performance.com/rd_optimizing_sp_recompiles.asp

NOLOCK:
http://www.sql-server-performance.com

Microsoft SQL Server Books Online
SQL Programmer: Transact-SQL

14.0 Appendix A: Result Sets


There are 20 different result set types:

Type  Description
1     Production Events
2     Variable Test Values
3     Grade Change
5     Downtime Events
6     Alarms
7     Sheet Columns
8     User Defined Events
9     Waste Events
10    Event Details
11    Genealogy Event Components
12    Genealogy Input Events
13    Defect Details
14    Historian Read
15    Production Plan
16    Production Setup
17    Production Plan Starts
18    Production Path Unit Starts
19    Production Statistics
20    Historian Write
50    Output File

The standard stored procedure spServer_CmnShowResultSets provides the current list and basic formats of the result sets available. More detailed information is contained within this Appendix.
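For example, the list can be retrieved directly from the server with a simple call (no parameters are assumed here; check the procedure's definition for optional arguments):

```sql
EXEC spServer_CmnShowResultSets
```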


14.1 Production Events


Order  Field Name          Values/Table Reference
0      Result Set Type     1
1      Result Set Order
2      Transaction Type    1 = Add, 2 = Update, 3 = Delete
3      Event Id            Events.Event_Id
4      Event Number        Events.Event_Num
5      Unit Id             Events.PU_Id, Prod_Units.PU_Id
6      Timestamp           Events.TimeStamp
7      Applied Product     Events.Applied_Product
8      Source Event        Events.Source_Event
9      Event Status        Events.Event_Status
10     Confirmed
11     User Id             Events.User_Id, Users.User_Id
12     Update Type         0 = Pre-Update, 1 = Post-Update
13     Conformance
14     TestPctComplete
15     Start Time          Events.Start_Time
16     Transaction Number
17     Testing Status
18     Comment Id          Events.Comment_Id, Comments.Comment_Id
19     Event Sub Type Id
20     Entry TimeStamp     Events.Entry_On

Genealogy is best handled using the Genealogy Event Components result sets and tables. However, the Source_Event_Id field provides some functionality on its own: if the Source_Event_Id field is filled out and the parent event is deleted, then the child event will also be deleted automatically.

14.1.01 Example
DECLARE @PUId int,
        @EventStatus int

DECLARE @Events TABLE (
    ResultSetType int DEFAULT 1,
    Id int IDENTITY,
    TransType int DEFAULT 1,
    EventId int NULL,
    EventNum varchar(50) NULL,
    PUId int NULL,
    TimeStamp varchar(50) NULL,
    AppliedProduct int NULL,
    SourceEventId int NULL,
    EventStatus int NULL,
    Confirmed int DEFAULT 1,
    UserId int NULL,
    PostUpdate int DEFAULT 0)

SELECT @PUId = PU_Id FROM Prod_Units WHERE PU_Desc = 'MyUnit'

SELECT @EventStatus = ProdStatus_Id FROM Production_Status WHERE ProdStatus_Desc = 'Complete'

INSERT INTO @Events (EventNum, PUId, TimeStamp, EventStatus)
VALUES ('ABC123', @PUId, convert(varchar(50), getdate(), 120), @EventStatus)

-- Output results
SELECT ResultSetType, Id, TransType, EventId, EventNum, PUId, TimeStamp,
       AppliedProduct, SourceEventId, EventStatus, Confirmed, UserId, PostUpdate
FROM @Events
ORDER BY Id ASC


14.2 Variable Values


Order  Field Name          Values/Table Reference
0      Result Set Type     2
1      Variable Id         Tests.Var_Id
2      Production Unit Id  Variables.PU_Id
3      User Id             Tests.User_Id
4      Cancelled           0 = False, 1 = True
5      Value               Tests.Result
6      Timestamp           Tests.Result_On
7      Transaction Type    1 = Add, 2 = Update
8      Update Type         0 = Pre-Update, 1 = Post-Update

The following should be taken into consideration when using the Variable Values result set: there is no delete functionality, so to effectively delete a variable value you must update the value to NULL. Also, if the update type is set to post-update, calculations that depend on the variable value will not be fired.

14.2.01 Example
DECLARE @VarId int,
        @PUId int,
        @VarPrecision int

DECLARE @VariableResults TABLE (
    ResultSetType int DEFAULT 2,
    VarId int NULL,
    PUId int NULL,
    UserId int NULL,
    Cancelled int DEFAULT 0,
    Result varchar(50) NULL,
    ResultOn varchar(50) NULL,
    TransType int DEFAULT 1,
    PostUpdate int DEFAULT 0)

SELECT @VarId = Var_Id,
       @PUId = PU_Id,
       @VarPrecision = Var_Precision
FROM Variables
WHERE Var_Desc = 'MyVariable'

INSERT INTO @VariableResults (VarId, PUId, Result, ResultOn)
VALUES (@VarId,
        @PUId,
        ltrim(str(1.234, 50, @VarPrecision)),
        convert(varchar(50), getdate(), 120))

-- Output results
SELECT ResultSetType, VarId, PUId, UserId, Cancelled, Result, ResultOn, TransType, PostUpdate
FROM @VariableResults

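Building on the example above, the "delete by updating to NULL" behaviour described earlier in this section can be sketched as follows. This is a hypothetical sketch reusing the @VariableResults table variable from the example; TransType 2 (Update) is assumed to be the appropriate transaction type for changing an existing value.

```sql
-- Hypothetical sketch: effectively delete an existing test value
-- by sending a NULL Result for the same variable and timestamp
INSERT INTO @VariableResults (VarId, PUId, Result, ResultOn, TransType)
VALUES (@VarId, @PUId, NULL, convert(varchar(50), getdate(), 120), 2)
```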

14.3 Grade Changes


The Grade Change result set is used for creating, modifying and/or deleting grade changes. There is no End_Time field in this result set, so when it is issued the DBMgr will create the record with either a NULL End_Time (i.e. it is the current grade selection) or it will use the start time of the next grade change as the end time. If the current grade is the same as the product id specified in the result set, then it will be ignored. If the next grade change already exists and is for the same product as that specified in the result set, then the Start_Time of the next grade change will be modified to the one specified in the result set. To update a grade change record with a different product, query the Start_Id out of the Production_Starts table and include it in the result set (along with the other fields) together with a different Prod_Id. As stated above, if you want to move the Start_Time earlier, simply issue the result set with the same product and an earlier start time. To move it later, you must delete and then recreate the grade change record from scratch. To delete a grade change, you must find the product id of the previous grade change and then update the target grade change with the previous grade change's product id. The DBMgr will then delete the target grade change.

Order  Field Name       Values/Table Reference
0      Result Set Type  3
1      Grade Change Id  Production_Starts.Start_Id
2      Unit Id          Production_Starts.PU_Id, Prod_Units.PU_Id
3      Product Id       Products.Prod_Id
4      TimeStamp        Production_Starts.Start_Time
5      Update Type      0 = Pre-Update, 1 = Post-Update
6      User Id          Production_Starts.User_Id, Users.User_Id

14.3.01 Example
DECLARE @PUId int,
        @ProdId int

DECLARE @ProductionStarts TABLE (
    ResultSetType int DEFAULT 3,
    StartId int NULL,
    PUId int NULL,
    ProdId int NULL,
    StartTime varchar(50) NULL,
    PostUpdate int DEFAULT 0)

SELECT @PUId = PU_Id FROM Prod_Units WHERE PU_Desc = 'MyUnit'

SELECT @ProdId = Prod_Id FROM Products WHERE Prod_Code = 'MyProductCode'

INSERT INTO @ProductionStarts (PUId, ProdId, StartTime)
VALUES (@PUId, @ProdId, convert(varchar(50), getdate(), 120))

-- Output results
SELECT ResultSetType, StartId, PUId, ProdId, StartTime, PostUpdate
FROM @ProductionStarts

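The delete procedure described at the start of this section can be sketched as follows. This is a hypothetical sketch building on the example above: the lookup of the previous grade change by Start_Time is an assumption about how the site's Production_Starts data is ordered, and the @StartId resolution is a placeholder.

```sql
-- Hypothetical sketch: delete a grade change by re-issuing it with
-- the previous grade change's product id
DECLARE @StartId int,
        @PrevProdId int

-- The grade change to remove (placeholder lookup)
SELECT @StartId = max(Start_Id)
FROM Production_Starts
WHERE PU_Id = @PUId
AND Prod_Id = @ProdId

-- Product id of the previous grade change on the same unit
SELECT TOP 1 @PrevProdId = Prod_Id
FROM Production_Starts
WHERE PU_Id = @PUId
AND Start_Time < (SELECT Start_Time FROM Production_Starts WHERE Start_Id = @StartId)
ORDER BY Start_Time DESC

-- Re-issue the target grade change with the previous product id;
-- the DBMgr will then delete the target grade change
INSERT INTO @ProductionStarts (StartId, PUId, ProdId, StartTime)
SELECT Start_Id, PU_Id, @PrevProdId, convert(varchar(50), Start_Time, 120)
FROM Production_Starts
WHERE Start_Id = @StartId
```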

14.4 Downtime Events


Order  Field Name         Values/Table Reference
0      Result Set Type    5
1      Unit Id            Timed_Event_Details.PU_Id, Prod_Units.PU_Id
2      Location Id        Timed_Event_Details.Source_PU_Id, Prod_Units.PU_Id
3      Status Id          Timed_Event_Details.TEStatus_Id, Timed_Event_Status.TEStatus_Id
4      Fault Id           Timed_Event_Details.TEFault_Id, Timed_Event_Faults.TEFault_Id
5      Reason 1 Id        Timed_Event_Details.Reason_Level1, Event_Reasons.Reason_Id
6      Reason 2 Id        Timed_Event_Details.Reason_Level2, Event_Reasons.Reason_Id
7      Reason 3 Id        Timed_Event_Details.Reason_Level3, Event_Reasons.Reason_Id
8      Reason 4 Id        Timed_Event_Details.Reason_Level4, Event_Reasons.Reason_Id
9      Production Rate    Timed_Event_Details.Production_Rate
10     Duration           Timed_Event_Details.Duration
11     Transaction Type   1 = Add, 2 = Update, 3 = Delete, 4 = Close
12     Start Time         Timed_Event_Details.Start_Time
13     End Time           Timed_Event_Details.End_Time
14     Downtime Event Id  Timed_Event_Details.TEDet_Id

The Downtime result sets do not have a post-update option, so you shouldn't attempt to manually insert records and then issue a result set. If you do, you risk undoing changes that you've already made. For example, if you open an event, issue the result set and then close the event a second later before the DBMgr has had a chance to process the result set, the DBMgr will end up reopening the event when it does get around to processing the result set.

14.4.01 Example
DECLARE @MachineDown int,
        @PUId int,
        @LocationId int,
        @ReasonId1 int,
        @StartTime datetime

DECLARE @DowntimeEvents TABLE (
    ResultSetType int DEFAULT 5,
    PUId int NULL,
    SourcePUId int NULL,
    StatusId int NULL,
    FaultId int NULL,
    ReasonLevel1 int NULL,
    ReasonLevel2 int NULL,
    ReasonLevel3 int NULL,
    ReasonLevel4 int NULL,
    ProdRate int NULL,
    Duration float NULL,
    TransType int DEFAULT 1,
    StartTime varchar(50) NULL,
    EndTime varchar(50) NULL,
    TEDetId int NULL)

SELECT @PUId = PU_Id FROM Prod_Units WHERE PU_Desc = 'MyUnit'

SELECT @LocationId = PU_Id FROM Prod_Units WHERE PU_Desc = 'MyLocation'

SELECT @ReasonId1 = Event_Reason_Id FROM Event_Reasons WHERE Event_Reason_Name = 'MyReason'

IF @MachineDown = 1
BEGIN
    -- The following opens the downtime event
    INSERT INTO @DowntimeEvents (PUId, SourcePUId, ReasonLevel1, StartTime)
    VALUES (@PUId, @LocationId, @ReasonId1, convert(varchar(50), getdate(), 120))
END
ELSE
BEGIN
    -- Get the current downtime event
    SELECT @StartTime = Start_Time
    FROM Timed_Event_Details
    WHERE PU_Id = @PUId
    AND Start_Time <= getdate()
    AND End_Time IS NULL

    -- The following closes the downtime event
    INSERT INTO @DowntimeEvents (TransType, PUId, StartTime, EndTime)
    VALUES (4, @PUId, @StartTime, convert(varchar(50), getdate(), 120))
END

-- Output results
SELECT ResultSetType, PUId, SourcePUId, StatusId, FaultId, ReasonLevel1, ReasonLevel2,
       ReasonLevel3, ReasonLevel4, ProdRate, Duration, TransType, StartTime, EndTime, TEDetId
FROM @DowntimeEvents


14.5 Alarms
The Alarm result set is used for notifying clients about alarms. The result set alone will not create the alarm, so the alarm record has to be created manually in the table before the result set is issued. Furthermore, the alarm has to be started and then ended separately for the alarm result set to work (i.e. the alarm must be opened by issuing a result set with a NULL End Time and then closed by issuing a result set with the End Time filled out).

Order  Field Name                    Values/Table Reference
0      Result Set Type               6
1      Update Type                   0 = Pre-Update, 1 = Post-Update
2      Transaction Number
3      Alarm Id                      Alarms.Alarm_Id
4      Alarm Template Data Id        Alarm_Template_Var_Data.ATD_Id
5      Start Time                    Alarms.Start_Time
6      End Time                      Alarms.End_Time
7      Duration                      Alarms.Duration
8      Acknowledged                  Alarms.Ack
9      Acknowledged Timestamp        Alarms.Ack_On
10     Acknowledged By               Alarms.Ack_By
11     Starting Value                Alarms.Start_Result
12     Ending Value                  Alarms.End_Result
13     Minimum Value                 Alarms.Min_Result
14     Maximum Value                 Alarms.Max_Result
15     Cause 1                       Alarms.Cause1, Event_Reasons.Reason_Id
16     Cause 2                       Alarms.Cause2, Event_Reasons.Reason_Id
17     Cause 3                       Alarms.Cause3, Event_Reasons.Reason_Id
18     Cause 4                       Alarms.Cause4, Event_Reasons.Reason_Id
19     Cause Comment Id              Alarms.Cause_Comment_Id, Comments.Comment_Id
20     Action 1                      Alarms.Action1, Event_Reasons.Reason_Id
21     Action 2                      Alarms.Action2, Event_Reasons.Reason_Id
22     Action 3                      Alarms.Action3, Event_Reasons.Reason_Id
23     Action 4                      Alarms.Action4, Event_Reasons.Reason_Id
24     Action Comment Id             Alarms.Action_Comment_Id, Comments.Comment_Id
25     Research User Id              Alarms.Research_User_Id, Users.User_Id
26     Research Status Id            Alarms.Research_Status_Id
27     Research Open Date            Alarms.Research_Open_Date
28     Research Close Date           Alarms.Research_Close_Date
29     Research Comment Id           Alarms.Research_Comment_Id, Comments.Comment_Id
30     Source PU Id                  Alarms.Source_PU_Id, Prod_Units.PU_Id
31     Alarm Type Id                 Alarms.Alarm_Type_Id, Alarm_Types.Alarm_Type_Id
32     Key Id                        Variables.Var_Id, Alarm_Template_Var_Data.Var_Id
33     Alarm Description             Alarms.Alarm_Desc
34     Transaction Type              1 = Add, 2 = Update, 3 = Delete
35     Template Variable Comment Id  Alarm_Templates.Comment_Id, Comments.Comment_Id
36     Alarm Priority Id             Alarm_Templates.AP_Id, Alarm_Priorities.AP_Id
37     Alarm Template Id             Alarm_Templates.AT_Id
38     Variable Comment Id           Variables.Comment_Id, Comments.Comment_Id
39     Cutoff                        Alarms.Cutoff

14.5.01 Example
DECLARE @TimeStamp datetime,
        @AlarmId int,
        @ATDId int,
        @AlarmTypeId int,
        @ATId int,
        @ATDesc varchar(50),
        @VarId int,
        @VarDesc varchar(50),
        @AlarmCount int,
        @PUId int,            -- additional variables referenced later in this example
        @Message varchar(50),
        @UserId int

DECLARE @Alarms TABLE (
    ResultSetType int DEFAULT 6,
    PreUpdate int DEFAULT 0,
    TransNum int DEFAULT 0,
    AlarmId int NULL,
    ATDId int NULL,
    StartTime varchar(50) NULL,
    EndTime varchar(50) NULL,
    Duration float NULL,
    Ack bit DEFAULT 0,
    AckOn varchar(50) NULL,
    AckBy int NULL,
    StartResult varchar(50) NULL,
    EndResult varchar(50) NULL,
    MinResult varchar(50) NULL,
    MaxResult varchar(50) NULL,
    Cause1 int NULL,
    Cause2 int NULL,
    Cause3 int NULL,
    Cause4 int NULL,
    CauseCommentId int NULL,
    Action1 int NULL,
    Action2 int NULL,
    Action3 int NULL,
    Action4 int NULL,
    ActionCommentId int NULL,
    ResearchUserId int NULL,
    ResearchStatusId int NULL,
    ResearchOpenDate varchar(50) NULL,
    ResearchCloseDate varchar(50) NULL,
    ResearchCommentId int NULL,
    SourcePUId int NULL,
    AlarmTypeId int NULL,
    KeyId int NULL,
    AlarmDesc char(50),
    TransType int NULL,
    TemplateVariableCommentId int NULL,
    APId int NULL,
    ATId int NULL,
    VarCommentId int NULL,
    Cutoff tinyint NULL)

SELECT @ATId = AT_Id,
       @AlarmTypeId = Alarm_Type_Id
FROM Alarm_Templates
WHERE AT_Desc = @ATDesc

SELECT @VarId = Var_Id FROM Variables WHERE PU_Id = @PUId AND Var_Desc = @VarDesc SELECT @ATDId = ATD_Id FROM Alarm_Template_Var_Data WHERE Var_Id = @VarId AND AT_Id = @ATId IF @VarId IS NOT NULL AND @ATId IS NOT NULL AND @ATDId IS NOT NULL BEGIN SELECT @AlarmCount = count(Alarm_Id) + 1 FROM Alarms WHERE ATD_Id = @ATDId AND Key_Id = @VarId AND Start_Time = @TimeStamp INSERT Alarms ( ATD_Id, Start_Time, Start_Result, Alarm_Type_Id, Key_Id, Alarm_Desc, User_Id )

VALUES (

@ATDId, @TimeStamp, @AlarmCount, @AlarmTypeId, @VarId, @Message, @UserId) SELECT @AlarmId = @@Identity INSERT @Alarms ( PreUpdate, TransNum, AlarmId, ATDId, StartTime, EndTime, Duration, Ack, AckOn, AckBy, StartResult, EndResult,


MinResult, MaxResult, Cause1, Cause2, Cause3, Cause4, CauseCommentId, Action1, Action2, Action3, Action4, ActionCommentId, ResearchUserId, ResearchStatusId, ResearchOpenDate, ResearchCloseDate, ResearchCommentId, SourcePUId, AlarmTypeId, KeyId, AlarmDesc, TransType, TemplateVariableCommentId, APId, ATId, VarCommentId, Cutoff) SELECT 0, 0, a.Alarm_Id, a.ATD_Id, a.Start_Time, a.End_Time, a.Duration, a.Ack, a.Ack_On, a.Ack_By, a.Start_Result, a.End_Result, a.Min_Result, a.Max_Result, a.Cause1, a.Cause2, a.Cause3, a.Cause4, a.Cause_Comment_Id, a.Action1, a.Action2, a.Action3, a.Action4, a.Action_Comment_Id, a.Research_User_Id, a.Research_Status_Id, a.Research_Open_Date, a.Research_Close_Date, a.Research_Comment_Id, a.Source_PU_Id, a.Alarm_Type_Id, a.Key_Id, a.Alarm_Desc, 1, d.Comment_Id, t.AP_Id, d.AT_Id, v.Comment_Id, 0 FROM Alarms a INNER JOIN Variables v ON a.Key_Id = v.Var_Id


INNER JOIN Alarm_Template_Var_Data d ON a.ATD_Id = d.ATD_Id INNER JOIN Alarm_Templates t ON d.AT_Id = t.AT_Id WHERE a.Alarm_Id = @AlarmId UPDATE Alarms SET End_Time = dateadd(minute, 1, @TimeStamp) WHERE Alarm_Id = @AlarmId INSERT @Alarms ( PreUpdate, TransNum, AlarmId, ATDId, StartTime, EndTime, Duration, Ack, AckOn, AckBy, StartResult, EndResult, MinResult, MaxResult, Cause1, Cause2, Cause3, Cause4, CauseCommentId, Action1, Action2, Action3, Action4, ActionCommentId, ResearchUserId, ResearchStatusId, ResearchOpenDate, ResearchCloseDate, ResearchCommentId, SourcePUId, AlarmTypeId, KeyId, AlarmDesc, TransType, TemplateVariableCommentId, APId, ATId, VarCommentId, Cutoff)

SELECT

0, 0, a.Alarm_Id, a.ATD_Id, a.Start_Time, a.End_Time, a.Duration, a.Ack, a.Ack_On, a.Ack_By, a.Start_Result, a.End_Result, a.Min_Result, a.Max_Result, a.Cause1, a.Cause2, a.Cause3, a.Cause4, a.Cause_Comment_Id, a.Action1, a.Action2,


a.Action3, a.Action4, a.Action_Comment_Id, a.Research_User_Id, a.Research_Status_Id, a.Research_Open_Date, a.Research_Close_Date, a.Research_Comment_Id, a.Source_PU_Id, a.Alarm_Type_Id, a.Key_Id, a.Alarm_Desc, 2, d.Comment_Id, t.AP_Id, d.AT_Id, v.Comment_Id, 0 FROM Alarms a INNER JOIN Variables v ON a.Key_Id = v.Var_Id INNER JOIN Alarm_Template_Var_Data d ON a.ATD_Id = d.ATD_Id INNER JOIN Alarm_Templates t ON d.AT_Id = t.AT_Id WHERE a.Alarm_Id = @AlarmId SELECT ResultSetType, PreUpdate, TransNum, AlarmId, ATDId, StartTime, EndTime, Duration, Ack, AckOn, AckBy, StartResult, EndResult, MinResult, MaxResult, Cause1, Cause2, Cause3, Cause4, CauseCommentId, Action1, Action2, Action3, Action4, ActionCommentId, ResearchUserId, ResearchStatusId, ResearchOpenDate, ResearchCloseDate, ResearchCommentId, SourcePUId, AlarmTypeId, KeyId, AlarmDesc, TransType, TemplateVariableCommentId, APId, ATId, VarCommentId, Cutoff FROM @Alarms END


14.6 Sheet Columns


Order  Field Name        Values/Table Reference
0      Result Set Type   7
1      Sheet Id          Sheet_Columns.Sheet_Id, Sheets.Sheet_Id
2      User Id           Users.User_Id
3      Transaction Type  1 = Add, 2 = Update, 3 = Delete
4      Timestamp         Sheet_Columns.Result_On
5      Update Type       0 = Pre-Update, 1 = Post-Update

14.6.01 Example
DECLARE @SheetId int

DECLARE @SheetColumns TABLE (
    ResultSetType int DEFAULT 7,
    SheetId int NULL,
    UserId int NULL,
    TransType int DEFAULT 1,
    TimeStamp varchar(50) NULL,
    PostUpdate int DEFAULT 0)

SELECT @SheetId = Sheet_Id FROM Sheets WHERE Sheet_Desc = 'MySheet'

INSERT INTO @SheetColumns (SheetId, TimeStamp)
VALUES (@SheetId, convert(varchar(50), getdate(), 120))

-- Output results
SELECT ResultSetType, SheetId, UserId, TransType, TimeStamp, PostUpdate
FROM @SheetColumns


14.7 User Defined Events


Order  Field Name                      Values/Table Reference
0      Result Set Type                 8
1      Update Type                     0 = Pre-Update, 1 = Post-Update
2      User Defined Event Id           User_Defined_Events.UDE_Id
3      User Defined Event Number       User_Defined_Events.UDE_Desc
4      Unit Id                         User_Defined_Events.PU_Id, Prod_Units.PU_Id
5      Event Subtype Id                User_Defined_Events.Event_Subtype_Id
6      Start Time                      User_Defined_Events.Start_Time
7      End Time                        User_Defined_Events.End_Time
8      Duration                        User_Defined_Events.Duration
9      Acknowledged                    User_Defined_Events.Ack
10     Acknowledged Timestamp          User_Defined_Events.Ack_On
11     Acknowledged By                 User_Defined_Events.Ack_By
12     Cause 1                         User_Defined_Events.Cause1, Event_Reasons.Reason_Id
13     Cause 2                         User_Defined_Events.Cause2, Event_Reasons.Reason_Id
14     Cause 3                         User_Defined_Events.Cause3, Event_Reasons.Reason_Id
15     Cause 4                         User_Defined_Events.Cause4, Event_Reasons.Reason_Id
16     Cause Comment Id                User_Defined_Events.Cause_Comment_Id, Comments.Comment_Id
17     Action 1                        User_Defined_Events.Action1, Event_Reasons.Reason_Id
18     Action 2                        User_Defined_Events.Action2, Event_Reasons.Reason_Id
19     Action 3                        User_Defined_Events.Action3, Event_Reasons.Reason_Id
20     Action 4                        User_Defined_Events.Action4, Event_Reasons.Reason_Id
21     Action Comment Id               User_Defined_Events.Action_Comment_Id, Comments.Comment_Id
22     Research User Id                User_Defined_Events.Research_User_Id, Users.User_Id
23     Research Status Id              User_Defined_Events.Research_Status_Id
24     Research Open Date              User_Defined_Events.Research_Open_Date
25     Research Close Date             User_Defined_Events.Research_Close_Date
26     Research Comment Id             User_Defined_Events.Research_Comment_Id, Comments.Comment_Id
27     User Defined Event Comment Id   User_Defined_Events.Comment_Id
28     Transaction Type                1 = Add, 2 = Update, 3 = Delete
29     Event Sub Type Description      Event_Subtypes.Event_Subtype_Desc
30     Transaction Number
31     User Id                         User_Defined_Events.User_Id, Users.User_Id

14.7.01 Example
DECLARE @UserDefinedEvents TABLE (
    ResultSetType int DEFAULT 8,
    PreUpdate int DEFAULT 1,
    UDEId int NULL,
    UDEDesc varchar(50) NULL,
    PUId int NULL,
    EventSubTypeId int NULL,
    StartTime datetime NULL,
    EndTime datetime NULL,
    Duration int NULL,
    Ack int DEFAULT 0,
    AckOn datetime NULL,
    AckBy int NULL,
    Cause1 int NULL,
    Cause2 int NULL,
    Cause3 int NULL,
    Cause4 int NULL,
    CauseCommentId int NULL,
    Action1 int NULL,
    Action2 int NULL,
    Action3 int NULL,
    Action4 int NULL,
    ActionCommentId int NULL,
    ResearchUserId int NULL,
    ResearchStatusId int NULL,
    ResearchOpenDate datetime NULL,
    ResearchCloseDate datetime NULL,
    ResearchCommentId int NULL,
    CommentId int NULL,
    TransType int DEFAULT 1,
    EventSubTypeDesc varchar(50) NULL,
    TransNum int DEFAULT 0,
    UserId int NULL)

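The declaration above only defines the result-set shape; a minimal usage sketch, following the pattern of the other examples in this Appendix, is shown below. The unit and subtype lookups ('MyUnit', 'MySubtype') are placeholder assumptions about the site's configuration.

```sql
DECLARE @PUId int,
        @EventSubTypeId int

SELECT @PUId = PU_Id FROM Prod_Units WHERE PU_Desc = 'MyUnit'

SELECT @EventSubTypeId = Event_Subtype_Id FROM Event_Subtypes WHERE Event_Subtype_Desc = 'MySubtype'

INSERT INTO @UserDefinedEvents (UDEDesc, PUId, EventSubTypeId, StartTime)
VALUES ('MyUDE', @PUId, @EventSubTypeId, getdate())

-- Output results
SELECT ResultSetType, PreUpdate, UDEId, UDEDesc, PUId, EventSubTypeId,
       StartTime, EndTime, TransType, TransNum, UserId
FROM @UserDefinedEvents
```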

14.8 Waste Event


Order  Field Name              Values/Table Reference
0      Result Set Type         9
1      Update Type             0 = Pre-Update, 1 = Post-Update
2      Transaction Number
3      User Id                 Waste_Event_Details.User_Id, Users.User_Id
4      Transaction Type        1 = Add, 2 = Update, 3 = Delete
5      Waste Event Id          Waste_Event_Details.WED_Id
6      PU_Id                   Waste_Event_Details.PU_Id, Prod_Units.PU_Id
7      Location                Waste_Event_Details.Source_PU_Id, Prod_Units.PU_Id
8      Type Id                 Waste_Event_Details.WET_Id, Waste_Event_Types.WET_Id
9      Measure Id              Waste_Event_Details.WEMT_Id, Waste_Event_Meas.WEMT_Id
10     Reason 1                Waste_Event_Details.Reason1, Event_Reasons.Reason_Id
11     Reason 2                Waste_Event_Details.Reason2, Event_Reasons.Reason_Id
12     Reason 3                Waste_Event_Details.Reason3, Event_Reasons.Reason_Id
13     Reason 4                Waste_Event_Details.Reason4, Event_Reasons.Reason_Id
14     Event Id                Waste_Event_Details.Event_Id, Events.Event_Id
15     Amount                  Waste_Event_Details.Amount
16     Marker 1                Waste_Event_Details.Marker1
17     Marker 2                Waste_Event_Details.Marker2
18     Timestamp               Waste_Event_Details.TimeStamp
19     Action 1                Waste_Event_Details.Action1, Event_Reasons.Reason_Id
20     Action 2                Waste_Event_Details.Action2, Event_Reasons.Reason_Id
21     Action 3                Waste_Event_Details.Action3, Event_Reasons.Reason_Id
22     Action 4                Waste_Event_Details.Action4, Event_Reasons.Reason_Id
23     Action Comment Id       Waste_Event_Details.Action_Comment_Id, Comments.Comment_Id
24     Research Comment Id     Waste_Event_Details.Research_Comment_Id, Comments.Comment_Id
25     Research Status Id      Waste_Event_Details.Research_Status_Id
26     Research Open Date      Waste_Event_Details.Research_Open_Date
27     Research Close Date     Waste_Event_Details.Research_Close_Date
28     Waste Event Comment Id  Waste_n_Timed_Comments.WTC_Id
29     TargetProdRate          Waste_Event_Details.Target_Prod_Rate
30     Research User Id        Waste_Event_Details.Research_User_Id, Users.User_Id

If the Transaction Number is set to 0, a value of 0 returned for any dimension fields will be set to NULL in the database. If the Transaction Number is set to 2, a value of 0 returned for any dimension fields will be stored as 0 in the database.

14.8.01 Example
DECLARE @WasteEvents TABLE (
    ResultSetType int DEFAULT 9,
    PreUpdate int DEFAULT 1,
    TransNum int DEFAULT 0,
    UserId int NULL,
    TransType int DEFAULT 1,
    WEDId int NULL,
    PUId int NULL,
    SourcePUId int NULL,
    WETId int NULL,
    WEMTId int NULL,
    Cause1 int NULL,
    Cause2 int NULL,
    Cause3 int NULL,
    Cause4 int NULL,
    EventId int NULL,
    Amount float NULL,
    Marker1 float NULL,
    Marker2 float NULL,
    TimeStamp datetime NULL,
    Action1 int NULL,
    Action2 int NULL,
    Action3 int NULL,
    Action4 int NULL,
    ActionCommentId int NULL,
    ResearchCommentId int NULL,
    ResearchStatusId int NULL,
    ResearchOpenDate datetime NULL,
    ResearchCloseDate datetime NULL,
    CommentId int NULL,
    TargetProdRate float NULL,
    ResearchUserId int NULL)


14.9 Production Event Details


Order  Field Name                       Values/Table Reference
0      Result Set Type                  10
1      Update Type                      0 = Pre-Update, 1 = Post-Update
2      User Id                          Event_Details.User_Id, Users.User_Id
3      Transaction Type                 1 = Add, 2 = Update, 3 = Delete
4      Transaction Number
5      Event Id                         Event_Details.Event_Id, Events.Event_Id
6      Unit Id                          Event_Details.PU_Id, Prod_Units.PU_Id
7      Primary Event Number             Event_Details.Primary_Event_Num
8      Alternate Event Number           Event_Details.Alternate_Event_Num
9      Comment Id                       Event_Details.Comment_Id, Comments.Comment_Id
10     Event Sub Type Id
11     Original Product                 Event_Details.Original_Product, Products.Prod_Id
12     Applied Product                  Event_Details.Applied_Product, Products.Prod_Id
13     Event Status                     Event_Details.Event_Status
14     Timestamp                        Event_Details.TimeStamp
15     Entry On                         Event_Details.Entry_On
16     Production Plan Setup Detail Id  Event_Details.PP_Setup_Detail_Id
17     Shipment Item Id                 Event_Details.Shipment_Item_Id
18     Order Id                         Event_Details.Order_Id
19     Order Line Id                    Event_Details.Order_Line_Id
20     Production Plan Id               Event_Details.PP_Id
21     Initial Dimension X              Event_Details.Initial_Dimension_X
22     Initial Dimension Y              Event_Details.Initial_Dimension_Y
23     Initial Dimension Z              Event_Details.Initial_Dimension_Z
24     Initial Dimension A              Event_Details.Initial_Dimension_A
25     Final Dimension X                Event_Details.Final_Dimension_X
26     Final Dimension Y                Event_Details.Final_Dimension_Y
27     Final Dimension Z                Event_Details.Final_Dimension_Z
28     Final Dimension A                Event_Details.Final_Dimension_A
29     Orientation X                    Event_Details.Orientation_X
30     Orientation Y                    Event_Details.Orientation_Y
31     Orientation Z                    Event_Details.Orientation_Z

If the Transaction Number is set to 0, a value of 0 returned for any dimension fields will be set to NULL in the database. If the Transaction Number is set to 2, a value of 0 returned for any dimension fields will be stored as 0 in the database.

GE Fanuc Automation

SQL Programming Guidelines

Page 106 of 120

14.9.01 Example
DECLARE @EventDetails TABLE (
    ResultSetType int DEFAULT 10,
    PostUpdate int DEFAULT 1,
    UserId int DEFAULT 1,
    TransType int DEFAULT 1,
    TransNum int NULL,
    EventId int NULL,
    PUId int NULL,
    PrimaryEventNum varchar(25) NULL,
    AlternateEventNum varchar(25) NULL,
    CommentId int NULL,
    EventType int NULL,
    OriginalProduct int NULL,
    AppliedProduct int NULL,
    EventStatus int NULL,
    TimeStamp datetime NULL,
    EnteredOn datetime NULL,
    PPSetupDetailId int NULL,
    ShipmentItemId int NULL,
    OrderId int NULL,
    OrderLineId int NULL,
    PPId int NULL,
    InitialDimensionX float NULL,
    InitialDimensionY float NULL,
    InitialDimensionZ float NULL,
    InitialDimensionA float NULL,
    FinalDimensionX float NULL,
    FinalDimensionY float NULL,
    FinalDimensionZ float NULL,
    FinalDimensionA float NULL,
    OrientationX tinyint NULL,
    OrientationY tinyint NULL,
    OrientationZ tinyint NULL)


14.10 Genealogy Event Components


Order  Field Name          Values/Table Reference
0      Result Set Type     11
1      Update Type         0 = Pre-Update, 1 = Post-Update
2      User Id             Event_Components.User_Id, Users.User_Id
3      Transaction Type    1 = Add, 2 = Update, 3 = Delete
4      Transaction Number
5      Component Id        Event_Components.Component_Id
6      Event Id            Event_Components.Event_Id, Events.Event_Id
7      Source Event Id     Event_Components.Source_Event_Id, Events.Event_Id
8      Dimension X         Event_Components.Dimension_X
9      Dimension Y         Event_Components.Dimension_Y
10     Dimension Z         Event_Components.Dimension_Z
11     Dimension A         Event_Components.Dimension_A

If the Transaction Number is set to 0, a value of 0 returned for any dimension fields will be set to NULL in the database. If the Transaction Number is set to 2, a value of 0 returned for any dimension fields will be stored as 0 in the database.

14.10.01 Example
DECLARE @EventComponents TABLE (
    ResultSetType   int DEFAULT 11,
    PreUpdate       int DEFAULT 0,
    UserId          int NULL,
    TransType       int DEFAULT 1,
    TransNum        int NULL,
    ComponentId     int NULL,
    EventId         int NULL,
    SourceEventId   int NULL,
    DimensionX      float NULL,
    DimensionY      float NULL,
    DimensionZ      float NULL,
    DimensionA      float NULL)


14.11 Genealogy Input Events


Order  Field Name                          Values/Table Reference
0      Result Set Type                     12
1      Update Type                         0 = Pre-Update, 1 = Post-Update
2      User Id                             PrdExec_Input_Event.User_Id, Users.User_Id
3      Transaction Type                    1 = Add, 2 = Update, 3 = Delete
4      Transaction Number
5      Timestamp                           PrdExec_Input_Event.TimeStamp
6      Entry On                            PrdExec_Input_Event.Entry_On
7      Comment Id                          PrdExec_Input_Event.Comment_Id, Comments.Comment_Id
8      Production Event Input Id           PrdExec_Input_Event.PEI_Id, PrdExec_Inputs.PEI_Id
9      Production Event Input Position Id  PrdExec_Input_Event.PEIP_Id, PrdExec_Input_Positions.PEIP_Id
10     Event Id                            PrdExec_Input_Event.Event_Id, Events.Event_Id
11     Dimension X                         PrdExec_Input_Event.Dimension_X
12     Dimension Y                         PrdExec_Input_Event.Dimension_Y
13     Dimension Z                         PrdExec_Input_Event.Dimension_Z
14     Dimension A                         PrdExec_Input_Event.Dimension_A
15     Unloaded                            PrdExec_Input_Event.Unloaded
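14.11.01 Example
This section does not include an example, so the following table variable is a sketch inferred from the field list above, following the pattern of the 14.9.01 and 14.10.01 examples. The column names, the defaults, and the type of Unloaded are assumptions, not taken from a delivered script.

DECLARE @InputEvents TABLE (
    ResultSetType   int DEFAULT 12,
    PostUpdate      int DEFAULT 1,
    UserId          int NULL,
    TransType       int DEFAULT 1,
    TransNum        int NULL,
    TimeStamp       datetime NULL,
    EntryOn         datetime NULL,
    CommentId       int NULL,
    PEIId           int NULL,
    PEIPId          int NULL,
    EventId         int NULL,
    DimensionX      float NULL,
    DimensionY      float NULL,
    DimensionZ      float NULL,
    DimensionA      float NULL,
    Unloaded        int NULL)  -- type assumed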


14.12 Defects
Order  Field Name           Values/Table Reference
0      Result Set Type      13
1      Update Type          0 = Pre-Update, 1 = Post-Update
2      Transaction Type     1 = Add, 2 = Update, 3 = Delete
3      Transaction Number
4      Defect Detail Id     Defect_Details.Defect_Detail_Id
5      Defect Type Id       Defect_Details.Defect_Type_Id, Defect_Types.Defect_Type_Id
6      Cause 1              Defect_Details.Cause1, Event_Reasons.Reason_Id
7      Cause 2              Defect_Details.Cause2, Event_Reasons.Reason_Id
8      Cause 3              Defect_Details.Cause3, Event_Reasons.Reason_Id
9      Cause 4              Defect_Details.Cause4, Event_Reasons.Reason_Id
10     Cause Comment Id     Defect_Details.Cause_Comment_Id, Comments.Comment_Id
11     Action 1             Defect_Details.Action1, Event_Reasons.Reason_Id
12     Action 2             Defect_Details.Action2, Event_Reasons.Reason_Id
13     Action 3             Defect_Details.Action3, Event_Reasons.Reason_Id
14     Action 4             Defect_Details.Action4, Event_Reasons.Reason_Id
15     Action Comment Id    Defect_Details.Action_Comment_Id, Comments.Comment_Id
16     Research Status Id   Defect_Details.Research_Status_Id
17     Research Comment Id  Defect_Details.Research_Comment_Id, Comments.Comment_Id
18     Research User Id     Defect_Details.Research_User_Id, Users.User_Id
19     Event Id             Defect_Details.Event_Id, Events.Event_Id
20     Source Event Id      Defect_Details.Source_Event_Id, Events.Event_Id
21     Unit Id              Defect_Details.PU_Id, Prod_Units.PU_Id
22     Event Subtype Id     Defect_Details.Event_Subtype_Id, Event_Subtypes.Event_Subtype_Id
23     User Id              Defect_Details.User_Id, Users.User_Id
24     Visual Start X
25     Visual Start Y
26     Severity
27     Repeat
28     Dimension X          Defect_Details.Dimension_X
29     Dimension Y          Defect_Details.Dimension_Y
30     Dimension Z          Defect_Details.Dimension_Z
31     Amount               Defect_Details.Amount
32     Start Position X     Defect_Details.Start_Position_X
33     Start Position Y     Defect_Details.Start_Position_Y
34     End Position X       Defect_Details.End_Position_X
35     End Position Y       Defect_Details.End_Position_Y
36     Research Open Date   Defect_Details.Research_Open_Date
37     Research Close Date  Defect_Details.Research_Close_Date
38     Start Time           Defect_Details.Start_Time
39     End Time             Defect_Details.End_Time
40     Entry On             Defect_Details.Entry_On
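14.12.01 Example
No example is provided for this result set, so the table variable below is a sketch inferred from the field list above, following the pattern of the earlier examples. All column names, defaults and data types are assumptions (ids as int, dimensions and positions as float, dates as datetime); they are not taken from a delivered script.

DECLARE @Defects TABLE (
    ResultSetType       int DEFAULT 13,
    PostUpdate          int DEFAULT 1,  -- assumed, as in 14.9.01
    TransType           int DEFAULT 1,
    TransNum            int NULL,
    DefectDetailId      int NULL,
    DefectTypeId        int NULL,
    Cause1              int NULL,
    Cause2              int NULL,
    Cause3              int NULL,
    Cause4              int NULL,
    CauseCommentId      int NULL,
    Action1             int NULL,
    Action2             int NULL,
    Action3             int NULL,
    Action4             int NULL,
    ActionCommentId     int NULL,
    ResearchStatusId    int NULL,
    ResearchCommentId   int NULL,
    ResearchUserId      int NULL,
    EventId             int NULL,
    SourceEventId       int NULL,
    PUId                int NULL,
    EventSubtypeId      int NULL,
    UserId              int NULL,
    VisualStartX        float NULL,
    VisualStartY        float NULL,
    Severity            int NULL,
    Repeat              int NULL,
    DimensionX          float NULL,
    DimensionY          float NULL,
    DimensionZ          float NULL,
    Amount              float NULL,
    StartPositionX      float NULL,
    StartPositionY      float NULL,
    EndPositionX        float NULL,
    EndPositionY        float NULL,
    ResearchOpenDate    datetime NULL,
    ResearchCloseDate   datetime NULL,
    StartTime           datetime NULL,
    EndTime             datetime NULL,
    EntryOn             datetime NULL)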


14.13 Output File


The Output File result set builds a file one field at a time. When building the file, the result sets for all the fields must be issued all at once as a single recordset. This is best accomplished by building the result sets in a temporary table and then returning all the records at once.

Order  Field Name         Values/Table Reference
0      Result Set Type    50
1      File Number
2      File Name
3      Field Number
4      Field Name
5      Type               Alpha
6      Length
7      Precision
8      Value
9      Carriage Return    0 = No carriage return
                          1 = Carriage return after the field is written
10     Construction Path
11     Final Path
12     Move Mask
13     Add Timestamp      0 = No, 1 = Short, 2 = Full

The File Number is relevant only when building multiple files in a single block of records. A default value of 1 will be sufficient when building a single file; otherwise increment it as necessary. The Field Number must increment sequentially throughout the creation of the entire file regardless of the field's column/row position. The overall formatting of the file is determined by the sequence of the fields and the placement of the Carriage Returns. The Field Name can be set to any value. The Construction Path can't be the same as the Final Path; if they are the same, the file will not be created. The Move Mask must include the File Name or the file will not end up in the Final Path. The Move Mask will move all files that match it, regardless of where they came from (i.e. if Move Mask = *.log, then all files that match *.log in the Construction Path will be moved to the Final Path). A simple way to limit this is to make the Move Mask the same as the File Name. The Add Timestamp option will add a timestamp to the end of the file name extension. If the option is set to 1, a File Name of Output.dat will become Output.dat120102.

14.13.01 Example
DECLARE @FileOutput TABLE (
    ResultSetType   int DEFAULT 50,
    FileNumber      int DEFAULT 1,
    FileName        varchar(255) NULL,
    FieldNumber     int IDENTITY,
    FieldName       varchar(20) DEFAULT '0',
    FieldType       varchar(20) DEFAULT 'Alpha',
    FieldLength     int NULL,
    FieldPrecision  int DEFAULT 1,
    FieldValue      varchar(255) NULL,
    FieldCR         int DEFAULT 0,
    FieldBuildPath  varchar(50) NULL,
    FieldFinalPath  varchar(50) NULL,
    FieldMoveMask   varchar(50) NULL,
    AddTimestamp    int DEFAULT 0)

INSERT INTO @FileOutput (
    FileName,
    FieldLength,
    FieldValue,
    FieldCR,
    FieldBuildPath,
    FieldFinalPath,
    FieldMoveMask)
VALUES (
    'Output.dat',
    255,
    'MyDataFieldValue',
    1,
    'C:\Temp\',
    'C:\Output Directory\',
    'Output.dat')

-- Output results
SELECT *
FROM @FileOutput
ORDER BY FieldNumber ASC
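Because FieldNumber is declared as IDENTITY, each inserted field is numbered sequentially in arrival order. As a sketch of the multi-field case (the field values here are hypothetical), two fields can be written to the same line of the file by setting FieldCR to 1 only on the last field of the line:

INSERT INTO @FileOutput (FileName, FieldLength, FieldValue, FieldCR, FieldBuildPath, FieldFinalPath, FieldMoveMask)
VALUES ('Output.dat', 10, 'FirstField', 0, 'C:\Temp\', 'C:\Output Directory\', 'Output.dat')
-- FieldCR = 0: no carriage return, the line continues

INSERT INTO @FileOutput (FileName, FieldLength, FieldValue, FieldCR, FieldBuildPath, FieldFinalPath, FieldMoveMask)
VALUES ('Output.dat', 10, 'SecondField', 1, 'C:\Temp\', 'C:\Output Directory\', 'Output.dat')
-- FieldCR = 1: carriage return written after this field ends the line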


15.0 Appendix B: Defragmenting Indexes


The following SQL code is an example of a manual maintenance utility for defragmenting table indexes.
/*
Author:        Matthew Wells (GE)
Date Created:  2006/02/21

Description:
=========
This routine checks the level of index fragmentation, then defragments the
indexes and then reindexes them.

Change Date    Who    What
===========    ====   =====
*/

DECLARE @IndexList TABLE (
    RowId     int IDENTITY PRIMARY KEY,
    ObjectId  int,
    IndexId   int)

CREATE TABLE #Indexes (
    RowId                 int IDENTITY PRIMARY KEY,
    ObjectName            varchar(128),
    ObjectId              int,
    IndexName             varchar(128),
    IndexId               int,
    Level                 int,
    Pages                 int,
    Rows                  int,
    MinimumRecordSize     int,
    MaximumRecordSize     int,
    AverageRecordSize     int,
    ForwardedRecords      int,
    Extents               int,
    ExtentSwitches        int,
    AverageFreeBytes      real,
    AveragePageDensity    real,
    ScanDensity           real,
    BestCount             int,
    ActualCount           int,
    LogicalFragmentation  real,
    ExtentFragmentation   real)

DECLARE
    @ObjectName            varchar(128),
    @ObjectId              int,
    @IndexName             varchar(128),
    @IndexId               int,
    @Debug                 int,
    @Defragment            int,
    @Reindex               int,
    @Start_Time            datetime,
    @End_Time              datetime,
    @Time                  real,
    @Total_Start_Time      datetime,
    @Total_End_Time        datetime,
    @Total_Time            real,
    @Query_Time            real,
    @Defrag_Time           real,
    @Reindex_Time          real,
    @Rows                  int,
    @Row                   int,
    @ScanDensity           float,
    @LogicalFragmentation  float,
    @DBCCCOMMAND           varchar(25),
    @DBCCOPTIONS           varchar(25),
    @SCANDENSITYLIMIT      float,
    @FRAGMENTATIONLIMIT    float

----------------------------------------------------------------------
-- Initialization
----------------------------------------------------------------------
-- Constants
SELECT  @DBCCCOMMAND = 'DBCC SHOWCONTIG (',
        @DBCCOPTIONS = ') WITH TABLERESULTS',
        @SCANDENSITYLIMIT = 90.0,
        @FRAGMENTATIONLIMIT = 10.0

-- Parameters
SELECT  @Defrag_Time = 0,
        @Reindex_Time = 0,
        @Total_Start_Time = getdate(),
        @Debug = 1,
        @Defragment = 0,  -- SET THIS TO 1 TO DEFRAGMENT INDEXES
        @Reindex = 0      -- SET THIS TO 1 TO REBUILD INDEXES

----------------------------------------------------------------------
-- Get Fragmented Indexes
----------------------------------------------------------------------
SELECT @Start_Time = getdate()

IF @Debug = 1
BEGIN
    PRINT 'Querying Indexes...'
END

INSERT INTO @IndexList (ObjectId, IndexId)
SELECT  si.id,
        si.IndID
FROM sysindexes si
JOIN sysobjects so ON so.name = si.name
    AND si.id = so.Parent_obj
    AND (   so.xtype = 'PK'
         OR so.xtype = 'UQ')
WHERE si.IndID > 0

SELECT  @Rows = @@ROWCOUNT,
        @Row = 0

WHILE @Row < @Rows
BEGIN
    SELECT @Row = @Row + 1

    SELECT  @ObjectId = ObjectId,
            @IndexId = IndexId
    FROM @IndexList
    WHERE RowId = @Row

    INSERT #Indexes (ObjectName, ObjectId, IndexName, IndexId, Level, Pages,
        Rows, MinimumRecordSize, MaximumRecordSize, AverageRecordSize,
        ForwardedRecords, Extents, ExtentSwitches, AverageFreeBytes,
        AveragePageDensity, ScanDensity, BestCount, ActualCount,
        LogicalFragmentation, ExtentFragmentation)
    EXEC (@DBCCCOMMAND + @ObjectId + ',' + @IndexId + @DBCCOPTIONS)
END

SELECT @Query_Time = convert(real, datediff(s, @Start_Time, getdate()))/60.0

IF @Debug = 1
BEGIN
    SELECT  ObjectName,
            IndexName,
            IndexId,
            ScanDensity,
            LogicalFragmentation,
            ExtentFragmentation
    FROM #Indexes
    WHERE   ScanDensity < @SCANDENSITYLIMIT
         OR LogicalFragmentation > @FRAGMENTATIONLIMIT
    ORDER BY ObjectName ASC, IndexName ASC

    PRINT 'Queried Indexes in ' + ltrim(str(@Query_Time, 25, 2)) + ' min'
END

----------------------------------------------------------------------
-- Defragment Indexes
----------------------------------------------------------------------
IF @Defragment = 1
BEGIN
    SELECT @Row = 0

    WHILE @Row < @Rows
    BEGIN
        SELECT @Row = @Row + 1

        SELECT  @ObjectName = ObjectName,
                @ObjectId = ObjectId,
                @IndexName = IndexName,
                @IndexId = IndexId,
                @ScanDensity = ScanDensity,
                @LogicalFragmentation = LogicalFragmentation
        FROM #Indexes
        WHERE RowId = @Row

        IF  @ScanDensity < @SCANDENSITYLIMIT
         OR @LogicalFragmentation > @FRAGMENTATIONLIMIT
        BEGIN
            IF @Debug = 1
            BEGIN
                PRINT 'Defragmenting ' + @ObjectName + '.' + @IndexName
            END

            SELECT @Start_Time = getdate()

            IF @Debug = 1
            BEGIN
                DBCC INDEXDEFRAG (0, @ObjectId, @IndexId)
            END
            ELSE
            BEGIN
                DBCC INDEXDEFRAG (0, @ObjectId, @IndexId) WITH NO_INFOMSGS
            END

            SELECT @Time = convert(real, datediff(s, @Start_Time, getdate()))/60.0
            SELECT @Defrag_Time = @Defrag_Time + @Time

            IF @Debug = 1
            BEGIN
                PRINT 'Defragmented ' + @ObjectName + '.' + @IndexName + ' in ' + ltrim(str(@Time, 25, 2)) + ' min'
            END
        END
    END
END

----------------------------------------------------------------------
-- Rebuild Indexes
----------------------------------------------------------------------
IF @Reindex = 1
BEGIN
    SELECT @Row = 0

    WHILE @Row < @Rows
    BEGIN
        SELECT @Row = @Row + 1

        SELECT  @ObjectName = ObjectName,
                @IndexName = IndexName,
                @ScanDensity = ScanDensity,
                @LogicalFragmentation = LogicalFragmentation
        FROM #Indexes
        WHERE RowId = @Row

        IF  @ScanDensity < @SCANDENSITYLIMIT
         OR @LogicalFragmentation > @FRAGMENTATIONLIMIT
        BEGIN
            IF @Debug = 1
            BEGIN
                PRINT 'Reindexing ' + @ObjectName + '.' + @IndexName
            END

            SELECT @Start_Time = getdate()

            IF @Debug = 1
            BEGIN
                DBCC DBREINDEX (@ObjectName, @IndexName)
            END
            ELSE
            BEGIN
                DBCC DBREINDEX (@ObjectName, @IndexName) WITH NO_INFOMSGS
            END

            SELECT @Time = convert(real, datediff(s, @Start_Time, getdate()))/60.0
            SELECT @Reindex_Time = @Reindex_Time + @Time

            IF @Debug = 1
            BEGIN
                PRINT 'Reindexed ' + @ObjectName + '.' + @IndexName + ' in ' + ltrim(str(@Time, 25, 2)) + ' min'
            END
        END
    END
END

----------------------------------------------------------------------
-- End Game
----------------------------------------------------------------------
SELECT @Total_Time = convert(real, datediff(s, @Total_Start_Time, getdate()))/60.0

IF @Debug = 1
BEGIN
    PRINT 'Finished!'
    PRINT 'Query Time=' + ltrim(str(@Query_Time, 25, 2)) + ' min'
    PRINT 'Defrag Time=' + ltrim(str(@Defrag_Time, 25, 2)) + ' min'
    PRINT 'Reindex Time=' + ltrim(str(@Reindex_Time, 25, 2)) + ' min'
END

DROP TABLE #Indexes


16.0 Appendix C: Monitor Blocking/Parallelism


Monitor Blocking 20061101.sql

The embedded SQL code installs a SQL Server job that can be used to monitor blocking and parallelism issues. This job can only be installed on SQL Server 2000 Service Pack 3 or greater. The job runs on a configurable one-minute frequency and checks the master..sysprocesses table for blocking issues. If blocking is found, it records the blocking process, all blocked victim processes and any processes that are currently running queries with parallelism. It can also optionally record the associated locks, but it is not recommended to enable that option and leave the job unattended. The following four tables are created and populated by the job:
Table Name                   Description
Local_Blocking_Log           List of the blocking processes.
Local_Blocking_Victims       List of the blocking victim processes.
Local_Blocking_Parallelism   List of processes with multiple threads at the time of the blocking.
Local_Blocking_Locks         List of the blocking process locks.

The following queries are examples of how to look at the data in these tables:
SELECT TOP 10
    Start_Time,
    Duration = datediff(s, start_time, end_Time),
    BlockingSPID = bl.spid,
    BlockingProgram = bl.program_name,
    BlockingObject = so.name,
    BlockingInputBuffer = bl.Event_Info,
    BlockingText = bl.Text,
    BlockingEncrypted = bl.Encrypted,
    VictimSPID = bv.spid,
    VictimProgram = bv.program_name,
    VictimObject = vso.name,
    VictimInputBuffer = bv.Event_Info,
    VictimText = bv.Text,
    VictimEncrypted = bv.Encrypted
FROM dbo.local_blocking_log bl WITH (NOLOCK)
LEFT JOIN sysobjects so WITH (NOLOCK) ON bl.Object_id = so.id
LEFT JOIN dbo.local_blocking_victims bv WITH (NOLOCK) ON bl.bl_id = bv.bl_id
LEFT JOIN sysobjects vso WITH (NOLOCK) ON bv.Object_id = vso.id
ORDER BY bl.BL_Id DESC

SELECT TOP 10
    TimeStamp,
    SPID,
    Program_Name,
    Host_Name,
    Text,
    Encrypted
FROM dbo.Local_Blocking_Parallelism bp WITH (NOLOCK)
LEFT JOIN sysobjects so WITH (NOLOCK) ON bp.Object_id = so.id
ORDER BY bp.Timestamp DESC, bp.ECId ASC


The job utilizes the fn_get_sql() function to retrieve the currently running text of the processes (stored in the Text field in all the tables). This usually points to a particular query, which can then be addressed to resolve the blocking.
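As a sketch of the kind of check the job performs (this mirrors the described logic against the SQL Server 2000 system tables; it is not the job's actual source), blocked processes can be found directly in master..sysprocesses:

SELECT
    sp.spid,          -- the waiting (victim) process
    sp.blocked,       -- the spid it is waiting on
    sp.waittime,      -- milliseconds spent waiting
    sp.program_name
FROM master..sysprocesses sp WITH (NOLOCK)
WHERE sp.blocked <> 0
  AND sp.blocked <> sp.spid  -- ignore spids reported as blocking themselves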
