Wonderware FactorySuite A2 Deployment Guide
Revision D.2
Last Revision: 1/27/06
Contents
Before You Begin ............................................. 11
About This Document ...........................................................................11
Assumptions ..........................................................................................11
FactorySuite A2 Application Versions ............................................. 12
FactorySuite A2 Terminology .......................................................... 13
Document Conventions ........................................................................ 14
Where to Find Additional Information................................................. 14
ArchestrA Community Website........................................................ 14
Technical Support ............................................................................. 15
Alarm DB Manager.............................................................................. 97
SuiteVoyager Software ......................................................................... 97
QI Analyst Software............................................................................. 98
DT Analyst Software............................................................................ 98
InTrack Software .................................................................................. 99
Using InTrack Software with an Application Server ........................ 99
InTrack Software Integration with Other FactorySuite Software... 101
InBatch Software................................................................................ 102
InBatch Production System Requirements ..................................... 103
InBatch Production System Topologies.......................................... 104
Third Party Application Integration ................................................... 106
FactorySuite Gateway..................................................................... 106
Other Connectivity/Integration Tools ............................................. 106
CHAPTER 6: Implementing QuickScript .NET ..................................133
IAS Scripting Architecture................................................................. 134
ApplicationObject Script Interactions ............................................ 136
Script Access to IAS Object Attributes.............................................. 137
Referencing Object Attribute and Property Values ........................ 137
Other Script Function Categories ................................................... 138
Using UDAs ................................................................................... 142
CHAPTER 9: Implementing Alarms and Events ................................213
General Considerations ...................................................................... 214
Configuring Alarm Queries................................................................ 214
Determining the Alarm Topology ...................................................... 215
Alarming in a Distributed Local Network Topology...................... 215
Alarming in a Client/Server Topology ........................................... 216
Logging Historical Alarms................................................................. 217
Index ................................................................387
Assumptions
This deployment guide is intended for readers with the following background:
It is assumed that you are familiar with the working environment of the
Microsoft Windows 2000 Server, Windows Server 2003, and Windows XP
Professional operating systems, as well as with a scripting, programming, or
macro language. Also, an understanding of concepts such as variables,
statements, functions, and methods will help you to achieve best results.
It is assumed that you are familiar with the individual components that
constitute the FactorySuite A2 environment. For additional information about a
component, see the associated user documentation.
All topologies referenced in this Deployment Guide assume a "Bus" topology,
which comprises a single main communications line (trunk) to which nodes are
attached; also known as Linear Bus. Exceptions are noted.
For more information on standard topology schemas, see
http://www.microsoft.com/technet/prodtechnol/visio/visio2002/plan/glossary.mspx or http://en.wikipedia.org/wiki/Category:Network_topologies.
The figure on the following page contains terms used throughout this
document. Definitions of specific terms are included after the figure.
FactorySuite A2 Terminology
The following figure shows basic object classifications and their relationships
within the IAS System. This document focuses on the Application/Device
Integration/Engine/Platform/Area Objects level except where otherwise noted:
[Figure: Galaxy object classification. Galaxy Objects comprise AutomationObjects, divided into Domain Objects and System Objects. Domain Objects include Application Objects (AnalogDevice, DiscreteDevice, Field Reference, Switch, User-Defined) and Device Integration Objects (DIObjects, such as FS Gateway, AB TCP Network, and AB PLC5). System Objects include EngineObjects (AppEngine), PlatformObjects (WinPlatform), and AreaObjects (Area).]
Document Conventions
This documentation uses the following text conventions: Bold, Monospace, and Italic.
Technical Support
Before contacting Technical Support, please refer to the appropriate chapter(s)
of this manual and to the User's Guide and Online Help for the relevant
FactorySuite A2 System component(s).
For local support in your language, please contact a Wonderware-certified
support provider in your area or country. For a list of certified support
providers, see http://www.wonderware.com/about_us/contact_sales.
If you need to contact technical support for assistance, please have the
following information available:
The type and version of the operating system you are using. For example,
Microsoft Windows XP Professional.
Details of the attempts you made to solve the problem(s) and your results.
Any relevant output listing from the Log Viewer or any other diagnostic
applications.
When requesting technical support, please include your first, last and company
names, as well as the telephone number or e-mail address where you can be
reached.
CHAPTER 1

Contents
FactorySuite A2 Project Workflow
[Figure: Example P&ID showing the field devices to inventory, including valves (FV103, FV401, FV402, FV403), transmitters (PT301, PT401, TT301, FT302, FT401, LT402, CT301, CT401), controllers (FIC402, LIC401), and drives (DRIVE 3, DRIVE 4).]
C. Once a complete list is created, group the devices according to type, such
as by Valves, Pumps, and so on. Consolidate any duplicate devices into
common types so that only a list of unique basic devices remains, and then
document them in the project planning worksheet.
Each basic device is represented in the IDE as an ApplicationObject. An
instance of an object must be derived from a defined template. The number of
device types in the final list will help determine how many object templates are
necessary for your application. Group multiple basic objects to create more
complex objects (containment).
For more information on objects, templates, and containment, see the IDE
documentation for the Industrial Application Server.
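The consolidation step described above can be sketched in a few lines of Python. The tag names and the tag-to-type pairing below are invented for illustration and are not part of the IDE workflow itself:

```python
# Hypothetical sketch: consolidate an instrument list into unique basic
# device types. Each unique type is a candidate object template.

def consolidate_devices(devices):
    """Group device tags by type; the dict keys are the unique types."""
    by_type = {}
    for tag, dev_type in devices:
        by_type.setdefault(dev_type, []).append(tag)
    return by_type

# Example instrument list: (tag, basic device type)
instruments = [
    ("FV103", "Valve"), ("FV401", "Valve"), ("FV402", "Valve"),
    ("PT301", "PressureTransmitter"), ("PT401", "PressureTransmitter"),
    ("FT302", "FlowTransmitter"), ("FT401", "FlowTransmitter"),
]

templates = consolidate_devices(instruments)
# Three unique basic devices -> three candidate templates
print(sorted(templates))        # ['FlowTransmitter', 'PressureTransmitter', 'Valve']
print(len(templates["Valve"]))  # 3 valve instances share one template
```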
Note The Industrial Application Server IDE's alarms and events provide
similar functionality to what is provided within InTouch Software.
Security: Which users will have access to the device? What type of access is appropriate? For example, you may grant a group of operators read-only access to a device, but allow read-write access for an administrator. You can set up different security for each attribute of a device.
All the above functional requirement areas are discussed in detail in this
Deployment Guide.
[Figure: Comparison of individual InTouch tags on the HMI side (YY123XV456\OLS, \CLS, \Out, \Auto, \Man) with a single ArchestrA object (YY123XV456) exposing the equivalent attributes (.OLS, .CLS, .Out, .Auto, .Man).]
Plan Templates
The fourth workflow task determines the necessary object "shape" templates.
A Shape template is an object that contains common configuration parameters for derived objects (objects used multiple times within a project). The Shape Template is derived from the $BaseTemplate object and is designed to represent baseline or "generic" physical objects, or to encapsulate specific baseline functionality within the production environment.
Both the Shape Templates and child Template instances are called
ApplicationObjects.
For example, multiple instances of a certain valve type may exist within the
production environment. Create a Shape valve template that includes the
required, basic, valve properties. The Shape Template can now be reused
multiple times, either as another template or an object instance.
If changes are necessary, they are propagated to the derived object instances.
Use the drag-and-drop operation within the IDE to create object instances.
The following figure shows multiple instances (Valve001, -002, etc.) derived
from a single object template ($Valve):
[Figure: Template derivation. Instances such as Valve001 and Valve002 are derived from the $Valve derived template; the base Application Object templates shown include $AnalogDevice, $DiscreteDevice, $FieldReference, and $UserDefined.]
Template Derivation
Since templates can be derived from other templates, and child templates can
inherit properties of the parents, establish a template hierarchy that defines
what is needed before creating other object templates or instances. Always
begin with the most basic template for a type of object, then derive more
complicated objects.
If applicable, lock object attributes at the template level, so that changes cannot
be made to those same attributes for any derived objects.
A production facility typically uses many different device models from
different manufacturers. For example, a process production environment has a
number of flow meters in a facility. A base flow meter template would contain
those fields, settings, and so on, that are common to all models used within the
facility.
Derive a new template from the base flow meter template for each
manufacturer. The derived template for the specific manufacturer includes
additional attributes specific to the manufacturer. A new set of templates
would then be derived from the manufacturer-specific template to define
specific models of flow meters. Finally, instances would be created from the
model-specific template.
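The base → manufacturer → model derivation chain described above can be illustrated with ordinary class inheritance, where each level adds or overrides attributes. All names and values here are hypothetical; real IAS templates are built in the IDE, not in code:

```python
# Hedged sketch of template derivation modeled as class inheritance.
# Attribute names and values are illustrative only.

class FlowMeter:                  # base template: common attributes
    units = "gpm"
    alarm_hi = 100.0

class AcmeFlowMeter(FlowMeter):   # manufacturer-specific template
    protocol = "Modbus"

class AcmeFM2000(AcmeFlowMeter):  # model-specific template
    alarm_hi = 250.0              # override an inherited attribute

ft401 = AcmeFM2000()              # "instance" from the model template
print(ft401.units, ft401.protocol, ft401.alarm_hi)  # gpm Modbus 250.0
```

As with IAS templates, a change made at the base level (for example, a new common attribute on FlowMeter) is inherited by everything derived from it.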
Note For detailed examples of template derivation, see Chapter 5, "Working
with Templates." For more information on templates, template derivation, and
locking, see the IDE documentation.
Template Containment
Template containment allows more advanced structures to be modeled as a
single object. For example, a new template called "Tank" is derived from the
$UserDefined base or shape template. Use the instance to contain other
ApplicationObjects that represent aspects of the tank, such as pumps, valves,
and levels.
[Figure: Template containment. The $Tank template contains $V101 (Inlet), $V102 (Outlet), and $LT102 (Level).]
Note Deeply nested template/container structures can slow the check-in of
changes in IDE development and propagation.
Two options are available when defining object properties: a contained object or a user-defined attribute (UDA). Always use a contained object for I/O points and a user-defined attribute for memory or calculated values. How this is accomplished is up to the application designer, and should be decided in advance for project consistency.
Users: A user is an individual person who will use the system, for example, John Smith or Peter Perez.
Roles: Roles define groups of users within the security system. Roles
usually reflect the type of work performed by different groups within the
factory environment. For example, Operators and Technicians.
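The Users/Roles relationship can be sketched as a simple mapping. The role and user names come from the examples above; the permission sets are invented for illustration:

```python
# Illustrative sketch of the Users/Roles security model. Real security
# is configured in the IDE; the permission sets here are assumptions.

roles = {
    "Operator":   {"read"},
    "Technician": {"read", "write"},
}
users = {"John Smith": "Operator", "Peter Perez": "Technician"}

def can_write(user):
    """A user's permissions come from the role the user belongs to."""
    return "write" in roles[users[user]]

print(can_write("John Smith"), can_write("Peter Perez"))  # False True
```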
Configure the attribute security for Template objects (at the Template
level).
The objects deployed on particular platforms and engines define the objects'
"load" on the platform. The load is based on the number of I/O points, the
number of user-defined attributes (UDAs), etc. The more complex the object,
the higher the load required to run it.
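As a rough illustration of this load concept only, an engine's load can be thought of as a tally over its objects; the weights below are invented, not actual IAS metrics:

```python
# Toy illustration of per-object "load" on an engine. The weights are
# invented for illustration; IAS computes load internally.

def object_load(io_points, udas, scripts=0):
    """Load grows with I/O points, UDAs, and scripts (arbitrary weights)."""
    return io_points * 2 + udas * 1 + scripts * 5

engine_objects = [
    {"io_points": 8, "udas": 4},                   # a simple valve
    {"io_points": 40, "udas": 20, "scripts": 3},   # a complex unit
]
total = sum(object_load(**o) for o in engine_objects)
print(total)  # 135 -- the complex unit dominates the engine's load
```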
Note For object types and target deployment node recommendations (such as
DIObjects), see Chapter 10, "Assessing System Size and Performance."
After deployment, use the Object Viewer to check communications between
nodes and determine if the system is running optimally. For example, a node
may be executing more objects than it can easily handle, and it will be
necessary to deploy one or more objects to another computer.
CHAPTER 2: Identifying Topology Requirements
Contents
Topology Component Distribution
Topology Categories
General Topology Planning Considerations
Best Practices for Topology Configuration
I/O Server Connectivity
Extending the IAS Environment
[Figure: Typical component distribution. Visualization nodes connect over the Supervisory Network through a network device (switch or router) to the AutomationObject Server, Historian (data and alarms), Engineering Station/Configuration Database, SuiteVoyager Portal, and I/O Server, which connects to the PLC Network.]
Best Practice
If working in a distributed network context (wide-area networks, distributed
SCADA systems characterized by slow connections), install the Galaxy
Repository on a laptop computer and carry it to remote sites as a portable
resource for application maintenance.
Connect the portable configuration database node to the local node via a
dedicated local area network (LAN) connection to expedite the process of
deploying/undeploying objects and creating and configuring templates.
Remember that if the Galaxy Repository is not available on the network,
existing objects cannot be deployed or undeployed to/from any Platform.
However, any deployed objects will continue to operate normally.
Create a ghost image of the portable GR Node, and perform frequent backups
in case the laptop is damaged or lost.
Configuration Components: Integrated Development Environment (IDE); Galaxy Repository*
Run-Time Components: Bootstrap*; Platform*
* Required component
Configuration Components: Integrated Development Environment (IDE)
Run-Time Components: Bootstrap*; Platform*; AppEngine*; Areas; ApplicationObjects; DIObjects
* Required component
Visualization Node
A Visualization node is a computer running InTouch Software on top of a
Platform. The Platform provides for communication with any other Galaxy
component via the Message Exchange (MX) protocol.
Configuration and run-time components for the Visualization node are
described in the following table:
Configuration Components: (none)
Run-Time Components: Bootstrap*; Platform*; InTouch Software 8.0 or later, or InTouch View 8.0 or higher*
* Required component
Configuration Components: I/O Server*
Run-Time Components: Bootstrap; I/O Server*; Platform; InTouch Software 8.0 or higher, or InTouch View 8.0 or higher
* Required component
Best Practice
Observe the following guidelines to optimize I/O data transmission:
Always deploy the DIObject to the same node where the I/O data source is
located, regardless of the protocol used by the I/O data source (DDE, SuiteLink, OPC).
If the I/O data source is located in the same node as the AutomationObject
Server, the communication is local, minimizing the travel time of data
through the system.
Note I/O Server installation sequence is important: Always install the most
recent software last. For example: first I/O Servers, then DAServers, then the
Bootstrap, and so on.
Configuration Components: Integrated Development Environment (IDE)
Run-Time Components: Bootstrap*; Platform
* Required component
The IDE does not require a Platform. However, if Object Viewer is used on an
Engineering Station node, a Platform is required. For more information about
Object Viewer, see the Object Viewer documentation.
Best Practice
When remote off-site access to the Galaxy Repository is required by means of the IDE, use a Terminal Services session or Remote Desktop connection to the Galaxy Repository node where the IDE has also been installed.
Important! In order to launch an IDE session, the user account for the session must belong to the Administrators group of the Terminal Server node.
For more information about integrating InTouch Software for Terminal
Services with other FactorySuite A2 System components, see the Terminal
Services for InTouch Deployment Guide.
The Engineering Station node also hosts the development tools to modify
InTouch Software applications using WindowMaker.
Historian Node
The Historian node is used to run IndustrialSQL Server Historian software.
IndustrialSQL Server Historian stores all historical process data and provides
real-time data to FactorySuite client applications such as ActiveFactory and
SuiteVoyager Software.
The Historian Node does not require a Platform. The AutomationObject Server
pushes data (configured for historization) to the Historian node using the
Manual Data Acquisition Service (MDAS) packaged with Industrial
Application Server and IndustrialSQL Server Historian.
Important! MDAS uses DCOM to send data to IndustrialSQL Server Historian. Ensure that DCOM is enabled (not blocked) and that TCP/UDP port 135 is accessible on both the AppServer and IndustrialSQL Server Historian nodes. The port may not be accessible if DCOM has been disabled on either of the computers, or if a router between the two computers blocks the port.
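One quick way to verify reachability of port 135 from one node is a plain TCP connection attempt, sketched below. Note this checks only the RPC endpoint mapper port; DCOM also negotiates additional dynamic ports. The host name is a placeholder for your Historian node:

```python
# Reachability check for TCP port 135 (the RPC endpoint mapper that
# DCOM depends on). Replace "localhost" with the name of the
# IndustrialSQL Server Historian node you want to test.

import socket

def port_open(host, port=135, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("localhost"))  # True only if something listens on 135
```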
Configuration and run-time components for the Historian Node are included in
the following table:
Configuration Components: (none)
Run-Time Components: IndustrialSQL Server Historian*
* Required component
Best Practice
Most system topologies combine the Historical and Alarm databases on the
Historian Node. Configure the alarm system using the Alarm Logger utility,
which creates the appropriate database and tables in Microsoft SQL Server. For
requirements and recommendations for alarm configuration, see Chapter 9,
"Implementing Alarms and Events." For information about historization, see
Chapter 8, "Historizing Data."
SuiteVoyager Portal
A Server machine with a SuiteVoyager Software portal can be incorporated
into any Galaxy. Use the Win-XML Exporter to convert InTouch windows to
XML format, so SuiteVoyager Software clients can access real-time data from
the Galaxy.
A Platform must be deployed on the SuiteVoyager Portal for SuiteVoyager
Software to access the Galaxy.
Configuration and run-time components for the SuiteVoyager Portal are
described in the following table:
Configuration Components: (none)
Run-Time Components: Bootstrap*; Platform*; SuiteVoyager Software 2.0 SP1 or higher*
* Required component
For information on deployment options for SuiteVoyager Software, see the
SuiteVoyager Software documentation.
Topology Categories
The following information describes high-level topology categories using
FactorySuite A2 System components.
Workstation Node
The Visualization and AutomationObject Server components are combined on
the same node. Both components share the Platform, which handles
communication with other nodes in the Galaxy. The Platform also allows for
deployment/undeployment of ApplicationObjects.
If you plan to combine the Visualization and AutomationServer components on
the same node, evaluate the resource requirements for the following:
Active tags-per-window.
ActiveX controls displayed.
Alarm displays.
Trending.
The following figure illustrates the software components and their distribution:
[Figure: Workstation topology. Workstation nodes, the Historian (data and alarms), Engineering Station/Configuration Database, SuiteVoyager Portal, and I/O Server share the Supervisory Network through a network device (switch or router); the I/O Server connects to the PLC Network.]
Best Practice
I/O Servers can run on Workstations, provided the requirements for
visualization processing, data processing, and I/O read-writes can be easily
handled by the computer. Run the I/O Server and the corresponding DIObject
on the same node where most or all of the object instances (that obtain data
from that DIObject) are deployed.
This implementation expedites the data transfer between the two components
(the I/O Server and the object instance), since they both reside on the same
node. This implementation also minimizes network traffic and increases
reliability.
[Figure: Workstation topology with the I/O Server running on a Workstation node. The Historian (data and alarms), Engineering Station/Configuration Database, and SuiteVoyager Portal connect through a network device (switch or router); the I/O Server connects to the PLC Network.]
Historian Node
The IndustrialSQL Server Historian software must run on a designated node.
SuiteVoyager Portal
The SuiteVoyager Portal supplies real-time historical data to web clients.
Client/Server
This topology configuration includes dedicated nodes running
AutomationObject Servers, while visualization tasks are performed on separate
nodes.
The benefits of this topology include usability, flexibility, scalability, system
reliability, and ease of system maintenance, since all configuration data resides
on dedicated servers.
The client components (represented by the visualization nodes) provide the
means to operate the process using applications that provide data updates to
process graphics. The clients have a very light data processing load.
The AutomationObject Server nodes share the load of data processing, alarm
management, communication to external devices, security management, etc.
For details on the implementation of AppEngine and I/O Server redundancy in a client/server configuration, see the Widely-Distributed Network section later in this chapter.
The following figure illustrates a client/server topology:
[Figure: Client/server topology. Visualization nodes connect over the Supervisory Network through a network device (switch or router) to the AutomationObject Server, Historian (data and alarms), Engineering Station/Configuration Database, SuiteVoyager Portal, and I/O Server, which connects to the PLC Network.]
Object Viewer is not run either from the SMC (Platform Manager) or from
the executable file installed in the application directory.
The client nodes are not running other applications, ActiveX objects, or
functions that request data from remote sources (for example,
ActiveFactory). This could cause more open connections on the client
node. Also, consider any network shares on the client nodes as possible
open connections.
A single client node does not require data from more than 10 server nodes.
Note This topology was tested and the above requirements validated on a
system that included 16 InTouch Software client nodes and five
AutomationObject Server nodes executing ApplicationObjects. The Galaxy
Repository was installed on a dedicated server.
For more information on defining system size, see Chapter 10, "Assessing
System Size and Performance."
Finally, consider different options when deploying I/O Servers in a Galaxy,
such as whether to run them on AutomationObject Servers or on dedicated
computers, and redundancy strategies.
[Figure: Client/server topology variation. Visualization nodes connect over the Supervisory Network through a network device (switch or router) to a combined AutomationObject Server/I/O Server node and the Historian (data and alarms); the I/O Server connects to the PLC Network.]
[Figure: Client/server topology variation with the Historian (data and alarms) combined on the AutomationObject Server node. Visualization nodes, the Engineering Station/Configuration Database, SuiteVoyager Portal, and I/O Server connect over the Supervisory Network through a network device (switch or router) to the PLC Network.]
[Figure: Mixed topology with an InTouch 7.11 node alongside Visualization nodes. The combined AutomationObject Server/Historian node (data and alarms), Engineering Station/Configuration Database, SuiteVoyager Portal, and I/O Server connect over the Supervisory Network through a network device (switch or router) to the PLC Network.]
Terminal Services
Terminal Services provides the capability to run several sessions of the same
InTouch Software application or different applications in a Terminal Server
session.
Terminal Services technology enables thin-client computer communication to
a Terminal Server node, where multiple instances of InTouch Software
applications run simultaneously. The software and hardware requirements for
the client node are minimized since no programs are required at the client
station accessing the application.
A dedicated Terminal Server node is recommended for this topology. This
node requires the following software components:
Configuration Components: (none)
Run-Time Components: Bootstrap*; Platform*; Terminal Services for InTouch Software 9.0 or higher*
* Required component
Only one Platform runs on a Terminal Server node, regardless of the number of
sessions executed. However, the number of sessions affects the size of the
license for the system.
Note For information on FactorySuite A2 Licensing, see the License Utility
Guide.
Consider different configuration options when using Terminal Services, such
as whether to run Terminal Services on the same node as the
AutomationObject Server; or run Terminal Services on a dedicated Server
node.
Running Terminal Services on the same node as the AutomationObject Server presents a loading concern: AppEngine execution may be degraded by the resource consumption of client sessions.
[Figure: Terminal Services topology. Thin clients on the Corporate Network connect through a network device (switch or router) to the Supervisory Network, which links the AutomationObject Server, Historian (data and alarms), Engineering Station/Configuration Database, SuiteVoyager Portal, and I/O Server to the PLC Network.]
This configuration prevents any failure at the Terminal Server from impacting the AutomationObject Servers. A problem at the Terminal Server node only affects the visualization nodes; the AutomationObject Server continues operating.
Note Configurations for the I/O-DA-OPC Server presented in the
Client/Server topology can be included in a Terminal Server topology.
[Figure: Terminal Server farm topology. Thin clients on the Corporate Network connect to a server farm of InTouch Terminal Servers, which reach the Supervisory Network through a network device (switch or router); the AutomationObject Server, Historian (data and alarms), Engineering Station/Configuration Database, SuiteVoyager Portal, and I/O Server connect to the PLC Network.]
[Figure: Combined Terminal Server/AutomationObject Server topology. Thin clients connect over the Supervisory Network through a network device (switch or router) to the combined node, the Historian (data and alarms), Engineering Station/Configuration Database, SuiteVoyager Portal, and I/O Server, which connects to the PLC Network.]
When considering this deployment, keep in mind that any problem in the
Terminal Server/AutomationObject Server node impacts not only the
visualization nodes, but also the I/O data collection and the processing of
objects on the AutomationObject Server.
For details about Terminal Services for InTouch Software, see the Terminal
Services for InTouch Deployment Guide. For additional considerations for this
configuration, see Chapter 10, "Assessing System Size and Performance."
[Figure: Topology with an InTouch Terminal Server connected over Ethernet via a bridge. The AutomationObject Server, Historian (data and alarms), Engineering Station/Configuration Database, and I/O Server connect through a network device (switch or router) on the Supervisory Network to the PLC Network.]
Widely-Distributed Network
A Widely-Distributed network is a network distributed over a large geographical area. It incorporates a variety of communication components and must account for network delays, intermittent traffic, and network outages.
Widely-Distributed networks are also known as "Intermittent" networks. Because of low bandwidth and/or high traffic, distributed networks tend to experience delays or breaks in communication.
The following topology diagram includes 3 connection types:
Dial-up
Wireless/Radio
WAN
[Figure: Widely-Distributed network topology with dial-up, wireless/radio, and WAN connections. Terminal Services thin clients and a visualization node (data, alarms, history) connect over Ethernet to the Terminal Server, SuiteVoyager Portal, AutomationObject Server, Historian (data and alarms), Engineering Station/Configuration Database, SCADAlarm, and I/O Server; the Control Network links RTU radio modems (RS-232), PLCs, and an RS-485 network of level transmitters.]
Network Configuration
Verify Required Connection Settings
If your network administrator or Internet Service Provider (ISP) requires static settings, one or more of the following NIC settings may be necessary:
A specific IP address.
DNS addresses.
DNS domain name.
Default gateway address.
WINS addresses.
If you must use DHCP, keep in mind that it is possible to reserve IP addresses
for specific computers; in the previous example, when the Historian node has a
reserved address, store/forward operates without delay.
Consult with IT staff for complete network configuration details.
Note Industrial Application Server 1.5 and later supports DHCP-assigned IP
addresses.
Move Internet Protocol (TCP/IP) to the top of the File and Printer
Sharing for Microsoft Networks binding on the Adapters and
Bindings tab.
Software Configuration
FactorySuite A2 Application Coexistence
When installing multiple FactorySuite A2 applications on one node, always
install the oldest components first.
For example, Wonderware I/O Server technology pre-dates ArchestrA-based
technology. Common Components for DAServers are newer than those that
come with most I/O Servers. Thus, install I/O Servers first, then DAServers.
Anti-Virus Software
Although it is desirable to run anti-virus software on each node in the system, it may impact the performance of some tasks. To reduce the effect of the anti-virus program, disable the auto-update feature of any virus scan software and manage the updates either manually or with a centrally-administered scheduler.
The following files should be excluded from scanning in the
AutomationObject Server nodes:
C:\Program Files\ArchestrA\Framework\Bin\CheckPointer
C:\Program Files\ArchestrA\Framework\Bin\GalaxyData
C:\Program Files\ArchestrA\Framework\Bin\GlobalDataCache
C:\Program Files\ArchestrA\Framework\Bin\Cache
The Virus scan should include the logger files (*.aeh), typically found in:
C:\Program Files\Common Files\ArchestrA
Licensing Requirements
Be sure that the appropriate licenses are installed where required, for example, on the Galaxy Repository and Visualization nodes. Note that an I/O Server license is required on every node that runs an I/O Server.
Note For detailed information on licensing requirements, see the License
Utility Guide.
I/O Servers
I/O Servers provide connectivity for devices using DDE, FastDDE and
SuiteLink protocols. Wonderware's I/O Servers can connect to every FactorySuite 2000 and FactorySuite A2 System component, and also connect to various popular PLC, RTU, DCS, and ESD systems.
Wonderware's Rapid Protocol Modeler (RPM) Kit enables I/O Server
customization to suit your needs. The RPM Kit can configure connections
between FactorySuite applications and devices with standard and non-standard
protocols. It handles serial and TCP/IP communications with either ASCII or
binary protocols. The RPM Kit can be used to profile and save a protocol.
DAServers
The Data Access Server (DAServer) provides simultaneous connectivity
between plant floor devices and SuiteLink, OPC and DDE/SL-based client
applications that run under Microsoft's Windows 2000/2003/XP Professional
operating systems. DAServers can operate with many current FactorySuite
2000 client components, as well as with FactorySuite A2 product offerings,
when these are used with their associated DIObjects.
The DAServer supports run-time configuration, device additions and
device/server-specific parameter modifications.
Note If the DAServer has to use specific hardware, for example, a specialized
card, it is quarantined until the physical hardware is installed.
The DAServer is like a driver: it can receive data from different controllers
simultaneously. For example, a DAServer might use OPC to access data
remotely in one machine, and use InTouch Software to communicate with
another machine. When a DAServer transfers data, it also transfers a
timestamp and quality codes.
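The value/timestamp/quality triple a DAServer delivers with each data update can be sketched as a small record type. The field names below are illustrative; the quality constants follow the OPC DA convention (0xC0 = good):

```python
# Sketch of a value/timestamp/quality (VTQ) record like the one a
# DAServer transfers with each update. Field names are illustrative;
# quality codes follow the OPC DA convention.

from dataclasses import dataclass
import time

OPC_QUALITY_GOOD = 0xC0
OPC_QUALITY_BAD = 0x00

@dataclass
class VTQ:
    value: float
    timestamp: float   # seconds since the epoch
    quality: int

sample = VTQ(value=42.5, timestamp=time.time(), quality=OPC_QUALITY_GOOD)
print(sample.quality == OPC_QUALITY_GOOD)  # True
```

Clients use the quality code to decide whether a value is trustworthy before displaying or historizing it.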
The DAServer is flexible enough to be used in a variety of topologies, but
some topologies are more efficient than others.
For example, the DAServer can connect to the OPC Server directly across the
network, or FactorySuite Gateway can be placed on the same machine as the
OPC DAServer and SuiteLink can be used to link the server to devices. Of the
two topologies, using FactorySuite Gateway is more efficient than connecting
the DAServer directly to the OPC Server.
DIObject Advantages
Device Integration Objects (DIObjects) represent communication with external
devices. DIObjects may be DINetwork Objects (for example, the Redundant
DIObject) or DI Device Objects. DIObjects (and their associated AppEngine)
can reside on any I/O, DA, or Automation Object Server node in the Galaxy.
DIObjects allow connectivity to data sources such as DDE servers, SuiteLink
servers, OPC servers, and existing InTouch Software applications.
The advantages of using DIObjects are as follows:
DIObjects are very closely tied to the DAServer they are assigned to, so
that when an object is deployed, it brings with it all code, including
registry, scripting, attributes, and parent.
Note that in a large project, this process may take some time. However,
centralized deployment still yields tremendous savings compared with
installing and configuring the Servers separately on each node.
[Figure: Example network topology. A supervisory network (Ethernet, via a
switch or router) connects a Terminal Server, the SuiteVoyager Portal, an
AutomationObject Server, the Historian (Data and Alarms), an Engineering
Station with the Configuration Database, SCADAlarm, and an I/O Server. A
control network behind a second switch or router connects RTU radio modems
(RS-232), PLCs, and an RS-485 network of level transmitters.]
The following figure shows the same network represented using the
communication protocols supported in a FactorySuite A2 system environment:
[Figure: The same network annotated with communication protocols. Terminal
Services thin clients provide visualization (data, alarms, history) through the
Terminal Server. On the supervisory network, MX links the AutomationObject
Server with the Historian (Data and Alarms), the SuiteVoyager Portal, the
Terminal Server, and the Engineering Station (Configuration Database); the
I/O Server is reached via OPC/DCOM and SuiteLink, with SuiteLink (Alarms)
carrying alarm traffic to SCADAlarm. The control network is unchanged: RTU
radio modems (RS-232), PLCs, and an RS-485 network of level transmitters.]
Note MX is not used for communications with data servers. Thus, it is not a
replacement for DDE, SuiteLink or OPC.
OPC: Supports the OLE (Object Linking and Embedding) for process
control specification. OPC uses the client-server model and Microsoft
COM/DCOM (Distributed Component Object Model) protocols for
vendor-independent data transfer.
[Table: Communication requirements and the protocols used for
communication between FactorySuite applications.]
C H A P T E R  3

Implementing Redundancy
Contents
Redundant System Requirements
NIC Configuration: Redundant Message Channel (RMC)
Redundant DIObjects
Redundant Configuration Combinations
Alarms in a Redundant Configuration
Failover Causes in Redundant AppEngines
Redundant System Checklist
Tuning Recommendations for Redundancy in Large Systems
Note When enabling redundancy, do not select the Restart the engine when
it fails option (in the Primary engine's General editor tab).
Redundant Engine configuration requires the following:
Both nodes hosting the redundant AppEngine pair should run the same
version and service pack levels of supported Operating Systems.
Best Practice
Platforms hosting Primary and Backup AppEngines must have identical
configurations for a number of elements.
Note For more information on UDAs and scripting, see Chapter 5, "Working
with Templates."
Changing the default Platform and Engine settings depends on the size of the
system, the number of I/O points, and other variables.
Detailed information on tuning the Platform and Engine settings is included in
Tech Note 410: Fine-Tuning AppEngine Redundancy Settings.
3. Select Use the following IP address. See your network administrator for
the IP address and subnet mask. The IP address must be fixed and unique.
4. In the TCP/IP Properties dialog box, click the Advanced button, then
select the DNS tab. Be sure the Register this connection's address in
DNS checkbox is not checked.
Best Practice
Assign a descriptive name to each network connection to easily identify its
functionality. From the Network Connections window, rename the Local Area
Connections, e.g., "Primary Network" and "RMC Network."
To assign Network Services primary connections
4. Verify that the normal connection between the redundant pair uses the
primary network. This is done using the PING command (from the
Command Prompt) with the redundant partner's node name. Verify that the
node name resolves to the IP address of the partner's primary network
card.
Redundant DIObjects
The following section explains how to implement redundant DIObjects.
Configuration
AppEngines can host redundant Device Integration Objects (DIObjects). The
Redundant DIObject is a DINetwork Object used to enable continuity of I/O
information from field devices.
The redundant DIObject provides the ability to configure a single object with
connections to two different data sources. If the primary data source fails, the
redundant DIObject automatically switches to the backup data source for its
information.
There is a one-to-two relationship between an instance of the redundant
DIObject and the running instances of the source DIObjects; that is, for each
redundant DIObject, a pair of source DIObjects is deployed.
[Figure: Redundant DIObject topology. On the AutomationObject Server
(Platform 1), AppEngine1 hosts the Redundant DIObject with two source
DIObjects, DI_1 and DI_2, connected to DAServer_1 and DAServer_2 on the
I/O Server node. The DAServers reach the PLC network over two different
protocols, ABTCP and DH+. Visualization nodes connect over the supervisory
network.]
If the I/O or DAServer resides on the same node as the AppEngine hosting
the DIObject, configure the server node name in the General tab as
<Blank> or <Localhost>.
In the previous example, the PLC sent data using two unique protocols. It is
also common for the PLC to send data through two Ethernet ports using the
same protocol, via different IP addresses. In this case, the following redundant
DIObject configuration is recommended:
[Figure: Redundant DIObject topology with a single protocol. DI_1 and DI_2
on AppEngine1 (Platform 1) connect to DAServer_1 and DAServer_2, each
configured with the same topic (Topic1) but a unique IP address (I.P.1 and
I.P.2), both using ABTCP to reach the PLC network.]
The figure shows two unique DAServer instances, each using the same topic
and pointing to a unique IP address. The DAServers in this scenario can also
be deployed on different machines.
2. Monitors the connection with the Active and Standby DI source. If the
connection to the Active DI source is lost, the object switches to the
Standby DI source.
3. If both DI sources are in a bad state, the object raises the Connection
Alarm.
The following figure includes Alarm configuration. For details, see "Alarms in
a Redundant Configuration" on page 78.
[Figure: Redundant AppEngine pair. The primary AOS hosts AppEngine 1
with a DI object; the backup AOS stands by. The two nodes are linked by the
RMC and connect through a switch or router to a DAServer on the PLC
network. InTouch alarm providers on the visualization nodes, the Historian
(Data and Alarms), and the Alarm Logger sit on the supervisory network.]
[Figure: Load-shared redundant configuration. The primary AOS hosts
AppEngine1 with DI 1 (DAServer A) and AppEngine2 (Backup); the backup
AOS hosts AppEngine 2 with DI 2 (DAServer A) and AppEngine 1 (Backup).
The nodes are linked by the RMC and connect to the PLC network through a
switch or router, with InTouch alarm providers, the Historian (Data and
Alarms), and the Alarm Logger on the supervisory network.]
Scenario 1
A set of AppObjects is hosted by AppEngine1 and another set by AppEngine2.
These two engines are hosted by different platforms. AppEngine 1 and
AppEngine 2's backup engines reside on the opposite node. One instance of a
DI Network object resides in any of the nodes providing data to AppObjects in
both nodes.
Use the Redundancy.ForceFailoverCmd AppEngine attribute in a script (in
the DI Network Object) to trigger the failover in the event of a communication
failure with the PLC Network.
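For example, a condition script on the DINetwork object might look like the following sketch. The expression and attribute names are illustrative; substitute the attribute your DINetwork object exposes for PLC communication status:

```
{ Condition script on the DINetwork object, trigger type OnTrue }
{ Expression (illustrative): me.ConnectionAlarmed == true }

{ Force the redundant AppEngine pair to fail over }
MyEngine.Redundancy.ForceFailoverCmd = true;
```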
The following figure includes Alarm configuration. For details, see "Alarms in
a Redundant Configuration" on page 78.
[Figure: Scenario 1 topology. The primary AOS hosts AppEngine1 (with the
DINetwork object) and AppEngine2 (BACKUP); the backup AOS hosts
AppEngine 2 and AppEngine 1 (BACKUP). The nodes are linked by the RMC
and connect through a switch or router to a DAServer on the PLC network,
with InTouch alarm providers, the Historian (Data and Alarms), and the
Alarm Logger on the supervisory network.]
Scenario 2
This scenario provides a very powerful and reliable solution. To avoid the
conflict with multiple instances of the same DI Network in a node after a
failover occurs, you can arrange the system as shown in the following figure.
The configuration in AOS1 is mirrored in AOS2. AOS1 hosts AppEngine 1,
which forms a redundant pair with AppEngine 1 (BACKUP) hosted on AOS2.
Additionally, AppEngine 1 hosts RDI1, which provides high availability at the
I/O level. RDI1 is configured to use DI1Network as the Primary DI source and
DI2Network as the Backup DI source.
This configuration efficiently provides high availability at both the data
execution and I/O data acquisition levels.
In the event of a failure in AOS1, AppEngine 1 will fail over to AppEngine 1
(BACKUP). Once AppEngine 1 fails over to the Standby engine, RDI1 also
switches to DI2Network on the new node. On the other hand, if there is a
communication problem at the DI1 object level, the RDI1 object
automatically switches to DI2Network while AppEngine 1 continues to run
on AOS1.
AppEngine 3, which is not a redundant AppEngine, hosts DI1Network. By
keeping the DI1Network and DI2Network objects on separate non-redundant
engines, you avoid the conflict created by having two instances of the same
DINetwork object on the same node.
The following figure includes Alarm configuration. For details, see "Alarms in
a Redundant Configuration" on page 78.
[Figure: Scenario 2 object layout. AOS 1 hosts AppEngine 1 (with RDI1),
AppEngine 2' and AppEngine 3 (with DINetwork1); AOS 2 hosts AppEngine 2
(with RDI 2), AppEngine 1' and AppEngine 4 (with DINetwork2).]
[Figure: Scenario 2 deployed topology. The primary AOS hosts AppEngine 1
(with RDI 1), AppEngine 2 (BACKUP), and AppEngine 3 (with DINetwork 1);
the backup AOS hosts AppEngine 2 (with RDI 2), AppEngine 1 (BACKUP),
and AppEngine 4 (with DINetwork 2). The nodes are linked by the RMC;
each DINetwork reaches the PLC network through a DAServer, and InTouch
alarm providers, the Historian (Data and Alarms), and the Alarm Logger sit
on the supervisory network.]
The communication protocol between the I/O Server node and the AOS is
MX. This protocol is optimized for data transfer over the network, with
special emphasis on slow and intermittent networks.
The following figure includes Alarm configuration. For details, see "Alarms in
a Redundant Configuration" on page 78.
[Figure: Redundant configuration with separate I/O Server nodes. The
primary AOS hosts AppEngine 1 (AppObjects), AppEngine 2 (with an RDI),
and AppEngine 4 (BACKUP); the backup AOS hosts AppEngine 3
(AppObjects), AppEngine 4 (with an RDI), and AppEngine 1 (BACKUP), the
two linked by the RMC. I/O Server 1 runs DI 1 with DAServer 1 and
I/O Server 2 runs DI 2 with DAServer 2 against the PLC network. InTouch
alarm providers, the Historian (Data and Alarms), and the Alarm Logger sit
on the supervisory network.]
Run-Time Considerations
The following information summarizes run-time behaviors between Redundant
Engines.
Checkpointing
AppEngines store specific attributes in memory, then write them to disk, in
both single- and redundant-Engine configurations. The frequency of the write
operation is determined by the Checkpoint period setting in the AppEngine
editor.
Note Checkpoint period configuration details are included in "Tuning
Redundant Engine Attributes" on page 87.
The checkpointed attribute types include:
Scan Rate
Checkpoint Directory location (default is blank but can be modified)
StartUp Attributes
StartUp Type (Automatic, Semi-Automatic, Manual)
StartUp Reason
Attributes with Category User Writable or Object Writable that are not
extended as Input/Output and Input extensions.
Deployment Considerations
Automation Objects are always deployed to the Active Engine:
If the Backup Engine is the Active Engine, the objects are deployed to the
Backup Engine.
When an Active Engine becomes the Standby, the Engine sets all objects off
scan, shuts down all features that make up the object and stops executing all
deployed objects. All objects are unregistered on the previously active engine.
When a Standby Engine becomes Active, the Engine calls Startup on all
features that make up the objects. The startup call includes a parameter
indicating that the objects are starting as part of a failover. The newly active
engine then calls SetScanState on all features and begins executing all objects
that are on scan.
Best Practice
To deploy objects in a Load Shared configuration
3. Finally, cascade deploy the backup Engines. Always deploy the primary
Engine first.
Scripting Considerations
Any state, such as local variables or calculated attributes, that is not kept
in checkpointed attributes is not passed to the objects started on the newly-active Engine.
These attributes can also be used to execute scripts that re-initialize variables
and COM objects.
After a failover occurs, scripts in the new Active engine are executed based on
their trigger type, i.e., Startup, OnScan, Periodic, and Data Change.
Use Startup and OnScan scripts to initialize conditions used later in the script.
In many cases the initialization is required only when the object is deployed
or redeployed, or when the AppEngine or Platform is restarted. In the case of
a failover, the requirement may be to continue operating using values from
the checkpoint rather than re-initializing the conditions.
The Redundancy.FailoverOccurred attribute is set to "True" for the first scan
right after the failover occurs; after the first scan the attribute is automatically
reset to "False." Using this attribute as a condition in scripts that initialize
variables prevents those scripts from running when the system recovers from a
failover.
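As a sketch, an OnScan initialization script can be guarded as follows (BatchCount is a hypothetical UDA used only for illustration):

```
{ OnScan script: initialize only on a true startup, not on failover }
IF MyEngine.Redundancy.FailoverOccurred == false THEN
    me.BatchCount = 0;  { re-initialize working values }
ENDIF;
```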
Similarly, Data Change scripts execute when the object is deployed, when the
engine is restarted, and when the Standby engine becomes active after a
failover. Using Redundancy.FailoverOccurred in an "If-then-else" statement
will prevent the script from executing after the failover.
Any script that is set with an Execution type of Execute and a trigger type of
Periodic will have the following behaviors after an AppEngine failover. The
situation is described using a period of 60 minutes as an example time period:
The script executes the first time when the engine is deployed (call this time T0).
The Periodic script(s) will restart with the period reset to T0.
The period for the execution of the Periodic script(s) will be shorter than
planned for, or possibly longer if an engine failover occurs shortly before the
time period elapses.
Some applications may have critical data generated by a Periodic script. Do
not use Periodic scripts when the script manages critical information and the
time period cannot be allowed to run shorter or longer than planned. Instead,
set up the script to run using a Condition, with a UDA as the trigger.
Then, for a time base, use the system time and calculate the time period from
it to set the condition. The UDA's current value is maintained across the
failover, and the system time is real time rather than an expiring timer.
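A sketch of this approach, assuming a Time-type UDA named NextRunTime and an ElapsedTime UDA named RunPeriod (both hypothetical; the exact time-arithmetic syntax may vary with the scripting environment):

```
{ Condition script, trigger type OnTrue }
{ Expression: Now() >= me.NextRunTime }

{ ...perform the work formerly done by the Periodic script... }

{ Schedule the next run from the current system time. Because }
{ NextRunTime is a checkpointed UDA, its value survives a failover, }
{ so the period is neither shortened nor lengthened by the switch. }
me.NextRunTime = Now() + me.RunPeriod;
```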
Shutdown and OffScan scripts will execute after an orderly completed failover,
i.e. using ForceFailoverCmd, or in the event of Primary Network failure.
Asynchronous Scripts
QuickScripts must be evaluated to anticipate likely delays (SQL Query
completion, calling COM or .NET objects, etc.) due to network transport or
intensive database processing. When a delay in script completion is likely, set
the QuickScript to run asynchronously.
If not set to run asynchronously, it is possible that the non-asynchronous
QuickScript could cause the Engine to miss the following scan while waiting
for the script to finish executing.
Note The Runs asynchronously option must be manually selected within the
Scripts tab page of the Object Editor; it is not set by default.
Once set to run asynchronously, the QuickScript will not be cut off when the
scan is completed. When a problem occurs, the script could "hang" if the
process never completes, as in the case of a SQL query that never returns a
rowset or even an error message. When the QuickScript's
ExecuteTimeout.Limit value is reached, the ExecuteError.Alarmed and
ExecuteError.Condition attributes are set.
In this context it is useful to monitor these attributes and log a message when
the maximum timeout threshold is exceeded.
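For example, a condition script can watch these attributes and log a message when the timeout is exceeded. LongQuery is a hypothetical name for the asynchronous script being monitored:

```
{ Condition script, trigger type OnTrue }
{ Expression: me.LongQuery.ExecuteError.Condition == true }

LogMessage("Asynchronous script exceeded its ExecuteTimeout.Limit on "
    + me.Tagname);
```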
History
Historical data is sent to the Historian only from the Active Engine. The Active
Engine processes historical data and sends it to the Historian when the
Historian is available. If the Historian becomes unavailable, the Active Engine
stores the data locally (in Store Forward History Blocks) and forwards it when
the Historian becomes available again.
In the meantime, local Store Forward data is transferred to the Standby Engine
via the RMC. When an engine enters Store Forward mode, it synchronizes its
data with its partner engine. Store Forward data is transferred (and
synchronized) every 30 seconds, so no more than 30 seconds can be lost in the
event of an Engine Failure.
Note Attributes and tags which were not configured in the Historian before
failover are not stored.
[Figure: Alarms in a redundant configuration. AppEngine 1 with a DINetwork
object runs on the primary AOS; AppEngine1 (BACKUP) runs on the backup
AOS, the two linked by the RMC. A DAServer connects to the PLC network
through a switch or router, and InTouch alarm providers on the visualization
nodes, the Historian (Data and Alarms), and the Alarm Logger sit on the
supervisory network.]
[Figure: The same redundant alarm topology, with InTouch alarm providers
on both visualization nodes, the DINetwork object on AppEngine 1, the RMC
link, the DAServer, the Historian (Data and Alarms), and the Alarm Logger.]
[Figure: Detail of the backup AOS, hosting AppEngine1 and AppEngine 1
(BACKUP), with InTouch alarm providers, the DINetwork object, and the
DAServer connected to the PLC network through a switch or router, and the
Historian (Data and Alarms) and Alarm Logger on the supervisory network.]
Forcing Failover
It is possible to force a failover in a pair of redundant AppEngines by simply
setting the attribute ForceFailoverCmd in the Active engine to "true". This
can be accomplished using the ObjectViewer, InTouch Software, an object's
script or any other application that has access to this attribute.
Use this attribute in a script (with any set of conditions) to trigger a failover.
For example, you can monitor the status of other applications on the same
machine, hardware devices, etc. and based on that status, trigger a failover to
the Standby engine.
When a failover occurs, the Standby engine becomes Active and stays in that
status unless the system is forced to fail back when the new Standby engine
becomes available. In this case, the ForceFailoverCmd can be used to take the
Active engine back to the original node.
For details on the attributes associated with a Redundant AppEngine, refer to
the AppEngine Help files.
Considerations
The Failover scenarios described in this section refer to topologies where there
is at least one more platform besides the two hosting the Redundant pair, i.e.
Client/Server configuration.
If the topology consists of just two platforms hosting the redundant pair
(Peer-to-Peer configuration), a failover does not occur in the event of a
communication failure in the supervisory network. Instead, the
Redundancy.PartnerStatus attribute is set to Missed heartbeats while both
partners synchronize data through the RMC.
In this case, the user can execute the failover either manually or via scripting, if
required.
[Figure: Peer-to-peer redundant configuration. The primary and backup AOS
nodes each run an InTouch alarm provider for visualization and host
AppEngine 1 and AppEngine1 (BACKUP), linked by the RMC. A DAServer
and the DINetwork object connect to the PLC network through a switch or
router, with the Historian (Data and Alarms) and the Alarm Logger on the
supervisory network.]
Values are shown as Primary / Backup for the Initial Condition, the
Transition, and the Final Condition.

SCENARIO 1a
  Primary Network      Connected / Connected     Disconnected / Connected    Disconnected / Connected
  Red. Partner Status  Standby Ready             --                          Missed Heartbeats
  Red. Status          Active                    --                          Active

SCENARIO 1b
  Primary Network      Disconnected / Connected  Connected / Connected       Connected / Connected
  Red. Partner Status  Missed Heartbeats         --                          Standby Ready
  Red. Status          Active                    --                          Active

SCENARIO 2
  Primary Network      Connected / Connected     Connected / Disconnected    Connected / Disconnected
  Red. Partner Status  Standby Ready             --                          Missed Heartbeats
  Red. Status          Active                    --                          Active

SCENARIO 2b
  Primary Network      Connected / Disconnected  Connected / Connected       Connected / Connected
  Red. Partner Status  Missed Heartbeats         --                          Standby Ready
  Red. Status          Active                    --                          Active

SCENARIO 3
  Primary Network      Connected / Connected     Disconnected / Disconnected  Disconnected / Disconnected
  Red. Partner Status  Standby Ready             --                          Missed Heartbeats
  Red. Status          Active                    --                          Active
Failure in RMC

SCENARIO 1
  RMC                  Connected / Connected     Connected / Disconnected    Connected / Disconnected
  Red. Partner Status  Standby Ready             --                          Unknown
  Red. Status          Active                    --                          Active - Standby not available

SCENARIO 2
  RMC                  Connected / Connected     Disconnected / Connected    Disconnected / Connected
  Red. Partner Status  Standby Ready             --                          Unknown
  Red. Status          Active                    --                          Active - Standby not available
PC Failures
If a power failure occurs on the Active Engine node, the Standby node takes
control of the system. The following matrix shows the corresponding status
and AppEngine attribute values under different conditions:
SCENARIO 1
  PC Failure           PC Available / PC Available       PC Available / PC Not Available  PC Available / PC Not Available
  Red. Partner Status  Standby Ready                     --                               Unknown
  Red. Status          Active                            --                               Active - Standby not available

SCENARIO 1b
  PC Failure           PC Not Available / PC Available   PC Available / PC Available      PC Available / PC Available
  Red. Partner Status  Unknown                           --                               Standby Ready
  Red. Status          Active - Standby not available    --                               Active

SCENARIO 2
  PC Failure           PC Available / PC Available       PC Not Available / PC Available  PC Not Available / PC Available
  Red. Partner Status  Standby Ready                     --                               Unknown
  Red. Status          Active                            --                               Active - Standby not available

SCENARIO 2b
  PC Failure           PC Not Available / PC Available   PC Available / PC Available      PC Available / PC Available
  Red. Partner Status  Unknown                           --                               Standby Ready
  Red. Status          Active - Standby not available    --                               Active
Undeploying AppEngines
Undeploying AppEngines in a redundant pair may trigger failover. The
following description refers to non-cascade Undeploy operation of the
AppEngine.
Executing a cascade Undeploy operation of the Primary AppEngine undeploys
all objects from both engines.
The table below describes the expected behavior under the non-cascade
condition:
Values are shown as Primary / Backup for the Initial Condition, the
Transition, and the Final Condition.

SCENARIO 1 - Undeploy Backup Engine
  Deployment           Deployed / Deployed               Deployed / Undeployed            Deployed / Undeployed
  Red. Partner Status  Standby Ready                     --                               Unknown
  Red. Status          Active                            --                               Active - Standby not available

SCENARIO 2 - Undeploy Primary Engine
  Deployment           Deployed / Deployed               Undeployed / Deployed            Undeployed / Deployed
  Red. Partner Status  Standby Ready                     --                               Unknown
  Red. Status          Active                            --                               Active - Standby not available

SCENARIO 3 - Deploy Primary Engine
  Deployment           Undeployed / Deployed             Deployed / Deployed              Deployed / Deployed
  Red. Partner Status  Unknown                           --                               Standby Ready
  Red. Status          Active - Standby not available    --                               Active
AppEngine Editor

  Setting                                    Small System     Medium-Large   Very Large
                                             (Default)        System         System
  --                                         1000 ms          --             1000 ms
  Maximum consecutive heartbeats
  missed from Active engine                  5                10 - 30        ~60
  Maximum consecutive heartbeats
  missed from Standby engine                 5                10 - 30        ~60
  --                                         120,000 ms       --             150,000 ms
  Maximum time to maintain good
  quality after failure                      15,000 ms        --             --
  Checkpoint period                          0 (Every scan)   --             60,000 ms (40K I/O)
Platform Editor

  Setting                                    Small System     Medium-Large   Very Large
                                             (Default)        System         System
  Consec. number Missed NMX
  Heartbeats                                 N/A              N/A            --
AppEngine Editor
Failover Services talk between themselves using the RMC and determine the
communication status between the two nodes. The status is provided by
monitoring Heartbeat attributes.
Message Channel Heartbeat settings control the heartbeat intervals; i.e., how
often the redundant platforms send each heartbeat through the RMC.
Modifying the Active/Standby Heartbeat Period values makes the Engines
more sensitive to network failure.
Missed Consecutive heartbeats determines the number of missed heartbeats
that will trigger the redundant engine to act. Setting the values smaller makes
the engines more sensitive to network failure. Setting the values larger makes
the Engines more tolerant of high CPU loads that can cause missed heartbeats.
The values can all be set using the IDE or the Object Viewer.
Engine Monitoring
The following information describes how the failover service monitors the
redundant engines.
In general, an Engine has the following states:
Start Up: Measured as the time required for all engine objects to be
created, initialized and started.
Shut Down: Measured as the time required for all Engine objects to be
stopped.
The following parameters determine how much time the Engine can be
unresponsive during each of the above states.
Increase RAM: Increase the RAM to 2 GB. Tests have shown that
increasing RAM can help provide proper shutdown.
1. Change the format to Decimal and ensure the setting is 300,000 ms (5
minutes in this example) or larger.
This should be sufficient for a large system. Setting the values too high could
delay discovery that the Engine has hung or crashed during startup or
shutdown, since the Bootstrap considers the Engine healthy until the timeout
expires.
Note If the WatchdogStartup- and ShutdownTimeout values are modified,
they must be reset after the Platform is undeployed and redeployed.
Execution
The EngineFailureTimeout attribute determines how long that Engine has to
inform the Bootstrap that it is executing. If the Engine does not signal the
attribute for 3 consecutive timeouts, the Engine is determined to be "in
trouble," and the redundant partner takes action.
Setting this attribute value too low causes the redundant partner to overreact
when CPU usage is high. Setting the value too high can delay notification that
the Engine is in trouble since the Bootstrap considers the Engine "healthy"
until the timeout expires.
  Attribute Name          Default Value    Recommended Value
  EngineFailureTimeout    --               20,000 ms
C H A P T E R  4

Integrating FactorySuite Applications
Contents
IndustrialSQL Server Historian
ActiveFactory Software
InTouch HMI Software
SCADAlarm Event Notification Software
Alarm DB Manager
SuiteVoyager Software
QI Analyst Software
DT Analyst Software
InTrack Software
InBatch Software
Third Party Application Integration
ActiveFactory Software
ActiveFactory Software is Wonderware's IndustrialSQL Server Historian
Client Application Suite.
InTouch SmartSymbols
The InTouch SmartSymbol Manager allows you to create, edit and manage
libraries of reusable graphical templates (InTouch SmartSymbols).
SmartSymbols can be used to connect to ArchestrA objects and their attributes,
and also to local InTouch Software tags or to any InTouch Software remote
references. SmartSymbol templates can be associated to Application Object
templates and instances, providing a very powerful combination.
SmartSymbol Notes
The Integrated Development Environment (IDE) must be installed on the
InTouch Development node to use certain SmartSymbol features.
Note Other SmartSymbol features are also enabled when the IDE is present
on the InTouch Development node.
Creating new instances of an object template using the SmartSymbols property
dialog requires the user to have the correct permissions assigned in the Galaxy
security configuration.
Note For more details on SmartSymbols, the SmartSymbol Manager and the
IOSetRemoteReferences() script function, refer to the InTouch Reference
Guide.
Changes to any SmartSymbol cause every window in the application to be
recompiled and all windows to be redeployed. For more information, see
"NAD" on page 266.
Network Utilization
Galaxy references (e.g., references of the form Galaxy:MyObject.MyAttribute)
that resolve to remote nodes affect bandwidth utilization, which increases as
the data-change rate increases.
If your requirement is to minimize network-bandwidth utilization between an
InTouch Software node and remote Industrial Application Server nodes, you
need to account for all active subscriptions between your InTouch Software
node and the remote Industrial Application Server nodes that provide your
data.
When any of the following items are configured with a galaxy reference, that
item activates a subscription between InTouch View and Industrial Application
Server when the following events occur:
B. Condition scripts.
C. Data-change scripts.
D. When any window is open; that is, a currently-open window that has:
   A. Animation links.
For example, assume your application has one window which contains a
Real-Time trend with "Only update when in memory" unchecked. Assume this
trend is configured to gather data from the Galaxy reference
"Galaxy:MyObj.MyAttr". Even though that window is not open, InTouch
Software has an active subscription and is receiving updates for
Galaxy:MyObj.MyAttr.
SPCPro
Using the SPCPro OLE Automation Library to directly call SPC functions
from Industrial Application Server is not recommended. This method requires
a very good understanding of the SPCPro database schema, as well as OLE
automation. This scenario has not been tested and will not be supported in
future releases.
Use QI Analyst Software or custom-made objects for SPC analysis within the
Industrial Application Server environment.
Tablet PCs
Industrial Tablet PCs are furnished with Windows XP Tablet PC Edition and
have InTouch Software pre-installed at the factory. A number of wireless
options are available.
Industrial Tablet PC users can leverage a number of features in both the Tablet
PC Edition of Windows XP and InTouch Software, such as using digital ink
(for example, to write values into data links or to annotate graphical displays),
enabling more efficient communication and troubleshooting on the factory
floor.
Industrial Tablet PCs support a number of options to secure wireless
communications, including the use of VPN over wireless. For more
information on securing your wireless networks, refer to the white paper
"Tablet PCs in Industrial Applications", accessible through the Wonderware
FactorySuite support website.
Panel PCs
Touch Panel computers are shipped with either Microsoft Windows XP
Professional or Windows XP Embedded and, in addition to InTouch Software,
have selected Wonderware DAServers pre-installed. A pre-installed Industrial
Application Server Bootstrap in Windows XP Professional Touch Panel
Computers enables fast integration with Industrial Application Server.
Both Tablet PCs and Touch Panel computers include the Microsoft Remote
Desktop Connection (Terminal Services Client). Using Industrial Tablet PCs
or Touch Panel PCs in combination with Terminal Services allows you to
install the Industrial Application Server platform and InTouch Software once
on a central server and then open sessions from multiple terminals.
For information about using InTouch Software with Terminal Services, refer to
the Terminal Services for InTouch Deployment Guide on the Wonderware
FactorySuite support website http://www.wonderware.com/support.
Alarm DB Manager
Alarms generated by ApplicationObjects are stored within the alarm database.
The alarm database can be hosted by the Galaxy Repository node (if available
all the time), the IndustrialSQL Server Historian Node, or any other Microsoft
SQL Server node that has an Industrial Application Server Platform deployed
to it and is configured as an Alarm Provider.
Alarm Groups (Areas in Industrial Application Server) should be configured as
"Galaxy!Area", where "Area" is the globally unique name of any configured
Area ApplicationObject in the Galaxy.
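For example (the Area name below is hypothetical), an alarm group for an Area ApplicationObject named Line1Area would be entered as:

```
Galaxy!Line1Area
```

Here "Galaxy" is taken as the literal prefix shown above; only the Area portion varies from one alarm group to the next.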
SuiteVoyager Software
SuiteVoyager is designed as the web portal to any FactorySuite A2 System
data. InTouch Software windows that access Industrial Application Server data
can be converted to web pages for display in SuiteVoyager. History and alarms
generated by an Industrial Application Server application can also be made available via
SuiteVoyager.
The SuiteVoyager portal node requires a local Platform to be part of a Galaxy.
QI Analyst Software
QI Analyst Software provides statistical analysis of FactorySuite A2 System
data. The data includes real-time InTouch Software data, real-time data from
Application Server through FactorySuite Gateway, InSQL (historical) data, or
any other ODBC or OLE DB-supported data source.
Quality information is collected, analyzed, and displayed through various QI
Analyst Software components. The collecting and analysis engines of QI
Analyst Software can be accessed and leveraged by Industrial Application
Server, thus allowing data to be sent to and read from QI Analyst Software.
Process analysis charts are provided through stand-alone QI workstations or
through ActiveX controls deployed within InTouch Software or another
ActiveX container.
Scripting
QI Analyst Software calls can be made from within an Application Object. The
QI Analyst Software OLE Automation library is called with the
CreateObject() function from ApplicationObject scripting. New data rows
and values can be stored in the QI Analyst Software database through the
object interface.
The main difficulty is in the persistence of objects from one script to the next,
or from object to object. Object persistence must be managed with the .NET
global data cache (that is, System.AppDomain.CurrentDomain).
For more information, see "Scoping Variables" in Chapter 5, "Working with
Templates." Setting objects such as a QI database or data table to the global
memory allows other scripts to continue working with the object in the same
state as the previous script stored the object.
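A minimal sketch of this pattern follows. The ProgID "QI.DataComponent" is borrowed from the InTouch example later in this chapter; the variable names and the assumption that the created object can be held as a generic System.Object are illustrative only — confirm the actual class and its members against the QI Analyst Software documentation.

```
' Startup script: create the QI Analyst automation object (ProgID assumed)
DIM QIData AS System.Object;
QIData = CreateObject("QI.DataComponent");
' Store it in the .NET global data cache so later scripts can reuse it
System.AppDomain.CurrentDomain.SetData("MyQIData", QIData);
```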
DT Analyst Software
DT Analyst Software collects, analyzes and displays downtime and efficiency
data using the following components:
DT Analyst Software requires its own list of tags. The tags can be configured to
read from I/O Servers, DAServers/OPC Servers, InTouch Software nodes, or
the FactorySuite Gateway. Virtual tags (those with no I/O access name) can
also be configured. These tags cannot read Application Objects directly from
Industrial Application Server.
FactorySuite Gateway
DT Analyst Software integrates with the Industrial Application Server
environment through FactorySuite Gateway. The DT Analyst Logic Manager
is configured to read from FactorySuite Gateway, which acts as a SuiteLink
Server or OPC Server to provide Galaxy data to any SuiteLink or OPC client.
Refer to the FS Gateway documentation for further details.
The DT Analyst Logic Manager does not require a Platform because it relies
on SuiteLink and OPC to receive value changes from I/O Servers.
Important! The DT Analyst Logic Manager and Event Monitor require a
connection to the Downtime Database (DTDB) in order to be functional.
Best Practice
Install the DT Analyst Configuration Manager on an Engineering Station node,
along with the Integrated Development Environment (IDE) and InTouch
WindowMaker.
The configuration, downtime, and efficiency data is stored in a SQL Server
database called DTDB (default). DTDB is independent of the Galaxy
Repository database and the objects that it contains. Aspects of the DT Analyst
Software process model (such as Systems, Areas, or Groups) may be similar to
some objects in Industrial Application Server, but they are not directly related.
However, it is possible to query the DTDB database from an object designed
for that purpose.
InTrack Software
InTrack Software is the Work-In-Process (WIP) and Material tracking
component of FactorySuite. It delivers configuration tools, OLE COM objects,
and database structures that manage the movement, creation and consumption
of materials as they move through the manufacturing process.
InTrack Software requires either Microsoft SQL Server or Oracle database to
function. For specific database requirements and important planning
guidelines, refer to the InTrack Deployment Guide.
All IAS attribute tracking capabilities - alarming, historization, and I/O source
definition - can be applied to the execution of objects pertaining to InTrack
Software interaction. The two components, Industrial Application Server and
InTrack Software, must be closely coordinated through naming conventions
and structure to accomplish this coexistence.
The following section provides guidelines for successful implementation.
The transaction server can be run on the same server as the database or
separately. For larger customers or a high traffic network, you might have three
separate servers: the Application server, transaction server, and DB server. A
Web server on each side handles taking attributes from the object, sending
them across the network, and delivering them to the transaction server.
If the InTrack scripting is complex and the User Interface simple, Visual
Basic is faster and offers more debug options.
InBatch Software
InBatch Software is flexible batch management software designed to power
production of any batch process.
InBatch Software includes several SQL databases. The Material database and
the Recipe database are the most relevant in the context of IAS. For example:
It may be necessary to modify the amount of one ingredient added based on the
concentration of another ingredient.
InBatch data is accessed using an OLE Automation server. The OLE
Automation Server is called, and passed a list of "fill-in-the-blank" values
indicating what data you want. The object then exposes the data in a group of
fields.
Having only Discretes, Analogs and Strings, InTouch Software lacked the
proper data types to support access to these databases through its QuickScripts.
Users were forced to write their own VisualBasic or C code to access these
files.
With the release of Industrial Application Server, the UserDefinedAttributes
(UDAs) of an IAS object support the datatypes necessary to access the Batch
data. This means that an object can be built with UDAs that correspond to
every attribute of the OLE automation object.
Include the proper values into each UDA to define what transaction to make
with the OLE server.
The script is used to "Fill-in-the-blanks" through the UDAs and then trigger a
different UDA linked to a method of the OLE Server. The requested data goes
into a different set of UDAs to be read or fed into other objects.
InBatch Objects make it possible to interface with data retrieval objects that
previously required knowledge of higher-level programming languages such as
VB or a C variant.
Industrial Application Server can be used to extend InBatch Software's
functionality by performing the following tasks:
InBatchServer
The first topology, BatchServerSuiteLinkClient/IBServ, uses the
DDE/SuiteLinkClient object. InBatch is the server; Industrial Application
Server is the client. This topology is recommended for batch engine
information, batch information, and equipment allocation.
The BatchServerSuiteLinkClient/IBServer topology provides the following
advantages:
New unit attributes can be created for InBatch data (also available through
COM interfaces).
[Figure: BatchServerSuiteLinkClient/IBServ topology — the Batch Server
(IBServ) provides data over SuiteLink to the Industrial Application Server
client]
InBatchClient
The second topology, FS Gateway/IBCli, a unit or phase topology, uses the
SuiteLink protocol. Industrial Application Server is the server; InBatch is the
client. This topology is recommended for Industrial Application Server
scripting and field I/O data (such as phase information and equipment status).
The FS Gateway/IBCli topology:
Can be combined with PLC logic for added intelligence and parameter
adjustment.
[Figure: FS Gateway/IBCli topology — Industrial Application Server provides
data over SuiteLink to the Batch Server (IBClient)]
I/O Servers
I/O servers provide connectivity for devices using DDE, FastDDE, and
SuiteLink protocols. I/O servers can connect every FactorySuite 2000 and
FactorySuite A2 System component, as well as PLC, RTU, DCS, and ESD
systems.
Build I/O servers using Wonderware's Rapid Protocol Modeler (RPM) Kit.
The RPM Kit can handle both serial and TCP/IP communications with either
ASCII or binary protocols to connect FactorySuite client applications to
devices with non-standard protocols.
DAServers
DAServers (Data Access Servers) provide simultaneous connectivity between
plant floor devices and DDE, SuiteLink and/or OPC-based client applications
running under Microsoft Windows. The DAS Toolkit allows you to create a
DAServer specific to your needs. DAServer architecture is modular, allowing
for plug-ins for DDE/SL, OPC, and other protocols.
InControl Software
InControl Software enables connectivity to third-party OPC servers and clients
by creating an OPC Server and/or an OPC client. Use the OPC interface set to
collect and transfer data between software packages from any vendor.
InControl Software's OPC Server consists of these three primary types of
objects: server, group(s), and item(s). Each OPC item object represents a single
data element in the data source and has a name, value, time stamp, and quality.
Items also have attributes and properties. OPC groups manage the attributes of
each item contained in them. The OPC server maintains the properties of all
OPC items. The InControl OPC server uses SuiteLink for communications.
Note OPC uses the client-server model and the Microsoft COM/DCOM
protocols for vendor-independent data transfer.
SQL Server
Use SQL Server Linked Server functionality as a gateway to systems using
Oracle, Sybase, Ingres and other databases. Once linked to SQL Server,
external databases operate as if they were part of the native SQL Server
database. Data from multiple databases can then be combined in reports and
queries.
See the FactorySuite eSupport website, especially the Compatibility Matrix,
for more information on specific component combinations.
C H A P T E R  5

Working with Templates
Four Object Editor tabs are used for configuring the template: The Object
Information tab contains basic configuration information, object execution
order, and a Help file link. The Script, UDA, and Extensions tabs are
discussed in more detail in the following sections.
After creating a derived template set, create sets of complex template
instances. The derived templates become the basis for all other instances. This
derivation practice is called containment.
Use UDAs when the lower-level object is very basic. Use UDAs for
memory or calculated values.
It is also valid to always use a contained object for consistency, even when
the property is very simple.
Best Practice
Ensure the container incorporates functionality; otherwise place it as an
attribute in another object. Do not use excessive empty containers simply
as placeholders to host objects. Empty containers impact engine scan
resources and time.
$AnalogDevice Template
The $AnalogDevice Template object provides supervisory control capabilities
for instruments or equipment that have a key continuous variable. It contains
numerous features to model more complex analog inputs and control loops.
Any analog value that requires I/O scaling or alarming must be derived from
this base template.
The General tab of the Object Editor includes a field for setting the type of
analog device. The Analog option type enables configuring a Process Variable
(PV) input source and (optionally) a different output destination.
The PV can be scaled, multiple alarm points defined, and history collected for
the PV. The Analog regulator option type allows for a PV input (no separate
output), a setpoint, an optional different setpoint feedback address, setpoint
high and low limits, and optional control tracking. It also supports scaling,
alarms, and history.
When the analog device is configured as an Analog regulator, many aspects of
a PID loop can be defined. It is necessary to add User-Defined Attributes to
access specific loop control parameters, such as controller gain, integral time
constant, etc.
A fully-functional loop simulation object (based on the $AnalogDevice
template) and other $AnalogDevice Objects are available for download at
http://www.archestra.biz.
$DiscreteDevice Template
The $DiscreteDevice Template object provides supervisory control capabilities
for instruments or equipment that have two or more discrete states.
Use the $DiscreteDevice template for creating objects that monitor multiple
discrete inputs and map them to a state table. A simple example is two discrete
values representing an Open limit switch and a Close limit switch. The four
possible combinations of these two inputs could be represented by the states
Open, Closed, In Transition, and Fault.
The Process Variable (PV) attribute of the discrete device is a string
representing the state. This can also be read as an enumerated integer value.
The object supports up to five distinct states based on one to four inputs. Up to
six discrete outputs are available from the template.
The Passive state is provided to represent the state when the field device is not
energized. For example, a valve that fails to the closed state when it loses
power would have a passive state of Closed. A valve that requires power to
command it to open and to close may only use the two active states and not
have a passive state.
Alarms can be generated for either of the two active states or the fault state.
The object also includes the option to record statistics, such as the duration of
the various states, and to alarm on the duration.
$FieldReference Template
The $FieldReference Template object provides simple I/O capabilities for an
external data source, including field instruments, for a variety of datatypes.
The $FieldReference template is the parent for $Boolean, $Double, $Float,
$Integer, and $String templates. These templates provide a mechanism to read
and/or write to single I/O points in the field and to collect history on the
process variable.
The $FieldReference template does not provide scaling or alarm limits. Use the
$AnalogDevice template for scaling I/O and setting alarm limits. The field
reference templates are basically the same as adding a user-defined attribute
and using the extensions to add input, output, or history.
$Switch Template
The $Switch Template object provides simple I/O capabilities for a single,
two-state discrete signal.
The $Switch template provides slightly more functionality than the $Boolean
template, but less than the $DiscreteDevice template. The $Switch template
provides two text states for a single I/O point (with an optional different output
address). The value can be stored in history, and either state (On or Off) can be
alarmed, whereas an alarm extension alarms only on the TRUE state.
$UserDefined Template
The $UserDefined object provides an empty starting point for creating
custom-built objects that include UDAs, Scripts, Extensions, or Contained objects.
The UserDefined ApplicationObject enables the user to define the following
Analog and Digital inputs:
Analog Inputs:
Discrete Inputs:
Example 1
This example describes the model of a process that contains four discrete valve
types. The valve types are determined from device requirements. To optimize
engineering development, a common derived template called $DValve is
developed (from the $DiscreteDevice base template) that contains all the
shared discrete valve requirements.
A template for each discrete valve type is derived from that common template.
The following figure illustrates how these templates are developed:
[Figure: Discrete valve template hierarchy — $DValve (Discrete Valve Base
Template) is the parent of $SDValve (Single Actuator Valve without Feedback),
$SDValveSF (Single Actuator Valve with Feedback), $SDValveDF (Single
Actuator Valve with Dual Feedback), and $DDValveDF (Dual Actuator Valve
with Dual Feedback)]
Use this practice to model valves from different manufacturers and their models.
A base valve template contains the fields and settings common to all valves
within the facility. A new template is then derived from the base valve template
for each manufacturer and contains vendor-specific settings.
Finally, a new template set is derived from the vendor-specific template for
each valve model used in the facility. Once a model-specific template is
available, instances will be derived from the template to represent the actual
valves.
Example 2
This example describes a common, complex relationship called Reactor.
Reactor is based upon an interaction of five field devices. Multiple instances of
this relationship are used within the plant model.
The relationship can easily be developed using containment.
Create a derived template called $Reactor from the $UserDefined base
template.
Then, create template instances representing each of the five field devices. The
complex relationship can now be developed (using scripting) in the container
object ($Reactor) using the hierarchical names given to each field device.
When instances of $Reactor are created as field devices, each has two names:
the containment name (hierarchical name); and the physical name.
The following figure illustrates this practice:
[Figure: Derived Template with Containment — $Reactor, derived from the
$UserDefined template, contains Inlet and Outlet ($DDValveDF, Dual Actuator
Valve with Dual Feedback), Temperature and Level ($Analog, Standard
Analog), and InletPump ($Pump, Standard Pump)]
Best Practice
When defining UDAs with input or output extensions, never lock their
source or destination within a template, since it will be unique to each
instance and defined later.
Only a Boolean UDA can take on alarm extensions. Make the proper
selections of priority, category, and whether the template description or a
unique message for this alarm is to be used. Alarms on analog values
require use of an analog device template and object containment.
For a Boolean or Analog UDA alarm that comes from the field device
control logic and requires a corresponding Acknowledge, the "Acked"
attribute should take on an output extension with the destination being the
"Ack" point in the control logic.
Any UDA and its extension created within a template is inherited by all
derived objects. However, attributes and extensions are only propagated
when the instance is created or when that particular attribute is locked.
Most UDAs are checkpointed. This means that all data necessary to
support automatic restart of a running Application Object is saved
periodically. The restarted object has the same configuration, state, and
associated data as the last checkpointed value.
Unlike other UDAs, Calculated UDAs are not checkpointed. However, a
"Retentive Enabled" Calculated UDA is available and can be configured
to be checkpointed with all the other attributes in that object.
Best Practice
Derive from the Base Template First: Derive a template from the base
templates before deriving any instances. An object instance that is derived
from one of the base templates ($AnalogDevice, $DiscreteDevice,
$FieldReference, $Switch, $UserDefined, $WinPlatform, $AppEngine,
and $Area) cannot be modified at the template level in the future to add
additional scripting, UDAs, and so on.
Therefore, always derive a new template from the base templates, even if
it is identical to the base template.
When a template is derived from another template, the derived template
inherits all of the characteristics of the parent template. If the parent
template is modified, only the attributes and extensions that are locked
will propagate to child templates.
Best Practice
Create a Master Galaxy for development purposes. This Galaxy contains
the master template library with the latest revisions for all of them. When
new galaxies must use any of these templates, ensure they include the
latest revisions.
Export .aaPkg packages (cab files) that contain the necessary templates
out of the Master Galaxy, then import them into the new production
Galaxies.
Create local templates that derive from the master at each production
galaxy. Any changes or specialization required should be implemented in
the local template. Make use of toolsets to separate the master templates
from the local templates; it is even recommended to hide the master
template toolset in the production galaxies and treat them as if they were
Read Only templates.
Galaxy Dump: Best when working within the same Galaxy to create new
instances of a template.
Galaxy Dump
A Galaxy Dump exports template instances to a .csv file for editing or for
adding an instance of a template. Modifications can be loaded back to the
Galaxy.
When a dump is performed, any script, attribute, or attribute extension that is
not locked at the template level will be dumped, each in its own column. A
reference to the parent template is also contained in the file, in order to bring in
all of the locked scripts, attributes and extensions. Attributes that are calculated
or writable at run time are not dumped.
The dump and load functions are useful for quickly creating multiple instances
of a template, instead of using the IDE.
To prepare a file for a Galaxy Load operation

1. Create one instance of the required template and dump this into a .csv
file.

2. Select the first column in Excel and "Convert Columns To Text" using
delimited and comma as parameters. This places each column from the
.csv file into a different Excel column.
WARNING! NEVER click the Save button in the toolbar or the Save option
in the File menu. Excel will save the spreadsheet as a .xls file and destroy the
formatting conversions. See the following step for saving to a .csv format.
4. After the modifications are done to the file (using Excel), select Save As
from the File menu and make sure that the File Type is .csv.
The file now has the valid format to successfully apply a Galaxy Load
operation from within the IDE.
In the following example, five additional instances were created from the
$Boolean template.
For three of the derived instances, the Area is not known, and for three other
instances, the Area is HomeArea:
;Created on: 1/10/2003 2:01:12 PM from Galaxy:Test
:TEMPLATE=$Boolean
:Tagname,Area
Boolean1,
Boolean2,
Boolean3,
Boolean4,HomeArea
Boolean5,HomeArea
Boolean6,HomeArea
For this example, the following events occur when the load is performed:
The first three instances are created with all template functionality. If
no Area is set as the default, the instances appear in the Unassigned
folder in the IDE.
The next three instances are placed in the HomeArea Area, if this
Area currently exists. Otherwise, the objects use the Area settings for
the first three instances.
The advantages of using the Galaxy Dump and Load over creating instances
within the IDE are evident when conforming to a naming strategy.
For a contained object with three levels and hundreds of instances, it is much
easier to rename all the instances with a search and replace of 101 to 102
instead of naming each instance one-by-one within the IDE.
Best Practice
When backing up the Galaxy database, use the Backup functionality available
within the Galaxy Database Manager of the System Management Console. The
backup contains all Galaxy information (including security configuration),
whereas the simple export of automation objects only includes the object
structure and template toolsets.
Best Practice
The following best practice recommendations are cross-referenced to practical
examples described in the following chapter.
When creating scripts at the template level, you may want to lock them.
You can then make changes to the template script, and the changes will
propagate to the next level. When a script is locked and there are no
declarations or aliases, these sections should also be locked for improved
propagation and deployment performance.
Note For more information on scripting and practical examples, see
Chapter 6, "Implementing QuickScript .NET."
Startup: Called when the object is loaded into memory. Primarily used to
instantiate COM objects and .NET objects.
OnScan: Called the first time an AppEngine calls this object to execute
after the object scan state is changed to onscan. Primarily used to initialize
attribute values.
Execute: Called every time the AppEngine performs a scan and the object
is onscan. Supports conditional trigger types of On True, On False, While
True, While False, Periodic, and Data Change. Most run-time
functionality is added here.
OffScan: Called when the object is taken offscan. Primarily used to clean
up the object and account for any needs that should be addressed as a
result of the object no longer executing.
Scoping Variables
Declared variables can be of any data type within the development
environment plus object, .NET type libraries, and imported types. Declared
variables can be up to a three-dimensional array.
The issue becomes how to create a .NET data type that is persisted from scan
to scan and available to other scripts within the object or available to other
objects. Since UDAs only support the base data types, a declared variable or
script variable must be used.
Once the variable has been created and values set, it must be added to the .NET
global data cache through the following call:
DIM Connection AS System.Data.OLEDB.OLEDBConnection;
{Set required values}
System.AppDomain.CurrentDomain.SetData("MyConnection", Connection);
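A later script can then pull the stored connection back out of the global data cache. This is a hedged sketch: it assumes the object returned by GetData() can be assigned directly to a variable declared with the original type.

```
' Retrieve the connection stored earlier under the key "MyConnection"
DIM Connection AS System.Data.OLEDB.OLEDBConnection;
Connection = System.AppDomain.CurrentDomain.GetData("MyConnection");
{Use the connection in the state the previous script left it}
```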
Do not use a UDA if a DIM (dimensioned variable) will suffice. Most UDAs
are checkpointed; DIMs are not. Checkpoints consume scan time. The
exception to this rule is a UDA set as "calculated", which by default is not
checkpointed. Calculated UDAs have the least "weight".
Note For more information on DIM Statements, see "Using DIM Statements"
on page 146.
InTouch
OLE_CreateObject(%QIDC, "QI.DataComponent");
Using Aliases
Aliases provide a single place for changing the variable, instead of multiple
places in the script. This practice makes script maintenance easier when
references contained in that script must be changed.
Create an Alias:
Use three dashes (---) to represent an unknown reference. The dash characters
prevent the "could not resolve reference" warning when instances are created.
Alias naming can also make the script easier to read and create. When the same
relative reference is used multiple times, create an alias for it.
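As a hedged illustration (the alias name, its reference, and the attribute names are all hypothetical), an alias InletValve defined in the script's alias list — initially pointing to the placeholder reference --- until each instance is configured — keeps the script body readable and confines any future reference change to one place:

```
' Script body using the alias instead of repeating the full reference
IF InletValve.PV == "Closed" THEN
    InletValve.Cmd = "Open";
ENDIF;
```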
AppEngine Execution
The AppEngine is the only engine that hosts more than one object. Object
execution is handled by the scheduler primitive, which is single-threaded. It
executes objects registered on the host engine repeatedly, and in a sequential
order, during the scan interval.
The scan interval is the desired rate of execution of each Automation Object
the AppEngine hosts. The following tasks execute engine-to-engine in the
following order during the scan interval:
Input Processing Phase: The goal of this phase is to process all input
requests (SetAttributes, subscription packets, publish notifications). Input
requests are retrieved one at a time. If any input requests are left in the
queue, they are processed during the idle period before the next scan
interval. At least one queued input request is processed following the
Output Processing Phase, and before the Execution Phase.
A Scan Overrun is a boolean condition that becomes true when the Execution
Phase crosses from one scan interval to the next. When a Scan Overrun occurs,
a new Execution Phase is delayed until the next scan interval begins. Any of
the phases in the above list can cause a Scan Overrun when they extend beyond
the scan interval.
All objects deployed on the AppEngine are processed in the following order:
DIObjects (multiple DIObjects are processed alphabetically by tagname),
hosted ApplicationObjects, then their Areas (numerically).
1. Read inputs.
2.
3.
4.
5. Write outputs.
6. Test alarms.
Each script is executed in its entirety before the next script is executed.
This behavior is different from InTouch, where a script can trigger another
script, such as a data change script. Within InTouch, the calling script halts
while the data change script is run.
Within Industrial Application Server, each script completes before the next
script is run. If a user-defined attribute of a second object instance is set during
the execution of the script, and that UDA triggers a script on the second object,
the script of the second object may or may not run during the same scan of the
engine. If the second object is configured to run after the first object, the script
on the second object will run during the same scan.
If the second object has already been serviced during the scan, the script on the
second object will run during the following scan.
Since the Engine manages each object, a script runs only as fast as the engine's
scan period or some multiple of that period. If the engine's scan period is one
second, and an object script is set to periodic for every 1.5 seconds, the script
will run every other scan (that is, every two seconds).
Data requested or sent to objects residing on another engine/platform are
updated on the next scan. This is also true for Application Objects on the same
AppEngine when an Application Object needs data from another object but
executes before it in the scan.
For example, when Object A executes, it needs the output values from Object
B.
The values received are from the previous scan, because Object B has not
executed yet in the current scan. You must wait one scan if you want to verify a
write of this type. Alternatively, you can change the execution order in the
Object Editor so that Object B executes first.
Asynchronous Scripts
An asynchronous script runs in a separate thread and is not directly tied into
the engine's scan process. Therefore, reading and writing to any object
attributes (including the calling object) is a slow process.
An asynchronous script should not Read or Write within a long FOR-NEXT
loop to a UDA or other external source. Since the asynchronous script runs in a
separate thread, it must wait until the next scan of the engine for all the Read or
Write transactions to occur. If the scripts have not all been completed at the
next scan, the system must wait for another scan.
A single-system test with an engine scan period of one second achieved
approximately 70 UDA writes per second and 35 UDA reads per second.
[Attribute list excerpt: items Valve220OLS and Valve220CLS mapped to PLC
addresses B3:12 and B3:13]
The Onscan script would still be the same. The added advantage of using the
attribute list is that all the I/O definitions are located in one central spot. A
second DIObject with essentially the same set of I/O points can easily be
created and, if necessary, modified by dumping and loading the existing
DIObject.
Another slight modification is to store the DIObject name and topic name
within UDAs on the area object hosting the automation object. The name can
then be referenced as MyArea.DIName, where DIName is the name of the
string UDA on the area object.
Writing to a Database
To write to a database
1.

2. At the object level, fill the message queue with the values to be written.

3. At the engine level, create an object that moves the entries from MSMQ to
the database, using stored procedures. The scripting may run
asynchronously because no collecting is required.
Note The Message Queue service may use a large amount of memory,
depending on the amount of data written.
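A sketch of the engine-level mover object's script follows. It is hedged: the queue path is hypothetical, and it assumes the System.Messaging types are available to QuickScript .NET and that object construction with new is supported in your version.

```
' Engine-level script: drain one entry from MSMQ and hand it to the database
DIM Queue AS System.Messaging.MessageQueue;
DIM Msg AS System.Messaging.Message;
Queue = new System.Messaging.MessageQueue(".\Private$\IASWrites");
Msg = Queue.Receive();
{Pass Msg.Body to a stored procedure through an OLEDB connection}
```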
Best Practice
The following are best practices to keep in mind regarding scripting:
•	Do not use OnScan scripts to access external data from multiple instances
simultaneously. Managing object references by accessing external files from all instances
at the same time (OnScan) is not recommended. Either some type of handshake mechanism between the Platform and the object is required to make this
a viable approach, or other methods must be used to achieve this
functionality.
•	Avoid reading a database from within an object script, because the engine
stops scanning until the connection is made or the query is returned.
Using asynchronous scripts does not help in this instance, because the
asynchronous script must collect the data and transfer it back into the
object for use. If it cannot finish all the 'writes,' it starts over on the
next scan. This retrieval can take many scans.
•	Use caution when including Data Change scripts. Recall that Data Change
scripts execute in three instances: on a Value Change, on a Quality Change, and
on a Set OnScan event (when the value is set from "nothing" to its default
and the quality changes). When large numbers of Data Change scripts are included, a failover event
could cause all of the Data Change scripts to execute at the same time,
causing unwanted consequences.
C H A P T E R   6
Implementing QuickScript .NET
QuickScript .NET is the IAS scripting language and is integrated tightly with
.NET.
The greatest scripting capabilities are leveraged by the application of .NET
Classes. These are made available through the Function Browser in the Types
section.
Importing .NET Function DLLs into the Types section is the best method for
customizing the functionality within QuickScript .NET. Desired
behaviors can be compiled from source code in any of the .NET languages.
Existing COM DLLs can be wrapped as .NET Function DLLs and
incorporated into QuickScript .NET.
Import the desired .NET function DLL into the Galaxy. The Classes and their
methods are exposed for use in the Types section of the Function Browser. The
Function Browser is a very useful tool for selecting script functions.
This chapter provides practical examples of .NET script use within the
IAS/ArchestrA environment.
Note This chapter assumes basic familiarity with .NET concepts and
structures. Graphics and script samples are provided from within the IDE
Environment. Developing in Visual Studio is outside the scope of this material.
Basic .NET Types: Supported by all platforms that support .NET. The
Type functions are accessed through the IAS Script Function Browser:
Note Highlighting a function displays the argument list at the bottom of the
window.
(Figure: IAS scripting interactions within an aaEngine. Input (MX) subscriptions (InputSource) and Output (MX) subscriptions (OutputDest) connect the .NET Object to external data; the .NET Object uses the .NET Framework and ADO.NET to reach databases and XML documents. Legend: IAS scripting interactions cover modular script execution, UDA references, and Input/Output subscriptions; .NET interactions cover referenced objects in the .NET ObjectCache and the .NET Framework.)
Input/Output
The principal method of referencing Attribute and Property values of other IAS
Objects is through I/O Attributes. The IAS AnalogDevice Object
includes the PV Attribute and properties identifying the PV.Input.InputSource.
The IAS DiscreteDevice Object supports configuration of one or more discrete
Input and/or Output Attributes, designated "Input1,"
"Input2," "Output1," and "Output2"; these support I/O referencing using the
Input1.InputSource and Output1.OutputDest Properties.
An Attribute may be configured with any of the MxValue datatypes and can
be extended as an Input, an Output, or an InputOutput. This is
accomplished using UDAs and Extensions.
For example, a UDA named Pressure can be added to a UserDefined Object.
The Pressure Attribute can then be extended by assigning Pressure.InputSource.
Assignment of the I/O source can be deferred until run-time by leaving the
triple-dash string " --- " in the configuration text field. A run-time QuickScript,
typically named IOInitialize and typically configured as Execute with a
WhileTrue trigger, is implemented so that the QuickScript runs once at the first
Execution scan.
The typical QuickScript I/O assignment command is similar to the following:
Me.Pressure.InputSource = MyContainer.DIDeviceName +
"." + MyContainer.PLCScanGroup + ".F30[146]";
.NET Framework
The .NET Framework support within QuickScript keeps the code syntax
straightforward while enabling many different approaches to the
acquisition and processing of I/O data.
For example, QuickScripts are used to detect conditions, apply transformation
algorithms, and post results as I/O or as database inserts/updates. Or the QuickScript
may simply capture calculated values in UDAs for viewing by InTouch
Software or historization by IndustrialSQL Server Historian.
Industrial Application Server includes a number of pre-built .NET functions
for Math and for String manipulation. These are exposed under specifically
named tree branches in the Script Function Browser.
Z has the value of the object but the quality value of the object attribute is
lost. However, it is valid to carry the object quality value by direct
assignment to a local integer variable:
Q = MyObject.PV.Quality
If the referenced attribute quality value is Bad, the data value is set to
the default value associated with the data type of the respective attribute: for
type float with bad quality, the data value is set to the default NaN (Not a
Number).
The script system is capable of dealing with the NaN data value in such
expressions as:
Me.PressureSetPoint = Me.Pressure + 12.0;
{assigned NaN}        {NaN}         {Constant}

Me.PressureMetric = Me.Pressure * Me.KPAperPSI;
{assigned NaN}      {NaN}        {Calculated Constant}
Such an assignment results in the quality value of the attribute being set to Good, regardless of
the quality value of the attribute prior to the assignment.
Condition statements are the only instance where the Data Quality
Propagation approach takes quality explicitly into account. In all other
cases, script execution ignores the quality and the script developer must
test the quality explicitly.
A standard coding pattern should be followed to allow quality to be taken
into account without the explicit testing of the quality, for example:
IF ((MyContainer.ValveA == "CLOSED") AND
    (MyContainer.ValveB == "OPENED")) THEN
    Me.Valve.Cmd = "CLOSE";
ELSE
    Me.Valve.Cmd = "OPEN";
ENDIF;
If a situation exists such that one value is Initializing and another is Bad,
Bad quality takes precedence and the expression's result quality value is
reported as Bad. The result value itself is set to the default value for the
given data type (for example, a result value for a type FLOAT is set to
NaN - not a number) when the quality goes unacceptable.
If one value is Uncertain and all others are Good, the resulting value is
calculated but the resulting quality is set to Uncertain.
See Chapter 12, "Maintaining the System," for information about testing the
quality of an attribute.
Using UDAs
UDAs provide read-write interactions between domain objects and facilitate
MX subscriptions between object instances. This is handled at the
Input/Output level.
Remote references can be used for attributes of other objects that are NOT bound
directly to UDAs via InputSource or OutputDest properties.
The reference in a QuickScript looks like the following:
TankLevel101.PV
Such remote references get subscriptions at run-time but their values are held
in a 'memory cache' associated with the Object that contains all such remote
references.
Even the InputSource and OutputDest extended Attributes have entries listed
in the actual memory cache.
Indirect Referencing
IAS version 2.0 introduced the "Indirect" variable type for use within
QuickScript. In IAS the behavior of an Indirect variable type is different from
the same variable type used in InTouch Software.
InTouch Software designates three types of Indirect and the designated
InTouch Indirect variable is global to the InTouch VIEW application. For IAS
the Indirect is generic for all MX datatypes. A dimensioned Indirect variable is
local to the QuickScript that contains it. The BindTo Property is used to
redirect the data source or data destination of the local Indirect variable
immediately during the same scan.
However, there is a limitation that data can only be immediately read during
the same scan if the reference points to "Me" or to an Attribute of an Object
that is running on the same Engine as the current Object, for example by UDA
extension.
By contrast, InTouch Indirect Referencing requires the creation of Indirect Tagnames in the InTouch
Tag Dictionary, and those tagnames are global within that InTouch Software Application.
IAS Indirect Referencing provides dynamic reassignment of
InputSource/OutputDest attributes to an object with an existing subscription.
When a new reference is created (i.e., a reference to a different object), the
reassignment takes effect at the next scan.
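A sketch of the Indirect pattern in QuickScript; the tagname and the LastLevel UDA are illustrative assumptions:

```
dim tankPV as indirect;
tankPV.BindTo("TankLevel101.PV");
{ if TankLevel101 runs on the same Engine, the value can be read this scan }
Me.LastLevel = tankPV;
```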
IAS Context
Note The following material applies to .NET functions, not to native
QuickScripts.
The QuickScript .NET language is not intended to fully replace a developer
tool such as Visual Studio. Complex scripting with .NET object error trapping
and user-generated data types still requires a developer tool to write the code
and contain it within a .NET or COM component file, such as a .dll, .tlb, or .olb
file. These components can then be imported into the Galaxy Repository and
be made available within the scripting environment. Any object instance that
makes use of an imported library will be deployed with the library file to
ensure correct object execution.
The main .NET editing tool in the IDE is the Script Function Browser (see
previous figure). This utility provides access to .NET functions registered on
the local node. All platforms that support .NET expose these function types.
Deployed objects automatically install the QuickScript .NET .dll function
library (required by the Application Object) on the remote platform.
Many types are already installed on the computer. Others are installed by
Visual Studio or by installer programs.
Note For a list of included .NET functions and Sample Scripts, please refer to
the IDE Online Help.
.NET Overview
Microsoft .NET is based on open standards. It augments the presentation
capabilities of HTML with the metadata capabilities of XML to provide a
programmable, message-based infrastructure. Like HTML, XML is a widely
supported industry standard, defined by the World Wide Web Consortium.
XML provides a means of separating data from views, which offers a means to
unlock information so that it can be organized, programmed, edited, and
distributed in ways that are useful to digital devices and applications.
XML and message-based technologies do not replace existing interface-based
technologies like COM, whose tightly coupled, synchronous nature is required
in many situations, particularly equipment control. However, the tight coupling
afforded by interface-based technologies also makes them difficult to
implement over the Internet; it is often difficult to even establish connections
due to firewall constraints.
In contrast, an XML instance document, wrapped in SOAP (Simple Object
Access Protocol) and bound to HTTP, can pass easily through an enterprise's
firewall on port 80, which is typically open for Web traffic. Much like HTML,
XML is independent of operating systems and middleware; as long as an
application can parse XML, it can exchange information with other
applications. As such, XML provides developers with a powerful new tool for
integrating systems in a highly distributed environment.
.NET provides an open, standards-based framework that enables this kind of
distributed integration.
Scripting Practices
The following information summarizes scripting practices that facilitate
effective implementation and script management.
Modularize Scripts
Breaking the script into smaller pieces enables greater control, management
and reuse of content/functionality.
TryConnect: See the "TryConnectNow QuickScript" on page 172. Handles connection attempts, retries, and error-message handling.
PostData: See the "PostDataNow QuickScript" on page 177.
While True
Using a While True script condition ensures that the system is not burdened by
unnecessary script processing because once the Expression goes false the
script is skipped.
Syntax Differences
VB.NET code is not the same as QuickScript; make sure the syntax
differences are known before copying and pasting Visual Basic script into the
QuickScript editor.
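A small illustration of the differences: QuickScript uses ENDIF and statement semicolons, and concatenates strings with "+", while VB.NET uses End If and the "&" operator. The Level and Status names are hypothetical:

```
{ QuickScript }
IF Me.Level > 90.0 THEN
    Me.Status = "High " + "Level";
ENDIF;

{ VB.NET equivalent:
  If Me.Level > 90.0 Then
      Me.Status = "High " & "Level"
  End If }
```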
Handling Exceptions
QuickScript does not directly support exception handling. Encapsulate .NET
Exception Handling inside of a QuickScript function .dll.
Note Using Database Triggers and Stored Procedures is outside the scope of
this example.
.NET Implementation
Template "Shape"
Virtual Class
Template Implementation
Class Implementation
Object Implementation
Class Instances
When a derived object template is created from the "shape" object, the derived
UDAs appear in the IDE as Inherited and are called by other UDAs created at
the template level. When the object instance is created, it should require only
slight customization to be effective.
For example, when posting data, a different UDA is required for each data
type. If posting multiple records, use a UDA array.
ObjectCacheExt Classes
The sample QuickScript Function DLL is located in a file called
ObjectCacheExt.dll, and includes three .NET Classes: The first Class is
simply the base ObjectCache functionality described in the IAS Help
documentation covering scripting. The other two Classes implement a variety
of useful methods for managing two database connection types.
The two Classes are named SqlConnCache and OLEDBConnCache. These two Classes are
"type safe": each Class manages only .NET database
connections of the corresponding database protocol, SQL or OLEDB.
Connection Types
Direct ADO.NET calls (direct from QuickScript) are discouraged for the
following reasons:
Note For the complete QuickScript and source code for ObjectCacheExt.dll,
see Appendix B.
For the examples described in this chapter there should be one instance Object
that manages the database connection pool. The connection pool doesn't reside
inside of the IAS Object, however. It must be instantiated as a .NET Class
Instance. The Instance is attached to the Application Domain of the Engine
hosting the Instance Object.
Because the list is alphabetized, the Derivation View does not show the objects'
relationships. In this example, the SqlConnCacheMgr provides the connection
maintenance, and the PostToDBaORb objects provide posting for the data and
use the SqlConnCacheMgr object for the DB connection.
ObjectCacheExt.DLL Overview
This section describes the ObjectCacheExt.dll Classes, focusing on the
SqlConnCache class functions.
The function browser does not indicate the datatypes of the arguments, nor
does it indicate the data type of the return value. Function argument datatypes
may be guessed from the placeholder names that are given.
The sure way to determine the datatypes is to look them up in the
documentation for the function. For Microsoft .NET functions, the information
can be found in the MSDN documentation or using the Visual Studio .NET
Object Browser.
The ObjectCacheExt.dll contains the following classes:
Functions:
Add(sqlConnName, sqlObj)
Add(sqlConnName, sqlObj, withExceptions)
ContainsKey(sqlConnName)
ExecuteNonQuery(sqlConnName, SqlCommandText, enableMonitor)
Get(sqlConnName)
Get(sqlConnIndex)
GetDatabaseName(sqlConnName)
GetServerName(sqlConnName)
GetExecuteNonQueryDuration(sqlConnName)
GetExecuteNonQueryCount(sqlConnName)
Initialize( )
Initialized( )
Remove(sqlConnName)

Arguments:
sqlConnName
sqlObj
withExceptions
sqlConnIndex
SqlCommandText
enableMonitor
Purpose of ObjectCacheExt.DLL
Although it is possible to encode ADO.NET function calls in QuickScript
.NET, the lack of support for handling Exceptions inline does not allow the
Objects to recover gracefully from errors (interruptions) at run-time without
the help of Function DLLs.
Possible interruptions when trying to implement database interactions include:
The SqlConnCache Class contains only "static" member variables. This means
that only one instance of the Class will be loaded. The Class is loaded into
memory in the Application Domain of the Engine when an IAS Object running
under that Engine first calls any function of the Class.
SqlConnCacheMgr
The following figure describes the $SqlConnCacheMgr template Object's
UDAs.
Most of the UDAs are arrays with indexes 1 and 2. This example
accommodates up to two independent SqlConnections.
The UDAs of interest at this point in the discussion are:
SqlConn.Name[1] holds the String to be given to the cache as the name of
the SqlConnection.
(Figure: the $SqlConnCacheMgr template Object's UDAs, including SqlConn.Name[1..2], SqlConn.Initialized[1..2], SqlConn.Result[1..2], SqlConn.GetStatistics.Now, LogMessages.Enabled, Connection.Acquire[1..2], Connection.Release[1..2], Connection.Connect[1..2], Connection.Disconnect[1..2], Connection.AcquiredBy[1..2], Connection.Status[1..2], Connection.NodeName[1..2], Connection.IntegratedSecurity[1..2], Query.Count, and Query.Duration.)
PostToDBaORb
The $PostToDBaORb template Object is shown in the following figure.
Instances derived from the template take care of acquisition of data that will be
posted as a new row to a database table.
Derived Object instances do not create their own .NET SqlConnection. Instead
they reserve a connection owned by the $SqlConnCacheMgr Object instance.
A $PostToDBaORb Object instance gets a .NET SqlConnection Object
reference from the SqlConnCache Class, then it invokes several methods of the
Class to Open the SqlConnection and post a new row of data. The Object
releases the named SqlConnection when it is finished with it.
(Figure: the $PostToDBaORb template Object's UDAs, including SqlConn.Name, SqlConn.Name.Prefix, LogMessages.Enabled, Connection.TryConnect, Connection.Acquired.ByA, Connection.Acquired.ByB, Connection.Connect.ToAorB, Connection.Disconnect, Connection.State, Connection.Attempts, Connection.Result, Connection.NodeName.A, Connection.NodeName.B, Connection.IntegratedSec.A, Connection.IntegratedSec.B, DB.PostData, and DB.TableName.)
(Figure: the SqlConnCacheMgr instance creates and owns the connections; a PostToDBaORb instance chooses connection "a" or "b" and has read/write access to each connection's Database Name, Node Name, and Security type.)
Initialize QuickScript
The $SqlConnCacheMgr includes many UDAs used to keep track of its own
state and the state of a cache of .NET SqlConnection objects. (See Appendix B
for a complete listing of the UDAs). Several QuickScripts operate within the
Object.
The Initialize script must execute before any other script, thus its execution
order is stipulated to be first. Use Configure Execution Order in the
QuickScript editor window to modify the execution order of scripts:
The purpose for declaring local variables in the Declarations section (of any
Object's QuickScript) is to establish the variables in memory upon Object
Startup.
Such variables are local to the named QuickScript but their lifetime extends
until Shutdown of the Object. Local means that the variable can be read and
written to from within any of the script types (Startup, OnScan, Execute,
OffScan, and ShutDown) but only within the context of the named
QuickScript.
Note The following examples are not complete and are shown from the script
editor as graphic images. For the complete, copy-able source code, see
Appendix B, .NET Example Source Code.
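As a rough sketch of what the Declarations section might contain (the variable names follow the discussion below; see Appendix B for the actual source):

```
dim sqlConnectionDBa as System.Data.SqlClient.SqlConnection;
dim resultConnectionDBa as string;
```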
159
It is good practice to fill in the complete .NET Type designation, although the
short form SqlConnection will work. A full designation ensures there will be
no ambiguity when imported Function DLLs exist that may have the same
named function under a different .NET namespace.
It is also important to understand that the use of the Dim statement does not by
itself create the variable in memory. Dim is used to establish a Type Safe
Datatype for the named variable. The QuickScript compiler will give warning
messages for code that attempts to use the variable if it happened to be a
different Datatype than the one declared using Dim.
The IF ... ENDIF block ensures that the Initialize function call will be
invoked only when the initial state of the SqlConnCache is double-quotes (""),
meaning blank.
In order to establish any .NET Object in memory the new operator is used in an
assignment statement.
The following statement binds the new Object to the local variable
sqlConnectionDBa which had been previously dimensioned in the
Declarations section:
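The statement takes roughly this form (see Appendix B for the exact code):

```
sqlConnectionDBa = new System.Data.SqlClient.SqlConnection();
```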
This particular function call invokes the constructor for a new .NET
SqlConnection object and leaves it essentially empty as a default, ready to
make a connection to an MSSQL database.
The local variable (sqlConnectionDBa) had been previously declared as a local
.NET Object variable of the desired type in the Declarations section.
Upon assignment as new it goes into memory in the scope of the Initialize
QuickScript at Object instance Startup. It then stays in scope until the
$SqlConnCacheMgr Object instance goes through Shutdown.
Note however that the new reference to a .NET SqlConnection is not yet
accessible for use by other IAS Objects, nor is it accessible immediately by
other QuickScripts of the same $SqlConnCacheMgr Object instance. As
described in the previous section, the $SqlConnCacheMgr Object instance
must place a reference to the new .NET SqlConnection Object into the cache.
This is achieved using the following line of code in the example:
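That line takes roughly this form; the SqlConnCache class qualifier is shown as the class is named in this chapter (see Appendix B for the exact code):

```
resultConnectionDBa = SqlConnCache.Add(Me.SqlConn.Name[1],
    sqlConnectionDBa, true);
```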
The Add method places the String name for the .NET SqlConnection Object
into the hash table in the Application Domain. It uses the value from UDA
Me.SqlConn.Name[1] of the $SqlConnCacheMgr Object Instance. That value
is inserted into a hash table as a sqlConnection name and a reference to the
new .NET SqlConnection Object is passed along to the hash table as well.
The .NET SqlConnection object reference has been attached to the locally
declared variable (sqlConnectionDBa). The third argument set to true means
that the function call should investigate Exceptions and return a String
identifying them if and when they occur.
Note that the SqlConnCache Class's Add method inspects the Type of the
.NET object that is passed in by reference. If the .NET object is not of Type
System.Data.SqlClient.SqlConnection, it is not added to the cache but an
error message String is returned. Thus the Add function is Type Safe.
A local variable (resultConnectionDBa) captures the return value String,
which may be inspected using Object Viewer because it gets copied into a
UDA:
This code checks the cache again to see whether the .NET SqlConnection
Object is present. If so it is gracefully removed by applying the Remove
function call.
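A sketch of that check-and-remove step, assuming the same SqlConnCache naming:

```
IF SqlConnCache.ContainsKey(Me.SqlConn.Name[1]) THEN
    SqlConnCache.Remove(Me.SqlConn.Name[1]);
ENDIF;
```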
The balance of code in the OffScan QuickScript cleans up the state of the cache
in memory, providing a guarantee that the cache will be ready for new .NET
SqlConnection Objects once the $SqlConnCacheMgr Object Instance is
brought back OnScan.
Two UDAs are used to determine when the Statistics QuickScript runs:
Me.SqlConn.GetStatistics.Periodic
Me.SqlConn.GetStatistics.Now
The first of these is set to true by default. The second defaults to false.
Either UDA being true results in periodic execution repeated every Trigger
period. If the Periodic attribute is set to false, the Statistics QuickScript
execution depends entirely upon the Now UDA.
Upon setting the Now UDA to true, the QuickScript executes once and stops.
This is generally called a one-shot among control system engineers and
technicians.
An alternate method utilizes a single Integer UDA as the trigger. The
Expressions look to see if the Integer UDA is greater than zero. Non-zero
values trigger the QuickScript.
Inside the QuickScript would be IF ... ELSE ... ENDIF blocks that vary the
actions according to the actual value of the Integer UDA.
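A sketch of the integer-trigger alternative; the UDA name Statistics.Trigger is a hypothetical example:

```
{ Execute script, Trigger type: WhileTrue, Expression: Me.Statistics.Trigger > 0 }
IF Me.Statistics.Trigger == 1 THEN
    { gather the periodic statistics }
ELSE
    IF Me.Statistics.Trigger == 2 THEN
        { gather the extended statistics }
    ENDIF;
ENDIF;
Me.Statistics.Trigger = 0;  { reset for one-shot behavior }
```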
ManageConnections QuickScript
The ManageConnections QuickScript of the $SqlConnCacheMgr template
Object handles requests from other IAS Objects to acquire .NET
SqlConnection objects from the pool of connections.
Following are excerpts from the QuickScript:
Template Object: $SqlConnCacheMgr
QuickScript Name: ManageConnections
Declarations: (see Appendix B)
Execute Expression: WhileTrue
If it does exist in the cache, the Class's Get function is called within the code
block IF foundConnectionDBa THEN ... ENDIF.
Note that the local variable (sqlConnectionDBa) has been dimensioned in the
Declarations section as an ADO.NET connection object, i.e.
System.Data.SqlClient.SqlConnection.
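A sketch of the lookup as narrated (the exact code is in Appendix B):

```
foundConnectionDBa = SqlConnCache.ContainsKey(Me.SqlConn.Name[1]);
IF foundConnectionDBa THEN
    sqlConnectionDBa = SqlConnCache.Get(Me.SqlConn.Name[1]);
ENDIF;
```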
Once again it is important to understand that the Declarations section does not
place such variables into memory, it only defines the Types of variables for
compiler checks. Furthermore it is important to understand that the
Declarations section in the ManageConnections QuickScript is essentially
independent of the Declarations section in the Initialize QuickScript; thus the
Dimension statements are repeated.
It is perfectly acceptable to use different local variable names in differently
named QuickScripts, even for the same Type of variable. Keeping the same
local variable names is actually a convenience of "cut and paste" editing. Once
the distinction is made that the two QuickScripts (Initialize and
ManageConnections) have independent local variable scope, there should be
no confusion in reuse of the local variable names.
Just remember to provide the appropriate Dim statements in the Declarations
sections.
Review the line of code that retrieves a .NET SqlConnection object from the
cache:
What happens if the named .NET SqlConnection object is not found in the
cache when another IAS object requests it?
Even though the Initialize QuickScript was supposed to have created the
required .NET SqlConnection object, another QuickScript called
DisconnectFrom exists. If this QuickScript had been invoked it is possible
that the desired named SqlConnection will have been removed from the cache.
Under this scenario it is important that the ManageConnections QuickScript
be able to recreate the SqlConnection and reinstate it in the cache:
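A sketch of the recreate-and-reinstate logic matching the narration below (see Appendix B for the exact code):

```
IF NOT foundConnectionDBa THEN
    sqlConnectionDBa = new System.Data.SqlClient.SqlConnection();
    Me.Connection.Status[1] = sqlConnectionDBa.State.ToString();
    resultConnectionDBa = SqlConnCache.Add(Me.SqlConn.Name[1],
        sqlConnectionDBa, true);
ENDIF;
```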
Thus the new keyword does get applied in this scenario. The UDA
Me.Connection.Status[1] gets a copy of the State of the .NET
SqlConnection.
The ToString( ) function is a universal .NET method that translates any object
parameter into a readable String. Having just created a new .NET
SqlConnection object, the State should in fact be Closed. Once created it is
added to the hash table (the cache) by applying the Add function and the result
is placed into the local variable (resultConnectionDBa).
Per the logic of acquiring a valid .NET SqlConnection from the pool, it is
always necessary to ensure that a reference is retrieved from the cache, so these
lines of code are repeated.
The Status is then inspected. If the SqlConnection is Closed, the following
code generates a connection string and applies it in an attempt to establish a
live connection with Microsoft SQL Server. The node name of the Microsoft
SQL Server is supplied in the string.
Note that the format of the connection string varies depending upon the UDA
Me.Connection.NodeName[1] and upon whether another UDA, namely
Me.Connection.IntegratedSecurity[1], is set to true.
If the security UDA is false, the user name and password are passed along from
the UDAs Me.Connection.User.Name[1] and
Me.Connection.User.Password[1].
Note that this technique for passing along a password is not entirely secure
because the UDA can be read in plain text using Object Viewer.
This code sample also illustrates a common technique in QuickScript whereby
a String local variable repeatedly gets additional String values appended to it
(connectStringa) upon finishing the connect string by appending the database
name:
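A sketch of the string-append technique; the exact connection-string keywords and the DB.Name UDA are assumptions (see Appendix B for the actual code):

```
connectStringa = "Data Source=" + Me.Connection.NodeName[1] + ";";
IF Me.Connection.IntegratedSecurity[1] THEN
    connectStringa = connectStringa + "Integrated Security=SSPI;";
ELSE
    connectStringa = connectStringa + "User ID=" + Me.Connection.User.Name[1]
        + ";Password=" + Me.Connection.User.Password[1] + ";";
ENDIF;
{ finish the connect string by appending the database name }
connectStringa = connectStringa + "Initial Catalog=" + Me.DB.Name[1] + ";";
```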
ConnectTo QuickScript
The ConnectTo QuickScript of the $SqlConnCacheMgr template Object
handles requests from other IAS Objects to find existing, or create new .NET
SqlConnection objects from the connections pool, without acquiring it for its
own use.
The code is not reproduced here because it is essentially the same as that of the
ManageConnections QuickScript.
If one script performs essentially the same function as another, why have such
a script? The answer lies in the fact that there are many scenarios where other
IAS Objects need to initialize a SqlConnection in the cache but
don't need to acquire it immediately for their own use.
DisconnectFrom QuickScript
The DisconnectFrom QuickScript handles requests from other IAS Objects to
remove existing .NET SqlConnection objects from the connection pool.
The Execute trigger expression for this QuickScript includes only the UDA
Me.Connection.Disconnect[1]. Most of the lines of code from this
QuickScript are not reproduced here. The following lines, however, merit
consideration:
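A sketch of the cleanup fragment as described below (see Appendix B for the exact code):

```
connection1 = SqlConnCache.Get(Me.SqlConn.Name[1]);
IF connection1.State.ToString() == "Open" THEN
    connection1.Close();
ENDIF;
Me.Connection.Status[1] = connection1.State.ToString();
IF Me.Connection.Status[1] == "Closed" THEN
    SqlConnCache.Remove(Me.SqlConn.Name[1]);
ENDIF;
IF NOT SqlConnCache.ContainsKey(Me.SqlConn.Name[1]) THEN
    Me.Connection.AcquiredBy[1] = "";  { reservation withdrawn }
ENDIF;
```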
This code fragment illustrates the technique for cleaning up the hash table (the
cache) owned by the SqlConnCache Class.
When it is determined that the .NET SqlConnection is still Open the .NET
SqlConnection's Close method is invoked. Note that the local variable
(connection1) had been defined as a System.Data.SqlClient.SqlConnection
in the QuickScript Declarations section using the Dim statement.
Also not shown above is the mandatory Get function call, which binds a .NET
SqlConnection reference from the cache to the local variable (connection1).
The state of the SqlConnection is then copied into the UDA
Me.Connection.Status[1] and if the SqlConnection's State property has
achieved Closed status, the Remove function is called to expunge the
SqlConnection from the cache.
Finally as cleanup, the ContainsKey function is called looking for false. This
allows the QuickScript to then clear the UDA Me.Connection.AcquiredBy[1]
to a null String (""), which signifies that the SqlConnection is not acquired any
longer.
The programmer creating the IAS Object must still create logic that reacts to
the error message Strings returned by the function calls.
Programming languages prior to Java and .NET required careful management
of memory by explicitly allocating and de-allocating it. .NET performs
automatic Garbage Collection.
This means that objects that are no longer referenced at certain steps are
available for elimination, thereby recovering memory. For example, when
SqlConnection is eventually removed from the cache, the .NET Framework
returns that amount of memory to the .NET managed heap. This operation is
performed automatically.
UDAs
Following are selected UDAs defined in the $PostToDBaORb template Object.
Datatypes and array designations are omitted:
Connection.Acquired.ByA
Connection.Attempts
Connection.Connect.ToA
Connection.Connect.ToAorB
Connection.Disconnect
Connection.IntegratedSec.A
Connection.NodeName.A
Connection.TryConnect
DB.Column.DataType
DB.Column.Name
DB.Column.ValueString
DB.Name.A
DB.PostData
DB.TableName
SqlConn.Name
Note that the UDAs are segregated into groups using the convenient dot
separator. This technique mimics the hierarchical naming structure of IAS
within a single Object.
Also note that the prefix for each is a functional name - Connection, DB and
SqlConn. This convention provides a strong hint as to the purpose of the
UDA.
Note that the text window cuts off the beginning of the text field in this figure.
Also note that the UDA of the $SqlConnCacheMgr instance Object is indexed
to the first element of an array of type String.
This particular UDA array serves as a reservation system. An Object links to
this UDA and places a String value giving its own Tagname (using
Me.Tagname in scripting) into the UDA, thus acquiring exclusive rights to the
.NET SqlConnection object owned by the $SqlConnCacheMgr instance
Object.
When finished with the SqlConnection, the $PostToDBaORb instance Object
clears the String value to a null String (assigning double-quotes "") and the
reservation is withdrawn.
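The acquire/release reservation pattern described above can be sketched in Python. This is an illustrative analogue only, not QuickScript .NET; the class and tagnames are hypothetical stand-ins for the $SqlConnCacheMgr UDAs and Me.Tagname:

```python
# Illustrative sketch of the reservation UDA pattern (not QuickScript .NET).
# The cache manager exposes one reservation slot per SqlConnection; an object
# "acquires" the connection by writing its own tagname into the slot and
# releases it by clearing the slot back to a null string ("").

class ConnCacheMgr:
    def __init__(self):
        self.acquired_by = [""]  # reservation slot, like Connection.AcquiredBy[1]

    def acquire(self, tagname):
        # Succeeds if the slot is free or already holds this object's tagname.
        if self.acquired_by[0] in ("", tagname):
            self.acquired_by[0] = tagname
            return True
        return False

    def release(self, tagname):
        # Only the current holder may clear the reservation.
        if self.acquired_by[0] == tagname:
            self.acquired_by[0] = ""

mgr = ConnCacheMgr()
assert mgr.acquire("PostToDB_001") is True
assert mgr.acquire("PostToDB_002") is False   # already reserved
mgr.release("PostToDB_001")
assert mgr.acquire("PostToDB_002") is True
```

The single-slot string serves the same purpose as the reservation UDA: an empty string means "free," and any other value names the current owner.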
TryConnectNow QuickScript
Previous examples addressed the creation of .NET SqlConnection objects in a
connection pool using the SqlConnCacheMgr Object.
Template Object:    $PostToDBaORb
QuickScript Name:   TryConnectNow
Declarations:
Execute Expression: Me.Connection.TryConnect
Trigger Type:       WhileTrue
The local Boolean variable (hasConnection) is set to true only if the desired
.NET SqlConnection object already resides within the hash table (the cache).
The name of the .NET SqlConnection object is given by the String value
encoded in the UDA Me.SqlConn.Name.
Once it is determined that the SqlConnection is really in the cache, the
QuickScript code acquires it for use:
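The cache test itself amounts to a hash-table membership check. A minimal Python analogue (the names are illustrative, not QuickScript .NET):

```python
# The cache maps connection names to connection objects, like the hash
# table owned by the $SqlConnCacheMgr instance.
cache = {"ConnA": object()}

sql_conn_name = "ConnA"               # value carried in Me.SqlConn.Name
has_connection = sql_conn_name in cache

assert has_connection is True
assert ("ConnB" in cache) is False    # a name not yet in the cache
```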
Recall that the local variable connectAcquiredBy received a copy of the
value from the InputOutput extended UDA Me.Connection.Acquired.ByA (or
alternately from the .ByB UDA).
The IF ELSE ENDIF block checks whether that UDA value
happens to be the very Tagname of the acquiring object. If it matches, the
SqlConnection is already correctly reserved by the instance of
$PostToDBaORb.
If not, an attempt is made to acquire it by poking the instance Object's name
(Me.Tagname) into the InputOutput extended UDA, which is bound to the
reservation UDA Me.Connection.AcquiredBy[1] of the $SqlConnCacheMgr
instance Object.
For the "b" case the second array element [2] gets the reservation Tagname
String value.
Then the repeating cycle of the QuickScript execution is halted by setting the
trigger false.
If a SqlConnection reservation has not been acquired successfully, the
following IF THEN test increments a local variable to count attempts,
comparing it against a configured maximum number of attempts encoded in a
UDA.
If the engineer should forget to fill in a number during configuration, after 10
attempts the QuickScript clears the trigger UDA:
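The attempt-counting logic can be sketched in Python. This is an illustrative analogue, not QuickScript .NET; the function name and the treatment of an unconfigured maximum as zero are assumptions:

```python
# Illustrative retry counter: count connection attempts against a configured
# maximum, falling back to a default cap of 10 when no maximum was
# configured (modeled here as 0).
def should_keep_trying(attempts, configured_max):
    max_attempts = configured_max if configured_max > 0 else 10
    return attempts < max_attempts

assert should_keep_trying(9, 0) is True    # unconfigured: default cap of 10
assert should_keep_trying(10, 0) is False  # the trigger UDA would be cleared
assert should_keep_trying(10, 25) is True  # engineer configured a higher cap
```

When the function returns False, the script's equivalent action is to clear the trigger UDA so the WhileTrue cycle stops.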
TryConnectOnFalse QuickScript
Template Object:    $PostToDBaORb
QuickScript Name:   TryConnectOnFalse
Declarations:
Execute Expression: Me.Connection.TryConnect
Trigger Type:       OnFalse
Execute Runs Asynchronously: Checked
PostDataNow QuickScript
Template Object:    $PostToDBaORb
QuickScript Name:   PostDataNow
Declarations:
Execute Expression: Me.DB.PostData
Trigger Type:       OnTrue
If the SqlConnection really is in the cache and it has been acquired by this
instance Object, the status of the connection is retrieved by a call to the
Function DLL and copied into a local variable.
The second FOR NEXT loop fills in data values of the INSERT query.
Note that there are IF ENDIF blocks which test the column DataType and
insert single quotes into the query where they are needed to accommodate the
particular DataType.
DataTypes are encoded in elements of a UDA array:
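As a rough illustration of that quoting logic in Python (the table name, column names, and type labels here are hypothetical, and the real script works against UDA arrays rather than Python lists):

```python
# Illustrative sketch of building the INSERT query: string-like columns get
# single quotes around their values, numeric columns do not.
def build_insert(table, columns, datatypes, values):
    rendered = []
    for dtype, value in zip(datatypes, values):
        if dtype in ("String", "Time"):       # assumed set of quoted types
            rendered.append("'" + value + "'")
        else:
            rendered.append(value)
    return ("INSERT INTO %s (%s) VALUES (%s)"
            % (table, ", ".join(columns), ", ".join(rendered)))

query = build_insert("Batches",
                     ["BatchId", "Product"],
                     ["Integer", "String"],
                     ["42", "Aspirin"])
assert query == "INSERT INTO Batches (BatchId, Product) VALUES (42, 'Aspirin')"
```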
The INSERT query is wrapped up, and the moment of truth arrives where the
ExecuteNonQuery method of the SqlConnCache class gets executed.
This query could take several seconds. Hence this QuickScript's Execute
operation has been marked as Runs asynchronously.
The FOR NEXT loop that follows illustrates a technique for taking a result
String value, looking it up in a UDA String array (an enumeration) to
determine if the result matched a known Exception type.
Note the use of the EXIT FOR keywords. They are used to jump out of the
loop when a match is found:
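The lookup-and-exit technique translates directly; an illustrative Python sketch (the exception names are hypothetical examples, and `break` plays the role of EXIT FOR):

```python
# Illustrative lookup of a result string in an enumeration of known
# Exception names, leaving the loop as soon as a match is found.
known_exceptions = ["SqlException", "InvalidOperationException",
                    "TimeoutException"]

def classify(result):
    matched = ""
    for name in known_exceptions:
        if name in result:
            matched = name
            break            # EXIT FOR analogue
    return matched

assert classify("SqlException: login failed") == "SqlException"
assert classify("4 rows affected") == ""      # no known Exception matched
```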
If there was no Exception, the QuickScript inspects the result for useful
information, such as the number of rows affected by the INSERT query.
This number is extracted and concatenated into a result UDA:
RandomizeNow QuickScript
Template Object:    $PostToDBaORb
QuickScript Name:   RandomizeNow
Declarations:
Execute Expression: Me.Product.Randomize
Trigger Type:       OnTrue
The .NET System.Random object must be initialized in the standard way for
any .NET object. The OnScan QuickScript applies the new operator.
Each time the trigger UDA goes true a newly randomized value is generated
and copied into a UDA.
This example creates two values in units of percent (%) from the single
System.Random object. Several method calls generate different random result
Datatypes depending upon the call.
The NextDouble method of the object returns floating-point values between
0.0 and 1.0. The QuickScript multiplies by 100.0 and copies the result to a
UDA, then subtracts that result from 100.0 for the other UDA.
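The same complementary-percentage technique, sketched in Python rather than QuickScript .NET (random.random() stands in for NextDouble, and the seed is only there to make the sketch repeatable):

```python
# Illustrative sketch of the randomized percentage pair: the two values
# always sum to 100 percent.
import random

rng = random.Random(0)          # seeded so the sketch is repeatable
pct_a = rng.random() * 100.0    # first UDA value
pct_b = 100.0 - pct_a           # complementary UDA value

assert 0.0 <= pct_a < 100.0
assert abs((pct_a + pct_b) - 100.0) < 1e-9
```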
Another example of a technique is the use of a lookup table stored in a UDA
array:
UDA Name                 Data type     Category
Clock.EndTime            Time          Calculated retentive
Clock.StartTime          Time          Calculated retentive
Clock.ElapsedTime        ElapsedTime   Calculated retentive
Running.ElapsedTime      ElapsedTime   Calculated retentive
Running.PercentOfClock   Float
QuickScript Name: Timer
Declarations
Object writeable
Startup Script:
OnScan Script:
Name             Trigger type   Trigger Period
Execute script   Periodic       00:00:00.0000000
The Startup script for "Timer" includes all of the code that initializes the
dimensioned .NET variables e.g.:
calcPeriodT = new System.TimeSpan;
and
lastScan = new System.DateTime;
An IF THEN block checks whether the Engine is not in the recovery state,
for example following a failover:
IF NOT MyEngine.Engine.StartingFromCheckPoint THEN
For the case that there has not been a failover, it is assumed that the Object
instance is starting up for the first time. Therefore the variables need to be
properly initialized, i.e. not reinstated from Checkpoint data.
Several of the UDAs of type Time are initialized by applying the QuickScript
.NET function call Now( ), which grabs the time stamp from the computer's
system clock.
Me.Clock.StartTime = Now();
A pair of statements appear to copy the same initialization data into two
different local variables:
calcPeriodT = Me.CalcPeriod;
calcPeriodL = calcPeriodT;
Actually the two local variables are of two different .NET data types: The first
is System.TimeSpan and the second is System.Int64.
Calculations comparing time inside of IAS QuickScript .NET must be
performed using 64 bit arithmetic. So when differences in time must be
evaluated, the System.Int64 local variables must be used.
The example script code lines first capture the "retentive" Me.CalcPeriod (of
MX type ElapsedTime) and convert it into a System.TimeSpan. The
System.TimeSpan is then implicitly converted into a System.Int64 for use in
calculations.
As stated above for Startup, in the event that there has actually been a failover
the Startup script will run, but the local variables will not be initialized, having
verified StartingFromCheckpoint and thus skipping that section of code.
Instead the OnScan script recovers the correct values from the Checkpoint
files, for example:
cstrtT = Me.Clock.StartTime;
An IF THEN block determines when the elapsed time exceeds the desired
"calculation" period.
Again, this determination is made by comparing two System.Int64 local
variables:
IF calcDelayL >= calcPeriodL THEN
The following line illustrates the use of another member of the .NET
System.TimeSpan object, TotalSeconds, which facilitates calculation of
percentages in this case.
TotalSeconds returns 64-bit values, and division of two 64-bit values keeps
64 bits of precision. The UDA is of type MX Float, but .NET implicitly
converts the 64-bit result to a proper float number.
me.Running.PercentOfClock =
100.0 * rET.TotalSeconds() / cET.TotalSeconds();
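An illustrative Python analogue of this time arithmetic (datetime/timedelta stand in for System.DateTime/System.TimeSpan, and the time stamps and running-elapsed value are invented for the sketch):

```python
# Illustrative sketch of the percent-of-clock calculation: elapsed spans are
# held as timedelta objects (the TimeSpan analogue) and the percentage is
# computed from total seconds.
from datetime import datetime, timedelta

clock_start = datetime(2006, 1, 27, 8, 0, 0)   # hypothetical Clock.StartTime
now = datetime(2006, 1, 27, 8, 0, 30)          # hypothetical current scan time

cET = now - clock_start                 # clock elapsed time
rET = timedelta(seconds=12)             # running elapsed time, assumed value

percent_of_clock = 100.0 * rET.total_seconds() / cET.total_seconds()
assert percent_of_clock == 40.0
```

Python's `total_seconds()` mirrors the role of TotalSeconds here: both reduce a span to seconds so that the division yields a clean ratio.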
Calculations in subsequent scans depend upon remembering the exact time
stamp, so the local variable gets a time stamp using the Now( ) function:
lastScan = Now();
Note that this local variable lastScan is not copied to a UDA of category
Calculated retentive. This is not necessary because lastScan is dimensioned
as System.DateTime in the Declarations section. This means that it is active
for the life of the $RunTime instance Object and its value is preserved from
scan to scan.
In a failover scenario it is not necessary to preserve its value because
calculations must start over fresh with the Startup and OnScan scripts
anyway.
Summary
Interactions between scripts can be complex. This chapter has described some
of the important concepts, including UDA scope, local variable scope, script
execution order and interactions with other IAS Objects.
Interaction with a database is the example selected for illustration in this
chapter. However, the complexity of ADO.NET and the propensity for database
interactions to thwart simple timing logic point toward custom Function
DLLs as the most effective script implementation.
An IAS Object called SQLDataObject is available and can be used for many
database interactions. Thus it is not absolutely necessary to resort to custom
Function DLLs to communicate with a database. ObjectCachExt.dll was
chosen as the example for this chapter because it also illustrates techniques for
leveraging storage into memory using the Application Domain associated with
the operation of the AppEngine hosting the IAS Object.
QuickScript .NET offers a variety of script operational types - OnStartup,
OnScan, Execute, OffScan, and Shutdown, and a variety of trigger types are
possible for the Execute operation using a versatile trigger Expression.
The timing of repeat cycles can be configured and the Run asynchronously
check box option allows an Execute script to overcome the confining limits of
an Engine scan. This becomes quite important when a script is attempting to
perform programmatic interactions with off-Engine processes like transactions
with a database.
Incorporation of programmatic access to the Microsoft .NET Common
Language Runtime (CLR) is the most powerful aspect of the IAS scripting
environment. The syntax of QuickScript .NET has been restricted to basic
constructs given the lessons learned with the InTouch Software script
language. QuickScript .NET adds some fundamental functions, mostly carrying
their definitions and behaviors over from InTouch Software.
The QuickScript .NET code examples in this chapter have been presented in a
logical order and are largely complete; however, they do not constitute a
complete functional set of Objects for the test case of communicating with a
database.
See Appendix B, .NET Example Source Code, for the complete definitions.
This chapter also described the concept of time calculations and provided a
brief example.
This chapter has provided a glimpse into the QuickScript .NET environment
using practical examples based on a real-world application. The complete
listings for the Object templates and the C#.NET source code are found in
Appendix B, .NET Example Source Code.
C H A P T E R   7
Architecting Security
Contents
Guidelines and best practices for securing plant control systems from
advisory groups, such as the ISA SP99 committee, NIST (Process Control
Security Requirements Forum, PCSRF), NERC, etc.
Firewalls
Network based Intrusion Prevention/Detection
Host-based Intrusion Prevention/Detection
Accounts
The types and uses of security accounts need to be defined by strong security
policies, which must include useful account creation and maintenance
procedures. The policies that govern system accounts should be fully
developed, documented, and communicated by IT, Automation engineering,
and Management in a collaborative environment.
Service accounts should exist on the local Domain or local machine and
should not be used to log on to a server.
Passwords
Passwords are one of the most vulnerable security components. This security
vulnerability is largely eliminated by defining a solid password policy and
configuring your system to enforce the policy.
Using complex passwords (and changing them regularly) minimizes the
likelihood of unauthorized access to the control system.
The following list provides guidelines for effective password management:
Remote Access
The need for access to process information, configuration information, and
system information from outside of the system's domain is common. Well-defined
policies and procedures to manage remote access to the system by
other company business units and/or suppliers and vendors greatly reduce the
possibility of security threats penetrating the system.
The following list contains guidelines for remote access:
Define and document all outside system access routes and accounts.
Physical Access
Most production facilities have physical security plans in place. These plans
should be an integral part of an overall security program. By not allowing
unchecked computers and unauthorized users to have access to critical
infrastructure components, many security threats can be eliminated.
Critical process control components such as servers, routers, switches, PLCs,
and controllers should be protected under lock and key and have personnel
assigned who are directly responsible for the components.
Define and document how each part of the system will or can be backed
up.
Virus Protection
Add an additional security level at each access point of the system by defining
where and what virus protection is to be implemented. Document the proper
configurations for the virus protection software.
Mandatory virus definition file updates are essential.
Note For a list of ArchestrA-related file exclusions, see "Configure AntiVirus Software" on page 283.
Authenticators
System users could be actual operators and engineers, or other systems or
services that run internally or externally to the Supervisory Control System.
All known users must be accounted for and defined authentication methods
and procedures should be developed to reduce the risk of unauthorized access
to critical systems or protected information.
Additionally, defining solid policies and procedures for Router and Switch
configuration ensures management of where information and access is
permitted along with control over bandwidth. Optimal network utilization can
then be achieved.
Although firewalls, routers, and switches have overlapping capabilities, each
device should be used for its base functionality: firewalls should be used to
control communication types, routers should be used to forward
communication by routing protocols along a proper route, and Switches should
be used to manage bandwidth by controlling communication flow between
ports and avoiding packet collisions.
Domain Controllers
The use of services such as Microsoft Active Directory provides management
and enforcement of access security for users, groups, and organizational units.
Not all software supports domain-level security. For example, some
automation software will require local PC or even package- or
AutomationObject-level security to be defined and implemented. Check the
product documentation carefully before deployment.
Physical Networks
The basic building block of a supervisory and control system is the physical
network itself. Special attention should be given to the design, selection of
media, and installation of the network. A careful review of any installed
network segment should be undertaken before extending or adding
components.
By making sure redundant paths and proper distances are observed, slow and
unreliable communication can be avoided. All networks should be reviewed
for live unsecured ports and exposed segments that could be tapped. With the
complete network layout documented, recovery plans can be defined to improve
system availability in the case of an accident that takes down part of the network.
Wireless Access
Wireless technologies are quickly becoming part of Supervisory and Control
Systems. Wireless security encompasses many underlying topics that should
be considered.
The following topics should be taken into consideration when defining a
wireless implementation:
Software
The software components of a supervisory and control system can have a large
impact on the security of the overall system. When reviewing the security
features of the software that will be deployed within a production facility, each
component should be evaluated as an integrated part of the complete system.
All software components should leverage the capabilities of the infrastructure
and support configurations that meet the policies and procedures defined as
needed to secure the system. By reviewing all software from a securability
standpoint, policies and procedures can be established to audit the
system and maintain high levels of security.
Operating Systems
Review the base operating system that hosts all of your Supervisory and
Control applications for proper deployment, configuration, and security
patches. The initial focus should be reviewing installed components and
configured users.
Microsoft provides detailed guidance for locking down your operating systems
so that security threats can be managed and eliminated. By defining what
Supervisory and Control software is to be deployed to a system, you can define
the level of lock-down, and at the same time ensure full functionality of
manufacturing applications.
Databases
Database applications such as Microsoft SQL Server have become a common
component of all manufacturing systems. Because of the need to allow access
to database information, and the need to update and append the information,
you must be very deliberate in the approach to locking down a database.
Provide a detailed mapping of users (people and services) which require access
and define usable database security policies.
Security Considerations
The following section summarizes the security considerations within a
production environment, and describes recommendations as applied to a
Process Control Network (PCN) or SCADA System (WAN).
Secure Layers
Divide the system into secure "layers." In the security context, a layer can be
defined as a division of a network model into multiple discrete layers, or
levels, through which messages pass as they are prepared for transmission. All
layers are separated by a router or smart switch device.
A secure layer is further defined by the need to allow or restrict access and by
the criticality of the sub-system. An intrusion detection system is deployed in
higher-risk layers.
The following figure is designed to show a representative topology; it is not
intended to depict an actual Plant System topology. It includes the following
named layers:
Note that all layers (represented by the main backbone) are separated by
firewalls and routers:
[Figure: representative topology. Corporate-layer clients include an
ActiveFactory Client (HTTP), a SuiteVoyager Client (IE Browser), and a Thin
Client HMI (TS Client), alongside corporate IT functions. Behind a firewall
sit the Application Object Server, Historian Node, SuiteVoyager, Terminal
Server, Galaxy Repository, and Domain Controller; addresses are non-DHCP,
assigned at the Domain Controller, with gateways 10.2.72.3, 10.2.72.33,
10.2.72.34, 10.2.72.35, 10.2.72.65, and 10.2.72.97. The plant layers contain
an Engineering Station, visualization nodes, PLC1 (10.2.72.51), PLC2, PLC3
(10.2.72.83), legacy visualization, and SCADAlarm (10.2.72.48) with a
monitored DO line and modem failover. A router and monitored firewall
connect, over a slow connection, a remote site containing VisualSite2
(10.2.72.98), AOS1Site2 (10.2.72.99), and AOS2Site2 (10.2.72.105).]
Function                   Port
HTTP                       TCP 80
HTTPS                      TCP 443
Terminal Services (RDP)    TCP 3389
Function           Primary Connection   Secondary Connection   Remarks
CIFS
NETBIOS Datagram                                               Name Service/Browsing. From IAS to Browse Master or from Browse Master to Domain Master Browser.
NETBIOS Name                                                   Name Service/Browsing. From IAS to WINS Server or Browse Master or Domain Master Browser.
NETBIOS Session
NMXSVC                                                         ArchestrA communication channel. Peer-to-peer, bidirectional between all ArchestrA-enabled nodes.
RPC DCE
SQL Server
SQL Client
SQL Browser                                                    Only if implementing SQL Server instances.
SuiteLink          TCP 5413
PING
Function    Primary Connection   Secondary Connection   Remarks
NTP         UDP 123                                     Time Synchronization. From Client to Domain controller(s) or time master.
DNS
LDAP        TCP 389
KERBEROS    TCP 88                                      Authentication
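For quick reference, the port assignments that are explicitly listed in the tables above can be collected in a small map. This is an illustrative Python sketch; entries whose ports were not stated in the text are deliberately omitted rather than guessed:

```python
# Port assignments taken from the firewall tables above.
FIREWALL_PORTS = {
    "HTTP": ("TCP", 80),
    "HTTPS": ("TCP", 443),
    "SuiteLink": ("TCP", 5413),
    "NTP": ("UDP", 123),
    "LDAP": ("TCP", 389),
    "Kerberos": ("TCP", 88),
}

assert FIREWALL_PORTS["SuiteLink"] == ("TCP", 5413)
assert FIREWALL_PORTS["NTP"][0] == "UDP"
```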
Securing Visualization
Users with different roles require different user interface experiences. Typical
interface experiences include window-to-window navigation, data visibility on
a specific window, and restrictions on visible actions. Each of these is
easily achieved with animation links in InTouch Software. The animation links
(typically) test the InTouch Software system tag $AccessLevel. While this
implementation works, it provides a very linear security model.
Industrial Application Server roles offer more flexibility and can be leveraged
from InTouch Software by using the IsAssignedRole ("RoleName") script
function. When executed, this function determines if the currently logged-in
user is assigned to the role that was entered into the script call. This function
allows the InTouch Software application to access the role-based security of
ArchestrA IDE.
To implement this, add a Data Change script to InTouch Software that executes
any time the InTouch system tag $Operator changes. For example, the
following script could be called when the $Operator tag changes:
AdministrativeAccess = IsAssignedRole("Administrator");
SetpointAccess = IsAssignedRole("Engineer");
ManualAccess = IsAssignedRole("Operator");
This approach provides several benefits. First, the scripts execute only when
the user changes. Instead of running the same script for every animation, it
runs only as needed, which improves overall application performance. The
draw times of the screen are also improved, since it is not necessary to
evaluate the user rights for each associated animation.
This change would only be made in the $Operator Data Change script, instead
of every Manual Control animation.
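The evaluate-once pattern can be sketched in Python. This is an illustrative analogue, not InTouch script; the function, user name, and role names are hypothetical, and `roles_of` stands in for the role-membership lookup that IsAssignedRole performs:

```python
# Illustrative sketch of the evaluate-once pattern: role membership is
# resolved once when the logged-in user changes, and animations read the
# cached flags instead of re-evaluating the role check per animation.
def on_operator_change(user, roles_of):
    roles = roles_of(user)
    return {
        "AdministrativeAccess": "Administrator" in roles,
        "SetpointAccess": "Engineer" in roles,
        "ManualAccess": "Operator" in roles,
    }

flags = on_operator_change("jsmith", lambda u: {"Engineer", "Operator"})
assert flags["SetpointAccess"] is True
assert flags["AdministrativeAccess"] is False
```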
System Considerations
The first time a user logs on to a system (using InTouch Software, for
example), and the OS Group Based mode is set, the login is validated at a
Domain Controller. After the login is validated, a cache is created on the local
machine and propagated to other nodes in the system. The user then has
specific permissions to interact with the system (operator, administrator, etc.)
on any node.
This scenario has several implications:
The first time a user logs on to the system, they may experience delays
while the system validates their permissions and creates the cache. This is
especially relevant if the system includes a large number of Groups, and/or
network nodes. This delay may be exacerbated by widely-distributed
networks (see the last bullet).
Subsequent logins in the system use the (local) cache created at the
previous login. This means that if login permissions are modified, the user
can still log on, but uses the "old" cache until the update occurs. This
update operation takes place "under the hood" and does not prevent the
user from logging in with the old permissions.
If the Login Time is set to 0, the system validates permissions and creates
a new cache at each login. When the security mode has a large number of
groups, and the system is widely-distributed (SCADA) with slow or
intermittent network components, lengthy login delays may occur.
Ensure the Login Time and Role Update settings are set correctly for the
local environment. For example, setting the Login Time to 10,000 ms
means that the user cannot interact with the system for 10 seconds,
regardless of the use of the validation cache. In this case, 1000 ms
(default) is usually acceptable.
C H A P T E R   8
Historizing Data
Contents
General Considerations
Area and Data Storage Relocation
Non-Historian Data Storage Considerations
General Considerations
When designing a FactorySuite A2 System-based industrial automation
application, consider the following data storage concerns:
Data Storage Rate: At what rate will the data be changing and how
quickly will that data need to be stored?
Data Loss Prevention: What are the possible scenarios that would result
in a loss of data?
Client Locations: As you plan the network topology for your application,
you should consider the location of history clients.
User Account: The user account under which services run must be the
same for all applications. Also, if you specify a local computer user, then
the Historian Node must be in the same network domain or workgroup as
the AutomationObject Server node.
Be sure that DCOM is enabled for both the MDAS and IndustrialSQL
Server Historian computers and that TCP/UDP port 135 is accessible. The
port may not be accessible if DCOM has been disabled on either of the
computers or if there is a router between the two computers that is
blocking the port. For more information, see "Process Control Network
Firewall Ports" on page 203.
When designing the system topology, note that the locations of the historical
data clients may impact the end design. This is particularly true when portions
of the application are separated by low-bandwidth or intermittent network
connectivity. Client applications should not be required to access the historian
over these poor connections. One solution is to deploy local historians that
service locally situated computers over good network connections.
Log Viewer Event Storage: By default the Log Viewer event storage
mechanism (which is installed on all computers) is set to use a maximum
of 5 GB of storage. You can adjust this value. The Log Viewer event
storage must be considered in the total disk space requirements.
C H A P T E R   9
The Alarm and Event subsystem consists of both Alarm Consumers and Alarm
Providers. When determining the topology for your application, be aware of
how alarm and event messages are processed within the system and how
different configurations can affect system performance.
Note The event messages produced by Alarm Providers are not the same as
events generated by the IndustrialSQL Server Historian system.
General Considerations
Configuring Alarm Queries
Determining the Alarm Topology
Logging Historical Alarms
General Considerations
The Platform object serves as an Alarm Provider for all IDE objects and is the
primary Alarm Provider in a FactorySuite A2 System. The Platform is capable
of providing any alarm in the Galaxy; that is, the Platform is not limited to the
alarms generated by the objects it is hosting.
The network load can be affected by which Platforms are set as the Alarm
Providers. By default, when a Platform is configured as an Alarm Provider, it
will automatically subscribe to all alarms in the Galaxy. This means that any
time a new alarm occurs, it will be sent to all of the Platforms that have been
configured as an Alarm Provider. You can override this by configuring the
Platform to be only an Alarm Provider for a set of Areas that you designate.
The alarm consumers provided with FactorySuite A2 System are the InTouch
Alarm Viewer ActiveX Control and the InTouch Alarm DB Logger Manager
utility. These consumers can be configured to query alarms from a local
Platform or from a remote Platform. By leveraging this flexibility, you can
minimize the network load imposed by alarm distribution.
Note The InTouch Software alarm clients used to show summary alarms only
query for alarm information when they are visible on the screen.
ProviderNodeName - This is the host name of the node where the Alarm
Provider resides.
Provider - This is the word "Galaxy." There can only be one Platform per
computer, and this keyword represents the Platform Alarm Provider.
AlarmGroup - The Area objects in the IDE serve as the alarm groups.
When building the application in the Model View of the IDE, you can
place the Areas within each other. If an Area named "Tanks" hosts another
Area named "Clearwell," then subscribing to the alarms in "Tanks" will
automatically include the alarms in "Clearwell."
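Assuming the usual distributed alarm query form (\\node\provider!group; verify the exact syntax against the InTouch alarm-client documentation), a query built from the three parts above might be assembled as follows. The node and Area names are hypothetical:

```python
# Hypothetical assembly of an alarm query string from ProviderNodeName,
# Provider, and AlarmGroup as described above. The exact syntax should be
# confirmed in the InTouch alarm-client documentation.
def alarm_query(provider_node, provider, alarm_group):
    return "\\\\%s\\%s!%s" % (provider_node, provider, alarm_group)

# Subscribing to "Tanks" also covers its contained Area "Clearwell".
assert alarm_query("AOSNode1", "Galaxy", "Tanks") == "\\\\AOSNode1\\Galaxy!Tanks"
```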
Best Practice
Be sure that parent alarm areas are on the same node as their subareas.
For more information on Distributed Local Network or Client/Server
topologies, see Topology Categories in Chapter 2, "Identifying Topology
Requirements."
Best Practice
The following list summarizes the key points in setting up an optimized alarm
distribution system in a Distributed Local Network topology:
Best Practice
The following list summarizes the key points for setting up an optimized alarm
distribution system in a client/server architecture. The list also applies to a
Widely-distributed SCADA system environment:
Best Practice
In a typical FactorySuite A2 System, IndustrialSQL Server Historian will be
used to store all time-series data. It is recommended that you install
IndustrialSQL Server Historian on a dedicated node. For consolidation
purposes, the best practice is to store the alarm history in another database on
the same node as IndustrialSQL Server Historian.
The location of the InTouch Alarm DB Logger utility may greatly affect data
loss prevention. The Alarm Logger utility automatically buffers the alarms it
receives until they are successfully stored in the target database. Installing the
Alarm Logger on the AutomationObject Server node ensures that the alarms
are not lost if the connection to the Historian Node is lost.
To eliminate a single point of failure, install the Alarm DB Logger Manager
utility on all AutomationObject Server nodes on which alarms are generated.
The alarm query configured in the Alarm DB Logger Manager utility will then
retrieve alarms that are generated only by objects hosted by the local platform.
This practice ensures that no network connection will be required to deliver the
alarm to the Alarm Logger and prevents the loss of alarm records due to
network instability.
C H A P T E R   1 0
Contents
System Disk Space and RAM Use
Predicting System Performance
Performance Data
Failover Performance
Load Shared with Remote I/O Data Source
DIObject Performance Notes
OPC Client Performance
Database Growth
Database growth can be set to either a fixed size, or to Auto Increment. The
default setting (Auto Increment) is preferred unless:
The Auto Increment setting is effective for the GR Node because of the
dynamic nature of the GR; in other words, the GR is not a "static" database that
stores raw production values. An example of a "static" database is the ProdDB
used by the PEM objects. The ProdDB is effectively managed using a fixed
size strategy because its size can be more easily quantified and administered.
The ProdDB database is also installed on the Historian node, distributing SQL
Server processing needs.
SQL Server provides effective growth-administration engines and should not
degrade system performance when dynamically growing the database, even
when the AutomationObject Server is deployed on the same node as the GR.
RAM Allocation
In practice, MS SQL Server tends to monopolize and consume any RAM that
is available at any point in time. SQL Server enables setting a limit (or cap) on
RAM allocation so that other system resources can function without
limitations. This cap can be configured either as a dynamic or fixed setting.
When the RAM allocation is known, select the ...fixed memory size option to
limit RAM use to approximately half of the machine's available RAM. For
example, if a machine has 1GB of RAM, set the SQL Server cap to 500MB.
Setting a fixed memory size ensures the memory is always available.
Note For recommendations on Dynamic SQL Server memory allocation, see
"Bulk Operations Considerations" on page 232.
To set a fixed memory size
Right-click the Server icon and select Properties from the submenu.
Sizing                                     MB         Percent Increase
                                           6.6 MB     -----
Auto-incremented size at reporting time    7.1 MB     + 7.5%
Auto-incremented size at reporting time    7.7 MB     + 7.8%
....                                       ....       ....
Auto-incremented size at reporting time    11.44 MB   + 8.3%
Auto-incremented size at reporting time    12.44 MB   + 8.7%
....                                       ....       + 7 to 11%
The default Auto Increment value is 10%. This means that the occupied disk
space does not grow with each individual ArchestrA object template and/or
object instance added to the Galaxy Repository. Instead, the disk space will
grow by the specified percentage, when SQL Server determines that adding
space is necessary.
In practice, the growth increments for occupied disk space vary from 7% of the
Galaxy database size to 11%.
Note The size of the increase does not correlate to a single object instance.
It is prudent to leave enough disk space available for the planned capacity of
the ArchestrA Galaxy Repository. Unless the system is very well-defined,
always allow for 25-50% beyond projected capacity, then add 10% to allow for
Microsoft SQL Server to automatically increment beyond that threshold, if and
when the threshold is reached.
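The headroom guideline above can be expressed as a short calculation. The following Python sketch is illustrative only; the function and parameter names are ours, and the fractions are the document's rules of thumb:

```python
def projected_gr_disk_mb(current_db_mb, planned_growth_fraction=0.50,
                         auto_increment_fraction=0.10):
    """Estimate disk space to reserve for a Galaxy Repository database:
    allow 25-50% beyond current size for planned capacity, then add the
    10% SQL Server auto-increment that fires if the threshold is reached."""
    planned = current_db_mb * (1.0 + planned_growth_fraction)
    return planned * (1.0 + auto_increment_fraction)

# A hypothetical 100 MB Galaxy database sized with 50% headroom plus one
# 10% auto-increment:
print(round(projected_gr_disk_mb(100), 1))  # 165.0
```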
All disk space and RAM capacity in the following table are approximate:

Component   Disk Space      RAM
--          55 MB on disk   22 MB RAM
--          36 MB on disk   0.6 MB RAM *
--          19 MB on disk   7 MB RAM
--          --              2 MB RAM
--          --              63 MB RAM
--          --              7 MB RAM
--          3 MB on disk    --

Task                 Disk Space          RAM
Create GR Platform   3 MB on disk        28 MB RAM
Create GR state      (small increment)   (small increment)
Deploy GR Platform   16 MB on disk       31 MB RAM
Deploy GR state      (small increment)   6 MB RAM
Galaxy State                    Database Size   Unallocated Space   Actual Space Occupied
--                              11.44 MB        0.28 MB             11.14 MB
--                              12.44 MB        0.36 MB             12.08 MB
Increase from the addition of   --              --                  0.94 MB
100 derived object templates
of type XXX

Per-template increase: 0.94 MB / 100 = 0.0094 MB
The size factor for different derived object templates varies due to differing
selections of UDA attributes and differing attribute extensions.
Galaxy State                    Database Size   Unallocated Space   Actual Space Occupied
--                              12.44 MB        0.36 MB             12.08 MB
--                              12.44 MB        0.06 MB             12.28 MB
Increase for the QuickScript    --              --                  0.20 MB
with 100 lines of code
The following table describes disk space and RAM usage when creating and
deploying an aggregated Application Object instance containing multiple child
objects. The objects are deployed on the local node:

Task                                          Disk Space   RAM
--                                            --           ~ 7.7 MB
Deploy master object containing 10 objects,   ~ 17 MB      ~ 2.5 MB
each with 500 UDAs (total of 5,000 UDAs)
and each containing 100 lines of inherited
QuickScript
Some RAM on the local (target) node is used by the IDE, the remaining RAM
is used by the Platform and AppEngine.
Deploying to an XP Platform
In a production environment, AutomationObjects may be deployed to an
AppEngine on the XP platform. Estimating disk space on XP is problematic
because of the System Restore feature.
System Restore provides protection from inadvertent deletion of required
system files. The end result is similar to a Backup/Restore operation on a
Server operating system: the user can restore the machine from a certain point
in time.
However, the System Restore feature maintains extra copies of files on the
local machine's disk drive and provides limited administrative options. File
copies are made at random times determined by the operating system. The
deployment operation triggers a System Restore operation, but it may not
include all necessary executable files.
Default System Restore disk space settings are as follows:
For drives larger than 4 GB, System Restore uses up to 12% of the disk
space.
For drives smaller than 4 GB, System Restore by default uses only up to
400 MB of disk space.
The data store size is not reserved space on the disk, and the maximum size
(up to the values defined above) is limited at any time by the amount of free
space available on disk. Therefore, if disk space use encroaches on the data
store size, System Restore always yields its data store space to the system.
For example, if the data store size is configured to 500 MB, of which 200 MB
is already used, and the current free hard-disk space is only 150 MB, the
effective size of the data store is 350 MB (200 + 150), not 500 MB. Note that
disk space usage can be adjusted at any time.
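The rule in this example reduces to a one-line calculation. The following Python sketch (the function name is ours) reproduces the 350 MB result:

```python
def effective_data_store_mb(configured_mb, used_mb, free_disk_mb):
    """Effective System Restore data store size, per the rule above:
    the store can hold what it already uses plus whatever free disk
    space remains, never exceeding its configured maximum."""
    return min(configured_mb, used_mb + free_disk_mb)

# The example from the text: 500 MB configured, 200 MB used, 150 MB free.
print(effective_data_store_mb(500, 200, 150))  # 350
```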
Note System Restore is accessed via the System utility in Control Panel.
System Restore is enabled by default. DO NOT turn off the System Restore
feature on an operational ArchestrA Platform computer. Instead, assess disk
space usage using an isolated test computer on an isolated Galaxy where
System Restore is disabled.
The Unit Application is defined by the objects it contains (see the following
table):
Object Type                        Object   I/O Change   Monitor   Avg. Alarm
                                   Count    Rate         Rate      Rate
Discrete Device                    200      10/min       0.5 sec   10/hr
--                                 200      1/sec        0.5 sec   30/hr
--                                 50       10/min       1 sec     10/hr
Calculation (synchronous script)   48       1/sec        1 sec     30/hr
Calculation (asynchronous script,  --       6/min        1 sec     30/hr
once/10 secs)
Areas/SubAreas                     1/5      --           --        --
I/O Networks                       --       --           --        --
I/O Devices                        --       --           --        --
Object Count describes the number of object instances for each template.
I/O Change Rate describes the average rate of process data value
changes per object. For example, the average number of discrete device
transitions for PV.
Monitor Rate describes the average rate of monitoring for data value
changes per object by the system. This should always be at least as fast as
the expected I/O change rate per object.
Avg. Alarm Rate describes the average rate of new alarms detected per
object by the system.
I/O Pts. describes the average number of configured I/O points per
object.
Note The state scan period is 500 msec for the Unit Application except when
there are six or more Unit Applications on the node.
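To get a feel for the aggregate input load these rows imply, object counts can be multiplied by per-object change rates. This Python sketch is illustrative; rows whose object names did not survive in the table above are labeled generically, and per-minute rates are converted to per-second:

```python
# (description, object_count, changes_per_object_per_sec) taken from the
# Unit Application rows above. Names marked "assumed" were lost in the
# original table.
unit_app_rows = [
    ("Discrete Device",               200, 10 / 60.0),  # 10/min
    ("Fast object (name assumed)",    200, 1.0),        # 1/sec
    ("Slow object (name assumed)",     50, 10 / 60.0),  # 10/min
    ("Calculation (sync script)",      48, 1.0),        # 1/sec
]

# Aggregate I/O change rate across one Unit Application.
total = sum(count * rate for _, count, rate in unit_app_rows)
print(round(total))  # 290
```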
Hardware Specifications
A computer with the following specifications was used in testing the
FactorySuite A2 System products in order to determine the sizing and
performance guidelines:
Operating System   --
CPU Speed          2.4 GHz
Physical RAM       1-2 GB
Hard Disk          15 GB

Object Instances   I/O Points   CPU Usage
500                500          4%
3000               3000         15%
5000               5000         22%
To optimize the use of RAM and CPU, it is important to define the scripting
requirements and to plan ahead for how the scripts are implemented. In many
cases, it is more efficient to create global objects that execute scripts to update
data in multiple objects, instead of running the same script in all objects.
For recommendations on applied scripting techniques, see Chapter 5,
"Working with Templates."
Measured deployment times were 103 sec, 60 sec, and 39 sec*.
* Deployment time is measured after the first instance of the base template has
been deployed, since this deploys the code modules.
Performance Data
A FactorySuite A2 System enables a great degree of freedom when configuring
and deploying objects. While it is not practical to test all possible configuration
and implementation combinations, three representative system topologies are
detailed. The loading parameter details for each of those representative
systems are also described.
To determine the performance guidelines, each test system configuration was
measured against a standard of health. For testing purposes, a healthy system is
defined as follows:
Attribute                      Default Value            Recommended Value   Comments
Message time-out               30,000 ms                300,000 ms          Increase to avoid timeouts
                                                                           when deploying a large
                                                                           number of instances
NMX heartbeat period           2000 ms                  2000 ms
Consecutive number of missed   4 (AutomationObject
NMX heartbeats allowed         Server)
                               4 (Galaxy Repository)
                               4 (Visualization node)
For details on the attributes in the WinPlatform object, please refer to the
$WinPlatform object Help.
Checkpointing Attributes
Every AppEngine in a Galaxy saves changes to the values of specific attributes
(checkpointed attributes) to disk at a predefined interval. The more attributes
an AppEngine defines as checkpointed, the more data is written to disk. It is
critical to configure this attribute correctly to achieve the best performance for
the system and the process.
After a restart of an AppEngine or after a failover, the system reads the
checkpoint file to update the attribute values and starts from there. The default
value of the CheckpointPeriod attribute in the AppEngine is 0, which
checkpoints attributes every scan; this is a safe default for users unaware of the
functionality. However, it is highly recommended to set this parameter to a
higher value to optimize CPU utilization while maintaining data integrity in
the event of a failure in the AppEngine.
Note For more details on Checkpointing, see "Checkpointing" on page 74.
For information on checkpoint attributes, see "Tuning Redundant Engine
Attributes" on page 87.
Best Practice
To keep multiple engines with the same configuration from firing
simultaneously, set their scan rates to prime numbers. For example, the
scan rates of the various engines might be set at 997 ms, 1009 ms, and
1021 ms.
Platforms that are routinely placed off scan may cause increased network
traffic from other platforms if those platforms still have Application Objects
on scan that subscribe to data from Application Objects belonging to the
off-scan Platform. If the resulting increase in network traffic becomes
excessive, consider placing the "subscriber" Application Objects off scan as
well during the time that the particular platform is scheduled to be off scan.
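The prime-number scan-rate practice above can be sketched as a small helper. This Python sketch is illustrative only (the function names are ours, not a product API):

```python
def is_prime(n):
    """Trial-division primality test; fine for millisecond-scale values."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def prime_scan_rate_ms(target_ms):
    """Smallest prime at or above a desired scan period, so that engines
    with nominally equal rates do not all fire on the same tick."""
    n = target_ms
    while not is_prime(n):
        n += 1
    return n

# Three engines targeting roughly one second get staggered prime periods:
print(prime_scan_rate_ms(995), prime_scan_rate_ms(1000), prime_scan_rate_ms(1010))
# 997 1009 1013
```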
Minimum

Processor   P4, 2 GHz
RAM         1 GB
Hard Disk   10 GB
The following operating systems, SQL Server editions, and auxiliary software
were tested:
Operating System
SQL Server
Application Specification
The following specification was implemented for single-node tests:
1000 I/O points with a 1 sec update rate. Make sure these I/O points are
distributed among all the different data types and between the DAServers
and legacy I/O Servers, and be sure to use all three communication
protocols (SuiteLink, OPC, and DDE).
10% of I/O points historized, maintaining the same ratio across the various
data types.
10% changing I/O points: 100 items were updated every second; however,
the server scanned for 1000 items per second.
System Components
The single node includes the following components:
Galaxy Repository
IDE
BootStrap
InTouch Software HMI
IndustrialSQL Server Historian
ActiveFactory 9.0
DAS ABCIP
ABTCP Legacy I/O Server
DDE Server
DASSIDirect
The following data was gathered over a period of approximately 2.5 days from
a system running Windows Server 2003. Performance on the XP Pro node was
similar enough that separate results would be redundant in this document:
Galaxy Specification

Component             Value
AppEngines            --
AutomationObjects     --
DAServer              DASABCIP
I/O Server            ABTCP
DIObject              DASSIDirect
Historian             InSQL Historian
IDAS Tags             75
--                    1.01 GB
Network Utilization   0.14-0.34%
Process       PrivateBytes Avg. (Bytes)   --          --
Total         934223422                   931662894   944376
aaBootstrap   5886859                     5885952     9292
aaEngine      50091771                    50331286    44172
aaEngine1     51537157                    51402770    31340
aaEngine2     81326269                    81223239    59400
aaEngine3     50945422                    50895837    31524
aaGR          75170986                    78395992    70608
aahIDASSvc    13881344                    13881344    17676
aahManStSvc   6864896                     6864896     11600
abtcp         3470549                     3473408     1924
DASABCIP      14304104                    14576587    20016
DASSIDirect   5608829                     5603328     11332
lsass         10071102                    10903473    11856
aaTrend       43463130                    45399870    27724
SQL Server    104809085                   64583768    67872
[Topology diagram: a Workstation, Historian (data and alarms), Engineering
Station with Configuration Database, SuiteVoyager Portal, and I/O Server
connected through a network device (switch or router) to the PLC network.]
Note For more details about this topology type, see the "Distributed Local
Network" section in Chapter 2, "Identifying Topology Requirements."
Unit Applications                 40
I/O points                        34,224 (~36,000, or 900 Unit Application
                                  I/O points x 40)
I/O points per Unit Application   900
Inputs                            300 changes/sec
Outputs                           10 changes/sec
History stores                    300 changes/sec
--                                3/sec
--                                2000/sec

Node                                         CPU   Memory (MB)   Network Usage @ 100 MB
GR                                           5%    700           0.3%
Historian                                    20%   563           0.3%
AutomationObject Server/Visualization Node   --    --            --
[Topology diagram: a Terminal Server, SuiteVoyager Portal, AutomationObject
Server, Historian (data and alarms), Engineering Station with Configuration
Database, SCADAlarm node, and I/O Server on the supervisory network
(Ethernet); a network device (switch or router) connects the supervisory
network to the control network, which reaches PLCs and level transmitters
through RTU radio modems (RS-232 and RS-485).]
The following elements and data are derived from the Unit Application table
that appears in a previous section.
I/O Baselines
Note This topology type was tested for integration purposes in a lab
environment and uses simulated I/O for data generation. "Real" I/O results
(from PLCs) are included in the following section as a subset of this test
configuration.
--                                     10
Platforms - AutomationObject Servers   20
--                                     50,000
--                                     2,500
--                                     1,100,000
--                                     55,000
Alarm Rate                             --
History Stores                         7,000 changes/sec
The following results table contains simulated and real I/O values:
Results Table:

Configuration                      Physical      Page File     %CPU      %Network   %Network
                                   Memory (MB)   Usage (MB)    Average   (RMC)      (Primary)
Normal Engine (simulated I/O)      871           652           21-29     N/A        0.01
Redundant Engine (simulated I/O)   908           791           22-30     0.12       0.01
IT Platform (TSE)                  759           707           23        N/A        0.01
Load Sharing (simulated I/O)       770           930           27-30     0.24       0.01
Redundant Engine (Real I/O)        913           782           55        0.15       0.03
2. ...
3. Primary AppEngines
4. Backup AppEngines
5. ...
Failover Performance
Much testing has been completed to measure the performance of a system
during a failover. Multiple variables have been considered to obtain
representative data to reflect the average failover time, CPU and memory
utilization in different configurations.
Note that failover performance may vary depending on the number of I/O
points referenced in the system, the number of scripts per instance, the tasks
executed by scripts, historized data, alarms, and checkpointed data. Script
quantity and complexity will increase the failover time, as the system requires
more time and resources to process them.
This section presents two architectures with the corresponding application
configuration and the results of the performance test.
Note Attribute tuning for Redundant AppEngines is described in "Tuning
Redundant Engine Attributes" on page 87.
[Topology diagram: a redundant pair of AutomationObject Servers, AOS
(primary) and AOS (backup), each hosting AE1 with a DIObject and DAS and
linked by the RMC; IT Alarm Provider nodes, the Historian, the Alarm DB,
and the AlarmLogger sit on the supervisory network, which connects through a
network device (switch or router) to the PLC network.]
The following table presents the hardware and software used in the architecture
shown above. It also presents the configuration of the application tested.
Some of the variables in the table are dependent on the number of I/O points
referred to in the application, so the value is shown as a percentage of the total
I/O count for the particular scenario.
Resources:

Hardware              --
Operating System      --
Configuration
DIObject              ABTCPPLC
Instances             --
History               --
Alarms                Total 10 alarms/sec
Scan periods          1000 msec; 1000, 1200, 1500, 1700, 2600, 3300, 4400, and 4600 msec
Modified Attributes   --
The following table presents the results of the test executed to measure the
failover time as well as the CPU and Memory utilization.
The Failover values are generated by two events: a network failure and the
ForceFailoverCmd command.
Also note that another key component tracked in the test is the CPU consumed
by the DIObject:
I/O      Total Failover Time [sec]   CPULoadAvg [%]                  Total Memory Usage [MB]
Counts   Network    ForceFail-       Active    Standby   DASABTCP    Active    Standby
         Failure    overCmd          Node      Node                  Node      Node
2500     23         18               16        --        --          368       306
5000     30         25               22        10        --          371       311
10000    44         43               32        11        --          419       319
20000    59         58               60        16        25          550       399
30000    95         75               88        11        39          663       353
Note During the transition of the engine from Standby to Active state, the
CPU % value can spike to high levels.
[Topology diagram: a load-sharing redundant pair. AutomationObject Server 1
(primary) hosts AE1 with AppObjects and an RDI, plus AE4' as backup;
AutomationObject Server 2 (backup) hosts AE4 with AppObjects and an RDI,
plus AE1' as backup; the two servers are linked by the RMC. Visualization and
IT Alarm Provider nodes sit on the supervisory network, and InSQL, the
AlarmDB, and the AlarmLogger connect through a network device (switch or
router) to the PLC network.]
The resources and configuration details are described in the following table:
Resources:

Hardware              --
Operating System      --
Configuration
DIObject              ABTCPPLC DIObject
Instances             --
History               --
Alarms                Total 10 alarms
Scan periods          1000, 1200, 1500, 1700, 2600, 3300, 4400, and 4600 msec
Modified Attributes   --
The results shown in the table below present the Failover time, CPU utilization
and Memory Usage in different scenarios. This table also contains the CPU
load on the node that hosts the I/O Servers.
I/O      Total Failover Time [sec]   CPULoadAvg [%]                   Total Memory Usage [MB]
Counts   Network    ForceFail-       Active Node       I/O Server     Active    Standby
         Failure    overCmd          (post-failover)   Nodes 1, 2     Node      Node
2500     20         16               --                7, 10          430       330
5000     22         18               13                7, 10          461       333
10000    29         23               19                7, 10          555       362
20000    47         31               40                11, 15         605       385
30000    67         44               49                21, 22         797       443
40000    88         75               55                25, 30         814       552
Note During the transition of the engine from Standby to Active state, the
CPU values spike at high levels.
After failover, it will be necessary to restore the system to the original
(load-sharing) configuration. The following script example would be used on
both nodes and configured as While True with a trigger period of 10 minutes:
if me.Redundancy.Status == "Active" and
me.Redundancy.PartnerStatus == "Standby - Ready" then
    me.Redundancy.ForceFailoverCmd = true;
endif;
2. ...
3. Measure CPU consumption when an increased number of OPC Client
objects is used:
A. 5 OPC Clients with a varied number of items, totaling 10,000, were
deployed.
B. The 5 objects were split into 10 objects, with the item total remaining at
10,000.
C. The 10 objects were split into 20 objects, with the item total remaining at
10,000.
After the objects were deployed, the system was allowed to stabilize for 5
minutes before CPU usage was measured. The following results are expressed
as a percentage of total system utilization:
Test   Objects   Items   CPU %
1A     --        2500    13.2
1B     --        5000    28.0
1C     --        10000   54.0
2A     5         10000   54.0
2B     10        10000   59.2
2C     20        10000   60.5
3A     --        10000   57.0
3B     10        10000   57.4
3C     20        10000   58.9
[Bar chart: CPU % (0-70) plotted for increasing item counts, scan groups, and
multiple objects at a fixed item count.]
The use of multiple scan groups appears to have more impact on system
performance than multiple items does, although under these test conditions
each effect is negligible.
Chapter 11

Working in Wide-Area Networks and SCADA Systems

Wide-area network and SCADA environments are typically characterized by:
Low bandwidth.
High latency.
Intermittent communication.

Note This chapter contains information and terms specific to the SCADA
industry.
Network Terminology
When metric prefixes (k for kilo, M for mega) are used in a network context,
they retain their original definitions. That is, k = 1,000 and M = 1,000,000.
This usage differs from disk-storage terminology, where KB = 1,024 Bytes and
MB = 1,048,576 Bytes.
The following table summarizes the conventions used in this chapter:

k     1,000
M     1,000,000
B     Byte
b     bit
bps   bits per second
Bps   Bytes per second
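These conventions matter when converting link speeds into data rates. The following Python sketch is illustrative only (the function name is ours):

```python
def kbps_to_Bps(kbps):
    """Convert a network rate in kilobits per second to Bytes per second.
    Network prefixes are decimal (k = 1,000), per the table above, and
    there are 8 bits in a Byte."""
    return kbps * 1000 / 8

# A 56 kbps modem moves at most 7,000 Bytes per second (before any
# compression gains):
print(kbps_to_Bps(56))  # 7000.0
```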
Subnets
Set up subnets and sub-areas in the IP network. Most SCADA systems use
routers and switches to isolate traffic within a particular site so as not to burden
the network. Be sure routers are configured to isolate and route information
correctly.
DCOM
When assessing and setting up the network, be careful when blocking ports:
some DCOM ports must remain open for OPC communication. Leave open the
ports used for communication between FactorySuite components.
For a list of ports used by FactorySuite components, refer to "Process Control
Network Firewall Ports" on page 203.
Domain Controller
The Domain Controller is a Windows server computer that stores user account
information, authenticates users, and enforces security policy for a Windows
domain. Domain controllers detect changes to user accounts and synchronize
changes made in Directory Server user entries.
Distributed SCADA networks typically employ multiple Domain Controllers
at strategic network locations.
The Domain Controller node includes the DNS service and Time
Synchronization to manage network communication requests.
Time Synchronization
The Windows 2000 environment uses W32Time services to synchronize time
settings across a network. For a detailed description of the Windows Time
service, how it operates on a Windows 2000 network, and configuration
details, see
http://www.microsoft.com/windows2000/techinfo/howitworks/security/wintimeserv.asp.
The end result is that all computers within the network running W32Time
reliably synchronize to a common time.
Best Practice
Implement Universal Time Synchronization (UTS) at each site. Doing so
provides absolute certainty that data is properly time-stamped and forwarded to
the historian under all circumstances. Current technology uses GPS or radio
broadcast software along with dedicated hardware devices that may be linked
to one or more computers at the site using serial ports, Ethernet, or USB.
The following considerations apply when implementing UTS:
[Topology diagram: a central supervisory network (10/100 Mbps) hosting an
Engineering Station, Configuration Database, Historian (data and alarms), and
a RAS server behind a network device (switch or router); 56 kbps modems
(connecting at 33.6 kbps) link the RAS server to a RAS client on Subnet 2 that
also performs routing; the remote control network (10/100 Mbps) serves two
Visualization nodes and two PLCs.]
Overview
Remote Access Server (RAS) is often used to connect remote computers with a
central computer. Telephone modems are commonly used when the
application requires communication over very long distances and does not
require a robust connection, high throughput, or low latency.
In addition, using modems allows the user to configure the connection from
the RAS client to the RAS server as call-back, dial-on-demand, and
hang-up-on-idle or persistent-connection.
Note Even though modems have low nominal throughput, they can achieve
high effective throughput by using software and hardware compression.
Terminal Services
Terminal Services (TS) is optimized to use minimal network bandwidth. As
such, TS works fairly well across modems. When managing a remote
AppServer node using the IDE, connect with a dedicated Terminal Services
node to manage the connection with the remote node.
Ensure that the remote node can be pinged successfully. To do so, the DNS
server must be able to resolve all remote node names.
InTouch HMI
When NAD is used in a Terminal Services environment on a Windows 2003
machine, a share must be created on the Master application even if the Master
is on the Terminal Server Node.
InTouch for Terminal Services Environment (TSE) may be installed on a
central computer and client sessions may be run at remote sites. Please refer to
the InTouch TSE Deployment Guide for more information.
Security
This section contains security information specific to a SCADA environment.
For general security-related information, see Chapter 7, "Architecting
Security."
Domain-Level Security
The ArchestrA Network account must be configured on the Domain Controller
and used by all local and remote component installations.
In order to maintain a central point of security administration, the following
configurations are recommended:
1. ...
2. Select Galaxy > Configure > Security from the main menu.
3. ...
InTouch HMI
Use the ArchestrA security model selection within InTouch WindowMaker.
Workgroup-Level Security
Configure the following security permissions on each node as applicable:
InTouch HMI
Use the ArchestrA security model selection within InTouch WindowMaker.
Disaster Recovery
It is important to establish a site, physically separated from the central one, that
has replication capability. Doing so ensures the integrity of an operational
system where the central site is at risk from fire, tornado, hurricane or other
catastrophe. The replication capability includes having duplicated hardware,
and requires that software configuration and key state information is
periodically propagated from the central site to the recovery site.
Each disaster recovery scenario is unique, so it is important to consult with
system integration experts regarding the design of communications equipment
and hardware, and the configuration of the software.
[Topology diagram: a control room (Guest node, central Visualization,
Engineering Station with IAS multi-IDE and Visualization master node,
Terminal Services, and the Historian for data and alarms) connects through a
gateway router and VPN to the corporate network, and through Cisco routers
over WAN links (9600 bps-4 Mbps) to Areas 1, 2, and 3. Area 1 is reached
over 56 kbps modem links (connecting at 33.6 kbps) through a Domain
Controller; Area 2 over radio towers (9600 bps-100 Mbps), with a redundant
message channel between a primary AOS node and its partner; Area 3 through
a bandwidth controller (100 kbps). The remote sites host AOS nodes with
Visualization, Visualization-only nodes, RAS clients, hubs/switches
(10/100 Mbps), and PLC1-PLC3.]
Area 1 Communication
Uses a 56 kbps modem connection through a domain controller. The modem
settings are not modified because of the modems' excellent compression
capabilities.
Area 2 Communication
Radio modem link at 128 kbps full duplex. The Cisco router clock rate is set to
125,000 bps.
Area 3 Communication
Bandwidth controller set to 16,000 bps.
The information on the following pages references the above topology.
ApplicationObject Deployment
Do not initiate a Cascade Deployment of the entire Galaxy when large
numbers of Platforms and Engines reside on remote nodes over a distributed
network. Rather, deploy the Platforms separately. Separate Platform
deployment prevents overloading the network.
Do not enable the Historian feature (a check box in the Objects IDE Editor
form) in the highest level template for any object because this forces
historization of every instance. Selectively apply the Historian feature to
some templates and to specific instances of objects.
Modify the Historian tuning constants which are attributes of the Engine
component found in the WinPlatform and AppEngine objects.
Note The following attributes' default values are designed for a
non-intermittent network environment and are especially important in a
widely-distributed, redundant system.
The attributes are listed with the default values.
Engine.Historian.StoreForwardMinDuration: 0 s (seconds);
Engine.Historian.ForwardingChunkSize: 1024 Bytes;
Engine.Historian.ForwardingDelay: 250 ms (milliseconds).
Note The previous 3 Historian-tuning attributes are available in IAS 2.0 and
later. They are accessible using the AppEngine and Platform Editors (Engine
tab).
For detailed attribute information, see the AppEngine 'General Configuration'
help file.
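Given the defaults listed above (1024-Byte chunks forwarded every 250 ms), a rough lower bound on store-and-forward catch-up time can be estimated. This Python sketch is illustrative only (the function name is ours) and deliberately ignores actual network transfer time, so real catch-up takes longer:

```python
import math

def forwarding_time_s(backlog_bytes, chunk_bytes=1024, delay_ms=250):
    """Rough lower bound on the time to forward a store-and-forward
    backlog, using the default ForwardingChunkSize and ForwardingDelay
    values above: one chunk is sent per delay interval."""
    chunks = math.ceil(backlog_bytes / chunk_bytes)
    return chunks * delay_ms / 1000.0

# A 1 MB backlog at the defaults needs 1024 chunks, or a little over
# 4 minutes of forwarding delay alone:
print(forwarding_time_s(1024 * 1024))  # 256.0
```

Shrinking ForwardingDelay or growing ForwardingChunkSize speeds recovery at the cost of more network load while forwarding.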
For the occasions that communications with the Industrial SQL Server
Historian are interrupted, local storage and recovery of historical data is
provided. It is important to configure the Historization parameters of the
Engine Object using the IDE to accommodate the number of packets that
will be transmitted over the network when data is being restored.
Be sure enough disk space is on the local node to temporarily store data
until it can be transferred to the historian.
Note For detailed information on changing the default Platform Start Up and
Shut Down settings, see "Tuning Recommendations for Redundancy in Large
Systems" on page 86.
Alarms
When the (default) InTouch Alarm Provider option (a check box in the
Platform Editor) is selected and the Area text is left blank when you configure
the Platform object, a subscription is initiated to all alarms throughout the
Galaxy, regardless of the originating node.
Note Alarms from sub-areas (of those specified) create subscriptions because
of the hierarchical design of the Alarm subsystem.
Be sure that parent alarm areas are on the same node as their sub-areas.
Alarm history is stored in an MSSQL database. The default name for the
database is AlarmDB; it may be given a different name using the configuration
utility launched from the WWAlarmDBLogger application.
WWAlarmDBLogger
WWAlarmDBLogger is installed from the InTouch CD-ROM. It is a client
application that subscribes to Alarm messages in the same fashion as the
InTouch Alarm Clients do, including one or more Query expressions.
The WWAlarmDBLogger application delivers alarm message content to the
AlarmDB (or otherwise named) database using standard MSSQL network
technology. It receives alarm messages from subscribed Areas via the InTouch
Distributed Alarm protocol, which is verbose.
For remote computers that connect to a central supervisory system over a slow
network connection, the traditional deployment of Alarm components will
result in slow response times over that connection and can interfere with
delivery of live data.
The recommended topology for slow network connections is to have
individual MSSQL Alarm history databases located at the remote sites. This
deployment ensures that all Distributed Alarm protocol messages are confined
to the remote site's LAN. All database transactions posting Alarm log
messages to the AlarmDB (or otherwise named) database are likewise confined
to the LAN.
Microsoft MSSQL DTS (Data Transformation Services) technology may be
used to consolidate AlarmDB messages to a central database periodically for
central analysis and reporting.
Note For information on implementing Alarms, see "Determining the Alarm
Topology" on page 215.
Inter-Node Communications
The following section considers platform communication when deployed
across a widely-distributed and/or intermittent network (SCADA). A brief
summary is included for context and is not intended as a recommendation, but
as a "pointer" for the developer to begin tuning the communications to
accommodate the needs, and mitigate the effects, of a SCADA system.
The information assumes multiple platforms are deployed on multiple nodes in
a SCADA topology.
Communication Summary
Communication between distributed platforms occurs at two levels: Heartbeats
and messages (data change requests and replies, subscriptions, status
updates/replies, etc.). Messages are handled by Message Exchange (MX)
services.
IAS is designed to monitor heartbeats and messages (sends/receives) on a
regular, configurable basis. Several attributes can be used to monitor and tune
the system to avoid problems in a SCADA environment; for example,
heartbeats missed because of an intermittent network may cause all
subscriptions to be dropped and re-initiated, saturating the network and
preventing successful reconnection with remote nodes.
The actual settings depend on the particular network environment.
Tune the following attributes when implementing Redundant
Platforms/Engines within a SCADA environment:
NmxSvc Attribute                  Primitive     Default Value   Remarks
NMXMsgMxTimeout                   WinPlatform   30,000 ms       Can be set at configuration time,
                                                (30 seconds)    and at run time if the Platform is
                                                                off scan. Specifies how long an
                                                                Engine waits for a response from
                                                                another Engine before declaring a
                                                                timeout.
NetNMXHeartbeatPeriod             WinPlatform   2000 ms
                                                (2 seconds)
NetNMXHeartbeatsMissedConsecMax   WinPlatform   3
DataNotifyFailureConsecMax        Engine        --
These attributes can be set to balance correct and timely error notification with
stable system performance. For example, a DataNotifyFailureConsecMax
value of 0 means that the system begins tearing down subscriptions (and
rebuilding them) if a Data Change Notification failure occurs at any time.
Initiating this action floods the network with subscription messages, both
when tearing the subscriptions down and when rebuilding them.
This behavior may not be acceptable in an environment in which certain
connections are sporadically intermittent.
Using NetNMXHeartbeatsMissedConsecMax and
NetNMXHeartbeatPeriod together provides the total time elapsed since the
last heartbeat before the connection is declared broken. The formula is:
(NetNMXHeartbeatsMissedConsecMax + 1) *
NetNMXHeartbeatPeriod
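The formula above can be checked with the default values from the table. This Python sketch is illustrative only (the function name is ours):

```python
def broken_connection_time_ms(missed_consec_max=3, heartbeat_period_ms=2000):
    """Total time elapsed since the last heartbeat before a connection is
    declared broken, per the formula above. Defaults are the
    NetNMXHeartbeatsMissedConsecMax and NetNMXHeartbeatPeriod
    values from the preceding table."""
    return (missed_consec_max + 1) * heartbeat_period_ms

# With the defaults (3 missed heartbeats allowed, 2000 ms period), a
# connection is declared broken after 8 seconds:
print(broken_connection_time_ms())  # 8000
```

Raising either attribute makes the system more tolerant of an intermittent link, at the cost of slower detection of a genuinely broken connection.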
Note The redundant pair must be at the same physical location; they cannot
be geographically separate.
Redundancy for AutomationObject Server Engines may be applied as needed
at remote sites. The Primary and Backup nodes must include individual NICs
for their RMC channels and must use a simple crossover cable between them.
The only impact upon network traffic will be some amount of additional
packets during deployment from the central GR node to both the Primary and
Backup nodes.
Load Balancing
Load balancing is relevant only in the central supervisory setting. This is
because "load balancing" implies moving traffic to another CPU at the same
location (SCADA systems have physically distributed architecture). In a
central location, use a cluster of Application servers to distribute processing
activities.
See the NAD guidance below for the initial startup copy process and a workaround with detailed instructions for preventing a NAD client from copying the full application on initial startup.
NAD
Do not include SmartSymbols in the InTouch Software NAD source application for distribution to HMI nodes over a slow connection. (A change to any SmartSymbol causes recompiling of every window in the application and subsequent re-deployment of all windows.)
Instead, develop a master application using SmartSymbols and import only the modified windows and any modified scripts into the NAD source application.
Configure the NAD clients' poll intervals to minimize network bandwidth usage.
Alarms
Instead of applying a single global query, configure displays to dynamically query different provider nodes. Alarm queries reach an area directly over SuiteLink (not MX) by specifying the node name (\\NODENAME\Galaxy!Area).
WW Alarm DB Logger should log only to a local database or over a fast
network.
Allocate alarm groups between different stations and adjust visibility for
appropriate areas. Some alarm groups may be at physically different sites.
Select alarms that need wide-area visibility to minimize network traffic.
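The per-node query syntax described above can be illustrated with a small sketch. This is a hypothetical helper; only the \\NODENAME\Galaxy!Area format comes from the text, and the node and area names are made up:

```python
def alarm_query(node: str, area: str, galaxy: str = "Galaxy") -> str:
    """Build an alarm query string that targets an area on a specific
    provider node (\\NODENAME\Galaxy!Area form)."""
    return "\\\\" + node + "\\" + galaxy + "!" + area

# One query per provider node instead of a single global query.
queries = [alarm_query(n, a) for n, a in [("ScadaNode1", "Intake"),
                                          ("ScadaNode2", "Outfall")]]
print(queries[0])  # \\ScadaNode1\Galaxy!Intake
```

Distributing queries this way keeps alarm traffic local to the stations that actually display each area.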
Diagnostics
The following information is applicable within the SCADA environment:
Ping
Ping is a basic command for checking network connectivity. When you ping another machine, a sequence of special ICMP (Internet Control Message Protocol) Echo Request packets is sent. The receiving machine responds with an "echo reply."
The Ping program reports a number of items including: the number of
milliseconds it took to get a reply to each Echo Request packet, the maximum,
minimum and average round trip times, the number of dropped packets and a
TTL (Time To Live) value.
The average round trip time provides an indication of the speed of your
network. In general, it's best if round-trip times are under 200
milliseconds. The maximum and minimum round-trip times give you an
idea of the variance ('jitter'). When large variance is present, you may
experience poor response in communications.
The TTL value helps you find out how many routers (or "hops") the
packet goes through in order to get to its destination. Every packet sent has
a TTL field set to an initial number (for example 128). As the packet
traverses the network, the TTL field is decremented by one, by each
router. If the TTL field in successive pings is different, it could indicate
that the reply packets are traveling through different routes.
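The round-trip statistics described above can be computed from a set of sample reply times. This is an illustrative sketch (the sample values are made up; the 200 ms guideline is from the text):

```python
def rtt_stats(samples_ms):
    """Summarize ping round-trip times: min, max, average, and jitter
    (the spread between the fastest and slowest reply)."""
    lo, hi = min(samples_ms), max(samples_ms)
    avg = sum(samples_ms) / len(samples_ms)
    return {"min": lo, "max": hi, "avg": avg, "jitter": hi - lo}

stats = rtt_stats([48, 52, 47, 181, 50])
# An average under 200 ms is generally acceptable, but a large jitter
# value hints at poor interactive response.
print(stats["avg"] < 200, stats["jitter"])  # True 134
```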
Tracert
Tracert traces the path followed by a packet from one machine to another. The
results of this command also provide the IP address of each router the
information goes through and how long it took on each hop.
Reviewing the time between hops enables identification of slow or heavy-traffic segments. If tracert is unsuccessful, you can use the command output to help determine at which intermediate router forwarding failed or was slowed.
Looking for hops that have excessive times or dropped packets in the report
from a tracert command can find potential trouble spots between two
machines.
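Spotting a slow segment amounts to comparing each hop's cumulative time with the previous hop's. A hypothetical sketch (the hop times and the 100 ms threshold are made-up values, not from the text):

```python
def slow_segments(hop_times_ms, threshold_ms=100):
    """Given cumulative per-hop times from a tracert report, return the
    1-based hop indices whose incremental delay exceeds the threshold."""
    deltas = [b - a for a, b in zip([0] + hop_times_ms, hop_times_ms)]
    return [i + 1 for i, d in enumerate(deltas) if d > threshold_ms]

# Hop 3 adds ~150 ms on its own: a candidate trouble spot.
print(slow_segments([10, 25, 175, 190]))  # [3]
```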
Time Synchronization
When the network cable is reconnected, the system event viewer may contain a
message that the time provider NtpClient is currently receiving valid time
data.
This message does NOT mean that the computer clock has been synchronized. It means the internal clock has been adjusted and will act as described in the bullets above. In other words, this message is sent every time the computer is reconnected, but only in certain cases is the actual computer clock also updated to the current Server time.
In other cases, only the internal clock is adjusted and the computer time is
gradually synced with Server time according to the algorithm.
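The gradual adjustment can be pictured as a bounded slewing loop. This is an illustrative model only, not the actual NtpClient algorithm; the step size is an assumption:

```python
def slew_clock(local_ms, server_ms, max_step_ms=50):
    """Move the local clock toward server time in bounded steps rather
    than jumping, mimicking gradual synchronization."""
    steps = 0
    while local_ms != server_ms:
        delta = server_ms - local_ms
        # Clamp each correction to +/- max_step_ms.
        step = max(-max_step_ms, min(max_step_ms, delta))
        local_ms += step
        steps += 1
    return local_ms, steps

# A 200 ms offset is removed over four 50 ms adjustments.
print(slew_clock(1000, 1200))  # (1200, 4)
```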
SCADA Benchmarks
The following information is derived from Wonderware QA testing.
Change Rate [points/sec]   Traffic [kbps]   Increase [Factor]
0 [heartbeat traffic]      1.2              --
1                          3.4              2.83
10                         4.1              1.2
100                        11.7             2.85
Change Rate [points/sec]   Traffic [kbps]   Increase [Factor]
0 [MDAS heartbeat]         3.5              --
1                          6.5              1.857
10                         7.7              1.184
100                        21.1             2.74
                  Change Rate        Traffic
MX Subscription   100 [points/sec]   11.7 kbps
History           100 [points/sec]   21.1 kbps
Alarms            100 [points/sec]   26.1 kbps
10,500 History tags among the 500 nodes = 42 history tags per node. Each history tag changes every two minutes, for a tag change every 3 seconds.
Alarms: 0.5 alarms/second [no peak] = 2 kbps [from the Alarms Network Utilization table]
History: 0.35 changes/second [peak at 1/second]
20%
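The sizing arithmetic above can be checked with a short calculation. The tag count and change interval come from the text; scaling the summary-table History figure (21.1 kbps at 100 changes/sec) linearly down to the actual change rate is an assumption:

```python
# 42 history tags per node, each changing every two minutes.
tags_per_node = 42
change_interval_s = 120
changes_per_s = tags_per_node / change_interval_s  # 0.35 changes/second

# From the summary table: History traffic is 21.1 kbps at 100 changes/s.
history_kbps_per_change = 21.1 / 100
history_kbps = changes_per_s * history_kbps_per_change

print(round(changes_per_s, 2), round(history_kbps, 3))  # 0.35 0.074
```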
Chapter 12
Contents
FactorySuite A2 System Diagnostic/Maintenance Tools
Add-on Diagnostic/Maintenance Tools
OS Diagnostic Tools
FactorySuite A2 System
Diagnostic/Maintenance Tools
The following material describes diagnostic tools within the Industrial
Application Server context.
Object Viewer
The Object Viewer monitors the status of the objects and their attributes and
can be used to modify an attribute value for testing purposes.
To add an object to the Object Viewer Watch list, you can manually type the
object and attribute names into the Attribute Reference box in the menu bar
and select Go. When prompted to enter the Attribute Type, click OK.
You can save a list of items being monitored. Once you have a list of attributes
in the Watch Window, you can select all or some of them and save them to an
XML file. Right-click on the Watch window to save the selection or load an
existing one. You can also add a second Watch window that shows as a
separate tab in the bottom of the Viewer.
Refer to the Platform and Engine documentation for information about
attributes that may indicate system health. These attributes provide alarms and statistics on how much load a platform or engine may have when executing application objects or communicating with I/O servers and other platforms.
Best Practice
To test for the Attribute Quality Value
The actual value of a Bad and Good quality is 0 and 192, respectively. Past
methods for testing the Quality value have resulted in code such as:
If MyObject.PV.Quality == 192 then
A more appropriate way to code such tests is to call one of the quality test
functions available within the QuickScript language. The previous example for
testing for a GOOD quality condition would be coded as:
If IsGood(MyObject.PV.Quality) then
The available functions for testing the Quality value of an attribute are as
follows. The functions return a Boolean (True) value for success and a Boolean
(False) for failure of the test.
Test Condition:
IsBad
IsGood
IsInitializing
IsUncertain
IsUsable
As in the above example, the syntax consists of the desired function followed by the specific attribute to test in parentheses. Note that the parentheses around the quality attribute are required.
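The intent of these helper functions can be mirrored in a small sketch. Only the Bad and Good values (0 and 192) come from the text; this is a simplified model, and the real QuickScript functions may also evaluate quality sub-fields:

```python
GOOD, BAD = 192, 0  # quality values from the text

def is_good(quality: int) -> bool:
    """Simplified equivalent of the QuickScript IsGood() test."""
    return quality == GOOD

def is_bad(quality: int) -> bool:
    """Simplified equivalent of the QuickScript IsBad() test."""
    return quality == BAD

# Preferable to comparing against the magic number 192 directly.
print(is_good(192), is_bad(192))  # True False
```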
Best Practice
To use the Set Condition Function
The available functions for setting the Quality value of an attribute are as
follows. The functions return a Boolean value: True for success and False for failure of the operation.
Set Condition:
SetBad
SetGood
SetInitializing
SetUncertain
The syntax for the Set Condition functions is the same as the Test Condition
functions except that the attribute to be SET must be an attribute within the
object that the script is attached to.
Example:
SetInitializing(me.PV)
For more information on Attribute Data Quality, see Chapter 5, "Working with
Templates."
DAServer Manager
The DAServer Manager allows local or remote configuration of the DAServer
and its device groups and items, and can monitor and perform diagnostics on
DAServer communication with PLCs and other devices.
Log Viewer
An important troubleshooting tool, the Log Viewer records messages generated during execution on a machine.
Platform Manager
Using Platform Manager, you can access platforms and engines deployed to
any PC node in the galaxy.
After highlighting a platform, you can use the Action menu to start or stop a
platform, or set it OnScan/OffScan. If the platform has security implemented,
you must be logged on as a user configured with the proper SMC permissions
to start SMC, Start/Stop engines and platforms, or write from the Object
Viewer.
Note These utilities are provided "as is." They are not supported by
Wonderware Technical Support.
You can also refer to the ArchestrA.biz website (http://archestra.biz) for the latest information about ArchestrA and third-party utilities.
Restore a Galaxy: The Restore function uses the backup file to overwrite
an existing corrupt Galaxy or reproduce the Galaxy in another Galaxy
Repository. This function restores the entire galaxy configuration. Note
that when the galaxy is restored, the objects will show the deployment
status they had at the point the original galaxy was backed up.
After the backup is generated, it should be transferred immediately to CD-ROM or tape and removed from direct network file browsing, or protected with directory access privileges. Also, the Administrator password should be recorded and kept in a safe place.
With the backup available, even if IT changes the OS groups assigned IDE and SMC privileges, or the Galaxy Administrator password is changed per IT policy and then forgotten, anyone with access to the CD-ROM or tape and the saved Administrator password can retrieve and restore the Galaxy. Afterwards, any incremental changes can be restored from the periodic object export files (aaPKGs).
Best Practice
Back up a galaxy to another machine each time you make changes to it. Refer
to the Galaxy Database Manager User's Guide for directions for backing up
and restoring a galaxy.
Note When restoring a Galaxy, use the IDE first to create a new Galaxy in the
target Galaxy Repository. The names of the new Galaxy you create in the IDE
and the backup Galaxy from the restore process must be the same.
OS Diagnostic Tools
The following information describes tools packaged with the Microsoft
operating system.
Performance Monitor
The Windows Performance Monitor or System Monitor is located in
Administrative Tools in the Windows Control Panel. The Performance tool is
composed of two parts: System Monitor, and Performance Logs and Alerts.
System Monitor allows you to collect and view real-time data about memory,
disk, processor, network, and other activity. Performance Logs and Alerts
allows you to configure logs to record performance data and set system alarms.
For information about using Performance tools, click the Action menu in
Performance, and then click Help.
Event Viewer
The Event Viewer, located in Administrative Tools in the Windows Control
Panel, maintains logs about program, security, and system events. You can use
the Event Viewer to view and manage the event logs, gather information about
hardware and software problems, and monitor Windows operating system
security events.
Appendix A
This appendix includes tasks that may be overlooked or omitted from a scope
document or a bid.
The list items are compiled from integrator comments, Tech Support sources,
and documentation. The items include internal and external links to supporting
information.
The checklist is organized by the following areas:
General
Communication
Security
Administration (Local and Remote)
Redundancy Configuration
Migration
Compatibility
General
Use Time Synchronization
Configure the computers in a Galaxy to synchronize time at regular intervals.
This is particularly important for alarm and data historization. The Historian node is a good candidate for the computer against which all other computers synchronize time.
Note Details about Time Synchronization are available throughout this
document by searching (Ctrl + F keys) for "time sync."
Disable Hyper-Threading
Disable Hyper-Threading on computers that support this hardware function.
Hyper-Threading gives a false impression of lowering processor load while actually increasing the load on the physical (rather than virtual) processor.
Some computer vendors (for instance, Dell) may enable hyper-threading by default.
To disable hyper-threading, enter the computer's BIOS setup utility and disable the hyper-threading option. Refer to the hardware vendor's documentation for the exact steps.
Communication
Configure IP Addressing
All nodes in your Galaxy must be able to communicate with each other by
using both IP address and Node Name in the Network Address option of the
WinPlatforms editor.
If PCs in the Galaxy are using fixed IP addresses, then create a hosts file with
the host name to IP Address mapping.
WinPlatform connection problems may result if computers cannot be accessed
by both Hostname and IP address.
This is true no matter which type of Network Address you choose to use.
For example, assume two nodes in your Galaxy (host name: NodeA, IP
address: 10.2.69.1; host name: NodeB, IP address: 10.2.69.2). NodeA must be
able to ping NodeB with both "NodeB" and "10.2.69.2".
The reverse must also be true for NodeB pinging NodeA. Failure in either case may result in the following: you may not be able to connect to a remote Galaxy Repository node from the IDE, or deployment operations may fail.
Security
Confirm User Name and Password
All nodes in the Galaxy need to have the same user name/password for the
Archestra Admin User (aaAdminUser.exe).
Install the IDE and Bootstrap on any PC that will browse the Galaxy. This
includes WindowMaker and SCADAlarm Event Notification Software
nodes.
Install the Bootstrap and deploy a platform to any PC that will be an AOS (Application Object Server) node or will be doing I/O with a Galaxy (this includes WindowViewer and SCADAlarm Event Notification Software nodes).
Redundancy Configuration
Redundant AppEngines
For redundancy to function properly, WinPlatforms hosting redundancy-enabled AppEngines must be deployed to computers running the same operating system.
Multiple NICs
In general, a multiple-NIC configuration is recommended only for redundancy purposes. Using gigabit network cards in combination with managed switches should be sufficient for most process network throughput needs.
Migration
Verify Version and Patches
Verify that all nodes in the Galaxy have the same version and patch level of
Industrial Application Server.
Upgrade Correctly
When upgrading from IAS 1.x to IAS 2.x, be sure to uninstall IAS 1.x before installing IAS 2.x.
Compatibility
FactorySuite Component Version Compatibility
Ensure that there is no conflict between FactorySuite A2 System Components
and FactorySuite 2000 Components by following Tech Note 313, Installing
FactorySuite A Components Alongside FactorySuite 2000 Components.
IAS and IndustrialSQL Server have specific version compatibility requirements. Check the FactorySuite A2 Compatibility Matrix on the FactorySuite support website (http://www.wonderware.com/support/) for detailed compatibility information.
Appendix B
This appendix contains the entire code base for the example objects described
in Chapter 6, "Implementing QuickScript .NET."
The code is commented and the user can copy/paste from the code content.
Note Hard line breaks are included for readability in this context. Other
formatting issues may arise when copying/pasting from this document into the
Script Editor or Visual Studio.
Contents
SqlConnCacheMgr Object
PostTOaORb Object
RunTime Object
ObjectCache.dll Visual Studio.NET C# Solution
SqlConnCacheMgr Object
The SqlConnCacheMgr Object provides hash table cache management of
.NET System.Data.SqlClient.SqlConnection objects in a Type Safe manner.
The following sections are excerpted from the Help.htm file of the
$SqlConnCacheMgr template Object.
SqlConnCacheMgr Overview
The SqlConnCacheMgr Object is a "manager" Object. It leverages the
SqlConnCache Class of the ObjectCacheExt.DLL. Other Objects "acquire" a
SqlConnection Object that belongs to the SqlConnCacheMgr Object for use in
performing ExecuteNonQuery transactions with an MSSQL database.
ObjectCacheExt.DLL is a .NET Function DLL created by the A5 Application
Consultant group explicitly to demonstrate the capabilities of function DLLs
leveraging ADO.NET database access.
To use the SqlConnCacheMgr Object, follow the configuration steps described in the sections below.
SqlConnCacheMgr Configuration
The SqlConnCacheMgr object is configured by filling in specific UDAs with
connection information for the desired SQL database connection. See
SqlConnCacheMgr Run-Time Object Attributes for details. This template
provides for two indexed .NET SqlConnection objects in the cache.
To expand the function of this cache manager (to handle more .NET
SqlConnection objects), change the array sizes of all of the array indexed
UDAs. When expanding the arrays, be sure to also augment the associated lines of QuickScript code to support the number of array elements.
UDA                              DataType     Category          Array  Default Value
Connection.Acquire               Boolean                        [2]    false, false
Connection.AcquiredBy            String       Object writeable  [2]
Connection.Connect               Boolean                        [2]    false, false
Connection.Database.Name         String                         [2]    Recipes, Recipes
Connection.Disconnect            Boolean                        [2]    false, false
Connection.IntegratedSecurity    Boolean                        [2]    true, true
Connection.NodeName              String                         [2]    MyInSQL, MyInSQL
Connection.NodeName.Validated    Boolean                        [2]    false, false
Connection.Release               Boolean                        [2]    false, false
Connection.Status                String       Object writeable  [2]
Connection.User.Name             String                         [2]
Connection.User.Password         String                         [2]
LogMessages.Enabled              Boolean      User writeable           true
Query.Count                      Integer      Object writeable  [2]    0, 0
Query.Duration                   ElapsedTime  Object writeable  [2]    00:00:00.0000000, 00:00:00.0000000
SqlConn.GetStatistics.Now        Boolean      User writeable           false
SqlConn.GetStatistics.Periodic   Boolean      User writeable           true
SqlConn.Initialized              Boolean      Object writeable  [2]    false, false
SqlConn.Name                     String                         [2]    ConnectionDBa, ConnectionDBb
SqlConn.Result                   String       Object writeable  [2]
Each of the UDAs (with three exceptions) has two array elements. Each
indexed element relates to a single .NET
System.Data.SqlClient.SqlConnection object that is held in a hash table
cache in the Application Domain associated with the IAS Engine running the
instance of the SqlConnCacheMgr Object.
Connection.Acquire[n] indexed UDA is linked from an InputOutput
extension of the corresponding UDA in a $PostToDBaORb or
$PostTOaORb Object. This UDA represents the trigger boolean for the
ManageConnections QuickScript.
The Duration UDA provides the last measured amount of time taken for an
ExecuteNonQuery method from start to completion. See
SqlConn.GetStatistics.Now and SqlConn.GetStatistics.Periodic UDAs
below.
SqlConn.GetStatistics.Now UDA is an instantaneous trigger for collection of
statistics regarding the performance of ExecuteNonQuery transactions
managed by this Object. The UDA must be set to 'true' to initiate the Statistics QuickScript.
SqlConn.GetStatistics.Periodic UDA is a state trigger for collection of
statistics regarding the performance of ExecuteNonQuery transactions
managed by this Object. The UDA defaults to 'true' allowing the Statistics
QuickScript to run periodically. When set to 'false,' the SqlConn.GetStatistics.Now UDA is used to trigger instantaneous updates of statistics information.
SqlConn.Initialized[n] indexed UDA indicates whether the identified .NET
SqlConnection object has been created and placed in the hash table cache.
SqlConn.Name[n] indexed UDA contains the String name given to the .NET
SqlConnection object held in the hash table cache. This name is used to
identify the object in the cache for retrieval by Other Objects.
SqlConn.Result[n] indexed UDA contains the String result information
returned by executing a method of the SqlConnCache Class related to the
indexed .NET SqlConnection object.
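The cache semantics the scripts rely on (add, check, get, and remove objects by string name, plus a count) can be sketched as follows. This is a simplified, hypothetical stand-in for the SqlConnCache class, not its actual implementation:

```python
class ConnCache:
    """Minimal name-keyed object cache with the operations the
    QuickScripts use: Add, ContainsKey, Get, Remove, Count."""

    def __init__(self):
        self._cache = {}

    def add(self, name, conn, overwrite=False):
        if name in self._cache and not overwrite:
            return name + " already in cache"
        self._cache[name] = conn
        return name + " added"

    def contains_key(self, name):
        return name in self._cache

    def get(self, name):
        return self._cache[name]

    def remove(self, name):
        self._cache.pop(name, None)

    def count(self):
        return len(self._cache)

cache = ConnCache()
cache.add("ConnectionDBa", object())
print(cache.contains_key("ConnectionDBa"), cache.count())  # True 1
```

The scripts below follow the same pattern: test ContainsKey before Add, and Get an existing connection rather than creating a duplicate.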
The QuickScripts for the $SqlConnCacheMgr template Object are extracted
from the Industrial Application Server Galaxy IDE editor. For the Object
UDAs see the previous tables or Object Help.htm file.
Template Object: $SqlConnCacheMgr
UDA Extensions $SqlConnCacheMgr
n/a
$SqlConnCacheMgr - Initialize
n/a
n/a
ENDIF;
Me.Connection.NodeName.Validated[2] = true;
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " Me.Connection.NodeName[2]
is invalid");
ENDIF;
Me.Connection.NodeName.Validated[2] = false;
ENDIF;
{Go ahead and count the SqlConnCache objects and report out
to the logger...}
sqlConnectionCount = A5.LakeForest.SqlConnCache.Count();
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " has added " + StringFromIntg(
sqlConnectionCount , 10) + " to SQLConnCache.");
ENDIF;
$SqlConnCacheMgr - Statistics
Aliases $SqlConnCacheMgr - Statistics: n/a
Declarations $SqlConnCacheMgr - Statistics:
Dim sqlConnectionDBa As
System.Data.SqlClient.SqlConnection;
Dim sqlConnectionDBb As
System.Data.SqlClient.SqlConnection;
Dim sqlConnectionCount As Integer;
Dim foundConnectionDBa As Boolean;
Dim foundConnectionDBb As Boolean;
Dim zeroDuration As ElapsedTime;
IF foundConnectionDBa THEN
sqlConnectionDBa = A5.LakeForest.SqlConnCache.Get
(Me.SqlConn.Name[1]);
ENDIF;
ENDIF;
IF NOT foundConnectionDBb THEN
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " OnScan transition checking
SqlConn.Name[2] in SqlConnCache...");
ENDIF;
foundConnectionDBb =
A5.LakeForest.SqlConnCache.ContainsKey
(Me.SqlConn.Name[2]);
IF foundConnectionDBb THEN
sqlConnectionDBb = A5.LakeForest.SqlConnCache.Get
(Me.SqlConn.Name[2]);
ENDIF;
ENDIF;
{If there are more SqlConnections in the SqlConnName array
add Get(...) for them here....}
Me.SqlConn.GetStatistics.Periodic OR
Me.SqlConn.GetStatistics.Now
WhileTrue
00:00:20.0000000
$SqlConnCacheMgr - ManageConnections
Aliases $SqlConnCacheMgr - ManageConnections: n/a
Declarations $SqlConnCacheMgr - ManageConnections:
Dim sqlConnectionDBa As
System.Data.SqlClient.SqlConnection;
Dim sqlConnectionDBb As
System.Data.SqlClient.SqlConnection;
Dim sqlConnectionCount As Integer;
Dim resultConnectionDBa As String;
Dim resultConnectionDBb As String;
Dim foundConnectionDBa As Boolean;
Dim foundConnectionDBb As Boolean;
Dim connectStringa As String;
Dim connectStringb As String;
Dim afterInitialized1 As Boolean;
Dim afterInitialized2 As Boolean;
(Me.Connection.Acquire[1] AND
Me.SqlConn.Initialized[1]) OR(
Me.Connection.Acquire[2] AND
Me.SqlConn.Initialized[2])
WhileTrue
00:00:00.0000000
Me.Connection.Status[1] =
sqlConnectionDBa.State.ToString();
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " attempting to Add "
+ Me.SqlConn.Name[1] + " to
SqlConnCache...");
ENDIF;
resultConnectionDBa =
A5.LakeForest.SqlConnCache.Add
(Me.SqlConn.Name[1],sqlConnectionDBa , true);
ENDIF;
Me.SqlConn.Result[1] = resultConnectionDBa;
ELSE
{Since sqlConnectionDBa exists. Go ahead and grab it
from SqlConnCache...}
sqlConnectionDBa =
A5.LakeForest.SqlConnCache.Get(Me.SqlConn.Name[1]);
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " ConnectionDBa exists;
connecting as sqlConnectionDBa.");
ENDIF;
Me.Connection.Status[1] =
sqlConnectionDBa.State.ToString();
{Check the status of sqlConnectionDBa. If "Closed" fill
in the connection string and open it...}
IF Me.Connection.Status[1] == "Closed" THEN
IF Me.Connection.NodeName.Validated[1] THEN
connectStringa = "server=" +
Me.Connection.NodeName[1] + ";";
IF Me.Connection.IntegratedSecurity[1] THEN
connectStringa = connectStringa + "Integrated
Security=SSPI;";
ELSE
connectStringa = connectStringa + "username="
+ Me.Connection.User.Name[1] + ";";
connectStringa = connectStringa + "password="
+ Me.Connection.User.Password[1] + ";";
ENDIF;
connectStringa = connectStringa + "database="
+ Me.Connection.Database.Name[1];
sqlConnectionDBa.ConnectionString =
connectStringa;
sqlConnectionDBa.Open();
Me.Connection.Status[1] =
sqlConnectionDBa.State.ToString();
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + "sqlConnectionDBa
.ConnectionString is " + connectStringa);
LogMessage(Me.Tagname + " sqlConnectionDBa
state is " + Me.Connection.Status[1]);
ENDIF;
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " there is no
NodeName designating an MSSQL Server
node.");
ENDIF;
ENDIF;
ELSE
{The sqlConnectionDBa isn't "Closed". Just log the
information...}
IF (Me.Connection.Status[1] == "Open")
OR (Me.Connection.Status[1] == "Connecting")
OR (Me.Connection.Status[1] == "Executing")
OR (Me.Connection.Status[1] == "Fetching") THEN
LogMessage(Me.Tagname + " sqlConnectionDBa is
already " + Me.Connection.Status[1]);
ENDIF;
ENDIF;
{This script pass is now complete so clear the Boolean
flag...}
Me.Connection.Acquire[1] = false;
ENDIF;
IF Me.Connection.IntegratedSecurity[2] THEN
connectStringb = connectStringb +
"Integrated Security=SSPI;";
ELSE
connectStringb = connectStringb +
"username=" + Me.Connection.User.Name[2]
+ ";";
connectStringb = connectStringb +
"password=" +
Me.Connection.User.Password[2] + ";";
ENDIF;
connectStringb = connectStringb + "database="
+ Me.Connection.Database.Name[2];
sqlConnectionDBb.ConnectionString =
connectStringb;
sqlConnectionDBb.Open();
Me.Connection.Status[2] =
sqlConnectionDBb.State.ToString();
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname +
"sqlConnectionDBb.ConnectionString is " +
connectStringb);
LogMessage(Me.Tagname + " sqlConnectionDBb
state is " + Me.Connection.Status[2]);
ENDIF;
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " there is no
NodeName designating an MSSQL Server
node.");
ENDIF;
ENDIF;
ELSE
{The sqlConnectionDBb isn't "Closed". Just log the
information...}
IF (Me.Connection.Status[2] == "Open")
OR (Me.Connection.Status[2] == "Connecting")
OR (Me.Connection.Status[2] == "Executing")
OR (Me.Connection.Status[2] == "Fetching") THEN
LogMessage(Me.Tagname + " sqlConnectionDBb is
already " + Me.Connection.Status[2]);
ENDIF;
ENDIF;
{This script pass is now complete so clear the
Boolean flag...}
Me.Connection.Acquire[2] = false;
ENDIF;
$SqlConnCacheMgr - ConnectTo
Aliases $SqlConnCacheMgr - ConnectTo: n/a
n/a
n/a
Me.Connection.Connect[1] OR
Me.Connection.Connect[2]
WhileTrue
00:00:00.0000000
Me.Connection.Status[1] = status1;
ENDIF;
IF Me.Connection.Status[1] == "Closed" THEN
IF Me.Connection.NodeName.Validated[1] THEN
connectStr1 = "server=" + nodeName1 + ";";
IF Me.Connection.IntegratedSecurity[1] THEN
connectStr1 = connectStr1 + "Integrated
Security=SSPI;";
ELSE
connectStr1 = connectStr1 + "username=" +
Me.Connection.User.Name[1] + ";";
connectStr1 = connectStr1 + "password=" +
Me.Connection.User.Password[1] + ";";
ENDIF;
connectStr1 = connectStr1 + "database=" +
Me.Connection.Database.Name[1];
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + "
connection1.ConnectionString is " +
connectStr1);
ENDIF;
connection1.ConnectionString = connectStr1;
connection1.Open();
Me.Connection.Status[1] =
connection1.State.ToString();
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " connection1 state
is " + Me.Connection.Status[1]);
ENDIF;
IF NOT A5.LakeForest.SqlConnCache.ContainsKey
(Me.SqlConn.Name[1])
THEN
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " attempting to Add "
+ Me.SqlConn.Name[1] + " to
SqlConnCache...");
ENDIF;
resultStr = A5.LakeForest.SqlConnCache.Add
(Me.SqlConn.Name[1],connection1, true);
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " added ConnectionDBa
to sqlConnCache.");
LogMessage(Me.Tagname + " w/result: " +
resultStr);
ENDIF;
ENDIF;
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " there is no NodeName
designating an MSSQL Server node.");
ENDIF;
ENDIF;
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " attempting to add " +
Me.SqlConn.Name[1] + ", but it's already in
SqlConnCache!");
ENDIF;
IF (Me.Connection.Status[1] == "Open")
OR (Me.Connection.Status[1] == "Connecting")
OR (Me.Connection.Status[1] == "Executing")
OR (Me.Connection.Status[1] == "Fetching") THEN
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " connection1 is
already " + Me.Connection.Status[1]);
ENDIF;
ENDIF;
ENDIF;
Me.Connection.Connect[1] = False;
ENDIF;
{Do the same for the second SqlConnection if it is
selected...}
IF Me.Connection.Connect[2] THEN
nodeName2 = Me.Connection.NodeName[2];
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " Attempting to open connection
on " + nodeName2);
ENDIF;
IF NOT A5.LakeForest.SqlConnCache.ContainsKey
(Me.SqlConn.Name[2]) THEN
Me.Connection.Status[2] = "Closed";
connection2 = new
System.Data.SqlClient.SqlConnection();
ELSE
connection2 = A5.LakeForest.SqlConnCache.Get
(Me.SqlConn.Name[2]);
status2 = connection2.State.ToString();
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " ConnectionDBb
exists; connecting as connection2.");
LogMessage(Me.Tagname + "
connection2.State.ToString() is " +
status2);
ENDIF;
Me.Connection.Status[2] = status2;
ENDIF;
IF Me.Connection.Status[2] == "Closed" THEN
IF Me.Connection.NodeName.Validated[2] THEN
connectStr2 = "server=" + nodeName2 + ";";
IF Me.Connection.IntegratedSecurity[2] THEN
connectStr2 = connectStr2 + "Integrated
Security=SSPI;";
ELSE
connectStr2 = connectStr2 + "username=" +
Me.Connection.User.Name[2] + ";";
connectStr2 = connectStr2 + "password=" +
Me.Connection.User.Password[2] + ";";
ENDIF;
connectStr2 = connectStr2 + "database=" +
Me.Connection.Database.Name[2];
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + "
connection2.ConnectionString is " +
connectStr2);
ENDIF;
connection2.ConnectionString = connectStr2;
connection2.Open();
Me.Connection.Status[2] =
connection2.State.ToString();
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " connection2
state is " + Me.Connection.Status[2]);
ENDIF;
IF NOT
A5.LakeForest.SqlConnCache.ContainsKey
(Me.SqlConn.Name[2])
THEN
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " attempting to
Add " + Me.SqlConn.Name[2] + "
to SqlConnCache...");
ENDIF;
resultStr = A5.LakeForest.SqlConnCache.Add
(Me.SqlConn.Name[2],connection2, true);
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " added
ConnectionDBb to sqlConnCache.");
LogMessage(Me.Tagname + " w/result: " +
resultStr);
ENDIF;
ENDIF;
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " there is no
NodeName designating an MSSQL Server
node.");
ENDIF;
ENDIF;
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " attempting to add " +
Me.SqlConn.Name[2] + ", but it's already in
SqlConnCache!");
ENDIF;
IF (Me.Connection.Status[2] == "Open")
OR (Me.Connection.Status[2] == "Connecting")
OR (Me.Connection.Status[2] == "Executing")
OR (Me.Connection.Status[2] == "Fetching") THEN
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " connection2 is
already" + Me.Connection.Status[2]);
ENDIF;
ENDIF;
ENDIF;
Me.Connection.Connect[2] = False;
ENDIF;
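The connection-string assembly in the ConnectTo script above follows a simple pattern: server, then the security keys, then the database. A hypothetical sketch of the same logic (key names match those concatenated in the QuickScript; the node and database names are the example defaults from the UDA table):

```python
def build_conn_str(node, database, integrated_security=True,
                   user=None, password=None):
    """Assemble an MSSQL connection string the way the ConnectTo
    script does: server, then security keys, then database."""
    parts = ["server=" + node]
    if integrated_security:
        parts.append("Integrated Security=SSPI")
    else:
        parts.append("username=" + str(user))
        parts.append("password=" + str(password))
    parts.append("database=" + database)
    return ";".join(parts)

print(build_conn_str("MyInSQL", "Recipes"))
# server=MyInSQL;Integrated Security=SSPI;database=Recipes
```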
$SqlConnCacheMgr - DisconnectFrom
Aliases $SqlConnCacheMgr - DisconnectFrom: n/a
n/a
n/a
Me.Connection.Disconnect[1] OR
Me.Connection.Disconnect[2]
WhileTrue
00:00:00.0000000
IF Me.Connection.Disconnect[2] THEN
IF
A5.LakeForest.SqlConnCache.ContainsKey
(Me.SqlConn.Name[2]) THEN
connection2 = A5.LakeForest.SqlConnCache.Get
(Me.SqlConn.Name[2]);
Me.Connection.Status[2] = connection2.State.ToString();
IF Me.Connection.Status[2] <> "Closed" THEN
IF Me.Connection.Status[2] == "Open" THEN
connection2.Close();
Me.Connection.Status[2] =
connection2.State.ToString();
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " connection2 has
been " + Me.Connection.Status[2] + ".");
ENDIF;
IF Me.Connection.Status[2] == "Closed" THEN
A5.LakeForest.SqlConnCache.Remove
(Me.SqlConn.Name[2]);
IF NOT
A5.LakeForest.SqlConnCache.ContainsKey
(Me.SqlConn.Name[2]) THEN
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " " +
Me.SqlConn.Name[2] + " has been removed
from SqlConnCache.");
ENDIF;
Me.Connection.AcquiredBy[2] = "";
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " " +
Me.SqlConn.Name[2] + " not removed;
still in the SqlConnCache.");
ENDIF;
ENDIF;
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " failed to disconnect
connection2 - " + Me.SqlConn.Name[2] + ".");
ENDIF;
ENDIF;
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " " +
Me.SqlConn.Name[2] + " is still " +
Me.Connection.Status[2] + ".");
ENDIF;
ENDIF;
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " " +
Me.SqlConn.Name[2] + " is " +
Me.Connection.Status[2] + ".");
ENDIF;
ENDIF;
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " attempted to disconnect "
+ Me.SqlConn.Name[2] + "; it is not in
SqlConnCache!");
ENDIF;
ENDIF;
Me.Connection.Disconnect[2] = False;
ENDIF;
$SqlConnCacheMgr - ReleaseAcquired
Aliases $SqlConnCacheMgr - ReleaseAcquired: n/a
Declarations $SqlConnCacheMgr - ReleaseAcquired:
Dim sqlConnectionDBa As
System.Data.SqlClient.SqlConnection;
Dim sqlConnectionDBb As
System.Data.SqlClient.SqlConnection;
Dim foundConnectionDBa As Boolean;
Dim foundConnectionDBb As Boolean;
Dim sqlConnectionStatusa As String;
Dim sqlConnectionStatusb As String;
n/a
n/a
Me.Connection.Release[1] OR
Me.Connection.Release[2]
WhileTrue
00:00:00.0000000
PostTOaORb Object
The PostTOaORb Object is derived from the $PostToDBaORb template. It
acquires a .NET SqlConnection object from a cache, prepares an INSERT
query string using simulated randomized data, and executes the
ExecuteNonQuery method against a database table.
All of the inherited parent UDAs and QuickScripts are documented.
PostTOaORb Overview
The PostTOaORb Object is a "worker" Object. It simulates live data using
.NET System.Random function calls and posts the data into a database table. It
leverages the .NET SqlConnCache Class of the ObjectCacheExt Function
DLL. It "acquires" a SqlConnection object that belongs to a
SqlConnCacheMgr Object instance. It implements an ExecuteNonQuery
transaction against an MSSQL database table.
ObjectCacheExt.DLL is a .NET Function DLL created by the A5 Application
Consultant group explicitly for demonstrating the capabilities of function
DLLs leveraging ADO.NET database access.
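The caching pattern the DLL demonstrates - one process-wide, thread-safe table of named objects shared by every script that calls into it - can be sketched as follows. The sketch is in Java for illustration only (the actual DLL is C#), and the class name `ObjectCache` is hypothetical, not part of the product:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a named-object cache like the one ObjectCacheExt
// exposes: static methods over a single synchronized map, so all
// callers in the process share one cache instance.
public final class ObjectCache {
    private static final Map<String, Object> objects =
            Collections.synchronizedMap(new LinkedHashMap<>());

    private ObjectCache() { }  // no instances needed; all members are static

    public static void add(String name, Object o)   { objects.put(name, o); }
    public static Object get(String name)           { return objects.get(name); }
    public static boolean containsKey(String name)  { return objects.containsKey(name); }
    public static void remove(String name)          { objects.remove(name); }
    public static int count()                       { return objects.size(); }
}
```

Individual map operations are thread safe here, just as with `Hashtable.Synchronized`; compound check-then-act sequences still need external coordination, which is why the scripts guard Add/Remove with ContainsKey checks and trigger latches.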
To use the PostTOaORb Object
Note  See the specific help file for this Object template for information
regarding its configuration.
For general information on objects, including relationships, deployment, and
alarm distribution, see the Integrated Development Environment (IDE)
documentation.
For information on configuration options for object information, scripts, user-defined attributes (UDAs), or attribute extensions, click Extensions Help in
the Help file header.
If found in the cache, this Object posts its own Tagname into the
corresponding UDA of the SqlConnCacheMgr Object instance.
This QuickScript repeats and increments a scan step counter. On each
successive scan step it drives the cycle of SqlConnection acquisition,
database connection and open, and the posting of data via the INSERT
query. Depending upon other ChainReaction UDAs, it also dispatches
triggers to companion PostTOaORb Object instances, causing a chain
reaction of database events.
The ChainReactionTrigger QuickScript simply captures the OnTrue
transition of either ChainReaction or ChainReaction.Trigger and sets the
ChainReaction.Latch 'true' to begin the execution of the ChainReactionEvent
QuickScript.
The ChainReactionCleanUp QuickScript performs the work of clearing the
ChainReaction.ChainPrev and ChainReaction.ChainNext UDAs after they
have been used to propagate a chain reaction of events. It also clears
ChainReaction.BreakPrev and ChainReaction.BreakNext UDAs.
PostTOaORb Configuration
The PostTOaORb object is configured by filling in specific UDAs (including
the UDAs inherited from the $PostToDBaORb template) with table name,
column names, and column datatypes, prefixes and enumeration strings for the
desired SQL database table. These values are automatically applied to an
INSERT query. See PostTOaORb Run-Time Object Attributes for details.
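How table name, column names, and column values can be assembled into such an INSERT statement is sketched below (Java, illustrative only; the class name `InsertBuilder` is hypothetical). It is simplified: the real script quotes values according to each column's DB.Column.DataType rather than quoting everything.

```java
import java.util.List;

public final class InsertBuilder {
    // Builds "INSERT INTO table (c1,c2) VALUES ('v1','v2')" from
    // parallel name/value lists. Every value is quoted here, the way
    // the QuickScript's quoteStr wraps each DB.Column.ValueString;
    // a comma is appended after every value except the last.
    public static String build(String table, List<String> names,
                               List<String> values) {
        StringBuilder sb = new StringBuilder("INSERT INTO " + table + " (");
        sb.append(String.join(",", names));
        sb.append(") VALUES (");
        for (int i = 0; i < values.size(); i++) {
            sb.append("'").append(values.get(i)).append("'");
            if (i < values.size() - 1) {
                sb.append(",");
            }
        }
        sb.append(")");
        return sb.toString();
    }
}
```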
PostTOaORb Run-Time Object Attributes
Following are the UDA attributes of the PostTOaORb Object. Note that the
UDAs incorporate 'dot' separators to enhance the grouping. See the next part of
this section for inherited UDAs from the $PostToDBaORb Object.
UDA                              DataType  Category        Default
ChainReaction                    Boolean   User Writeable  false
ChainReaction.BreakNext          Boolean   User Writeable  false
ChainReaction.BreakPrev          Boolean   User Writeable  false
ChainReaction.ChainNext          Boolean   User Writeable  false
ChainReaction.ChainPrev          Boolean   User Writeable  false
ChainReaction.Latch              Boolean   User Writeable  false
ChainReaction.NoAutoDisconnect   Boolean   User Writeable  true
ChainReaction.Trigger            Boolean   User Writeable  false
Product.CreationDate             Time      User Writeable  8/31/2005 12:00:00.000 PM
Product.Name                     String
Product.Name.Prefix              String
Product.Parameter1               Float     User writeable  0.0
Product.Parameter2               Float     User writeable  0.0
Product.Randomize                Boolean   User Writeable  false
Product.Status                   String    User Writeable
Product.Status.Enum              String                    OK, HOLD, QUARANTINE, WASTE, SPECIAL
Product.Values.PrepNow           String    User writeable  false
UDA                          DataType    Category          Default
Connection.Acquired.ByA      String      Object writeable
Connection.Acquired.ByB      String      Object writeable
Connection.Attempts          Integer     User Writeable    10
Connection.Connect.ToA       Boolean     Object writeable  false
Connection.Connect.ToAorB    Boolean     User Writeable    false
Connection.Connect.ToB       Boolean     Object writeable  false
Connection.Disconn.FromA     Boolean     Object writeable  false
Connection.Disconn.FromB     Boolean     Object writeable  false
Connection.Disconnect        Boolean     User Writeable    false
Connection.IntegratedSec.A   Boolean     User Writeable    true
Connection.IntegratedSec.B   Boolean     User writeable    true
Connection.NodeName.A        String      User writeable
Connection.NodeName.B        String      User writeable
Connection.Result            String      Object writeable
Connection.State             String      Object writeable
Connection.TryConnect        String      User writeable
DB.Column.DataType           String
DB.Column.LastIndex          Integer     User writeable
DB.Column.Name               String                        ProductName, Param1, Param2, CreationDate, Status
DB.Column.ValueString        String
DB.Name.A                    String      User writeable
DB.Name.B                    String      User writeable
DB.PostData                  Boolean     User writeable
DB.PostData.Result           String      Object writeable
DB.PostData.ResultEnum       String [4]  Object writeable  OBJECT IS NULL, INVALID OPERATION, EXCEPTION, NO SQLCONN
DB.TableName                 String      User writeable
DB.TableName.Suffix          String      Object writeable
LogMessages.Enabled          Boolean     User writeable    true
SqlConn.Name                 String [2]  User writeable    Recipe,Recipe
SqlConn.Name.Prefix          String      Object writeable  ConnectionDB
Three of the UDAs include ten (10) array elements. Each indexed element of
these arrays relates to a column of a database table, giving the Column
DataType, Column Name, and (at run time) a Column ValueString.
One of the UDAs is an array of four elements representing a String
enumeration holding an indexed list of possible Exception error messages.
Connection.Acquired.ByA - This UDA is extended with an InputOutput
source String to the Connection.AcquiredBy[1] UDA of the associated
SqlConnCacheMgr Object instance.
Connection.Acquired.ByB - This UDA is extended with an InputOutput
source String to the Connection.AcquiredBy[2] UDA of the associated
SqlConnCacheMgr Object instance.
Template Object:$PostTOaORb
UDA Extensions $PostTOaORb:
Connection.AcquiredBy.A
InputOutput extension
Source
SqlConnCacheMgr.Connection.AcquiredBy[1]
not checked
Destination
---
Connection.AcquiredBy.B
InputOutput extension
Source
SqlConnCacheMgr.Connection.AcquiredBy[2]
not checked
Destination
---
Connection.IntegratedSec.A
InputOutput extension
Source
SqlConnCacheMgr.Connection.IntegratedSecurity[1]
not checked
Destination
---
Connection.IntegratedSec.B
InputOutput extension
Source
SqlConnCacheMgr.Connection.IntegratedSecurity[2]
not checked
Destination
---
Connection.NodeName.A
InputOutput extension
Source
SqlConnCacheMgr.Connection.NodeName[1]
not checked
Destination
---
Connection.NodeName.B
InputOutput extension
Source
SqlConnCacheMgr.Connection.NodeName[2]
not checked
Destination
---
DB.Name.A
InputOutput extension
Source
SqlConnCacheMgr.Connection.Database.Name[1]
not checked
Destination
---
DB.Name.B
InputOutput extension
Source
SqlConnCacheMgr.Connection.Database.Name[2]
not checked
Destination
---
$PostTOaORb Scripts
$PostTOaORb - PrepValuesNow
Aliases $PostTOaORb - PrepValuesNow: n/a
Declarations $PostTOaORb - PrepValuesNow:
Dim cDateTime As System.DateTime;
Dim connectLetter As String;
Dim indexOfLetter As String;
Me.Product.Values.PrepNow
OnTrue
00:00:00.0000000
$PostTOaORb - RandomizeNow
Aliases $PostTOaORb - RandomizeNow: n/a
Declarations $PostTOaORb - RandomizeNow:
Dim randomValueBase As System.Random;
Dim randomPerCent As Float;
Dim randomIndex As System.Random;
Dim productIndex As Integer;
Dim enumIndex As Integer;
Me.Product.Randomize
OnTrue
00:00:00.0000000
$PostTOaORb - ChainReactionEvent
Aliases $PostTOaORb - ChainReactionEvent: n/a
Declarations $PostTOaORb - ChainReactionEvent:
Dim scanStep As Integer;
Dim waitCountDown As Integer;
Dim checkConnState As String;
Me.ChainReaction.Latch
WhileTrue
00:00:00.0000000
ELSE
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " ChainReaction
aborted; not ACQUIRED:");
ENDIF;
scanStep = 0;
waitCountDown = 10;
Me.ChainReaction.Latch = false;
Me.ChainReaction = false;
Me.ChainReaction.Trigger = false;
ENDIF;
ELSE
IF scanStep >= 4 THEN
IF NOT Me.ChainReaction.NoAutoDisconnect THEN
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " disconnecting
- NoAutoDisconnect is false.");
ENDIF;
Me.Connection.Disconnect = true;
ENDIF;
IF NOT Me.ChainReaction.BreakPrev THEN
Me.ChainReaction.ChainPrev = true;
ENDIF;
IF NOT Me.ChainReaction.BreakNext THEN
Me.ChainReaction.ChainNext = true;
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " ChainNext set
true, passing to " +
Me.ChainReaction.ChainNext.OutputDest +
".");
ENDIF;
ENDIF;
scanStep = 0;
waitCountDown = 10;
Me.ChainReaction.Latch = false;
Me.ChainReaction = false;
Me.ChainReaction.Trigger = false;
ENDIF;
ENDIF;
ENDIF;
ENDIF;
ENDIF;
$PostTOaORb - ChainReactionCleanUp
Aliases $PostTOaORb - ChainReactionCleanUp: n/a
Declarations $PostTOaORb - ChainReactionCleanUp:
Dim chainPrevLatchCount As Integer;
Dim chainNextLatchCount As Integer;
Dim breakPrevLatchCount As Integer;
Dim breakNextLatchCount As Integer;
Me.ChainReaction.Latch
WhileFalse
00:00:00.0000000
Me.ChainReaction.BreakPrev = false;
breakPrevLatchCount = 0;
ENDIF;
ENDIF;
IF Me.ChainReaction.BreakNext THEN
breakNextLatchCount = breakNextLatchCount + 1;
IF breakNextLatchCount >= 3 THEN
Me.ChainReaction.BreakNext = false;
breakNextLatchCount = 0;
ENDIF;
ENDIF;
n/a
$PostTOaORb - ChainReactionTrigger
Aliases $PostTOaORb - ChainReactionTrigger: n/a
Declarations $PostTOaORb - ChainReactionTrigger:
Me.ChainReaction OR
Me.ChainReaction.Trigger
OnTrue
Me.Connection.TryConnect
WhileTrue
00:00:05.0000000
Me.Connection.Disconnect
WhileTrue
00:00:05.0000000
Me.DB.PostData
OnTrue
Checked
60000 ms
00:00:00.0000000
sqlCommandText =
sqlCommandText + quoteStr
+ Me.DB.Column.ValueString[columnIterator] +
quoteStr;
IF columnIterator < columnCount THEN
sqlCommandText = sqlCommandText + ",";
ENDIF;
NEXT;
sqlCommandText = sqlCommandText + ")";
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " CommandText: " +
sqlCommandText);
ENDIF;
qResultException = false;
{Finally, we are ready to run the query...}
qResult =
A5.LakeForest.SqlConnCache.ExecuteNonQuery
(Me.SqlConn.Name, sqlCommandText, false);
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " ExecuteNonQuery " +
qResult);
ENDIF;
{For the query result look it up in the
DB.PostData.ResultEnum string enumeration }
{to find out if there was an exception...}
FOR qResultIndex = 1 TO 4
IF qResult ==
Me.DB.PostData.ResultEnum[qResultIndex] THEN
qResultException = True;
Me.DB.PostData.Result = qResult;
EXIT FOR;
ENDIF;
NEXT;
{If there was no exception then determine the number
of rows that were inserted into the database by }
{the INSERT query....}
IF NOT qResultException THEN
qResultLen = StringLen(qResult);
FOR qResultIterator = 1 TO qResultLen
qResultChar = StringMid(qResult,
qResultIterator, 1);
IF qResultChar == " " THEN
rowsChars = StringLeft(qResult,
qResultIterator - 1);
rows = StringToIntg(rowsChars);
Me.DB.PostData.Result = rowsChars + " row(s)
inserted.";
{Note: rows Integer variable not required.
It is included as an example of conversion
from String to Integer.}
EXIT FOR;
ENDIF;
NEXT;
ENDIF;
ELSE
Me.DB.PostData.Result = "Attempted PostData but
ConnectionDB" + connectLetter + " was not Open.";
ENDIF;
ELSE
{Apparently the SqlConnection is not in the SqlConnCache...}
IF Me.LogMessages.Enabled THEN
LogMessage(Me.Tagname + " " + Me.SqlConn.Name + " is
not in SqlConnCache.");
ENDIF;
ENDIF;
{Clear the DB.PostData trigger...}
Me.DB.PostData = False;
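The rows-affected parsing in the script above (scan for the first space, then convert the leading characters to an integer) can be sketched as follows. Java is used for illustration, the method name `rowsInserted` is hypothetical, and the "N rows affected"-style result format is an assumption based on the script:

```java
public final class PostResult {
    // Extracts the leading integer from a result string such as
    // "1 rows affected", mirroring the character scan in the
    // DB.PostData QuickScript. Returns -1 when the string has no
    // leading digits (for example, an exception enumeration string).
    public static int rowsInserted(String qResult) {
        int i = 0;
        while (i < qResult.length() && Character.isDigit(qResult.charAt(i))) {
            i++;                       // advance past the leading digits
        }
        if (i == 0) {
            return -1;                 // no leading digits: not a row count
        }
        return Integer.parseInt(qResult.substring(0, i));
    }
}
```

The script first checks the result against DB.PostData.ResultEnum for exception strings and only then parses a count, which is why the non-numeric case must be handled.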
Me.Connection.TryConnect
OnFalse
RunTime Object
The RunTime Object provides stop watch capabilities for capturing "running"
and "held" times as well as percent of time in these states.
RunTime Overview
The RunTime Object is a stop watch. It contains a "clock" that tracks the
elapsed time from a "start event" to an "end event". During the clock
timing period, accumulated elapsed time is also tracked for each of two
states, "running" and "held."
This Object also calculates the elapsed time in each of the two states as a
percentage of the "clock" time. UDAs of the Object function as the buttons
of the stop watch, controlling the "clock start event", "clock end event",
"clock reset", "running state", and "held state". A "calculation period" may
be adjusted at any time to set the frequency of execution of the statistics
calculations.
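The statistics themselves are simple ratios: each state's accumulated elapsed time divided by the total clock elapsed time. A minimal sketch (Java, illustrative only; the class name `RunTimeStats` is hypothetical):

```java
public final class RunTimeStats {
    // Percent of total clock time spent in a state ("running" or
    // "held"). Returns 0.0 while the clock has accumulated no time,
    // matching the object's 0.0 PercentOfClock defaults and avoiding
    // division by zero before the first calculation period elapses.
    public static double percentOfClock(double stateSeconds,
                                        double clockSeconds) {
        if (clockSeconds <= 0.0) {
            return 0.0;
        }
        return 100.0 * stateSeconds / clockSeconds;
    }
}
```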
For general information on objects, including relationships, deployment, and
alarm distribution, see the Integrated Development Environment (IDE)
documentation.
For information on configuration options for object information, scripts, user-defined attributes (UDAs), or attribute extensions, click Extensions Help in the
Help file header.
RunTime Configuration
The RunTime object is configured by filling in specific UDAs with the
following RunTime Run-Time Object Attributes. Note that the UDAs
incorporate 'dot' separators to enhance the grouping:
UDA                             DataType     Category              Default
AutoResetEvents                 Boolean      User writeable        false
CalcPeriod                      ElapsedTime  User writeable        00:00:10.0000000
Clock.ElapsedTime               ElapsedTime  Calculated retentive  00:00:00.0000000
Clock.ElapsedTimePrevious       ElapsedTime  Calculated retentive  00:00:00.0000000
Clock.EndEvent                  Boolean      User writeable        false
Clock.EndTime                   Time         Calculated retentive  1/31/2005 12:00:00.000 PM
Clock.EndTimePrevious           Time         Calculated retentive  1/31/2005 12:00:00.000 PM
Clock.Reset                     Boolean      User writeable        false
Clock.Started                   Boolean      Calculated retentive  false
Clock.StartEvent                Boolean      User writeable        false
Clock.StartTime                 Time         Calculated retentive  1/31/2005 12:00:00.000 PM
Clock.StartTimePrevious         Time         Calculated retentive  1/31/2005 12:00:00.000 PM
Clock.Stopped                   Boolean      Calculated retentive  false
Held.ElapsedTime                ElapsedTime  Calculated retentive  00:00:00.0000000
Held.ElapsedTimePrevious        ElapsedTime  Calculated retentive  00:00:00.0000000
Held.PercentOfClock             Float        Object writeable      0.0
Held.PercentOfClockPrevious     Float        Calculated retentive  0.0
Held.State                      Boolean      User writeable        false
Running.ElapsedTime             ElapsedTime  Calculated retentive  00:00:00.0000000
Running.ElapsedTimePrevious     ElapsedTime  Calculated retentive  00:00:00.0000000
Running.PercentOfClock          Float        Object writeable      0.0
Running.PercentOfClockPrevious  Float        Calculated retentive  0.0
Running.State                   Boolean      User writeable        false
Template Object:$RunTime
$RunTime - Timer
Aliases $RunTime - Timer: n/a
Declarations $RunTime - Timer:
Dim calcDelayL As System.Int64;
Dim calcPeriodL As System.Int64;
Dim lastScan As System.DateTime;
Dim thisScan As System.DateTime;
Dim calcPeriodT As System.TimeSpan;
Dim zeroET As System.TimeSpan;
Dim calcET As System.TimeSpan;
Dim cstrtT As System.DateTime;
Dim cendT As System.DateTime;
Dim cET As System.TimeSpan;
Dim hET As System.TimeSpan;
Dim rET AS System.TimeSpan;
me.Running.ElapsedTime = zeroET;
me.Running.ElapsedTimePrevious = zeroET;
me.Running.PercentOfClockPrevious = 0.0;
me.Clock.Started = false;
me.Clock.Stopped = true;
ELSE
cstrtT = me.Clock.StartTime;
cendT = me.Clock.EndTime;
cET = me.Clock.ElapsedTime;
hET = me.Held.ElapsedTime;
rET = me.Running.ElapsedTime;
ENDIF;
[blank]
Periodic
00:00:00.0000000
me.Running.ElapsedTime = rET;
{Following execution of the code reset the calculation
delay and stamp the last scan time as Now()...}
calcDelayL = 0;
lastScan = Now();
ENDIF;
$RunTime - Start
Aliases $RunTime - Start: n/a
Declarations $RunTime - Start:
Dim zeroET As System.TimeSpan;
Me.Clock.StartEvent
OnTrue
00:00:00.0000000
Me.Clock.EndEvent
OnTrue
00:00:00.0000000
2.  Several additional files and subdirectories will then have been created
    in the identified project directory.
3.  Rename the single Class file (Class1.cs) that was automatically created
    by Visual Studio .NET as ObjectCacheExt.cs.
4.  Copy the complete source code text found at the end of this appendix
    into the body of the ObjectCacheExt.cs file in the Visual Studio editor
    window. Save the solution (select 'File\Save All' or use the
    Ctrl+Shift+S hotkey).
5.  Compile this version of the DLL using 'Build\Build Solution' (or the
    Ctrl+Shift+B hotkeys). Observe whether there are any errors in the
    Output window. Check the Task List for further recommended steps.
6.  When the Debug build is clean with no errors, proceed to the testing
    phase. Use the copy of the ObjectCacheExt.dll file found in the
    solution directory \bin\Debug, as well as the ObjectCacheExt.pdb file,
    for testing.
7.  The DLL (whether it is the Debug version or the Release version) must
    be imported into a Galaxy for testing.
8.  Place a copy of the desired Debug or Release version of the DLL in a
    known ArchestrA directory, for example C:\Program
    Files\ArchestrA\Framework\bin.
9.  If the Debug DLL is being used, also place a copy of the PDB file from
    the 'bin\Debug' Visual Studio .NET solution.
10. Open the desired test Galaxy using the Industrial Application Server
Galaxy IDE application. Select Galaxy\Import\Script Function
Library. If it is the second or subsequent import of a recompiled version
of the DLL select the radio button designating Overwrite existing DLL.
11. To test the Debug version of this DLL using Visual Studio .NET 2003,
first import it into a test Galaxy, create a derived template Object, and
add QuickScript code that calls functions of the DLL.
Provide one or more UDAs with category 'User writeable' that serve to
exercise the Object's QuickScripts.
12. Create an instance of the object and place it under an Area on an Engine.
All tests described in this example are performed on a single node that has
Visual Studio .NET and Industrial Application Server installed.
1.  Be sure that the test Platform is deployed and OnScan on the node that
    is being tested and that the test Engine is undeployed or Shutdown. You
    can 'Shutdown' a deployed, running engine from the SMC using Platform
    Manager.
2.  Open Task Manager and inspect the list of aaEngine processes along with
    their PID (Process ID).
3.  If the PID column is not visible, use 'View\Select Columns' and check
    the box to make it visible.
4.  There should be at least one aaEngine process running at this time. If
    there is only one, it represents the Platform.
5.  Now deploy the test Engine containing the test Object that references
    the DLL. If it is already deployed and 'Shutdown', then 'Start' the test
    Engine using the SMC Platform Manager and make sure it is running
    'OnScan'.
6.  Inspect the Task Manager again and observe that one additional
    'aaEngine' process appears in the list. Take note of this PID
    (Process ID).
7.  Open up the ArchestrA Object Browser with the test Object instance.
8.  Place selected Attributes and UDAs into the watch window.
Tip  Save the watch window configuration to a file so that it can be
reloaded for later testing.
9.  Launch Object Viewer from the IDE or from the Platform Manager in the
    SMC.
10. In the SMC select the Local LogViewer from the Default Group and
inspect the Log for errors, warnings etc. Periodically inspect the log
during the test.
11. In Visual Studio.NET select 'Debug\Processes' from the menu.
Inspect the list of 'Available Processes' looking for the ID of the aaEngine
'Process'.
A. Click on that line in the list, then select the 'Attach' button.
B. In the dialogue check 'Common Language Runtime' and uncheck the
other boxes.
C. Then click the 'OK' button. The aaEngine.exe with its ID will appear
in the 'Debugged Processes' list. Click the 'Close' button.
12. Add one or more Breakpoints in the ObjectCacheExt.cs source code in the
Visual Studio.NET editor.
13. Utilizing Object Viewer, exercise the Object instance by modifying UDA
values that trigger QuickScripts and cause the QuickScript code to make
calls to the ObjectCacheExt Class functions.
14. Return immediately to the Visual Studio.NET application and observe
where the Breakpoints hit during execution of the function call. Use
'Debug\Continue' or the F5 function key to proceed to the next Breakpoint.
15. When the debugger stops at a Breakpoint it is possible to inspect values
of the variables within the .NET CLR that are currently in scope. Place
variables and objects of interest in the Visual Studio .NET Watch Window.
Remember that not all variables of the ObjectCacheExt DLL will be in
scope at the same time, depending upon the code that is executing when
reaching the Breakpoint.
16. When finished with this phase of testing select 'Debug\Stop Debugging'.
In rare cases this may cause the previously Attached aaEngine Process to
stop. If this happens, redeploy the Engine (or use the SMC to Start it and
set it back to OnScan) to continue.
17. If certain Breakpoints are visited repeatedly by Function Calls in Periodic
or other frequently executing QuickScripts then remove those Breakpoints
so that other methods in the Function DLL may be tested and debugged.
18. If it is necessary to make changes to the source code, recompile both the
Debug and Release versions and copy the Debug version to the working
directory. Reimport the Function DLL into the Galaxy and validate the
Objects that reference it.
19. When completely finished with Debug testing replace the copy of the
Debug version of the DLL in the working directory with the compiled
Release version and remove the Debug version's PDB file. Reimport the
DLL into the Galaxy and perform a final validation of the Objects that
reference it.
/// <summary>
/// Namespace A5.LakeForest identifies contained
/// classes developed by the Invensys Wonderware
/// ArchestrA A5 group in Lake Forest, CA.
/// </summary>
namespace A5.LakeForest
{
/// <summary>
/// ObjectCacheExt handles pooling of any kind
/// of .NET object. There is no type safe management
/// in this class.
/// </summary>
public class ObjectCacheExt
{
// A private static member variable is used
// as a singleton field of this class.
// Hashtable.Synchronized keeps the named list
// of objects and ensures thread safe operation.
private static Hashtable objects =
Hashtable.Synchronized(new Hashtable());
public ObjectCacheExt()
{
// Constructor logic would normally go here
// but it is not needed for this class as there
// is no need to explicitly construct an
// instance.
}
/// <summary>
/// ObjectCacheExt's Add method takes two arguments,
/// adding the supplied object argument 'o'
/// to the Hashtable cache under the name given
/// as the 'objectName' argument;
/// it provides no return value.
/// </summary>
/// <param name="objectName">
/// String key identifier for the object 'o'
/// to be added to the object cache.
/// </param>
/// </param>
/// <returns>
/// A reference to the object that is in the
/// object cache.
/// </returns>
public static object Get(string objectName)
{
return objects[objectName];
}
/// <summary>
/// ObjectCacheExt's second Get method takes
/// one argument,
/// returning a .NET object reference for the object
/// that is contained in the Hashtable cache under
/// the index value identified by 'objectIndex';
/// if that 'objectIndex' is not found
/// in the cache the method returns 'Null'.
/// </summary>
/// <param name="objectIndex">
/// An integer identifying an object held in
/// the object cache.
/// </param>
/// <returns>
/// A reference to the object held at
/// 'objectIndex' in the object cache;
/// Null if 'objectIndex' is not found in
/// the object cache.
/// </returns>
public static object Get(int objectIndex)
{
int objCount = 0;
int indexer = 0;
string objectName = "";
objCount = Count();
if ((objectIndex > 0) & (objectIndex <=
objCount))
{
foreach (string objectNameCheck in
objects.Keys)
{
indexer = indexer + 1;
if (indexer == objectIndex)
{
objectName = objectNameCheck;
break;
}
}
}
return objects[objectName];
}
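The index-based Get above walks the key collection, counting until it reaches the requested 1-based position. A Java sketch of the same walk is below (illustrative only; the class name `IndexedCache` is hypothetical). Note one difference: a `LinkedHashMap` preserves insertion order, whereas a .NET `Hashtable` guarantees no particular enumeration order, so in the original the mapping from index to object is not stable across Adds and Removes.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public final class IndexedCache {
    private static final Map<String, Object> objects = new LinkedHashMap<>();

    public static void add(String name, Object o) { objects.put(name, o); }

    // 1-based positional lookup mirroring Get(int objectIndex): range-check
    // the index, then walk the entries counting up to the requested
    // position; return null when the index is out of range.
    public static Object get(int objectIndex) {
        if (objectIndex < 1 || objectIndex > objects.size()) {
            return null;
        }
        int indexer = 0;
        for (Map.Entry<String, Object> e : objects.entrySet()) {
            indexer++;
            if (indexer == objectIndex) {
                return e.getValue();
            }
        }
        return null;   // unreachable given the range check above
    }
}
```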
/// <summary>
/// ObjectCacheExt's Count method does not
/// have any arguments;
/// it returns the integer count of the
/// object contained in the Hashtable cache.
/// </summary>
/// <returns>
/// An integer representing the number of objects
/// no arguments;
/// it provides a string return value describing
/// success or failure;
/// internally this function builds the cache object
/// and statistics DataSet;
/// static designation ensures initialization
/// or reinitialization of the common Cache
///and DataSet.
/// </summary>
public static string Initialize()
{
// Attach to the CurrentDomain of the
// AOS Engine ...
currentDomain = AppDomain.CurrentDomain;
// Get the System.Data DLL loaded in order
// to compare types...
// The assembly name string passed to
// currentDomain.Load must be a single line
// (it cannot contain carriage returns).
sqlDataAssembly = currentDomain.Load("System.Data, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089");
// Set the connection type according to
// the loaded DLL and copy the name
// to a string ...
sqlConnType = sqlDataAssembly.GetType(
"System.Data.SqlClient.SqlConnection", true);
sqlConnTypeStr = sqlConnType.ToString();
// Create the Hashtable...
sqlConns = Hashtable.Synchronized(new
Hashtable());
// Initialize the SQLConnInfo DataSet...
dsSQLConnInfo = new
DataSet("SQLConnectionInfo");
// Initialize and add the SQLConnectionStats
// DataTable ...
dtSQLConnStats = new
DataTable("SQLConnectionStats");
dsSQLConnInfo.Tables.Add(dtSQLConnStats);
// Initialize and add the Columns for
// SQLConnectionStats...
dcSQLConnName = new DataColumn("SQLConnName",
typeof(string));
dtSQLConnStats.Columns.Add(dcSQLConnName);
dcSQLConnTransactStart = new
DataColumn("TransactStart",typeof(DateTime));
dtSQLConnStats.Columns.Add
(dcSQLConnTransactStart);
dcSQLConnTransactFinish = new
DataColumn("TransactFinish",
typeof(DateTime));
dtSQLConnStats.Columns.Add
(dcSQLConnTransactFinish);
dcSQLConnTransactDuration = new
DataColumn("TransactDuration",
typeof(TimeSpan));
dtSQLConnStats.Columns.Add
(dcSQLConnTransactDuration);
dcSQLConnTransactCount = new
DataColumn("TransactCount",typeof(int));
dtSQLConnStats.Columns.Add
(dcSQLConnTransactCount);
// Create the Column key and add to the DataTable
dcArrayKey = new DataColumn[1];
dcArrayKey[0] = dcSQLConnName;
dtSQLConnStats.PrimaryKey = dcArrayKey;
initializedStr = @"INITIALIZED";
return initializedStr;
}
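The statistics structure that Initialize builds (one SQLConnectionStats row per connection, keyed by SQLConnName) can be sketched in Java with a small record type held in a map. This is illustrative only; the class name `ConnStats` is hypothetical, and the field names follow the DataColumn names above:

```java
import java.util.HashMap;
import java.util.Map;

// One stats entry per cached connection, mirroring the
// SQLConnectionStats DataTable: transaction timing fields plus a
// running count, keyed by connection name (the primary key column).
public final class ConnStats {
    public long transactStartMillis;
    public long transactFinishMillis;
    public long transactDurationMillis;   // starts at zero, like TimeSpan(0)
    public int  transactCount;            // starts at zero

    private static final Map<String, ConnStats> stats = new HashMap<>();

    // Equivalent of NewRow + Rows.Add keyed by SQLConnName: both
    // timestamps start at "now", duration and count at zero.
    public static ConnStats addRow(String sqlConnName) {
        ConnStats row = new ConnStats();
        long now = System.currentTimeMillis();
        row.transactStartMillis = now;
        row.transactFinishMillis = now;
        stats.put(sqlConnName, row);
        return row;
    }

    // Like Rows.Find on the primary key; null when the key is absent.
    public static ConnStats find(String sqlConnName) {
        return stats.get(sqlConnName);
    }

    // Like Rows.Remove in the cache's Remove method.
    public static void remove(String sqlConnName) {
        stats.remove(sqlConnName);
    }
}
```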
/// <summary>
/// SqlConnCache's Initialized method takes
/// no arguments;
/// it provides a string return value
/// giving the cache initialization state;
/// static designation ensures getting the
/// state of the singleton cache object.
/// </summary>
public static string Initialized()
{
return initializedStr;
}
/// <summary>
/// SqlConnCache's Add method takes two arguments,
/// adding the supplied object argument 'sqlObj'
/// to the Hashtable cache under the name given
/// as the 'sqlConnName' argument;
/// it provides no return value; internally this
/// function calls the Add method that has a
/// three argument signature automatically passing
/// true as the third argument (see the
/// alternate Add method below);
/// static designation ensures adding to the
/// singleton instance of the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier for the object 'sqlObj'
/// to be added to the SqlConnCache.
/// </param>
/// <param name="sqlObj">
/// Reference to the object 'sqlObj' to be added to
/// the SqlConnCache.
/// </param>
public static void
Add(string sqlConnName, object sqlObj)
{
string addResult2 = "";
addResult2 = Add(sqlConnName, sqlObj, true);
}
/// <summary>
/// SqlConnCache's Add method takes three arguments,
/// adding the supplied object argument 'sqlObj'
/// to the Hashtable cache under the name given
/// as the 'sqlConnName' argument;
/// it provides a return value giving
/// success/failure info;
/// it is a type safe method that checks for
/// the Type of the object submitted for
/// inclusion in the cache, refusing to
/// Add objects that are not of Type
/// System.Data.SqlClient.SqlConnection;
/// try-catch-finally exception handling is
/// implemented and in the case of errors the return
/// value is a string that describes the
/// type of error; static designation ensures
/// adding to the singleton instance of the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier for the object 'sqlObj'
/// to be added to the SqlConnCache.
/// </param>
/// <param name="sqlObj">
/// Reference to the object 'sqlObj' to be added
/// to the SqlConnCache.
/// </param>
/// <param name="withExceptions">
/// Set to true for the Add method to return
/// Exception information.
/// </param>
/// <returns>
/// A result string; if the 'withExceptions' argument
/// is set to true the result string
/// may contain Exception information.
/// </returns>
public static string Add(string sqlConnName, object
sqlObj,bool withExceptions)
{
string sqlObjTypeStr;
string eStr = "";
string addResult = "";
bool exceptionOccured = false;
System.Type sqlObjType;
System.Data.DataRow rowStats;
try
{
sqlObjType = sqlObj.GetType();
sqlObjTypeStr = sqlObjType.ToString();
if (sqlObjType.Equals( sqlConnType))
{
System.DateTime datetimeNow =
System.DateTime.Now;
sqlConns[sqlConnName] = sqlObj;
rowStats = dtSQLConnStats.NewRow();
rowStats["SQLConnName"] = sqlConnName;
rowStats["TransactStart"] = datetimeNow;
rowStats["TransactFinish"] = datetimeNow;
rowStats["TransactDuration"] =
new TimeSpan(0);
rowStats["TransactCount"] = 0;
dtSQLConnStats.Rows.Add(rowStats);
addResult = @"ADDED";
}
else
{
sqlObjTypeStr = sqlObjType.ToString();
addResult = @"INCORRECT TYPE";
}
eStr = @"NO ERROR";
}
catch (TypeLoadException tldException)
{
addResult = @"TypeLoad failed";
eStr = tldException.ToString();
exceptionOccured = true;
}
catch (NullReferenceException nRefException)
{
addResult = @"Null Reference";
eStr = nRefException.ToString();
exceptionOccured = true;
}
catch (InvalidOperationException
invalidOpException)
{
addResult = @"Invalid Operation.";
eStr = invalidOpException.ToString();
exceptionOccured = true;
}
catch (Exception e)
{
addResult = @"Other.";
eStr = e.ToString();
exceptionOccured = true;
}
finally
{
if (exceptionOccured)
{
// Prefix 'Exception - ' to exception text...
addResult = @"Exception - " + addResult
+ @".";
}
}
return addResult;
}
/// <summary>
/// SqlConnCache's ContainsKey method takes
/// one argument, returning a boolean true value
/// if the name supplied as 'sqlConnName'
/// is contained in the Hashtable cache,
/// returning a boolean false value
/// if it is not contained in the cache;
/// static designation ensures reference to
/// the singleton instance of the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier used to check if object
/// 'sqlObj' is contained in the SqlConnCache.
/// </param>
/// <returns>
/// True if the 'sqlConnName' identifier is in
/// the SqlConnCache;
/// False if it is not in the SqlConnCache.
/// </returns>
public static bool ContainsKey(string sqlConnName)
{
return sqlConns.ContainsKey(sqlConnName);
}
/// <summary>
/// SqlConnCache's Remove method takes one argument,
/// deleting a Hashtable entry
/// - both key and value - if the 'sqlConnName'
/// entry is found in the Hashtable cache;
/// it provides no return value;
/// static designation ensures removing from the
/// singleton instance of the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier for the object 'sqlObj'
/// that is to be removed from the SqlConnCache.
/// </param>
public static void Remove(string sqlConnName)
{
string [] sqlConnNameFind = new string [1]
{sqlConnName};
System.Data.DataRow rowToRemove;
rowToRemove =
dtSQLConnStats.Rows.Find((string[])
sqlConnNameFind);
if (rowToRemove != null)
{
dtSQLConnStats.Rows.Remove(rowToRemove);
}
sqlConns.Remove(sqlConnName);
}
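The Rows.Find calls in the Remove method above, and in ExecuteNonQuery later in this listing, work only because the statistics DataTable is assumed to carry a primary key on the connection-name column. The static constructor that builds dtSQLConnStats is not reproduced in this listing, so the following is a minimal sketch of the setup those calls imply; the class and method names here are illustrative, not part of the FactorySuite code:

```csharp
using System;
using System.Data;

// Illustrative setup for the statistics table assumed by the
// Rows.Find calls in this appendix: DataRowCollection.Find
// only works when the DataTable has a PrimaryKey defined.
public class StatsTableSketch
{
    public static DataTable Build()
    {
        DataTable stats = new DataTable("SQLConnStats");
        DataColumn name =
            stats.Columns.Add("SQLConnName", typeof(string));
        stats.Columns.Add("TransactStart", typeof(DateTime));
        stats.Columns.Add("TransactFinish", typeof(DateTime));
        stats.Columns.Add("TransactDuration", typeof(TimeSpan));
        stats.Columns.Add("TransactCount", typeof(int));
        // The primary key is what enables Rows.Find(key).
        stats.PrimaryKey = new DataColumn[] { name };
        return stats;
    }
}
```

Without the PrimaryKey assignment, Rows.Find throws a MissingPrimaryKeyException rather than returning null.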
/// <summary>
/// SqlConnCache's first Get method takes one
/// argument, returning a .NET object reference
/// for the object that is contained in the
/// Hashtable cache under the key value identified
/// by 'sqlConnName'; if that 'sqlConnName' is not
/// found in the cache the method returns 'Null';
/// static designation ensures getting the
/// singleton instance of the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier for the object 'sqlObj'
/// that is to be acquired for use from
/// the SqlConnCache.
/// </param>
/// <returns>
/// A reference to the object that is
/// in the SqlConnCache.
/// </returns>
public static object Get(string sqlConnName)
{
return sqlConns[sqlConnName];
}
/// <summary>
/// SqlConnCache's Count method does not take
/// any arguments;
/// it returns the integer count of the objects
/// contained in the Hashtable cache;
/// static designation ensures counting the
/// singleton instance of the cache.
/// </summary>
/// <returns>
/// An integer representing the number of objects
/// in the object cache.
/// </returns>
public static int Count()
{
return sqlConns.Count;
}
/// <summary>
/// SqlConnCache's GetConnState method takes
/// one argument. It returns a string giving
/// the specific state of the SqlConnection object
/// named by 'sqlConnName';
/// try-catch checks for valid connection state
/// information, returning error information
/// if the object gives Null Reference or invokes
/// an Invalid Operation or other exception;
/// static designation ensures getting
/// the state from the singleton instance
/// of the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier for the object 'sqlObj'
/// to be added to the SqlConnCache.
/// </param>
/// <returns>
/// The string describing 'sqlObj' State;
/// for an Exception the string describing
/// the Exception.
/// </returns>
public static string GetConnState(string
sqlConnName)
{
SqlConnection sqlConnSelected;
if (ContainsKey(sqlConnName))
{
sqlConnSelected =
(SqlConnection)Get(sqlConnName);
try
{
return sqlConnSelected.State.ToString();
}
catch (NullReferenceException eNull)
{
string eNullStr = eNull.ToString();
return "OBJECT IS NULL";
}
catch (InvalidOperationException eInvalidOp)
{
string eInvalidOpStr =
eInvalidOp.ToString();
return "INVALID OPERATION";
}
catch (Exception e)
{
string eStr = e.ToString();
return "EXCEPTION";
}
}
else
{
return @"NOT IN CACHE!";
}
}
/// <summary>
/// SqlConnCache's GetServerName method has
/// one argument. It returns the DataSource name
/// of the server for the SqlConnection named
/// by 'sqlConnName' if it is in the cache;
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures getting the name
/// from the singleton instance of the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier for the object 'sqlObj'
/// to be added to the SqlConnCache.
/// </param>
/// <returns>
/// The string describing sqlObj's DataSource;
/// for an Exception the string describing
/// the Exception.
/// </returns>
public static string GetServerName(string
sqlConnName)
{
SqlConnection sqlConnSelected;
if (ContainsKey(sqlConnName))
{
try
{
sqlConnSelected =
(SqlConnection)Get(sqlConnName);
return sqlConnSelected.DataSource;
}
catch (NullReferenceException eNull)
{
string eNullStr = eNull.ToString();
return "OBJECT IS NULL";
}
catch (InvalidOperationException eInvalidOp)
{
string eInvalidOpStr =
eInvalidOp.ToString();
return "INVALID OPERATION";
}
catch (Exception e)
{
string eStr = e.ToString();
return "EXCEPTION";
}
}
else
{
return @"NOT IN CACHE!";
}
}
/// <summary>
/// SqlConnCache's GetDatabaseName has one argument.
/// It returns the database name for the
/// SqlConnection named by 'sqlConnName'
/// if it is in the cache;
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures getting the name
/// from the singleton instance of the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier for the object 'sqlObj'
/// to be added to the SqlConnCache.
/// </param>
/// <returns>
/// The string describing sqlObj's Database;
/// for an Exception the string describing
/// the Exception.
/// </returns>
public static string GetDatabaseName(string
sqlConnName)
{
SqlConnection sqlConnSelected;
if (ContainsKey(sqlConnName))
{
try
{
sqlConnSelected =
(SqlConnection)Get(sqlConnName);
return sqlConnSelected.Database;
}
catch (NullReferenceException eNull)
{
string eNullStr = eNull.ToString();
return "OBJECT IS NULL";
}
catch (InvalidOperationException eInvalidOp)
{
string eInvalidOpStr =
eInvalidOp.ToString();
return "INVALID OPERATION";
}
catch (Exception e)
{
string eStr = e.ToString();
return "EXCEPTION";
}
}
else
{
return @"NOT IN CACHE!";
}
}
/// <summary>
/// SqlConnCache's GetConnectionString has
/// one argument.
/// It returns the connection string for
/// the SqlConnection named by 'sqlConnName'
/// if it is in the cache;
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures getting the
/// string from the singleton instance of the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier for the object 'sqlObj'
/// to be added to the SqlConnCache.
/// </param>
/// <returns>
/// The string representing sqlObj's
/// ConnectionString; for an Exception
/// the string describing the Exception.
/// </returns>
public static string GetConnectionString(string
sqlConnName)
{
SqlConnection sqlConnSelected;
if (ContainsKey(sqlConnName))
{
try
{
sqlConnSelected =
(SqlConnection)Get(sqlConnName);
return sqlConnSelected.ConnectionString;
}
catch (NullReferenceException eNull)
{
string eNullStr = eNull.ToString();
return "OBJECT IS NULL";
}
catch (InvalidOperationException eInvalidOp)
{
string eInvalidOpStr =
eInvalidOp.ToString();
return "INVALID OPERATION";
}
catch (Exception e)
{
string eStr = e.ToString();
return "EXCEPTION";
}
}
else
{
return @"NOT IN CACHE!";
}
}
/// <summary>
/// SqlConnCache's ExecuteNonQuery method takes
/// three arguments.
/// It takes the supplied 'SqlCommandText' string
/// and applies it as an ExecuteNonQuery call
/// to the database of the named 'sqlConnName'
/// SqlConnection, if it is in the cache.
/// Success returns the number of rows
/// affected by the query.
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures executing
/// the query on the singleton instance of
/// the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier for the object
/// 'sqlConnName' found in the SqlConnCache.
/// </param>
/// <param name="SqlCommandText">
/// String representing the SqlCommand's Text
/// to be executed by 'sqlConnName'.
/// </param>
/// <param name="enableMonitor">
/// Boolean true indicates that
/// System.Threading.Monitor synchronization
/// should be invoked when executing this method.
/// </param>
/// <returns>
/// A string representing the result of the
/// execution of the method:
/// a string representation of the number of
/// datatable rows affected or a string
/// describing any Exception that occurred.
/// </returns>
public static string ExecuteNonQuery(string
sqlConnName,
string SqlCommandText, bool enableMonitor)
{
SqlConnection sqlConnForQuery;
SqlCommand sqlCommandForQuery;
int rows;
int prevTransactCount = 0;
string [] sqlConnNameFind = new string [1]
{sqlConnName};
System.Data.DataRow rowStatsForUpdate;
System.DateTime dateTimeQueryStart =
DateTime.Now;
System.DateTime dateTimeQueryFinish;
System.TimeSpan timespanQuery;
string qResult;
if (sqlConns.ContainsKey(sqlConnName))
{
rowStatsForUpdate =
dtSQLConnStats.Rows.Find((string[])
sqlConnNameFind);
try
{
sqlConnForQuery =
(SqlConnection)sqlConns[sqlConnName];
sqlCommandForQuery = new
SqlCommand(SqlCommandText,
sqlConnForQuery);
if (rowStatsForUpdate != null)
{
rowStatsForUpdate["TransactStart"] =
dateTimeQueryStart;
prevTransactCount =
(int)rowStatsForUpdate["TransactCount"];
}
if (enableMonitor)
{
System.Threading.Monitor.TryEnter(sqlConnForQuery,
10000);
rows =
sqlCommandForQuery.ExecuteNonQuery();
System.Threading.Monitor.Exit(sqlConnForQuery);
}
else
{
rows =
sqlCommandForQuery.ExecuteNonQuery();
}
dateTimeQueryFinish = DateTime.Now;
if (rowStatsForUpdate != null)
{
rowStatsForUpdate["TransactFinish"] =
dateTimeQueryFinish;
timespanQuery = dateTimeQueryFinish
- dateTimeQueryStart;
rowStatsForUpdate["TransactDuration"] =
timespanQuery;
rowStatsForUpdate["TransactCount"] =
prevTransactCount + 1;
}
qResult = rows.ToString() + " executed.";
}
catch (NullReferenceException eNull)
{
string eNullStr = eNull.ToString();
qResult = @"OBJECT IS NULL";
}
catch (InvalidOperationException eInvalidOp)
{
string eInvalidOpStr =
eInvalidOp.ToString();
qResult = @"INVALID OPERATION";
}
catch (Exception e)
{
string eStr = e.ToString();
qResult = @"EXCEPTION";
}
return qResult;
}
else
{
qResult = @"NO SQLCONN";
return qResult;
}
}
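One caveat about the enableMonitor branch above: Monitor.TryEnter returns a boolean that the listing never checks, so if the 10-second timeout expires the query still runs unsynchronized and the subsequent Monitor.Exit throws a SynchronizationLockException. A more defensive form of the same locking pattern is sketched below; the class name and the placeholder work inside the lock are illustrative, not part of the FactorySuite code:

```csharp
using System;
using System.Threading;

// Defensive variant of the Monitor synchronization used by
// ExecuteNonQuery: take the lock only if TryEnter succeeds,
// and release it in a finally block.
public class MonitorSketch
{
    // Stands in for the cached SqlConnection object.
    private static readonly object resource = new object();

    public static string GuardedExecute()
    {
        if (Monitor.TryEnter(resource, 10000))
        {
            try
            {
                // ... the ExecuteNonQuery call would go here ...
                return "executed";
            }
            finally
            {
                Monitor.Exit(resource);
            }
        }
        return "LOCK TIMEOUT";
    }
}
```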
/// <summary>
/// SqlConnCache's GetExecuteNonQueryDuration
/// has one argument.
/// It returns the duration TimeSpan for the
/// SQLConnection named by 'sqlConnName'
/// if it is in the cache.
/// (It can be readily converted to ElapsedTime.)
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures getting the duration
/// from the singleton instance of the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier (sqlConnName) for the
/// object to be inspected
/// for TimeSpan duration info.
/// </param>
/// <returns>
/// The TimeSpan representing sqlConnName's
/// transaction duration.
/// TimeSpan of zero indicates
/// no transaction occurred.
/// </returns>
public static TimeSpan
GetExecuteNonQueryDuration(string sqlConnName)
{
System.TimeSpan timespanQueryDuration = new
TimeSpan(0);
string [] sqlConnNameFind = new string [1]
{sqlConnName};
System.Data.DataRow rowWithDuration;
rowWithDuration =
dtSQLConnStats.Rows.Find((string[])
sqlConnNameFind);
if (rowWithDuration != null)
{
timespanQueryDuration =
(TimeSpan)rowWithDuration
["TransactDuration"];
}
return timespanQueryDuration;
}
/// <summary>
/// SqlConnCache's GetExecuteNonQueryCount
/// has one argument.
/// It returns the execution count for
/// ExecuteNonQuery calls using the SQLConnection
/// named by 'sqlConnName' if it is in the cache;
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures getting the count
/// from the singleton instance of the cache.
/// </summary>
/// <param name="sqlConnName">
/// String key identifier (sqlConnName) for the
/// object to be inspected for count info.
/// </param>
/// <returns>
/// The integer representing sqlConnName's
/// transaction count.
/// Count of zero indicates
/// no transactions have occurred.
/// </returns>
public static int GetExecuteNonQueryCount(string
sqlConnName)
{
int transactCount = 0;
string [] sqlConnNameFind = new string [1]
{sqlConnName};
System.Data.DataRow rowWithCount;
rowWithCount =
dtSQLConnStats.Rows.Find((string[])
sqlConnNameFind);
if (rowWithCount != null)
{
transactCount =
(int)rowWithCount["TransactCount"];
}
return transactCount;
}
}
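Both cache classes in this appendix follow the same underlying pattern: a static Hashtable.Synchronized singleton keyed by name, with a Type check guarding Add. Reduced to its essentials, with illustrative names and string standing in for the connection type, the pattern looks like this:

```csharp
using System;
using System.Collections;

// Minimal illustration of the singleton cache pattern used by
// SqlConnCache and OleDbConnCache: a thread-safe Hashtable
// that only accepts objects of one expected Type.
public class TypedCache
{
    // Hashtable.Synchronized wraps the table for thread safety.
    private static Hashtable items =
        Hashtable.Synchronized(new Hashtable());
    private static Type expectedType = typeof(string);

    public static string Add(string key, object value)
    {
        // Mirror the "INCORRECT TYPE" branch in the listings.
        if (!value.GetType().Equals(expectedType))
        {
            return "INCORRECT TYPE";
        }
        items[key] = value;
        return "ADDED";
    }

    public static bool ContainsKey(string key)
    {
        return items.ContainsKey(key);
    }

    public static int Count()
    {
        return items.Count;
    }
}
```

Because every member is static, all callers share one cache instance, which is what the repeated "static designation ensures the singleton instance" remarks in the comments refer to.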
public class OleDbConnCache
{
// private static member variables are used
// as singleton fields of this class.
// Hashtable.Synchronized keeps the named list
// of OleDbConnection objects and ensures
// thread safe operation.
// oledbDataAssembly is needed to retrieve
// a valid OleDbConnection Type object used
// in comparing the Type of objects
// submitted for inclusion in the cache.
// oledbConnType and its oledb...Str capture
// the submitted object's Type info.
private static Hashtable oledbConns;
private static DataSet dsOLEDBConnInfo;
private static DataTable dtOLEDBConnStats;
private static DataColumn dcOLEDBConnName;
private static DataColumn dcOLEDBConnTransactStart;
/// <summary>
/// OleDbConnCache's Add method takes
/// two arguments, adding the supplied
/// object argument 'oledbObj' to the Hashtable
/// cache under the name given as the
/// 'oledbConnName' argument;
/// it provides no return value;
/// internally this function calls the Add method
/// that has a three argument signature
/// automatically passing true as the
/// third argument (see the alternate
/// Add method below);
/// static designation ensures adding
/// to the singleton instance of the cache.
/// </summary>
/// <param name="oledbConnName">
/// String key identifier for the object
/// 'oledbObj' to be added to the OleDbConnCache.
/// </param>
/// <param name="oledbObj">
/// Reference to the object 'oledbObj'
/// to be added to the OleDbConnCache.
/// </param>
public static void Add(string oledbConnName, object
oledbObj)
{
string addResult2 = "";
addResult2 = Add(oledbConnName, oledbObj, true);
}
/// <summary>
/// OleDbConnCache's Add method takes
/// three arguments, adding the supplied
/// object argument 'oledbObj' to the
/// Hashtable cache under the name given as the
/// 'oledbConnName' argument;
/// it provides a return value giving
/// success/failure info;
/// it is a type safe method that checks for
/// the Type of the object submitted for
/// inclusion in the cache,
/// refusing to Add objects that are not of Type
/// System.Data.OleDB.OleDbConnection;
/// try-catch-finally exception handling is
/// implemented and in the case of errors
/// the return value is a string
/// that describes the type of error;
/// static designation ensures adding to the
/// singleton instance of the cache.
/// </summary>
/// <param name="oledbConnName">
/// String key identifier for the object
/// 'oledbObj' to be added to the OleDbConnCache.
/// </param>
/// <param name="oledbObj">
/// Reference to the object 'oledbObj'
/// to be added to the OleDbConnCache.
/// </param>
/// <param name="withExceptions">
/// Set to true for the Add method to
/// return Exception information.
/// </param>
/// <returns>
/// A result string; if the 'withExceptions'
/// argument is set to true the result string
/// may contain Exception information.
/// </returns>
public static string Add(string oledbConnName,
object oledbObj,
bool withExceptions)
{
string oledbObjTypeStr;
string eStr = "";
string addResult = "";
bool exceptionOccured = false;
System.Type oledbObjType;
System.Data.DataRow rowStats;
try
{
oledbObjType = oledbObj.GetType();
oledbObjTypeStr = oledbObjType.ToString();
if (oledbObjType.Equals( oledbConnType))
{
System.DateTime datetimeNow =
System.DateTime.Now;
oledbConns[oledbConnName] = oledbObj;
rowStats = dtOLEDBConnStats.NewRow();
rowStats["OLEDBConnName"] = oledbConnName;
rowStats["TransactStart"] = datetimeNow;
rowStats["TransactFinish"] = datetimeNow;
rowStats["TransactDuration"] = new
TimeSpan(0);
rowStats["TransactCount"] = 0;
dtOLEDBConnStats.Rows.Add(rowStats);
addResult = @"ADDED";
}
else
{
oledbObjTypeStr = oledbObjType.ToString();
addResult = @"INCORRECT TYPE";
}
eStr = @"NO ERROR";
}
catch (TypeLoadException tldException)
{
addResult = @"TypeLoad failed";
eStr = tldException.ToString();
exceptionOccured = true;
}
catch (NullReferenceException nRefException)
{
addResult = @"Null Reference";
eStr = nRefException.ToString();
exceptionOccured = true;
}
catch (InvalidOperationException
invalidOpException)
{
addResult = @"Invalid Operation.";
eStr = invalidOpException.ToString();
exceptionOccured = true;
}
catch (Exception e)
{
addResult = @"Other.";
eStr = e.ToString();
exceptionOccured = true;
}
finally
{
if (exceptionOccured)
{
// Prefix 'Exception - '
// to exception text...
addResult = @"Exception - " + addResult
+ @".";
}
}
return addResult;
}
/// <summary>
/// OleDbConnCache's ContainsKey method takes
/// one argument, returning a boolean true value
/// if the name supplied as 'oledbConnName'
/// is contained in the Hashtable cache,
/// returning a boolean false value
/// if it is not contained in the cache;
/// static designation ensures checking
/// for the singleton instance of the cache.
/// </summary>
/// <param name="oledbConnName">
/// String key identifier used to check if object
/// 'oledbObj' is contained in the OleDbConnCache.
/// </param>
/// <returns>
/// True if the 'oledbConnName' identifier
/// is in the OleDbConnCache;
/// False if it is not in the OleDbConnCache.
/// </returns>
public static bool ContainsKey(string oledbConnName)
{
return oledbConns.ContainsKey(oledbConnName);
}
/// <summary>
/// OleDbConnCache's Remove method takes
/// one argument, deleting a Hashtable entry
/// - both key and value - if the 'oledbConnName'
/// entry is found in the Hashtable cache;
/// it provides no return value;
/// static designation ensures removing from the
/// singleton instance of the cache.
/// </summary>
/// <param name="oledbConnName">
/// String key identifier for the object 'oledbObj'
/// that is to be removed from the OleDbConnCache.
/// </param>
public static void Remove(string oledbConnName)
{
string [] oledbConnNameFind = new string [1]
{oledbConnName};
System.Data.DataRow rowToRemove;
rowToRemove =
dtOLEDBConnStats.Rows.Find((string[])
oledbConnNameFind);
if (rowToRemove != null)
{
dtOLEDBConnStats.Rows.Remove(rowToRemove);
}
oledbConns.Remove(oledbConnName);
}
/// <summary>
/// OleDbConnCache's Get method takes one
/// integer index argument, returning a .NET
/// object reference for the object at that
/// ordinal position in the Hashtable cache;
/// static designation ensures getting from the
/// singleton instance of the cache.
/// </summary>
/// <param name="oledbConnIndex">
/// Integer index identifier for the object
/// 'oledbObj' that is to be acquired for
/// use from the OleDbConnCache.
/// </param>
/// <returns>
/// A reference to the object that is
/// in the OleDbConnCache.
/// </returns>
public static object Get(int oledbConnIndex)
{
int oledbConnCount = 0;
int oledbConnIndexer = 0;
string oledbConnName = "";
oledbConnCount = Count();
if ((oledbConnIndex > 0) & (oledbConnIndex <=
oledbConnCount))
{
foreach (string oledbConnNameCheck in
oledbConns.Keys)
{
oledbConnIndexer = oledbConnIndexer + 1;
if (oledbConnIndexer == oledbConnIndex)
{
oledbConnName = oledbConnNameCheck;
break;
}
}
}
return oledbConns[oledbConnName];
}
/// <summary>
/// OleDbConnCache's Count method
/// does not have any arguments;
/// it returns the integer count of
/// the object contained in the Hashtable cache;
/// static designation ensures counting the
/// singleton instance of the cache.
/// </summary>
/// <returns>
/// An integer representing the number
/// of objects in the object cache.
/// </returns>
public static int Count()
{
return oledbConns.Count;
}
/// <summary>
/// OleDbConnCache's GetConnState method
/// takes one argument.
/// It returns a string giving the specific
/// state of the OleDbConnection object
/// named by 'oledbConnName';
/// try-catch checks for valid connection state
/// information, returning error information
/// if the object gives Null Reference or invokes
/// an Invalid Operation or other exception;
/// static designation ensures getting the state
/// from the singleton instance of the cache.
/// </summary>
/// <summary>
/// OleDbConnCache's GetServerName method has
/// one argument.
/// It returns the DataSource name of the server for
/// the OleDbConnection named by 'oledbConnName'
/// if it is in the cache;
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures getting the name
/// from the singleton instance of the cache.
/// </summary>
/// <summary>
/// OleDbConnCache's GetDatabaseName has one
/// argument.
/// It returns the database name for the
/// OleDbConnection named by 'oledbConnName'
/// if it is in the cache;
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures getting the name
/// from the singleton instance of the cache.
/// </summary>
/// <param name="oledbConnName">
/// String key identifier for the object 'oledbObj'
/// to be added to the OleDbConnCache.
/// </param>
/// <returns>
/// The string describing oledbObj's Database;
/// for an Exception the string
/// describing the exception.
/// </returns>
public static string GetDatabaseName(string
oledbConnName)
{
OleDbConnection oledbConnSelected;
if (ContainsKey(oledbConnName))
{
try
{
oledbConnSelected =
(OleDbConnection)Get(oledbConnName);
return oledbConnSelected.Database;
}
catch (NullReferenceException eNull)
{
string eNullStr = eNull.ToString();
return "OBJECT IS NULL";
}
catch (InvalidOperationException eInvalidOp)
{
string eInvalidOpStr =
eInvalidOp.ToString();
return "INVALID OPERATION";
}
catch (Exception e)
{
string eStr = e.ToString();
return "EXCEPTION";
}
}
else
{
return @"NOT IN CACHE!";
}
}
/// <summary>
/// OleDbConnCache's GetConnectionString
/// has one argument.
/// It returns the connection string for the
/// OleDbConnection named by 'oledbConnName'
/// if it is in the cache;
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures getting the string
/// from the singleton instance of the cache.
/// </summary>
/// <param name="oledbConnName">
/// String key identifier for the object 'oledbObj'
/// to be added to the OleDbConnCache.
/// </param>
/// <summary>
/// OleDbConnCache's ExecuteNonQuery method
/// takes three arguments.
/// It takes the supplied 'OleDbCommandText'
/// string and applies it
/// as an ExecuteNonQuery call to the database
/// of the named 'oledbConnName' OleDbConnection,
/// if it is in the cache.
/// Success returns the number of rows
/// affected by the query.
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures executing the
/// query on the singleton instance of the cache.
/// </summary>
/// <param name="oledbConnName">
/// String key identifier for the object
/// 'oledbConnName' found in the OleDbConnCache.
/// </param>
/// <param name="OleDbCommandText">
/// String representing the OleDbCommand's Text
/// to be executed by 'oledbConnName'.
/// </param>
/// <param name="enableMonitor">
/// Boolean true indicates that
/// System.Threading.Monitor synchronization
/// should be invoked when executing this method.
/// </param>
/// <returns>
/// A string representing the result of the
/// execution of the method:
/// a string representation of the number of
/// datatable rows affected or a string describing
/// any Exception that occurred.
/// </returns>
public static string ExecuteNonQuery(string
oledbConnName,string OleDbCommandText, bool
enableMonitor)
{
OleDbConnection oledbConnForQuery;
OleDbCommand oledbCommandForQuery;
int rows;
int prevTransactCount = 0;
string [] oledbConnNameFind = new string [1]
{oledbConnName};
System.Data.DataRow rowStatsForUpdate;
System.DateTime dateTimeQueryStart =
DateTime.Now;
System.DateTime dateTimeQueryFinish;
System.TimeSpan timespanQuery;
string qResult;
if (oledbConns.ContainsKey(oledbConnName))
{
rowStatsForUpdate =
dtOLEDBConnStats.Rows.Find((string[])
oledbConnNameFind);
try
{
oledbConnForQuery =
(OleDbConnection)oledbConns
[oledbConnName];
oledbCommandForQuery = new
OleDbCommand(OleDbCommandText,
oledbConnForQuery);
if (rowStatsForUpdate != null)
{
rowStatsForUpdate["TransactStart"] =
dateTimeQueryStart;
prevTransactCount =
(int)rowStatsForUpdate
["TransactCount"];
}
if (enableMonitor)
{
System.Threading.Monitor.TryEnter
(oledbConnForQuery, 10000);
rows = oledbCommandForQuery.ExecuteNonQuery();
System.Threading.Monitor.Exit(oledbConnForQuery);
}
else
{
rows = oledbCommandForQuery.ExecuteNonQuery();
}
dateTimeQueryFinish = DateTime.Now;
if (rowStatsForUpdate != null)
{
rowStatsForUpdate["TransactFinish"] =
dateTimeQueryFinish;
timespanQuery = dateTimeQueryFinish
- dateTimeQueryStart;
rowStatsForUpdate["TransactDuration"] =
timespanQuery;
rowStatsForUpdate["TransactCount"] =
prevTransactCount + 1;
}
qResult = rows.ToString() + " executed.";
}
catch (NullReferenceException eNull)
{
string eNullStr = eNull.ToString();
qResult = @"OBJECT IS NULL";
}
catch (InvalidOperationException eInvalidOp)
{
string eInvalidOpStr =
eInvalidOp.ToString();
qResult = @"INVALID OPERATION";
}
catch (Exception e)
{
string eStr = e.ToString();
qResult = @"EXCEPTION";
}
return qResult;
}
else
{
qResult = @"NO OLEDBCONN";
return qResult;
}
}
/// <summary>
/// OleDbConnCache's GetExecuteNonQueryDuration
/// has one argument.
/// It returns the duration TimeSpan for
/// the OleDbConnection named by 'oledbConnName'
/// if it is in the cache.
/// (It can be readily converted to ElapsedTime.)
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures getting the duration
/// from the singleton instance of the cache.
/// </summary>
/// <summary>
/// OleDbConnCache's GetExecuteNonQueryCount
/// has one argument.
/// It returns the execution count for
/// ExecuteNonQuery calls using the OleDbConnection
/// named by 'oledbConnName' if it is in the cache;
/// try-catch exception handling checks for
/// problems and returns error information;
/// static designation ensures getting the query
/// count from the singleton instance of the cache.
/// </summary>
/// <param name="oledbConnName">
/// String key identifier (oledbConnName) for the
/// object to be inspected for count info.
/// </param>
/// <returns>
/// The integer representing oledbConnName's
/// transaction count.
/// Count of zero indicates no transactions
/// have occurred.
/// </returns>
}
Index
A
ActiveFactory, FactorySuite A2 System
integration 92
Alarm DB Manager, FactorySuite A2 System
integration 97
Alarm Logger
as alarm consumer 217
installation 217
alarms 97
and SCADAlarm 97
client/server network topology 216
consumers 213
distributed local network topology 215
general considerations 214
InTouch Alarm Logger 217
providers 213
queries 214
redundant configuration 78
topology categories 215
Windows XP 41
WWAlarmDBLogger in WAN 263
$AnalogDevice template 112
Anti-Virus software
FactorySuite A2 Systems 53
files exclusion list 283
AppEngine
checkpoint attributes 232
relocation 84
scan interval execution order 126
store forward attributes 77
tuning redundant engine attributes 87
undeploy scenarios 84
ApplicationObject
example interactions 155
areas, defining area model 22
AutomationObject Server node
client/server 40
configuration and run-time component
requirements 32
DIObjects 42
function 32
Terminal Services on same node 48
workstation 37
workstation node 37
B
backup and restore, secured administrator
password 277
backup vs. export 278
best practice
alarm and historical databases 35
alarm buffering for loss prevention 217
alarm system in client/server network
topology 217
alarm system in distributed local network
topology 215
backing up the galaxy 121
backup and restore 278
C
checkpoint
after failover 76
AppEngine execution phase 126
attributes 232
redundancy synchronization 63
scripting considerations 75
.Starting_FromCheckpoint attribute 75
UDAs 117, 124
client/server topology
AutomationObject Server node 40
dedicated I/O Server nodes 42
I/O Server node 42
components, installation order 53
connectivity tools 106
D
DAServer Manager 275
Data Access Server (DAServer) 54
database, configuring growth options 220
DCOM
ports 207
ports listing 202, 203
security 200
dedicated standby configuration
performance 242
redundancy 67
E
Engineering Station node
configuration and run-time component
requirements 34
definition 34
function 34
standalone workstation topology 39
errors, .NET error handling 170
events
deployment issues 213
See also alarms
F
FactorySuite A2 System integration
ActiveFactory 92
Alarm DB Manager 97
DA Servers 107
DT Analyst 98
FactorySuite Gateway (FS Gateway) 99
FS Gateway 106
I/O Servers 106
InBatch 102
InControl 108
IndustrialSQL Server Historian 92
InTouch HMI 93
InTrack 99
Microsoft SQL Server 108
other system components 101
Production Event Module (PEM) Objects 107
QI Analyst 98
SCADAlarm 97
SPCPro 96
SQL Access Manager 95
SuiteVoyager Portal 36, 97
WindowMaker 93
WindowViewer 93
failover
causes 80
sizing and performance data 242
using Startup and OnScan scripts 76
field devices, sizing and performance issues 230
$FieldReference template 113
G
galaxy
backup and restore 277
definition 30
re-using templates in different galaxies 118
Galaxy Database Manager 275
Galaxy Repository
configuration and run-time component
requirements 32
configuring database growth options 220
function 31
installation options 31
installing in a distributed network topology 31
GR See Galaxy Repository
H
Historian node
configuration and run-time component
requirements 35
definition 35
fine-tune primitive for SCADA system 261
standalone workstation topology 39
historical data, redundant processing behavior 77
I
I/O Server node
best practice 34
client/server topology 42
configuration and run-time component
requirements 33
dedicated node 43
function 33
standalone workstation topology 38
I/O Servers
connectivity 54
running on AutomationObject Server nodes 42
IAS
as alarm provider 217
Engineering Station node for application
maintenance 39
fixed IP address 51
I/O Server node data source 33
integration with DT Analyst 99
integration with InBatch 102
integration with IndustrialSQL Server
Historian 92
integration with InTouch HMI 93
integration with InTrack 99
integration with QI Analyst 98
L
load balancing, Terminal Services 47
load sharing
CPU levels 68
redundant configuration 68
restore after failover script example 245
sizing and performance 244
Log Viewer 276
M
maintenance tools, Galaxy Database Manager 277
Microsoft SQL Server
configuring database growth options 220
FactorySuite A2 System integration 108
N
.NET See QuickScript .NET
network utilization 94
and galaxy reference to remote node 94
subscriptions 95
networks
large 237
topology categories 37
O
object templates, derivation 24
Object Viewer
assess execution time for multiple engines 233
assess failover performance 243
P
performance
bulk operations 232
checkpointing 232
multiple engines 233
permissions 26
Platform Manager 276
Production Event Module (PEM) Objects 107
project planning
define functional requirements 20
define naming conventions 21
define object shape templates 23
define the deployment model 27
define the security model 26
defining area model 22
identify field devices 18
.NET project planning 147
protocols
Dynamic Data Exchange (DDE) 33, 59
FS Gateway translator 106
Message Exchange (MX) 57, 67
Open Process Control (OPC) 33, 57, 58
Remote Desktop Protocol (RDP) 57
SuiteLink 33, 57, 59
summary list 57
Q
QI Analyst, FactorySuite A2 System integration 98
QuickScript .NET
ApplicationObject interactions 155
define project scope and requirements 147
definition 144
error handling 170
handling data quality 138
introduction 133
ObjectCache.dll 146
script example "ConnectTo" 168
script example "DisconnectFrom" 169
script example "Initialize" 158
script example "ManageConnections" 163
script example "PostDataNow" 177
script example "RandomizeNow" 182
script example "TryConnectNow" 172
script example "TryConnectOnFalse" 176
scripting database access 150
scripting practices 145
shape template objects 148
SqlConnCache function calls 152
syntax differences 146
template object examples 155
R
RAM
create objects 223
data for initial installation 222
sizing guidelines 220
templates 223
Recipe Manager, integration with Industrial
Application Server 95
redundancy
alarms 78
AppEngine states 63
checkpoint item synchronization 63
configuration combinations 67
CPU levels for load sharing 68
dedicated standby configuration 67
dedicated Visualization nodes 79
definition 62
DIObject configuration requirements 66
DIObject run-time behavior 67
failover causes 80
historical data processing 77
load shared configuration 68
NIC configuration 63
object deployment considerations 74
redundant DIObject 65
redundant engine configuration 61
Remote Partner Address (RPA) 73
run-time considerations 73
script behaviors 75
scripting considerations 75
server name recommendations 65
store forward attributes 77
system checklist 85
system requirements 62
timeout parameters in large systems 86
tuning redundant engine attributes 87
Redundant Message Channel (RMC)
configuring the connection 64
establishing RMC communication on engine start 73
remote nodes, communication 231
roles 26, 204
S
SCADA
distributed IDE 259
distributed InTouch HMIs 265
distributed topologies 259
load balancing 265
security 206, 256
system test benchmarks 268
tuning Historian primitive 261
tuning timeout and heartbeat attributes 264
Universal Time Synchronization 252
wide-area networks overview 250
WWAlarmDBLogger 263
SCADAlarm, FactorySuite A2 System integration 97
script behaviors
active engine after failover 76
optimizing dynamic referencing and Data Change scripts 131
re-initiate asynchronous scripts after failover 76
scripting
asynchronous 76
common Startup and OnScan use for failover 76
data quality controlled execution 141
data quality propagation 139
dimensioning statements 146
T
tablet and panel PCs 96
template modeling
complex reactor 115
discrete valve example 114
templates
$AnalogDevice 112
containment 24
derivation 24
$Discrete 112
disk space and RAM 223
$FieldReference 113
.NET examples 155
re-use in different galaxies 118
shape 23
$Switch 113
$UserDefined 113, 150
Terminal Server 48
Terminal Services
AutomationObject Server node 48
dedicated node 46
server-based load balancing 47
topology categories 45
using the IDE 49
Windows Server 2003 46
Windows 2000 Server 45
time synchronization
galaxy 53
SCADA systems 252
topologies
deploying objects in a large or very large system 241
Galaxy Repository installation 231
sizing and performance data 230
topology categories
about 37
alarms 215
client/server 40
components list 30
distributed local network 37
general considerations 41
introduction 29
standalone workstation 37
Terminal Services 45
widely-distributed networks 50
tuning Platforms and Engines
Historian in SCADA environments 261
redundancy attributes 87
redundancy in large systems 86
timeout and heartbeat attributes in SCADA system 264
U
UDA
best practice 116
checkpointing 117
function 116
Input/Output 142
locking 116
using the OR operator 162
Unit Application, definition 227
user interface, security 204
$UserDefined template 113, 150
V
Visualization node
configuration and run-time component requirements 33
function 33
redundant configuration 79
workstation 37
W
widely-distributed networks See SCADA
WindowMaker, FactorySuite A2 System integration 93
Windows Server 2003
distributed networks 38
Terminal Services 46
Windows XP
connections 38
connections in client/server topology 41
InBatch 103
Object Viewer 41
System Restore 226
Windows 2000 Server, Terminal Services 45
WindowViewer, FactorySuite A2 System integration 93
$WinPlatform, communicating with remote nodes 231
workstation node, definition and context 37
WWAlarmDBLogger, deploying in WAN environment 263