OPC UA Server Aggregation – The Foundation for an Internet of Portals

Prof. Dr.-Ing. Daniel Großmann, Prof. Dr.-Ing. Markus Bregulla, M.Sc. Suprateek Banerjee
TH Ingolstadt
Esplanade 10
D-85049 Ingolstadt
Daniel.grossmann@thi.de

Dr. Dirk Schulz, Dipl.-Inform. (FH) Roland Braun
ABB Corporate Research Center Germany
Wallstadter Str. 59
68526 Ladenburg
Dirk.schulz@de.abb.com

Abstract

Devices in industrial automation systems are becoming more and more intelligent. Consequently, functions such as server services are migrating into the device level. To resolve the resulting connection mesh, this paper presents the concept of aggregating the servers connected to devices in an industrial automation scenario. The first section discusses the basic requirements for aggregation and proposes an architecture for server aggregation as a solution. The following sections describe the building blocks of the architecture. The paper then presents a prototype based on this architecture as a proof-of-concept implementation. The last section discusses the results of the prototyping phase, including possible improvements.

1. Introduction

In today's industrial automation systems, vertical integration plays an important role since information needs to be exchanged throughout the various layers of the automation pyramid. Device integration technologies, for example, make the information of field devices available to higher levels of an automation system.

1.1. State of the art

Current state-of-the-art integration technologies such as Field Device Integration (FDI) define client-server architectures where a server centrally represents the data and functions of the devices of the automation system [1]. FDI uses OPC Unified Architecture (UA) as information modeling and middleware technology [2]. The FDI server represents the topology of the field networks and devices. It is connected to the devices via (industrial) communication protocols (e.g. Profibus, Profinet, HART and Foundation Fieldbus). Once the server receives a request from a client, it gathers the data necessary to serve the request from the device. After applying the appropriate computations, the server passes the response back to the requestor. Other integration technologies follow the same approach (e.g. Field Device Tool (FDT), Analyzer Device Integration (ADI)). All these approaches define a central integration platform (Figure 1).

Figure 1. Centralized integration platform.

1.2. Integration platforms

Integration platforms represent one component or a set of components of an automation system in the form of an information model. An information model typically consists of objects that have relations to other objects (structure information). Objects are instances of a specific type. This type information is also available within the information model. Objects carry data that can be read or written (instance data) and functions that can be called (behavior). In conclusion, an information model provides
• type information,
• structure information (objects and relations),
• instance data (data of objects) and
• behavior of objects.
A minimal sketch of such an information model is given below.
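To make the four constituents listed above concrete, the following minimal sketch builds such an information model with the open-source python-opcua package. The package choice, endpoint, namespace URI and all node names are illustrative assumptions; they are not part of the FDI standard or of the prototype described later in this paper.

```python
from opcua import Server, ua

def zero_adjust(parent, offset):
    # Behavior: a function of the object that clients can call (illustrative only).
    print("zero adjustment with offset", offset.Value)
    return [ua.Variant(True, ua.VariantType.Boolean)]

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/sensor/")         # hypothetical endpoint
idx = server.register_namespace("http://example.org/sensor")  # hypothetical namespace

# Type information: a custom object type below BaseObjectType.
base_type = server.get_node(ua.NodeId(ua.ObjectIds.BaseObjectType))
sensor_type = base_type.add_object_type(idx, "TemperatureSensorType")

# Structure information: an object of that type below the Objects folder.
sensor = server.get_objects_node().add_object(idx, "Sensor1", objecttype=sensor_type.nodeid)

# Instance data: a variable that clients can read and write.
temperature = sensor.add_variable(idx, "Temperature", 21.5)
temperature.set_writable()

# Behavior: a method with one input and one output argument.
sensor.add_method(idx, "ZeroAdjust", zero_adjust,
                  [ua.VariantType.Double], [ua.VariantType.Boolean])

server.start()  # call server.stop() on shutdown
```

An FDI server exposes the same four kinds of information, only with the standardized device model instead of the toy sensor used here.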
1.3. Trend towards distribution

Devices and subsystems are becoming more and more powerful regarding CPU power, memory and communication bandwidth. This trend towards increased intelligence in devices enables devices to fulfil tasks on their own that are currently provided externally by the centralized integration platform. As a result, devices and subsystems will in future act as integration platforms in their own right, directly providing functionality and data. Already today, devices with embedded OPC UA server functionality exist, which also form the basis of the 'Internet of Things' [3]. One of the benefits of such a development is that such devices and subsystems can be accessed directly at a stage where the overall automation system has not yet been engineered or commissioned. The current trends of Cyber-Physical Systems and Industry 4.0 further drive the collaborative work of highly distributed system components. The resulting situation (Figure 2) would be a system with (fully) meshed connections between providers of functions and data (servers) and consumers (clients).

Figure 2. Resulting connection mesh.

1.4. The resulting connection mesh

While the trend towards devices and subsystems with their own integration platforms has its advantages, there are also drawbacks: during the lifecycle of the automation system, different tools need to access the devices and subsystems as clients (e.g. to configure them). With a centralized integration platform, this access is provided, managed and supervised centrally. In a scenario where multiple decentralized integration platforms exist, the connections of the clients need to be set up manually multiple times. To make matters worse, clients typically work on a set of devices, which means that such clients would need to "know" multiple decentralized integration platforms. In addition, each of these client-server connections needs to be managed and supervised from a security perspective. Strictly enforcing security rules for each connection would cause an enormous engineering effort.

As a result, an architecture is necessary in which an aggregation component provides the "illusion" of a centralized integration platform. This reduces the above-mentioned complexity of connections (Figure 3). At the same time, the aggregation component can act as a security supervisor that manages the connections from a security perspective.

Figure 3. Aggregation server.

2. System architecture

This section describes the overall system architecture from an aggregation standpoint (Figure 4) along with the general aggregation requirements. The components involved are described below.

Figure 4. Overall aggregation architecture.

2.1. Aggregated server

Aggregated servers represent the entities of the automation system. These are the underlying servers, which may either represent a single component (e.g. a field device) or a subsystem that consists of a set of components, parts of the automation system or the entire automation system.
2.2. Aggregation server

The aggregation server is the core of the aggregation architecture. It connects to underlying servers via OPC UA services and aggregates their type, instance and structure information. The aggregation server is described in detail in section 3.

2.3. Aggregation configurator

The aggregation configurator is a tool that configures the aggregation server. It creates information about the aggregated servers that the aggregation server should aggregate. The aggregation configurator can be used during engineering or during commissioning/run time. If used during engineering, the aggregation configurator creates configuration information (section 2.4) that is stored in a persistent fashion (e.g. file, database etc.). If used during commissioning/run time, that is, when the aggregation server and the aggregated servers are available and running, the aggregation configurator directly accesses the aggregation server via a defined OPC UA information model to configure the servers that are to be aggregated.

2.4. Configuration

The configuration contains information about the servers that need to be aggregated (e.g. URLs etc.). The aggregation server reads this information to know which servers to aggregate.

2.5. Type mapping rules

The type mapping rules provide the information that is necessary to identify semantically identical types in different underlying servers. Since such types may have different NodeIds and different browse names (which are used to uniquely identify a particular node in an address space) [4][5], the aggregation server needs additional information for identification (e.g. properties that need to match). In the case of FDI, a device type is identified via the properties Manufacturer, Model and Device Revision [6]. In the prototype developed for this paper, each type node was therefore browsed for the above-mentioned properties, and the values of these properties were compared to those of the type nodes already added to the type manager (section 3.4). If the values of all these properties match, the two types are considered to be identical.
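The matching step described above can be reduced to comparing a small tuple of property values. The following sketch illustrates this rule on plain Python dictionaries rather than real OPC UA type nodes; the function names and data layout are assumptions, not the prototype's code.

```python
# FDI identifies a device type by these property values (section 2.5 / [6]).
FDI_IDENTIFYING_PROPERTIES = ("Manufacturer", "Model", "DeviceRevision")

def identity_of(type_node, property_names=FDI_IDENTIFYING_PROPERTIES):
    """Return the tuple of identifying property values of a browsed type node
    (modelled here as a dict of property name -> value)."""
    return tuple(type_node.get(name) for name in property_names)

def find_matching_type(candidate, known_types):
    """Return the already aggregated type node that matches the candidate, or None."""
    key = identity_of(candidate)
    for existing in known_types:
        if identity_of(existing) == key:
            return existing
    return None

# Example: the type offered by a second server maps onto an already known type.
type_a = {"Manufacturer": "ACME", "Model": "FlowMeter 500", "DeviceRevision": "1.2"}
type_b = {"Manufacturer": "ACME", "Model": "FlowMeter 500", "DeviceRevision": "1.2"}
assert find_matching_type(type_b, [type_a]) is type_a
```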
2.6. Instance mapping rules

The instance mapping rules provide information about the handling of objects/instances. The instance mapping rules also contain information (e.g. a list) about fixed objects, so that the aggregation server is able to identify objects that are available multiple times in different underlying servers and that need to be merged into a single object in the aggregation server. Such fixed objects typically have a definite NodeId, defined as per the specification of the integration technology (e.g. FDI) and also the OPC UA standard itself. The instance mapping rules can either be configured locally in the aggregation server or be read dynamically from the aggregated server. This paper proposes the necessary extensions to the aggregated server's information model (section 4.2).
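The effect of the fixed-object list can be sketched as follows: the first occurrence of a fixed object creates one proxy node, and every further occurrence only extends the mapping. The NodeId values, names and helper functions below are assumptions for illustration.

```python
# NodeIds of objects that exist only once per system and must be merged,
# e.g. the FDI DeviceTopology node (identifier values are placeholders).
FIXED_OBJECT_NODEIDS = {"ns=1;i=5001", "ns=1;i=5002"}

merged_proxies = {}  # fixed NodeId -> proxy already created in the aggregation server

def aggregate_instance(original_id, server_key, create_proxy, add_mapping):
    """Decide whether a browsed instance gets its own proxy or is merged."""
    if original_id in FIXED_OBJECT_NODEIDS:
        proxy = merged_proxies.get(original_id)
        if proxy is None:
            proxy = create_proxy(original_id)        # first occurrence: create the proxy once
            merged_proxies[original_id] = proxy
        add_mapping(proxy, server_key, original_id)  # later occurrences extend the mapping only
        return proxy
    proxy = create_proxy(original_id)                # ordinary instance: one proxy per node
    add_mapping(proxy, server_key, original_id)
    return proxy
```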
3. Aggregation server architecture

3.1. Aggregation node manager

The aggregation node manager is the central singleton that manages the nodes in the address space of the aggregation server. A node manager can only cater to requests targeted at nodes belonging to its own namespaces. Therefore, the aggregation node manager updates its own namespaces to contain the new namespaces present in the underlying server (after connecting to it) in order to handle the incoming client requests (read, write, subscribe etc.). The aggregation node manager manages the flow of information to and from the components of the aggregation server described below.
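The namespace handling described above can be pictured as a table that maps the namespace URIs of the underlying servers onto indices in the aggregation server's own namespace array. The following sketch uses plain Python structures; class and method names are assumptions.

```python
class AggregationNamespaceTable:
    """Adopts namespace URIs of aggregated servers into the aggregation server."""

    def __init__(self):
        # Index 0 is always the OPC UA base namespace.
        self.namespace_array = ["http://opcfoundation.org/UA/"]

    def register(self, uri):
        """Return the local index of a namespace URI, adding it if it is new."""
        if uri not in self.namespace_array:
            self.namespace_array.append(uri)
        return self.namespace_array.index(uri)

    def local_index_for(self, remote_index, remote_namespace_array):
        """Translate a namespace index of an underlying server into the
        corresponding index of the aggregation server."""
        return self.register(remote_namespace_array[remote_index])

# Example: after connecting to an underlying server, its namespaces are adopted
# so that the node manager can answer requests for the aggregated nodes.
table = AggregationNamespaceTable()
remote = ["http://opcfoundation.org/UA/", "http://vendor.example/device"]
print(table.local_index_for(1, remote))  # local index of the adopted vendor namespace
```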
3.2. OPC UA client

The OPC UA client provides the functionality to connect to aggregated servers and to access nodes in the address space of the aggregated server. The aggregation node manager creates one OPC UA client per underlying server and then browses each underlying server with the help of the respective client.

The node manager forwards all requests (read, write, subscribe etc.) to and from the underlying node in the aggregated server via the respective OPC UA client connected to that particular server. The NodeIds that are part of the request need to be resolved to the respective NodeIds in the aggregated server(s).
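The resolution step can be sketched with two dictionaries: one that maps each proxy NodeId to the underlying server and its original NodeId, and one that holds the connected client session per underlying server. The class below is an illustration only; read_value and write_value stand for whatever request methods the employed OPC UA client SDK offers.

```python
class RequestForwarder:
    """Forwards requests for proxy nodes to the respective aggregated server."""

    def __init__(self):
        self.proxy_to_original = {}  # proxy NodeId -> (server key, original NodeId)
        self.clients = {}            # server key -> connected OPC UA client session

    def add_mapping(self, proxy_id, server_key, original_id):
        self.proxy_to_original[proxy_id] = (server_key, original_id)

    def read(self, proxy_id):
        server_key, original_id = self.proxy_to_original[proxy_id]
        client = self.clients[server_key]        # one client per underlying server
        return client.read_value(original_id)    # assumed SDK call

    def write(self, proxy_id, value):
        server_key, original_id = self.proxy_to_original[proxy_id]
        self.clients[server_key].write_value(original_id, value)  # assumed SDK call
```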

3.3. Node factory

Depending upon the matching criteria (type/instance matching rules), the node manager determines whether or not each browsed node in the underlying server is to be aggregated. The node factory creates a proxy node for each aggregated node from the underlying server, and the aggregation node manager maintains the mapping information that maps each proxy node to its original aggregated node in the underlying server. When a node with a fixed node id is being aggregated for the first time, the node factory gives the proxy node the same unique identifier as the one mentioned in the fixed node id list (section 2.6). The references between the proxy nodes are also maintained just as they are maintained in the underlying servers, so that the structure information of the underlying address space is preserved. In the case of a duplicate node (overlapping structures or identical type nodes), only the mapping information is updated and no new proxy node is created.
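A compressed sketch of the factory behavior: fresh proxy identifiers are generated unless the node appears in the fixed node id list, and duplicates only extend the mapping. Names and identifier formats are assumptions, not the prototype's code.

```python
import itertools

class ProxyNodeFactory:
    """Creates proxy nodes and keeps the proxy-to-original mapping consistent."""

    def __init__(self, fixed_node_ids):
        self.fixed_node_ids = fixed_node_ids  # original NodeId -> configured proxy NodeId
        self.mapping = {}                     # proxy NodeId -> {(server key, original NodeId)}
        self._fresh = itertools.count(1000)   # source of new proxy identifiers

    def create_or_map(self, server_key, original_id, duplicate_of=None):
        if duplicate_of is not None:          # overlapping structure or identical type node
            self.mapping[duplicate_of].add((server_key, original_id))
            return duplicate_of               # no new proxy node is created
        proxy_id = self.fixed_node_ids.get(original_id, f"ns=1;i={next(self._fresh)}")
        self.mapping.setdefault(proxy_id, set()).add((server_key, original_id))
        return proxy_id
```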

3.4. Type manager

The node manager contains the type manager, which caters to the consolidation of type nodes from the underlying aggregated servers in accordance with the type matching rules.

3.5. OPC UA server

The OPC UA server provides the interface of the aggregation server to external OPC UA clients. Besides providing access to aggregated servers via the aggregated proxy nodes, the OPC UA server provides the aggregation-specific information (section 4.1). This contains functions to manage aggregation as well as information about servers that are already aggregated and servers that are available for aggregation dynamically via the discovery manager.

3.6. Discovery manager

The discovery manager scans for available OPC UA servers that can be aggregated. The available OPC UA servers are represented in the aggregation server's specific information model (section 4.1). OPC UA clients can advise the aggregation server to aggregate such OPC UA servers during runtime. This aggregation is also done with the help of the aggregation node manager.

3.7. Security manager

The security manager manages additional access control for specific underlying servers or for nodes present in specific underlying servers. In a scenario where the aggregation server is being used to aggregate the information models of different integration platforms, it is necessary to limit access to certain client applications. These security restrictions and the access control information are handled by the security manager, and only those requests that satisfy the security restrictions are allowed to pass through to the underlying servers.

4. Information model extensions

This paper defines two extensions to information models. Section 4.1 extends the information model of the aggregation server; it mainly provides the means to manage aggregation. Section 4.2 defines an optional extension to the information model of servers that are aggregated. These extensions mainly aim at providing type and instance mapping rules directly from within the aggregated server for fully automated aggregation. If the optional extensions are not implemented, type and instance mapping may need to be configured using the aggregation configurator (section 2.3).

4.1. Aggregation server

The information model definitions are as follows: The OpcUaServerType represents an OPC UA server. It therefore references a variable ServerUrl that holds the URL of this OPC UA server (Figure 5). The AggregatedServerType inherits from the OpcUaServerType and references a ServerType object. The ServerType is defined by the OPC UA specification and contains detailed information about the server, including its diagnostic status. The AvailableServerSetType contains references to OpcUaServers that are available for aggregation. This list may either be populated via engineered configuration information or via OPC UA discovery. The AggregatedServerSetType contains references to AggregatedServers. These AggregatedServers are represented including the aggregated Server object (ServerType). This allows clients to also monitor the status of aggregated servers within the aggregated information model. The AggregatedServerSetType provides a method AggregateServer. This method takes the NodeId of an OpcUaServer object in the AvailableServerSet. The aggregation server then aggregates this server so that it is listed in the AggregatedServerSet and disappears from the AvailableServerSet. The ResidesIn reference type inherits from the NonHierarchicalReferenceType. It describes the original location of an object in an aggregated server.

Figure 5. Aggregation Server Information Model.
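From a client's perspective, runtime aggregation then amounts to a single method call on the AggregatedServerSet object. The sketch below uses the python-opcua client; the endpoint, namespace index and browse names are assumptions that depend on how a concrete aggregation server instantiates the proposed types.

```python
from opcua import Client

client = Client("opc.tcp://aggregation-server:4840/")  # hypothetical endpoint
client.connect()
try:
    objects = client.get_objects_node()
    # Assumed instance names of the proposed set objects in namespace 2.
    available_set = objects.get_child(["2:AvailableServerSet"])
    aggregated_set = objects.get_child(["2:AggregatedServerSet"])

    # Pick one of the servers offered for aggregation ...
    candidate = available_set.get_children()[0]

    # ... and pass its NodeId to the AggregateServer method. Afterwards the
    # server is listed in the AggregatedServerSet and removed from the
    # AvailableServerSet.
    aggregated_set.call_method("2:AggregateServer", candidate.nodeid)
finally:
    client.disconnect()
```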

4.2. Aggregated server

The information model definitions are as follows: The TypeMappingRuleSetType references TypeMappingRules (Figure 6). The TypeMappingRuleType represents the rules to map types. These rules may come from a standard (e.g. rules to map DeviceTypes in FDI). The InstanceMappingRuleSetType references InstanceMappingRules. The InstanceMappingRuleType represents the rules to map instances. These rules may come from a standard (e.g. rules to map the DeviceTopology node in FDI). The TypeManagementServiceType provides standardized services to manage the types of an OPC UA server. It therefore provides two methods: ExportTypeDefinition and ImportTypeDefinition. The ExportTypeDefinition method serializes all the information necessary to establish a type in another server. The method takes the NodeId of a type node and then creates the type information for export; in the case of FDI this is the FDI Package. The return value of this service is a NodeId that can then be used to read the exported type information. The ImportTypeDefinition method takes serialized type information and establishes the corresponding type in the type system of the OPC UA server. In the case of FDI, this means that an FDI Package would be provided via this method, thus creating the corresponding DeviceType. The proposed creation of new type definitions pertains only to subsystems that themselves represent a range of types of lower-level devices; it is not of importance at the device level per se. The AggregationSupportType references the TypeMappingRuleSet, the InstanceMappingRuleSet and the TypeManagementServices.

Figure 6. Aggregated Server Information Model.
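Taken together, the two methods allow a client to move a type definition from one aggregated server to another. The following sketch shows this flow with the python-opcua client; all endpoints, browse paths and NodeIds are placeholders, since the extension proposed here is not part of any released server.

```python
from opcua import Client

source = Client("opc.tcp://subsystem-a:4840/")  # hypothetical endpoints
target = Client("opc.tcp://subsystem-b:4840/")
source.connect()
target.connect()
try:
    # Assumed location of the proposed objects in the address space.
    src_services = source.get_objects_node().get_child(
        ["2:AggregationSupport", "2:TypeManagementServices"])
    dst_services = target.get_objects_node().get_child(
        ["2:AggregationSupport", "2:TypeManagementServices"])

    device_type = source.get_node("ns=2;i=2001")  # placeholder NodeId of a type node

    # ExportTypeDefinition returns the NodeId under which the serialized type
    # information (e.g. an FDI Package) can be read.
    export_id = src_services.call_method("2:ExportTypeDefinition", device_type.nodeid)
    serialized = source.get_node(export_id).get_value()

    # ImportTypeDefinition establishes the corresponding type on the other server.
    dst_services.call_method("2:ImportTypeDefinition", serialized)
finally:
    source.disconnect()
    target.disconnect()
```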
5. Prototype

A prototype was developed as a proof-of-concept implementation of the proposed architecture. The prototype is still work in progress and in a preliminary stage; however, it successfully showcases the basic idea outlined in this paper. The aggregation server prototype was used to aggregate three FDI servers along with sample OPC UA servers provided in the OPC UA software development kit (Figure 7). After the aggregation, the resulting address space structure of the aggregation server was observed with the help of the UA Expert OPC UA client. The UA Expert client was used to test the functionalities of the aggregation server. The original underlying aggregated servers were also observed alongside the aggregation server to make sure that the address spaces were merged appropriately. Read, write, subscribe and unsubscribe requests were tested against the aggregation server. Finally, an FDI client application was connected to the aggregation server to observe whether the FDI client can work with the aggregated information model of the three underlying FDI servers without errors. Since the aggregation server provided the perfect illusion of a central integration platform, the FDI client was able to perform its tasks (e.g. read/write data, subscribe, call methods). The prototype successfully demonstrated the described features such as resolving semantically identical types in different information models or mapping of instances.

Figure 7. Aggregation Prototype.

5.1. Implementation

1) There are two ways of aggregating servers into the aggregation server. The first method is to include the server URIs in a locally saved ServerUris.xml file (a hypothetical example of such a file is sketched below). When the aggregation server is started, it always reads this file and aggregates the servers whose URIs are listed in it. This corresponds to the configuration explained in section 2.4 above. The second method is to aggregate a server via OPC UA discovery. The aggregation server contains a node Available Servers in its address space, which contains nodes corresponding to the OPC UA servers discovered on the same system (by default). This Available Servers node also contains a Discover Now method which, when called, prompts for an IP address or machine name as argument. Upon providing a valid IP address or name and calling the method, the Available Servers node is refreshed with new nodes corresponding to the OPC UA servers running on the target machine with the provided IP address. All of the discovered server nodes within the Available Servers node contain a method node Aggregate Server which, when called, prompts a check box with the name Remember Me. If this method is called, the corresponding information model of the underlying server is aggregated into that of the aggregation server. If the check box is checked, the server URI is added to the locally saved ServerUris.xml file if it is not already present there. The aggregation server need not be stopped in order to aggregate servers using the discovery method; that is, aggregation is possible dynamically.
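The paper does not show the layout of ServerUris.xml, so the format in the following sketch is purely an assumption; it only illustrates the start-up behavior of the first method: read the locally saved file and aggregate every listed server.

```python
import xml.etree.ElementTree as ET

# Assumed file content; the real prototype may use a different schema.
EXAMPLE_SERVER_URIS_XML = """\
<ServerUris>
  <Uri>opc.tcp://fdi-server-1:4840/</Uri>
  <Uri>opc.tcp://fdi-server-2:4840/</Uri>
  <Uri>opc.tcp://sample-server:4841/</Uri>
</ServerUris>
"""

def load_server_uris(xml_text=EXAMPLE_SERVER_URIS_XML):
    """Return the list of server URIs to aggregate at start-up."""
    root = ET.fromstring(xml_text)
    return [element.text.strip() for element in root.findall("Uri")]

for uri in load_server_uris():
    print("aggregating", uri)  # here the prototype would connect and aggregate
```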
2) Read: When the aggregation server receives a read request for one of its nodes, the aggregation node manager consults its mapping dictionaries and forwards the read request to the actual node in the underlying server (resolved with the help of the mapping dictionaries) via the corresponding OPC UA client session object for that aggregated server. The response thus obtained from the aggregated server is forwarded to the external OPC UA client that made the initial read request.

3) Write: The procedure is similar to the read request; in this case the original node in the aggregated server is written to.

4) Subscribe/unsubscribe: This also follows a methodology similar to the above, and the original node in the underlying server is subscribed/unsubscribed to via the corresponding OPC UA client present within the aggregation server. When the value of a monitored item in the aggregated server changes, the OPC UA client connected to that aggregated server is notified. Upon receiving the notification, the aggregation node manager updates the value of the corresponding proxy node with the help of the bi-directional mapping dictionaries (as sketched below). This value change in the proxy node of the aggregation server in turn triggers a data change notification in the external OPC UA client subscribed to this particular proxy node.
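The notification chain of item 4 can be sketched with the subscription API of the python-opcua package (an assumption; the prototype is built on a different OPC UA SDK): an internal client subscribes to the original node, and its data change handler writes the new value to the proxy node, which in turn notifies the external subscribers of the aggregation server.

```python
class ProxyUpdateHandler:
    """Pushes data changes from an aggregated server into the proxy nodes."""

    def __init__(self, original_to_proxy):
        self.original_to_proxy = original_to_proxy  # original NodeId -> proxy node

    def datachange_notification(self, node, val, data):
        proxy_node = self.original_to_proxy[node.nodeid]
        proxy_node.set_value(val)  # the aggregation server then notifies its own subscribers

def monitor_original_node(internal_client, original_node, proxy_node):
    """Subscribe to the original node in the aggregated server.

    internal_client is assumed to be a connected opcua.Client held by the
    aggregation server for exactly this aggregated server."""
    handler = ProxyUpdateHandler({original_node.nodeid: proxy_node})
    subscription = internal_client.create_subscription(500, handler)  # 500 ms interval
    subscription.subscribe_data_change(original_node)
    return subscription
```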
5) In the case of the FDI servers, the type comparison was done with respect to the value attributes Manufacturer, DeviceRevision and Model of each of the underlying type nodes (section 2.5). If this comparison returns true, both nodes are essentially the same type and need not be integrated again; only an entry is made in the mapping dictionaries.

6) In order to test the mapping of instances, an instance mapping rule node was added to the aggregated server. A NodeIdValues node contains the information (identifiers and namespaces) about the nodes with fixed NodeIds in a specific XML file format. The aggregation node manager then refers to this information (by saving it offline) each time it has to assign a new node id to a new proxy node.

5.2. Results

The prototype of the aggregation server can aggregate all types of OPC UA servers; it is not restricted to any particular integration platform. It was successfully tested with three different FDI servers as well as OPC UA sample servers. All servers were successfully integrated into a single information model. Runtime dynamic aggregation was also possible with the help of OPC UA discovery. Read requests returned the attribute values from the underlying server. Write requests could write values to the node in the aggregated server. Subscribe and unsubscribe requests were also handled appropriately, and live values of the aggregated server node could be observed by subscribing to the proxy node in the aggregating server. When the FDI client was connected to the aggregation server, the three different information models corresponding to the three underlying FDI servers could be rendered and visualized in the client without any changes necessary. The additional sample servers that were integrated along with the FDI servers did not affect the functioning of the FDI client. All of these tests were also made using the UA Expert OPC UA client, providing similar results.

6. Outlook

Collaboration of distributed intelligent units is one of the foundations of Industry 4.0 [7]. Meshed communication, often referred to as the Internet of Things, is the result. While communication is vital for Industry 4.0, the authors are confident that meshed communication may not be necessary in all cases, quite apart from the high complexity and the security questions that come with it. Therefore, the authors propose an Internet of Platforms in which aggregation platforms talk to each other. Each aggregation platform encapsulates a set of devices, subsystems or entire systems, making the resulting connections easier to oversee and to handle from a security point of view. With features such as discovery and dynamic aggregation, the ecosystem of connected aggregation platforms can change over time, adopting new system structures. The presented concept of aggregation of OPC UA servers is a first step in this direction.

References

[1] IEC 62769-1 CDV: Field Device Integration (FDI), Technical Specification – Part 1: Overview. IEC, 2013.
[2] IEC 62541-1: OPC UA Specification – Part 1: Concepts. IEC, 2010.
[3] Imtiaz, J.; Jasperneite, J.: Scalability of OPC-UA down to the chip level enables "Internet of Things", 11th IEEE International Conference on Industrial Informatics (INDIN), pp. 500-505, July 2013, Bochum.
[4] IEC 62541-3: OPC UA Specification – Part 3: Address Space Model. IEC, 2010.
[5] IEC 62541-5: OPC UA Specification – Part 5: Information Model. IEC, 2011.
[6] IEC 62769-5 CDV: Field Device Integration (FDI), Technical Specification – Part 5: FDI Information Model. IEC, 2013.
[7] Promotorengruppe Kommunikation der Forschungsunion Wirtschaft – Wissenschaft (Hrsg.): Umsetzungsempfehlungen für das Zukunftsprojekt Industrie 4.0. April 2013.
