
High Availability

Administration Guide

Included with PI Server 2018 SP3


OSIsoft, LLC
1600 Alvarado Street
San Leandro, CA 94577 USA
Tel: (01) 510-297-5800
Fax: (01) 510-357-8136
Web: http://www.osisoft.com

High Availability Administration Guide


© 2009-2019 by OSIsoft, LLC. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or
by any means, mechanical, photocopying, recording, or otherwise, without the prior written permission
of OSIsoft, LLC.
OSIsoft, the OSIsoft logo and logotype, Managed PI, OSIsoft Advanced Services, OSIsoft Cloud Services,
OSIsoft Connected Services, PI ACE, PI Advanced Computing Engine, PI AF SDK, PI API,
PI Asset Framework, PI Audit Viewer, PI Builder, PI Cloud Connect, PI Connectors, PI Data Archive,
PI DataLink, PI DataLink Server, PI Developers Club, PI Integrator for Business Analytics, PI Interfaces,
PI JDBC Driver, PI Manual Logger, PI Notifications, PI ODBC Driver, PI OLEDB Enterprise,
PI OLEDB Provider, PI OPC DA Server, PI OPC HDA Server, PI ProcessBook, PI SDK, PI Server, PI Square,
PI System, PI System Access, PI Vision, PI Visualization Suite, PI Web API, PI WebParts, PI Web Services,
RLINK, and RtReports are all trademarks of OSIsoft, LLC. All other trademarks or trade names used herein
are the property of their respective owners.
U.S. GOVERNMENT RIGHTS
Use, duplication or disclosure by the U.S. Government is subject to restrictions set forth in the OSIsoft, LLC
license agreement and as provided in DFARS 227.7202, DFARS 252.227-7013, FAR 12.212, FAR
52.227-19, or their successors, as applicable. OSIsoft, LLC.
Version: 3.4.430
Published: 23 August 2019
Contents

Introduction to high availability
    High availability benefits
    High availability limitations
    High availability business considerations
        Data loss versus data availability
    PI System components with high availability
    Hardware load balancers
    Monitoring tools for high availability

Deployment scenarios for PI Systems with HA
    PI Server and SQL Server configuration options
        Small PI Server deployment
        Larger, higher performance PI Server deployment
        Distributed, highly available PI Server deployment
    PI AF architecture
        PI AF deployment options
        PI AF high-availability solutions

PI Data Archive high availability administration
    PI Data Archive collective management
        PI Data Archive collective security
        PI Data Archive collective performance
        PI Data Archive collectives and backups
        Operating system updates
        Replication and archive management
        Secondary PI Data Archive server management
        Force synchronization with piartool
        Create a primary PI Data Archive server or promote a secondary server
        Control PI Data Archive stand-alone mode
        Verify communication between collective members
    Manage a PI Data Archive server collective with command-line tools
        Create a collective manually
        Reinitialize a secondary server manually in PI Data Archive
        Force synchronization with piartool
        Set synchronization and communication frequencies manually
        Remove a secondary server from a collective
    PI Data Archive collective reference topics
        PI Data Archive collective configuration tables
        Replicated tables
        Non-replicated tables
        Message logs

PI AF high availability administration

PI System data collection interfaces and high availability
    Interface failover
        Output points and interface failover
        Interface failover configuration approaches
        Configure interface failover using shared-file synchronization
    N-way buffering for PI interfaces
        Buffering services
        PI Buffer Subsystem configuration
        Buffering from an interface on the PI Data Archive computer
        Batch interfaces and buffering

PI System clients and high availability
    Client failover
        Configure failover
    Client failback
        Configure failback
    Client connection balancing
        Configure connection balancing
    Configure connection for AF SDK clients
        Specify connection preferences for AF SDK clients
        Specify connection priority for AF SDK clients
    Configure connection for PI SDK clients
        View the PI Data Archive server providing data to a client
        Switch to the primary server in a PI Data Archive collective
        Switch to a secondary PI Data Archive collective member
        Specify the connection preference for PI SDK clients
        Specify the connection priority for PI SDK clients
        Clear the known servers table
    N-way buffering for PI clients
        Configure n-way buffering for AF SDK clients
        Configure n-way buffering for PI SDK clients
        Buffering configuration when you add a PI Data Archive server to a collective
        Change the buffered server or collective
        Verify that buffered data is being sent to PI Data Archive

Special cases for high availability
    PI to PI interface and high availability
        PI to PI interface configuration considerations
        Data transfer between PI Data Archive collective members
        Data aggregation between PI Data Archive collectives
    PI AF collective installation and upgrade
        PI AF collective setup and configuration
        Troubleshoot PI AF collectives

Hardware load balancers and PI System products
    Configure your hardware load balancer to monitor your system
        Check TCP response for HTTP status code 200
        Check HTTP content to verify PI Vision application server and SQL Server availability
        Monitor the AF Health Check counter
    Maintain server affinity to the PI Vision server or AF server
    Recommendations for using AF with a hardware load balancer
        Load balancing with mirrored SQL Servers and with PI Notifications

Technical support and other resources


Introduction to high availability
A successful PI System requires data that is always available, with minimal (if any) planned
outages, and a system that can be scaled quickly and easily as business requirements change.
PI Systems today are used for, and integrated with, an increasing number of business-critical
functions. As the PI System's role in an enterprise grows, and as PI data is consumed by a larger
population of users ranging from operators to executives, the availability of data coming from
the PI System becomes increasingly vital. From a systems perspective, this means uninterrupted
access to PI data and continuous availability of the PI System components that collect data,
perform calculations, send critical notifications, and display data to users in the form of
displays, reports, and web applications. For these reasons, OSIsoft recommends implementing a
PI System architecture that provides high availability for all critical system components, so that
you can mitigate the risk of data loss.
A highly available PI System provides continuous access to data during planned and unplanned
outages. Planned outages include scheduled maintenance of software or hardware. Unplanned
outages are unexpected system failures such as power interruptions, network outages,
hardware failures, and operating system or other software errors. In the event of a disaster,
extensive system failure is possible. PI System high availability features provide redundancy
for your system to prevent planned or unplanned interruptions.
Redundant system components allow you to bring down one component and upgrade,
maintain, or repair it while a secondary component continues to perform critical system
functions. This flexibility enables IT departments to perform rolling upgrades at more
convenient hours without impacting production. In most cases, upgrades do not need to be tied
to the production schedule or performed during yearly plant shutdowns. If routine
maintenance such as a "Microsoft Patch Tuesday" shuts down a PI Data Archive server, there is
no need for late night heroics to restore a backup because a secondary server can always take
over.
You can use PI high availability features in conjunction with PI backups and virtualization. High
availability capabilities complement but are not a substitute for a solid backup and recovery
plan. You must perform regular data backups following documented backup procedures for
your organization.
This section presents the benefits and limitations of high availability, introduces
complementary HA technologies, and discusses the high availability capabilities of all PI
System components.

Topics in this section


• High availability benefits
• High availability limitations
• High availability business considerations
• PI System components with high availability
• Hardware load balancers
• Monitoring tools for high availability


High availability benefits


A highly available PI System offers several benefits. PI Data Archive high
availability solutions increase availability and eliminate or minimize data loss, as well as
planned and unplanned downtime.

• Reliability
With high availability, data has multiple paths from the source to the end user. If one
component fails, data can traverse an alternate path. Therefore, you can eliminate single
points of failure, protect against potential data loss, ensure access to current data, and
decrease downtime.
System upgrades, such as new server hardware, can be implemented during normal hours.
The new server can be configured and then introduced into the collective. A collective is a set
of PI Data Archive servers that act as the logical PI Data Archive server for your system. From
there, the server can be fully tested and qualified before you make it available to users.
Because the PI Data Archive servers in a collective do not have to share machine or operating
system specifications, you can introduce new hardware, such as 64-bit machines.
Unplanned outages can be addressed during normal working hours. Recovering a system
during the weekend is extremely disruptive, and the resources required for an efficient fix
may not be available. Outages during normal working hours can also be addressed on a
schedule, allowing current activities to be completed rather than disrupted and wasted.

• Redundancy and failover


High availability enables the PI System to transfer the entire workload from a failed server to
another server. A PI Data Archive collective consists of a single primary server and one or
more secondary servers connected by a network. If the primary server in a collective fails,
you can quickly promote a secondary server to take over the primary role. A completely
redundant system removes single points of failure and greatly reduces the odds of an outage.

• Performance and scalability


You can share retrieval and computing loads between servers, and therefore increase the
scale of your PI System. For example, you can expand your PI System as your business
grows, such as during seasonal business peak periods and for end-of-month or end-of-year
processing.

• Maintainability
PI Data Archive server maintenance is easier because you can bring down a collective
member with no impact to the other collective members. PI can be more easily patched or
upgraded without having to schedule downtime. With high availability, you can perform
scheduled maintenance with minimal impact on your user applications. You can
troubleshoot a secondary server offline, giving you time to analyze and diagnose problems
without adversely affecting users.

• Workload balancing
You can automatically direct client requests to the server with the most workload capacity.
Client applications can start on any server. Applications are not required to be aware of any
particular server. You can distribute connections and workloads among servers, reducing
demands on individual servers.

• Security
You can configure all components in a highly available PI system to be secure. Network
traffic is secure between primary and secondary servers, and traffic is secure between client
applications and all servers.

High availability limitations


It is important to understand the limitations of a highly available PI System.

• All servers and interfaces must be in a single Active Directory domain


OSIsoft designed the PI System to support high availability in environments with all servers
and interfaces in a single domain—a domain configured with a domain controller and
reliable DNS (Domain Name System) resolution. You must use special configuration
procedures if:
◦ You have components not installed in a homogeneous security environment, such as
components installed in different, non-trusted domains, or components installed in a
work group.
◦ You do not have access to Active Directory (AD) and must configure authentication
through local Windows security.

• PI Data Archive servers distributed geographically


For enterprise-wide PI Data Archive servers that are distributed geographically, a PI to PI
interface is better suited than a PI Data Archive collective, because different sites are likely
to use different security models. The PI to PI interface transfers data from one PI Data
Archive server to another via TCP/IP.

• Collective Manager requires Windows file copy access


You can easily create PI Data Archive collectives and manage the servers in those collectives
with PI Collective Manager. However, PI Collective Manager requires Windows file copy
access between servers, which in turn requires properly opened TCP ports (see the
connectivity check after this list). Without this access, you must manually create collectives
and initialize secondary servers.

• Not all data is replicated


Some data is written only to the primary PI Data Archive server, so if the primary goes
down, you need to recover that data from the primary rather than simply promote a secondary.
The PI System uses the buffering mechanism to replicate data from interfaces to the servers in
a PI Data Archive collective. Therefore, data not sent to the PI Data Archive servers through
the buffering system is not replicated.

• No replication of batch records


The PI Batch Database identifies objects by a unique ID, and each PI Data Archive server
randomly assigns that unique ID to a batch object. Therefore, each collective member
generates a different ID for batch objects with the same data and configuration. Although
they appear to be the same object, the software interprets them as different objects.


• Performance Equation Scheduler limitations


Performance Equation Scheduler is not aware of high availability features: it can interact
with only one PI Data Archive server. Because all servers in a PI Data Archive collective have
the same input tags for a Performance Equation, results are the same in most cases,
regardless of which server the scheduler connects to. However, buffering and network
connection issues can introduce variation.
To avoid any variation, use applications that are aware of PI Data Archive collectives, such as
the calculation functions of PI AF.
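
The following PowerShell sketch shows one way to pre-check the Windows file copy access that
PI Collective Manager requires, as referenced in the limitation above. The host names are
placeholders; TCP 445 is the standard SMB port used for Windows file copy, and TCP 5450 is
the standard PI Data Archive port. Treat this as a hedged example, not an OSIsoft-supplied
procedure.

    # Hypothetical host names; replace with your collective members.
    $servers = 'PIDA-PRIMARY', 'PIDA-SECONDARY'
    foreach ($server in $servers) {
        # SMB (TCP 445) must be open for Windows file copy between servers.
        $smb = (Test-NetConnection -ComputerName $server -Port 445).TcpTestSucceeded
        # TCP 5450 is the standard PI Data Archive listener port.
        $pi = (Test-NetConnection -ComputerName $server -Port 5450).TcpTestSucceeded
        # The administrative share must be reachable under your credentials.
        $share = Test-Path ('\\{0}\admin$' -f $server)
        '{0}: SMB={1} PI5450={2} AdminShare={3}' -f $server, $smb, $pi, $share
    }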

High availability business considerations


Even if high availability is not required by your business today, you should understand its
benefits, limitations, and operation, so that your PI System architecture can be expanded in
the future to support high availability without a complete reinstallation or relocation of
system components.
Consider the following:
• How long can your business afford to have the PI System down?
• What is the business impact of this?
• Are parts of your process more critical than others from an availability standpoint?
• How much cost and performance are you willing to trade for scalability and high
availability?
The cost of your infrastructure is directly linked to the level of component and data
availability. Evaluate the business loss that comes with infrastructure downtime, and ensure
that the business case justifies the costs.
• What parts of your architecture are important from an availability standpoint? Interfaces?
PI Data Archive servers? PI AF? Analytics? Data Access? Visualization? Client applications?
All?
Implementing redundancy raises a number of concerns: increased size, complexity, power
consumption, and cost, as well as additional design, verification, and testing time. Both fault-
tolerant components and redundant components tend to increase cost. The more complex
your system, the more carefully you must consider and prepare for all possible interactions
between components. Therefore, examine a number of questions to determine which
components should be fault-tolerant:
◦ How critical is the component?
◦ How likely is the component to fail?
◦ How expensive is it to make the component fault-tolerant?
To find the answers to these business questions, your IT department and PI System
administrator need to engage with your business departments and your end users. You can
then match the high availability options to meet your business requirements.

Data loss versus data availability


It is important to understand the difference between data loss and data availability. Data loss is
what happens when a PI interface goes down without another failover interface picking up data
collection and without history recovery functionality available. The data is often lost forever.
All customers want to avoid data loss.
Lack of data availability means that PI data is not available for consumption by a display,
report, or application at that time, but will be available at a later time. For example, if a PI
Data Archive server without high availability is down, the data is not available to PI
ProcessBook, but the PI interfaces are still collecting and buffering data (to forward later to
the PI Data Archive server), ensuring that there is no data loss.
The following summarizes some of the differences between data loss and data availability.

Who is concerned?
    Data loss: Everyone is concerned about data loss.
    Data availability: Many are concerned about data availability.

Drivers for concern
    Data loss: No one ever wants to lose data. Loss of data has potential regulatory issues, and
    it may impact the perceived integrity of a controlled or regulated system.
    Data availability: Availability concerns are driven by your use of the data and how much it
    is integrated into your business processes.

Questions to ask
    Data loss: If the PI interface, a PI Data Archive server, or another PI System component
    goes down, will I lose data?
    Data availability: If the PI Data Archive server goes down, can my end users wait [4 hours]
    to see their data? What is the business impact of this?

Risk mitigation technologies
    Data loss: interface buffering; interface failover (redundancy); interface history recovery.
    Data availability: interface failover (redundancy); PI System component redundancy and
    high availability (PI Data Archive, Asset Framework, ACE, Notifications, and so on).

PI System components with high availability


Next, consider the PI System components and the high availability features that each
supports.
Highly available PI System configurations range from small systems with a primary PI Data
Archive server, PI AF server, and SQL Server on the same computer to larger systems that
include a secondary PI Data Archive server and AF server on a different computer. The primary
interface node and one or more secondary interface nodes support failover and buffering.


For distributed systems with large workloads and PI point counts, and with multiple PI Data
Archive servers or PI Data Archive collectives that link to a central PI AF database, OSIsoft
recommends that you install PI Data Archive collectives and Microsoft SQL Server on separate,
redundant computers to achieve the best level of performance and scalability.
High availability capabilities are available for all components in the PI System:

• Data sources
Data sources can be configured to support redundant, replicated nodes.

• Interfaces
A primary interface node and one or more secondary interface nodes ensure failover so
that time-series data reaches PI Data Archive even if one interface fails. Buffering ensures
that identical time-series data reaches each PI Data Archive server in a collective. When one
interface is unavailable, the redundant interface automatically starts collecting, buffering,
and sending data to PI Data Archive.

• PI Data Archive server


To implement high availability, install more than one PI Data Archive server and configure
the PI System to store and write identical data on each server. Together, this set of servers,
called a PI Data Archive collective, acts as the logical PI Data Archive server for your system.
These computers can be geographically dispersed. The collective receives data from one or
more interfaces and responds to requests for data from one or more clients. Because more
than one server contains your system data, system reliability increases. If one server
becomes unavailable, another server contains the same data and responds to requests for
that data. Similarly, when demand for accessing data is high, you can spread that demand
among the servers.

• Asset Framework
To implement HA for PI AF, you can configure multiple instances of PI AF application service
in a Windows Failover Cluster or Network Load Balancer deployment. In addition, you can
configure Microsoft SQL Servers in an AlwaysOn Availability Group, Mirrored SQL Server
System, or as a Failover Cluster. See the PI Server topic "PI AF server installation and
upgrade" in Live Library (https://livelibrary.osisoft.com).

• PI Analytics and Notifications


A PI Analytics collective is also a set of servers that acts as one logical PI Analytics server. A
PI Analytics collective supports PI ACE, PI Performance Equations, PI Totalizers, and PI Real-
time Statistical Quality Control.
To implement high availability for PI Notifications, install instances of PI Notifications
Service on more than one computer and configure them to run the same set of notifications.
One instance acts as the primary service and sends the notifications. The other instances act
as backup services and stand by. If the primary service stops for any reason, one of the
backup services becomes the primary service.

• PI Data Access
The PI Data Access products PI OLEDB Enterprise, PI OLEDB Provider, and PI Web Services
support high availability. PI OLEDB Enterprise supports connection failover to servers in a
PI collective when used with PI Asset Framework 2010 and later.
PI Web Services retrieves data from either the primary or a secondary member of a PI collective,
using connection information from its host machine. PI OLEDB Enterprise and PI OLEDB
Provider clients connect to collectives according to connection preference settings; you can
also use PI System utilities to select another server in the collective.
If a server in the collective becomes unavailable, SQL statements that are in progress might
fail. This occurs if a PI OLEDB Enterprise or PI OLEDB Provider client cannot connect to an
unavailable server, or reconnect to another collective member, within the time set for the
Command Timeout. To avoid this timeout, increase the Command Timeout property in the
OLE DB client, which is set to 60 seconds by default (see the sketch after this list). For more
information, see the user guides for PI OLEDB Enterprise or PI OLEDB Provider, which are
available on the OSIsoft Customer Portal (https://my.osisoft.com/).

• Client applications
To implement high availability at the PI client layer, configure clients to connect to any
server in a PI Data Archive collective and switch to another server if necessary, without
requiring any user intervention to fail over from one server to another. Clients can be
configured to support redundant, replicated nodes.
You can automatically direct client requests to the server with the most workload capacity.
Client applications can start on any server. Applications are not required to be "aware" of
any particular server. You can distribute connections and workloads among servers,
reducing demands on individual servers.
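
As noted in the PI Data Access item above, raising the command timeout gives a client time to
reconnect to another collective member before an in-progress SQL statement fails. The sketch
below shows the general idea using the .NET System.Data.OleDb classes from PowerShell; the
provider name, data source, and table path are assumptions based on a typical PI OLEDB
Enterprise setup, so check the product's user guide for the exact values in your environment.

    # Assumed provider and data source values; adjust for your environment.
    $conn = New-Object System.Data.OleDb.OleDbConnection(
        'Provider=PIOLEDBENT; Data Source=MyAFServer; Integrated Security=SSPI')
    $conn.Open()
    $cmd = $conn.CreateCommand()
    # Hypothetical AF database name ("MyDatabase"); Asset.Element is a
    # typical PI OLEDB Enterprise catalog path.
    $cmd.CommandText = 'SELECT TOP 10 Name FROM [MyDatabase].[Asset].[Element]'
    # Raise the timeout (in seconds) above the 60-second default so a
    # reconnect to another collective member can complete before the call fails.
    $cmd.CommandTimeout = 180
    $reader = $cmd.ExecuteReader()
    while ($reader.Read()) { $reader['Name'] }
    $conn.Close()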

Hardware load balancers


Hardware load balancers provide advanced capabilities for ensuring your infrastructure is
highly available. You can set up a hardware load balancer to monitor PI Vision application
servers and AF servers and adjust load balancing accordingly. For more information, see
Hardware load balancers and PI System products.

Monitoring tools for high availability


Ensure early detection of problems by continuously monitoring the PI System, network,
database operations, applications, and other system components. OSIsoft provides several
tools and utilities for monitoring a PI System, including:
• PI System Management Tools (SMT) for performing routine PI Data Archive administration
tasks.
• PI System Tray for monitoring your PI Data Archive servers and AF servers; you can see
normal, error, or critical status at a glance.
• PI Interface Configuration Utility (ICU) for configuring PI interfaces.
• Collective Manager for creating and managing PI Data Archive collectives for implementing
high availability (HA) in your PI Data Archive server.
• PI SDK Utility for troubleshooting tasks.
• PI System Explorer for managing PI AF.
OSIsoft also provides powerful command-line utilities, described in the PI Data Archive
Reference Guide.
The monitoring tools must also be highly available and adhere to the same operational best
practices as the components they monitor.
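
Simple scripted checks can supplement these tools for basic liveness monitoring. The sketch
below polls the core PI Data Archive Windows services on each collective member from a
Windows PowerShell session; the service names shown (pinetmgr, pibasess, pisnapss, piarchss)
are typical for a PI Data Archive installation but can vary by version, and the host names are
placeholders.

    # Placeholder host names; replace with your collective members.
    $members = 'PIDA-PRIMARY', 'PIDA-SECONDARY'
    # Typical service names for the PI Network Manager, Base, Snapshot,
    # and Archive subsystems; verify the names on your installation.
    $services = 'pinetmgr', 'pibasess', 'pisnapss', 'piarchss'
    foreach ($member in $members) {
        foreach ($service in $services) {
            $status = (Get-Service -ComputerName $member -Name $service).Status
            '{0}\{1}: {2}' -f $member, $service, $status
        }
    }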



Deployment scenarios for PI Systems with HA
This section presents various deployment scenarios for PI Systems with high availability.

Topics in this section


• PI Server and SQL Server configuration options
• PI AF architecture

PI Server and SQL Server configuration options


To deploy a basic PI Server (PI Data Archive and the PI AF server), you need one or more
Microsoft Windows-compatible computers with a 64-bit operating system.
For best performance and improved security, OSIsoft recommends that you install SQL Server
on a different host computer than PI Data Archive. OSIsoft also recommends at least two
physical drives on the PI Data Archive host computer.
OSIsoft recommends that you install PI AF server and PI Data Archive on separate host
computers for optimal performance and to avoid a scenario where these server roles
compete for system resources.
The number of required computers depends on the size and complexity of your PI Server.
Note:
OSIsoft does not recommend installing any PI Server components on a domain controller.
If you install the PI AF application service or its SQL Server back end on a domain
controller, the installation will fail.

Small PI Server deployment


A small PI Server deployment can involve installing the PI Data Archive and the PI AF server (PI
AF Application Service and PI AF SQL Server database) on a single computer.


Small-scale PI Server deployment

OSIsoft recommends SQL Server Standard or SQL Server Enterprise for most PI Server
installations, but you can consider SQL Server Express for systems with few assets (10,000 or
fewer) and low-to-moderate workloads (25,000 PI points or fewer). However, because SQL
Server Express imposes limitations on CPU, memory, and disk usage, you must also factor in
object sizes, concurrent load, and usage patterns of PI AF clients.
To assess whether you can use SQL Server Express, see the OSIsoft Knowledge Base article
KB00309 - Is the SQL Server Express edition sufficient for running PI AF 2.x (https://
customers.osisoft.com/s/knowledgearticle?knowledgeArticleUrl=KB00309).
Note:
If you use SQL Server Standard or SQL Server Enterprise, you should install it on a
different computer from PI Data Archive to ensure that the performance of PI Data
Archive is not degraded.
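
If you are unsure which edition hosts, or will host, the PI AF SQL Server database, you can
query the instance directly. This minimal sketch assumes the Invoke-Sqlcmd cmdlet from
Microsoft's SqlServer PowerShell module is available and uses a placeholder instance name;
SERVERPROPERTY is a documented SQL Server function.

    # Placeholder instance name; replace with the instance hosting PIFD.
    # 'Edition' reports, for example, 'Express Edition' or 'Standard Edition (64-bit)'.
    Invoke-Sqlcmd -ServerInstance 'AFSQL01' -Query `
        "SELECT SERVERPROPERTY('Edition') AS Edition, SERVERPROPERTY('ProductVersion') AS ProductVersion;"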

Larger, higher performance PI Server deployment


For larger-scale PI Server deployments, where you are planning on more than 10,000 assets
and moderate-to-high workloads and point counts (more than 25,000 PI points), OSIsoft
recommends that you:
• Install your PI AF SQL Server database on a Microsoft SQL server on a separate host
computer from your PI Data Archive.
• Install PI AF server on either the PI Data Archive host computer or on the SQL Server
computer.
• Use Microsoft SQL Server Enterprise edition for your PI AF SQL Server database. Review the
PI AF Release Notes for a current list of SQL Server Versions and Editions that support the
Audit Trail feature in the PI AF Server.
• Consider deploying high availability options for your PI Data Archive and PI AF server. For
PI Data Archive, you can deploy multiple PI Data Archive servers in a collective. See the
"High Availability Administration Guide" in Live Library (https://livelibrary.osisoft.com).
For PI AF server, you can deploy in various high availability deployments. See PI AF high-
availability solutions.


Larger, higher performance PI System

Distributed, highly available PI Server deployment


For distributed systems with large workloads and point counts and with multiple PI Data
Archive servers that link to a central PI AF database, OSIsoft recommends that you:
• Install PI Data Archive in a collective deployment. See the "High Availability Administration
Guide" in Live Library (https://livelibrary.osisoft.com).
• Install PI AF server using an approach based on network load balancing and Microsoft high
availability technologies. See PI AF high-availability solutions.
• Install Microsoft SQL Server for your PI AF SQL Server database in an Always On Availability
group. See PI AF high-availability solutions.
• Install PI Analysis Service in a failover cluster.
• Install PI Notifications Service in a failover cluster.
OSIsoft recommends deploying PI AF servers and Microsoft SQL Servers on separate,
redundant computers to achieve the best level of performance and scalability.

Distributed PI Data Archive collective deployments


PI Data Archive collectives can be geographically distributed. For example, you might deploy
the primary PI Data Archive server and one secondary PI Data Archive server at a local
operations center, and deploy two secondary servers at a remote backup operations center. You
can configure workstations to connect to their local servers before connecting to remote
servers. You might even configure some workstations to connect only to local servers. Such a
configuration separates loads and separates functions between the operations centers.
You might have interfaces at both operations centers. You might configure the interfaces to use
n-way buffering to send time-series data to all the servers in the PI Data Archive collective.
However, to reduce network traffic, you might have the primary PI Data Archive server send
configuration information and outputs only to the interfaces at its local center, and have a
secondary PI Data Archive server send configuration information to interfaces at the remote
center.
You can also use the PI to PI interface to aggregate data between PI Data Archive collectives.
For example, you might have a collective that collects data at each plant, and have a separate
collective at your headquarters that gathers key indicators from the plants.
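
You can confirm which members make up a collective by querying the collective configuration
tables with piconfig (described later under PI Data Archive collective reference topics). The
sketch below drives piconfig from PowerShell; the table and attribute names shown are typical
but version-dependent, so treat them as assumptions and consult the reference topics.

    # %PISERVER% is set by the PI Data Archive installation; piconfig.exe
    # is in its adm subdirectory.
    $piconfig = Join-Path $env:PISERVER 'adm\piconfig.exe'
    # Attribute names below are typical for the PIserver table but may vary
    # by version; see the collective configuration reference topics.
    '@table piserver',
    '@mode list',
    '@ostr name, collective, role, fqdn',
    '@select name=*',
    '@ends' | & $piconfig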

PI AF architecture
PI AF uses a multi-tiered architecture. A minimal system consists of three tiers:

• A client application or the PI AF SDK
• The PI AF Application Service
• The PI AF SQL Server database
In terms of physical topology, any configuration of the three tiers is possible, including running
all tiers on the same system or running each tier on a separate system.
• Clients can communicate with multiple PI AF servers and multiple PI Data Archive servers.
• A single PI AF server can service multiple clients.
• A single PI AF SQL Server database can host multiple PI AF servers.
• High availability features can be configured many ways, including load-balanced PI AF
servers, SQL AlwaysOn Availability Groups, SQL Server mirroring, SQL Server replication,
Windows Server Failover Clustering (WSFC), or combinations of these methods.
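
Because the tiers communicate over well-known TCP ports, a quick connectivity test can
validate a proposed physical topology. In this hedged sketch the host names are placeholders;
TCP 5450 is the standard PI Data Archive port, 5457 the usual PI AF application service port,
and 1433 the SQL Server default instance port.

    # Placeholder host names for each tier; replace with your servers.
    $checks = @(
        @{ Server = 'PIDA01';  Port = 5450 },  # PI Data Archive
        @{ Server = 'PIAF01';  Port = 5457 },  # PI AF application service
        @{ Server = 'AFSQL01'; Port = 1433 }   # SQL Server default instance
    )
    foreach ($check in $checks) {
        $result = Test-NetConnection -ComputerName $check.Server -Port $check.Port
        '{0}:{1} reachable={2}' -f $check.Server, $check.Port, $result.TcpTestSucceeded
    }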

PI AF deployment options
Depending on your needs and goals, you have various options for deploying PI AF server,
ranging from a simple deployment that uses one computer to a complex mirrored collective
that uses multiple computers. Carefully consider which deployment option is best for your
needs and resource constraints before installation.

Simple PI AF deployment
For systems with few assets (10,000 or fewer) and low-to-moderate workloads (25,000 PI
points or fewer), OSIsoft recommends that you follow these guidelines:
• If using SQL Server Express, install PI Data Archive, PI AF server, and SQL Server on the
same computer.
• If using SQL Server Standard or Enterprise, consider installing SQL Server on a different
computer from the PI Data Archive computer. Installing SQL Server Standard or Enterprise
edition on the same computer as the PI Data Archive computer can significantly degrade PI
Data Archive performance.
Note:
Review the PI AF Release Notes for a current list of SQL Server Versions and Editions that
are supported for the PI AF Server.
Possible deployment scenarios include:


• Deploy the PI AF Application Service and PI AF SQL Server database on the same computer,
and deploy a PI AF client on the same computer or on a different computer.
• Deploy the PI AF Application Service and PI AF SQL Server database on separate computers,
and deploy a PI AF client on one of these computers or on a different computer.
• Deploy the PI AF Application Service on multiple computers that point to a single PI AF SQL
Server database, and deploy a network load balancer between the PI AF client and the PI AF
Application Services.

PI AF on a mirrored SQL Server


Deploy PI AF on a mirrored SQL Server for a highly available system. Possible scenarios
include:
• Deploy the PI AF Application Service and PI AF SQL Server database on separate computers,
with the PI AF SQL Server database on a mirrored SQL Server, and deploy the PI AF client on
a different computer.
• Deploy the PI AF Application Service on multiple computers pointing to a PI AF SQL Server
database that is installed on a mirrored SQL Server, and deploy a network load balancer
between the PI AF client and the PI AF Application Services.

PI AF server in a Windows Failover Cluster or Network Load Balancing deployment
Two scenarios demonstrate high availability deployment for the components of PI AF server in
a failover cluster:
• The first scenario uses a Windows clustering solution by deploying the PI AF Application
Service and the PI AF SQL Server database on separate computers:
◦ Install the PI AF SQL Server database on a SQL Server failover cluster.
◦ Install the PI AF Application Service on separate machines that use Windows Failover
Clustering. As recommended, configure the PI AF Application Service to run under a
domain account.
◦ Install the PI AF client on a different computer.
• The second scenario uses Network Load Balancing by deploying the PI AF Application
Service on multiple computers that point to a PI AF SQL Server database that is installed on
a SQL Server failover cluster. Deploy a Network Load Balancer between the PI AF client and
the PI AF Application Services.
Note:
OSIsoft assumes that you are familiar with the configuration and operation of Network
Load Balancers, failover clusters, and with the cluster administration tools in your
Windows operating system.

Deployment considerations
The main components of a PI Server are PI AF and PI Data Archive. Microsoft SQL Server is
not actually part of the PI Server, but is a dependency. OSIsoft recommends that you use these
guidelines to deploy PI AF within a PI Server:
• If the PI Data Archive host computer is heavily loaded, move SQL Server to a different
computer.
• It is acceptable to use a shared SQL Server that contains databases for other non-OSIsoft
applications. Often these are already running on a cluster.
• Hardware sizing should be based upon workload, not PI AF object count, because the two do
not correlate. RAM is the most important hardware sizing consideration for implementing PI
AF, mainly because SQL Server tends to use a considerable amount of system resources. This
consideration applies to deployments where PI AF server and SQL Server are on the same
computer.
• As I/O workload increases, make sure the disk subsystem can handle the I/O count as well
as the storage requirements. Specifications to consider include the number of disk spindles,
solid-state drives, and so on. For very large PI AF systems, where you are planning on more
than 10,000 assets and moderate-to-high workloads and point counts (more than 25,000 PI
points), use drive arrays that can sustain at least 3,000 random read I/O operations per
second (IOPS).
• Adding SQL Server RAM improves SQL Server read and write performance and is the
variable that most affects PI AF performance. In particular, for a very large PI AF system,
size SQL Server RAM at 60-65 percent of the database size; for example, plan for roughly
60 GB of RAM for a 100-GB PI AF database.

Frequently asked questions about PI AF deployment


The following list provides answers to frequently asked questions about PI AF deployment.

Question: Can the PI AF Application Service run on the database server system?
Answer: Yes.

Question: Can the PI AF Application Service run on a different system from the database server?
Answer: Yes.

Question: Can the PI AF Application Service run on a system in a domain that is not trusted by
the domain of the database server system?
Answer: Yes. Configure the PI AF Application Service to use a SQL Server login, instead of
Windows Authentication, when connecting to the SQL Server.

Question: Can the database server use the default instance?
Answer: Yes. Modify the PI AF Application Service connection string to use the default instance
or an appropriate alias.

Question: Can the database server use a named instance?
Answer: Yes. Modify the PI AF Application Service connection string to use the named instance
or an appropriate alias.

Question: Is there a standalone Notifications installation setup?
Answer: No. As of PI AF 2.8.5, Notifications Service is part of the PI Server install kit.

Question: Is there a standalone Analysis Service installation setup kit available?
Answer: No. As of PI AF 2.8.5, Analysis Service is part of the PI Server install kit.

Question: If the PI AF Application Service is not installed on the database server system, what
software, other than the SQL Server components, gets installed on the database server system?
Answer: None.

Question: Will PI AF server operate correctly when the database is installed on a shared SQL
Server instance?
Answer: Yes.

Question: How many SQL Server databases does the application require?
Answer: One without PI HA, or two with HA. The setup program creates a single PI AF SQL
Server database with a default name of PIFD. PI AF creates a second database named
<PIFD>_Distribution on the primary server used for SQL Server replication.

Question: Is any specific collation required?
Answer: Yes. The collation is required to be case insensitive. Although the installation
procedure does not specify any particular collation, SQL_Latin1_General_CP1_CI_AS has had
the most testing.

Question: Does PI AF expect SQL Server to listen on a specific port?
Answer: No.

Question: Does the database run in MULTI_USER mode?
Answer: Yes.

Question: Are any additional SQL Server features required?
Answer: Yes. The SQL Server Agent service is required for automated backup or if PI AF is
configured for high availability. PI AF high availability requires the replication feature of SQL
Server. Review the PI AF Release Notes for a current list of SQL Server versions and editions
that support the Audit Trail feature in the PI AF server.

Question: Is IIS required on the database server system?
Answer: No.

Question: Is .NET Framework required on the database server system?
Answer: Yes. Unless the DBA manually installs the PI AF database objects, the setup program
requires .NET Framework version 4.8, which is installed as part of the setup kit for the AF
server. Note that the AF server will not start if .NET 4.8 is not installed. .NET 4.8 is also
installed as part of the setup kit for the AF Client; however, users can choose to use .NET 4.5
and later versions in conjunction with a .NET development project.

Question: Is MS-DTC required?
Answer: No.

Question: Is it necessary to enable remote database connections?
Answer: Yes, if the PI AF Application Service is not installed on the database server system.
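
For example, to confirm that an existing instance satisfies the case-insensitive collation
requirement before installing PI AF, you can check the server collation directly. A minimal
sketch, assuming the SqlServer PowerShell module and a placeholder instance name:

    # A collation name containing 'CI' is case-insensitive, for example
    # SQL_Latin1_General_CP1_CI_AS.
    Invoke-Sqlcmd -ServerInstance 'AFSQL01' -Query `
        "SELECT SERVERPROPERTY('Collation') AS ServerCollation;"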

PI AF high-availability solutions
To implement high availability for PI AF, OSIsoft recommends an approach based on network
load balancing and Microsoft high availability technologies. However, there are many other
possible solutions to achieve high availability that you can choose based on your own
requirements.
For detailed information about high-availability options, refer to the OSIsoft Knowledge Base
article High Availability (HA) options for PI Asset Framework (AF) (https://
customers.osisoft.com/s/knowledgearticle?knowledgeArticleUrl=KB00634). That article
provides a list of the advantages and disadvantages of various high availability technologies.

Topics in this section


• Recommended approach for PI AF high availability
• Other scenarios for implementing PI AF high availability


Recommended approach for PI AF high availability


To implement high availability for PI AF, OSIsoft recommends the following measures:

• Deploy the PI AF application service on multiple computers and the PI AF SQL Server
database on another set of two or more computers. The PI AF application service should be
configured to run under a domain account.
• Configure the PI AF SQL Server database computers as an Always On availability group.
• Set up a network load balancer that manages all communication between PI AF clients and
the PI AF application service tier.

This recommended configuration is based on the following technologies:

• Windows Failover Clusters for an Always On availability group
• Network load balancing, to distribute PI AF client-to-PI AF application service
communication
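
The Always On availability group itself is configured in SQL Server rather than in PI AF. The
following is a heavily simplified sketch of creating an availability group for the PIFD database,
driven from PowerShell; it assumes the Windows Failover Cluster already exists, Always On is
enabled on both instances, PIFD uses the FULL recovery model with a current full backup, and
the instance names and endpoint URLs are placeholders. Consult Microsoft's documentation for
the complete procedure.

    # Run against the intended primary replica. All names are placeholders.
    Invoke-Sqlcmd -ServerInstance 'AFSQL01' -Query "
    CREATE AVAILABILITY GROUP [PIFD_AG]
    FOR DATABASE [PIFD]
    REPLICA ON
        N'AFSQL01' WITH (
            ENDPOINT_URL = N'TCP://AFSQL01.example.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC),
        N'AFSQL02' WITH (
            ENDPOINT_URL = N'TCP://AFSQL02.example.com:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC);"
    # On the secondary replica (after restoring PIFD WITH NORECOVERY there),
    # join the group and the database:
    Invoke-Sqlcmd -ServerInstance 'AFSQL02' -Query "
    ALTER AVAILABILITY GROUP [PIFD_AG] JOIN;
    ALTER DATABASE [PIFD] SET HADR AVAILABILITY GROUP = [PIFD_AG];"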


Note:
OSIsoft assumes you are familiar with the configuration and operation of network load
balancers, Windows failover clusters, and the cluster administration tools provided with
the Windows operating system. For an overview of Microsoft high availability solutions,
see the Microsoft article Business continuity and database recovery - SQL Server
(https://docs.microsoft.com/en-us/sql/database-engine/sql-server-business-continuity-
dr?view=sql-server-2017).

Other scenarios for implementing PI AF high availability


In addition to the recommended approach for PI AF high availability (see Recommended
approach for PI AF high availability), there are other possible scenarios, all based on different
combinations of Microsoft high availability technologies and load balancers.
Note:
OSIsoft assumes you are familiar with the configuration and operation of network load
balancers, failover clusters, and the cluster administration tools provided with the
Windows operation system. For an overview of Microsoft high availability solutions, see
the Microsoft article Business continuity and database recovery - SQL Server (https://
docs.microsoft.com/en-us/sql/database-engine/sql-server-business-continuity-dr?
view=sql-server-2017).
Here are some of the most common configurations:

• High availability using Clustered SQL Servers and a network load balancer:
◦ Deploy the PI AF application service on multiple computers and the PI AF SQL Server
database on another set of two or more computers. The PI AF application service should
be configured to run under a domain account.
◦ Configure the PI AF SQL Server database computers as a Clustered SQL Server.
◦ Point all instances of the PI AF application service toward the Clustered SQL Server.
◦ Deploy a network load balancer between the PI AF client and the PI AF application
service.
◦ Install the PI AF client on separate computers. Direct the PI AF clients toward the
network load balancer.
• High availability using only Windows Failover Clusters:
◦ Deploy the PI AF application service on multiple computers and the PI AF SQL Server
database on another set of two or more computers. The PI AF application service should
be configured to run under a domain account.
◦ Set up a Windows Failover Cluster for all instances of the PI AF application service and
another Windows Failover Cluster for the Clustered SQL Servers. Then create a SQL
Server Cluster for the PI AF SQL Server database computers.
◦ Install the PI AF client on separate computers. Direct the PI AF clients toward the name
of the Windows Failover Cluster used for the PI AF application service.
• High availability using Windows Failover Clusters and a Microsoft Always On availability
group but no load balancer:


◦ Deploy the PI AF application service on multiple computers and the PI AF SQL Server
database on another set of two or more computers. The PI AF application service should
be configured to run under a domain account.
◦ Configure all instances of the PI AF application service in a Windows Failover Cluster.
◦ Configure the PI AF SQL Server databases as a Microsoft Always On availability group.
◦ Install the PI AF client on separate computers. Direct the PI AF clients toward the PI AF
application Service configured as a Windows Failover Cluster.
• High availability using SQL Server mirroring and an optional load balancer:
◦ Deploy the PI AF application service and the PI AF SQL Server database on separate
computers.
◦ Set up the PI AF SQL Server database on a mirrored SQL Server.
Note:
Although SQL Server mirroring is still available, Microsoft has deprecated that
functionality. For more information about deprecated capabilities, see the Microsoft
article Deprecated Database Engine Features in SQL Server 2016 (https://
docs.microsoft.com/en-us/sql/database-engine/deprecated-database-engine-
features-in-sql-server-2016?view=sql-server-2017).
◦ Deploy the PI AF client on a different computer. Optionally, you can deploy a network
load balancer between the PI AF client and the PI AF application service.



PI Data Archive high availability administration
This section contains PI Data Archive collective administration topics, instructions for
managing PI Data Archive collectives with command-line tools, and PI Data Archive collective
reference topics.

Topics in this section


• PI Data Archive collective management
• Manage a PI Data Archive server collective with command-line tools
• PI Data Archive collective reference topics

PI Data Archive collective management


This section discusses issues you need to understand and procedures you need to follow in the
day-to-day management of PI Data Archive collectives.

Topics in this section


• PI Data Archive collective security
• PI Data Archive collective performance
• PI Data Archive collectives and backups
• Operating system updates
• Replication and archive management
• Secondary PI Data Archive server management
• Force synchronization with piartool
• Create a primary PI Data Archive server or promote a secondary server
• Control PI Data Archive stand-alone mode
• Verify communication between collective members

PI Data Archive collective security


When you create a PI Data Archive collective, you must properly configure security to support
the collective.

Overview of security in PI Data Archive collectives


PI Data Archive collectives support Windows authentication. In a standard configuration, a
collective replicates the PI security mappings defined on the primary server across all
collective members. In non-homogeneous security environments or environments without
Microsoft Active Directory (AD), PI mappings on a specific collective member will reference
Windows users or groups that are not valid on other collective members. In this case, the
replication process will fail. Therefore, you cannot simply replicate mappings: you must use a
custom configuration.


In a standard configuration, where all collective members are in the same security
environment and you are using AD, you configure security on the collective’s primary server
just as you would configure a single PI Data Archive server. The collective’s PI Data Archive
replication service copies the configuration to all secondary servers in the collective. This
replication process requires that all collective members be on a single domain or part of fully-
trusted domains.
You must use a custom security configuration if:
• Collective members are not contained in a homogeneous security environment, such as
when members are on different non-trusted domains or on no domain.
• You do not have access to AD and must configure authentication through local Windows
security on the primary and secondary servers.
Custom configuration in collective servers can affect PI applications and users when accessing
PI Data Archive information. If the same mappings are not available on all collective members,
applications might fail to connect or might receive different permissions after a failover.
OSIsoft recommends avoiding custom configurations whenever possible because they are
more complex to set up and maintain: you must consider who needs access to each collective
member and who will need to fail over. Visit the OSIsoft Customer Portal
(https://my.osisoft.com/) if you need help.

Access permissions for PI Data Archive collective management


To set database security permissions, use the Database Security tool in PI System Management
Tools.
To create or modify a PI Data Archive collective, you must have a Windows account that has
administrative access to all machines on the collective. Additionally, on the primary PI Data
Archive server, you need replication permission:
PIREPLICATION (r,w)

This access enables you to:


• Add, edit, rename, or delete entries in the PISERVER and PICOLLECTIVE tables.
• Force a synchronization.
• Open or close a server.
• Control PI Data Archive stand-alone mode.
• Promote a secondary server to a primary server.
• Remove a server from a collective.
PI Collective Manager performs a backup before doing PI Data Archive collective operations. If
you are using the PI Collective Manager to create or modify the collective, then you also need
backup permission:
PIBACKUP (r,w)

You do not need this access if you are creating or modifying the collective manually.
Note:
These access permissions are valid for PI Data Archive version 3.4.380 and later. Earlier
versions do not include the PIBACKUP entry in database security, so piadmin access is
required for PI Collective Manager in those versions. PI Data Archive collectives were
introduced in version 3.4.375.


Mapping unresolved users


To use a custom security configuration in a PI Data Archive collective, you must configure the
PI Data Archive server to accept unresolvable security mappings during replication. The PI
Data Archive server includes a lookup-failure tuning parameter that tells it to ignore
unresolvable mappings during replication. (Collectives do not replicate tuning parameters.)
With this tuning parameter enabled, you can create mappings on one collective member that
other collective members cannot resolve, but replication between collective members will
succeed. For information on enabling the tuning parameter, see Enable the lookup-failure
tuning parameter.
For example, suppose the primary server is in the domain where you want to create mappings
and you have a secondary server that is not part of that domain. If you create mappings on the
primary server with domain accounts, the replication of these mappings will fail on the
secondary server (because that domain does not exist for the secondary server). Replication
will stop and the secondary server will fall out of synchronization. If you enable the tuning
parameter on the secondary server, the server will accept the mappings and replication will
succeed.
Similarly, suppose the primary server defines a mapping against a local Windows group.
Because secondary servers do not know about that local group, the mappings will cause
replication to fail. If you enable the tuning parameter on the secondary servers, they will accept
the mappings and replication will succeed. In this case, you might also need to define mappings
against local Windows groups on the secondary servers. Therefore, you must also enable the
tuning parameter on the primary server.
After you enable the lookup-failure tuning parameter, you must use the Windows Security ID
(SID) of a group instead of the group name when you configure a mapping for a local Windows
group. Because you cannot use PI SMT to create mappings based on SIDs, you must use
piconfig. See Creation of mappings with a Windows Security ID (SID).

Enable the lookup-failure tuning parameter


You must enable the lookup-failure tuning parameter on any secondary PI Data Archive server
in a PI Data Archive collective that cannot resolve security mappings from the primary server.
You must also enable the lookup-failure tuning parameter on the primary server in the PI Data
Archive collective if you define mappings valid only on secondary servers.
Note:
Like all tuning parameters, this setting is not replicated across the collective.

Procedure
1. Click Start > All Programs > PI System > PI System Management Tools.
2. Under Collectives and Servers, select the PI Data Archive server where you want to enable
the tuning parameter.
3. Under System Management Tools, select Operation > Tuning Parameters.
4. Click the New Parameter button.
5. In Parameter name, type:
Base_AllowSIDLookupFailureForMapping
6. In Value, type:

1
7. Click OK.
8. Restart the server’s PI Base Subsystem.
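If you prefer to script this change instead of using PI SMT, you can set the parameter with
piconfig from the ..\PI\adm directory. The following is a minimal sketch that assumes tuning
parameters can be edited through the PITIMEOUT table; verify this against your PI Data
Archive version before relying on it:
* Sketch: set the lookup-failure tuning parameter with piconfig
* (assumes tuning parameters are exposed through the PITIMEOUT table)
@table pitimeout
@mode create,t
@istr name,value
Base_AllowSIDLookupFailureForMapping,1
@quit
As with the PI SMT procedure, restart PI Base Subsystem afterward for the change to take
effect.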

Creation of mappings with a Windows Security ID (SID)


After you enable the lookup-failure tuning parameter, you must use a group’s SID instead of its
name when you configure a mapping for a local Windows group. Use PI SMT to determine the
SID, and use piconfig to create the mapping based on that SID.
OSIsoft recommends that you enable the lookup-failure tuning parameter only when you create
mappings. After you create mappings and the primary server replicates the mappings to the PI
Data Archive collective, you can disable the parameter to protect against the accidental
creation of invalid mappings.

Find Windows SID

Procedure
1. Click Start > All Programs > PI System > PI System Management Tools.
2. Under Collectives and Servers, select the secondary server that needs the security mapping.
3. Under System Management Tools, select Security > Mappings and Trusts.
4. Find the SID on the Mappings tab.
◦ If a mapping based on the desired Windows group already exists:
▪ Right-click the mapping and choose Properties.
▪ View the Windows SID on the Mapping Properties dialog box.

◦ If a mapping based on the desired Windows group does not exist:
▪ Click New to open the Add New Mapping dialog box.
▪ In Windows Account, specify the Windows group.
▪ View the SID in Windows SID.
▪ Click Cancel.

Create a mapping based on a SID

Procedure
1. At a command prompt, navigate to the ..\PI\adm directory.
2. Type: piconfig
3. Update the PI Identity Mapping table (PIIDENTMAP). You must set at least three attributes:

◦ IdentMap
Name of the PI identity mapping

◦ PIIdent
Name of the PI identity that you want to map to a local Windows group

◦ Principal
SID of the Windows group you want to map to the specified PI identity
You can also specify other table attributes, if desired.
For example, to create a new mapping called My_Mapping that maps the Windows group
specified by SID S-1-5-21-1234567890-1234567890-1234567890-12345 to the PI
group, piadmins, you would enter the following commands at the piconfig prompts:
@table PIIdentmap
@mode create
@istr IdentMap,Principal,PIIdent
My_Mapping,S-1-5-21-1234567890-1234567890-1234567890-12345,piadmins
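To confirm that the mapping was created against the expected SID, you can list it back with
piconfig. A minimal sketch, assuming the mapping name used in the example above:
@table piidentmap
@mode list
@ostr IdentMap,PrincipalDisp,PIIdent
@select IdentMap=My_Mapping
@ends
The PrincipalDisp output field shows the user-friendly account name when the server can
resolve the SID.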

PIIDENTMAP attributes
The following list describes all attributes in the PIIDENTMAP table. You can specify any of
these attributes when you create a mapping.
• IdentMap
The name of the PI mapping. This must be unique, but is not case-sensitive. This field is
required to create a new mapping.
• Desc
Optional text describing the mapping. There are no restrictions on the contents of this field.
• Flags
Bit flags that specify optional behavior for the mapping. There are two options:
◦ 0x01 = Mapping is inactive and will not be used during authentication.
◦ 0x00 = (Default value). Mapping is active and will be used during authentication after
initial setup.
• IdentMapID
A unique integer that corresponds to the identity mapping. The system automatically
generates the value upon creation. The value does not change for the life of the identity
mapping.
• PIIdent
Name of the PI identity to which the security principal specified by Principal will be
mapped. The contents of this field must match Ident in an existing entry in the PIIDENT
table. The target identity must not be flagged as Disabled or MappingDisabled. Multiple
IdentMap entries can map to the same PIIdent entry. This field is required to create a new
identity mapping.
• Principal
The name of the security principal (domain user or group) that is to be mapped to the
identity named in PIIdent. For principals defined in an Active Directory domain, the format
of input to this field can be any of the following:
◦ Fully qualified account name (my_domain\principal_name)
◦ Fully qualified DNS name (my_domain.com\principal_name)
◦ User principal name (UPN) (principal_name@my_domain.com)
◦ SID (S-1-5-21-nnnnnnnnnn-…-nnnn)
For security principals defined as local users or groups, only the fully qualified account
name (computer_name\principal_name) or SID formats may be used. Output from piconfig
for this field will always be in SID format, regardless of which input format was used. This
field is required to create a new identity mapping.
• PrincipalDisp
User-friendly rendering of the principal specified by Principal. This is an output-only field.
The principal name will be displayed in the fully qualified account name format.
• Type
A reserved field indicating the type of the mapping. In this release, this attribute is always
set to 1.

PI Data Archive collective performance


Use the PI Performance Monitor interface (either PerfMon or PerfMon_Basic) to read
performance counters and to archive the values.
For more information on installing the PI Performance Monitor interface, see "Overview of PI
interfaces" in Live Library (https://livelibrary.osisoft.com).
Each PI Data Archive server contains several counters that you can use to measure the
performance of a PI Data Archive collective. For example:
• IsCommunicating
• IsAvailable
• IsInSync
• LastSyncRecordID

Topics in this section


• PI points for tracking replication performance
• Monitor PI Data Archive collective performance


PI points for tracking replication performance


PI System collects several performance points related to replication. You can use PI PerfMon to
scan these points or you can use the Windows Performance administrative tool.
PI Collective Statistics counters:
• Is Running Normally: Is the status normal for all members of the PI Data Archive
collective?
• Last Config Change Time: Last time the configuration of the collective was modified.
• Current Server: The index of the current server of the collective.
• Number of Servers: The number of member servers in the collective.
PI Server Statistics counters:
• Is Communicating: Is the server communicating with the other members of the PI Data
Archive collective?
• Is In Sync: Is the server in sync with the other members of the collective?
• Is Available: Is the server available?
• Is Current Server: Is the server the member of the collective that is sending this
information?
• Last Sync Record ID: Last sync record processed.
• Role: The role this server plays in a collective.
• Sync Records/sec: Sync records processed per second.
• Communication Period: The frequency at which the server is configured to communicate
with the collective.
• Sync Period: The frequency at which this server is configured to synchronize with the
collective.
• Last Communication Time: Last time that the server communicated with the collective.
• Last Sync Time: Last time that the server synchronized with the collective.
• Server Index: The index of the server in the list of members of the collective.
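You can also spot-check a counter from a command prompt with the Windows typeperf
utility. This is only a sketch: it assumes the counters are published under the object names
shown above, and the exact object, counter, and instance names can vary by PI Data Archive
version:
typeperf "\PI Collective Statistics\Is Running Normally" -sc 1
The -sc 1 option collects a single sample and then exits.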

Monitor PI Data Archive collective performance


You can use the PI Performance Monitor interface (PerfMon) to read performance counters for
the PI Data Archive server and to archive those values. In a PI Data Archive collective, each
server has unique values. Therefore, you must configure the interface and create unique
performance-monitor tags on each PI Data Archive server in a collective.
To differentiate the interface instances in a collective, assign a unique combination of
point source and Location1 to each installed interface.
To view the status of different servers from all the collective servers, use buffering to send the
data to each server in the collective. If one server fails, other servers in the collective will
contain performance data collected prior to the failure. Because the performance monitor
interfaces are located on the PI Data Archive computer, the interface configuration must
specify the explicit name of the server host rather than "localhost". This allows the
interface to use n-way buffering.
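As a rough illustration, the startup line for a PI PerfMon instance on one member might look
like the following sketch. The host name, port, point source, interface ID, and scan frequency
shown here are hypothetical; the point is that each instance names its own member explicitly
and uses a unique point source:
rem Sketch of a PIPerfMon startup line on collective member uc-s1
PIPerfMon.exe /host=uc-s1.osisoft.int:5450 /ps=PERF_S1 /id=1 /f=00:00:15
A second instance on uc-s2 would specify /host=uc-s2.osisoft.int:5450 and a different point
source, such as /ps=PERF_S2.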


Note:
The PI Performance Monitor is not installed by default with the PI Data Archive server.
For more information about installing the PI Performance Monitor interface, see
"Overview of PI interfaces" in Live Library (https://livelibrary.osisoft.com).

Topics in this section


• Configure PI PerfMon interface on each PI Data Archive server
• Create performance PI points
• Export performance PI points
• Configure a buffering service
• Start interface service

Configure PI PerfMon interface on each PI Data Archive server


Use PI ICU to configure PI PerfMon interface on each PI Data Archive server in the collective.

Procedure
1. Open PI ICU.
2. Import the interface.
a. Choose Interface > New Windows Interface Instance from BAT File.
b. Navigate to the PIPerfMon directory.
c. Select PIPerfMon.bat_new and click Open.
3. On the General page:
a. Set API Hostname to the host server. (Do not set to localhost.)
b. Set Point Source to a unique string.
Note:
Each installed PerfMon interface must have a unique Point Source.

c. Click Apply.
4. On the Service page, click Create to create a service for the interface.

Create performance PI points


Use PI SMT to create the performance PI points on each PI Data Archive server.
You might create a set of points on the primary server and then use PI Builder to create
duplicate PI points for secondary servers. For performance points, the critical parameters are:
Tag, Descriptor, Exdesc, Location1, Location4, and Pointsource.

Procedure
1. Open PI SMT on the primary server.
2. Under System Management Tools, select IT Points > Performance Counters.
3. Select the Build Tags tab.


4. In the counter list, expand PI Server Statistics and select the check boxes next to the PI
points you want to add:
◦ IsAvailable
◦ IsCommunicating
◦ IsInSync
◦ LastSyncRecordID
5. Under Build Tags, select Write tags to CSV File.
6. Click Create Tags.
7. Specify the directory and file name for the spreadsheet, and click Save.
8. Open the spreadsheet in Microsoft Excel.
9. Set the pointsource field to the value you set for the interface.
10. Create a copy of all the PI points for each secondary server.
11. Edit the copied points to create server-specific points.
Check the Tag, Descriptor, Exdesc, Location1, Location4, and Pointsource fields.
12. Save the spreadsheet.
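For illustration, the finished spreadsheet for a two-member collective might contain rows like
the following sketch. The tag names, point source values, and Location1 values are
hypothetical, and the exact Exdesc content is the counter path that PI SMT builds for you:
Tag,Descriptor,Exdesc,Location1,Location4,Pointsource
uc-s1.IsInSync,Sync status for uc-s1,\PI Server Statistics\Is In Sync,1,1,PERF_S1
uc-s2.IsInSync,Sync status for uc-s2,\PI Server Statistics\Is In Sync,2,1,PERF_S2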

Export performance PI points


Use PI Builder to export PI points from a spreadsheet to the primary server.

Procedure
1. Open the Excel file where you created the performance points.
2. Under Data Server, select the primary server.
3. Click Select All.
4. Click Publish.
PI Builder creates the PI points on the primary PI Data Archive server, which replicates the
tags to the secondary servers in the collective.

Configure a buffering service


Configure a buffering service to send the data to all collective members. You can use either API
Buffer Server or PI Buffer Subsystem. See:
• "Configure API Buffer Server on a PI Server machine" in Live Library (https://
livelibrary.osisoft.com)
• Configure PI Buffer Subsystem 3.4.380 or later on a PI Data Archive computer

Start interface service

Procedure
1. In PI ICU, select the Service page.
2. Click the Start interface service button.


PI Data Archive collectives and backups


At a minimum, configure backups for the primary server in a PI Data Archive collective. A
collective is not a substitute for a backup.
Consider whether to configure backups for the secondary servers as well as the primary. There
are several good reasons to back up secondary servers.
• Not all configuration information is replicated. Non-replicated data includes tuning
parameters and PI Data Archive message logs. You can enumerate many of these files with
the piartool -backup -identify -verbose command (see the example at the end of this
section); the non-replicated components where data may differ between the primary and
secondary nodes include the timeout parameters, pimsgss, and pibatch components.
Non-replicated data also includes customized batch scripts, .ini files, and logs that can be
backed up with the pisitebackup.bat script.
• Database corruption can occur independently on primary and secondary nodes. The
validation step at the end of the backup may, for example, detect corruption on a secondary
node that did not occur on the primary node.
• If the secondary and primary are geographically separated across a slow network, then it
might be more expedient to restore the secondary from a backup rather than reinitializing
from the primary.
The start and end times of archives are not the same on primary and secondary nodes.
Reinitializing a secondary typically requires manual steps to eliminate data gaps. If a secondary
is restored from backup, there are no data gaps.
If you restore a primary PI Data Archive server from a backup, you must reinitialize all
secondary servers from the primary PI Data Archive server. If you restore the primary PI Data
Archive server from a backup of a secondary PI Data Archive server, you must reinitialize the
other secondary servers.
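For example, to enumerate the backup components on a given member, run the command
mentioned above from that server's adm directory:
cd /d "%PISERVER%\adm"
piartool -backup -identify -verbose
Components that are not replicated, such as the timeout parameters noted above, are the ones
worth capturing in member-specific backups.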

Operating system updates


In a PI Data Archive collective, you want at least one PI Data Archive server available at all
times. Therefore, you want to stagger updates to the servers' computer operating systems. If
you use Windows Update to push upgrades to server computers, you might:
• Put each server in a different update group.
• Disallow unattended or automatic reboots of the operating system.

Replication and archive management


The replication process automatically manages archives. When you initialize or reinitialize a
secondary server, the process automatically registers the archives in the location that you
specified when you added the secondary server to the collective.
You can place archive files on a storage-area network (SAN) drive. Multiple servers can mount
the same archives. For example, this might be useful if you have many old archives and you
want all the servers in the PI Data Archive collective to access them. In this case, you should
place the archives in a read-only partition.


A PI Data Archive collective does not synchronize archive shifts on different servers. Shifts will
occur at different times on each PI Data Archive server. This can increase the availability of
archive data. For example, if the shift takes a long time or fails on one PI Data Archive server,
other servers can still receive and retrieve data. However, before moving an archive from one
server to another server, you must reprocess the archive to change the start and end times to
match the destination.

Secondary PI Data Archive server management


This section describes tasks for secondary servers in a PI Data Archive collective.

Topics in this section


• Add a secondary server to a PI Data Archive collective
• Remove a secondary server from a PI Data Archive collective with Collective Manager
• Reinitialize a secondary server with Collective Manager
• Configure non-replicated parameters at secondary servers

Add a secondary server to a PI Data Archive collective


Before you start
To modify a PI Data Archive collective by using PI Collective Manager, you need permission and
access to the secondary server to be added, as described in the PI Collective Manager Guide.
When you add a server, you must select the archives that you want to copy to the secondary
servers. OSIsoft recommends that you copy all archives when you add a server to a PI Data
Archive collective.

Procedure
1. Log on to the primary server computer.
2. In Collective Manager, in the Collectives list, select the collective where you want to add a
member server.
3. Select Edit > Add Server to Collective to open the wizard.
4. In Server, select the server that you want to add to the collective as a secondary server. The
following options are available before you confirm your selections:
◦ You can choose to copy PI message logs into the PI\log directory. By default, the
message logs are not copied. Click Advanced Options to make this change.
◦ You can set an alternative directory for archive files on the secondary server. To do this,
click Advanced Options and under Member Servers, select the secondary server that you
want to set. The default value is the directory that stores archives on the primary server.
If you set a different directory, the replication process automatically registers archives to
this directory.
Note:
You cannot change the Advanced Options settings for a secondary server after you
add the server to the collective.
5. To add a server to the collective:


a. Click the icon to the right of the server selection menu to open the PI Connection
Manager window.
b. Select Server > Add Server to open the Add Server window.
c. In the Network Node text field, enter the fully qualified domain name (FQDN) of the
server.
d. Enter a Default User Name.
e. Click OK.
f. Click Close.

Remove a secondary server from a PI Data Archive collective with Collective Manager


Use Collective Manager to remove a secondary server from a PI Data Archive collective.
Note:
Do not attempt to remove the last remaining server in your collective.

Procedure
1. Log on to the primary PI Data Archive computer.
2. Click Start > All Programs > PI System > Collective Manager.
3. Under Collectives, select the collective you want to edit.
4. In the diagram of collective members, select the secondary server you want to remove.
5. Choose Edit > Remove Server from Collective.
6. Click Yes at the confirmation prompt.
Collective Manager removes the server from the collective and updates the display.
7. Clear the known servers table at each client connected to the PI Data Archive collective.
See Clear the known servers table.

Reinitialize a secondary server with Collective Manager


If the configuration database at a secondary server in the PI Data Archive collective is not
synchronized with the database at the primary PI Data Archive server, use Collective Manager
to reinitialize the secondary server.

Procedure
1. Log on to the computer of the primary server.
2. Open Collective Manager. Click Start > All Programs > PI System > Collective Manager.
3. Under Collectives, select the collective.
4. In the diagram of collective members, select the secondary server you want to reinitialize.
5. Choose Edit > Reinitialize Secondary Server.


6. Follow the wizard prompts to indicate which archives to copy to the secondary server, and
file locations.
The wizard stops the secondary server, backs up the primary server, copies that data to the
secondary server, and restarts the secondary server.
7. Click Finish.

Configure non-replicated parameters at secondary servers


In most deployments, you will not change configuration at secondary PI Data Archive servers.
In some cases, however, you might configure parameters at the secondary server. Non-
replicated parameters that you might change include:

• Tuning (timeout) parameters


You might set different tuning parameters on secondary servers to account for network and
hardware differences (see the example at the end of this section). Most tuning parameters
relevant to HA have a "Replication" prefix. You can find these tuning parameters on the Base
tab of the Tuning Parameters tool in PI SMT.

• Firewall parameters
If networks change, you must change these non-replicated parameters at all members in the
PI Data Archive collective.
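To review the replication-related tuning parameters on a member, you can list them with
piconfig. A minimal sketch, assuming tuning parameters are exposed through the PITIMEOUT
table and that wildcard selection is supported:
@table pitimeout
@mode list
@ostr name,value
@select name=Replication*
@ends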

Force synchronization with piartool


If you cannot use Collective Manager, you can use piartool to force synchronization between
a primary and secondary server in a PI Data Archive collective.

Procedure
1. Open a command window on the computer that hosts the secondary server.
2. Navigate to the ..\PI\adm directory.
3. Enter: piartool -sys -sync

Create a primary PI Data Archive server or promote a secondary server


If your primary PI Data Archive server becomes damaged or unavailable for an extended time
period, you need to designate a new server as the primary server. You can create a server on a
new machine or promote an existing secondary PI Data Archive server to be the primary
server.

Topics in this section


• Create primary PI Data Archive server on a new computer
• Promote a secondary server to primary
• Synchronize the new primary PI Data Archive server with PI AF server


Create primary PI Data Archive server on a new computer


If you create a primary server on a new target computer, you can simplify your tasks by giving
the target computer the same name as the source computer. If the target computer has a
different name, then you need to add the target computer to the PI Data Archive collective and
promote that computer.
To create a new primary server on a machine with the same name:

Procedure
1. Prepare the new PI Data Archive machine.
a. Generate a license activation file for the new machine.
b. Install PI Data Archive software on the new machine.
2. Prepare the source PI Data Archive machine.
a. Force an archive shift.
b. Stop PI Data Archive.
3. Move files from the source PI Data Archive server to the target PI Data Archive server.
These files include current data files, queue files, and archive files.
4. Start the new PI Data Archive server.

Promote a secondary server to primary


Use the piartool utility to promote a secondary PI Data Archive server to a primary server.
You might need to do this if the primary server becomes unavailable or you want to
decommission the existing primary server and replace it with a new server.
Note:
This procedure assumes that you have an independent PI AF server or a primary PI AF
server that you are not moving or promoting. If that is not the case, additional steps
might be necessary.

Procedure
1. Review the buffering configuration at interfaces and servers and the host server
specification at each interface.
The configuration and specification must refer to the server that you want to promote.
See Buffering configuration when you add a PI Data Archive server to a collective and
additional steps for using interface failover and the Buffering topic "Upgrade to n-way
buffering for interfaces with API Buffer Server" in Live Library (https://
livelibrary.osisoft.com).
2. If applicable, shut down the existing source primary server.


a. Verify that no updates are pending for a secondary server. In Collective Manager, check
that LastSyncRecordID contains the same value at the primary server and all secondary
servers.
b. Shut down the primary server.
c. If the primary or independent PI AF server is installed on the same machine as this
primary PI Data Archive server, then start PI AF server.
3. Update the collective definition on the secondary server that you want to promote.
a. Open a command window on the secondary server that you want to promote.
b. Navigate to the ..\PI\adm directory.
c. To drop the primary server from the collective, type:
piartool -sys -drop OldPrimaryServerName
d. To promote the secondary server to primary server, type:
piartool -sys -promote SecondaryServerName
4. Synchronize the new primary server with the PI AF server (see Synchronize the new
primary PI Data Archive server with PI AF server). If you do not need to synchronize with
the PI AF Server, you must still restart the PI Base Subsystem service on the new primary
server.
5. Reinitialize any other secondary servers in the collective. See Reinitialize a secondary server
with Collective Manager.
6. Generate a new machine signature file and a new license activation file on the computer that
will host the new primary PI Data Archive server.
7. Copy the new license activation file to all members of the PI Data Archive collective.
8. Clear the known servers table at each client connected to the collective.
See Clear the known servers table.

Synchronize the new primary PI Data Archive server with PI AF server


Procedure
1. Stop the PI AF Link Subsystem service on the new primary PI Data Archive server. Note the
Log On As user for the PI AF Link Subsystem service (this is the aflink user).
2. Open a command prompt, navigate to the ..\PI\bin directory, and type:
piaflink -fixafmdbmap
3. Give the aflink user read/write permissions to the pimdbafmapping.dat file in the ..\PI
\dat folder.
4. If the aflink user is a domain account and the same user that ran PI AF Link Subsystem on
the old primary PI Data Archive server, then skip the next two steps.
5. If PI AF server is on a different machine than the new primary PI Data Archive server:
a. On the PI AF server machine, find the local Windows group with the name AF Link to PI-
Old Server where Old Server is the name of the old primary PI Data Archive server.
b. Rename this local Windows group to AF Link to PI-New Server where New Server is the
name of the new primary PI Data Archive server.


c. If the aflink user is a domain account, then add the aflink user as a member of this local
Windows group.
d. If the aflink user is Network Service, then add the machine account name of the new
primary PI Data Archive server as a member of this local Windows group.
e. If the aflink user is a local user on the new primary PI Data Archive machine, then create
the same user with the same password on the AF Server machine and add that local user
to the local group on the AF Server machine.
6. If AF Server is on the same machine as the new primary PI Data Archive server:
a. Find the local Windows group with the name AF Link to PI-Old Server where Old Server
is the name of the old primary PI Data Archive server.
b. Rename this local Windows group to AF Link to PI-New Server where New Server is the
name of the new primary PI Data Archive server.
c. Add the aflink user as a member of this local Windows group.
7. Restart the PI Base Subsystem service on the new primary PI Data Archive server.
8. Start the PI AF Link Subsystem service on the new primary PI Data Archive server.

Control PI Data Archive stand-alone mode


You can put a PI Data Archive server into stand-alone mode if you need to isolate that server,
such as to perform emergency maintenance. In stand-alone mode, the PI Data Archive server
closes all existing connections from clients and interfaces, and does not allow any new
connections. When the server is in stand-alone mode, tools such as PI SMT cannot connect to
the server. The server is essentially shut down. Command-based tools can connect locally to a
server in stand-alone mode. You must explicitly end stand-alone mode. Restarting the PI Data
Archive server does not bring it out of stand-alone mode.
Note:
Stand-alone mode is useful for low-level maintenance tasks, such as editing an attribute
set or point class. You must return the server to normal mode after using stand-alone
mode in a production environment.

Topics in this section


• Find the current mode of PI Data Archive
• Turn on stand-alone mode
• Turn off stand-alone mode

Find the current mode of PI Data Archive

Procedure
1. Open a command window on the computer that hosts the server.
2. Navigate to the ..\PI\adm directory.
3. Enter:
piartool -sys -standalone query


Turn on stand-alone mode


Procedure
1. Open a command window on the computer that hosts the PI Data Archive server.
2. Navigate to the ..\PI\adm directory.
3. Enter:
piartool -sys -standalone on

Turn off stand-alone mode


Procedure
1. Open a command window on the computer that hosts the server.
2. Navigate to the ..\PI\adm directory.
3. Enter:
piartool -sys -standalone off

Verify communication between collective members


You can use Collective Manager to verify that members of a collective are communicating and
that replication is occurring. Ideally, you check the collective from a computer that does not run
a PI Data Archive server in the collective. However, you can check the collective from the
machine running the primary server.
To verify that a collective replicates configuration changes made at the primary server to all
secondary servers, you can edit a point on the primary server and verify the change at the
secondary servers in the collective.

Procedure
1. Click Start > All Programs > PI System > Collective Manager.
2. Under Collectives, select the collective.
If the collective does not appear, you must enable communication between Collective
Manager and the collective:

a. Select File > Connections.


b. Select the check box corresponding to the collective in PI Connection Manager.
If there is no check box for the collective, add the collective:

▪ Select Server > Add Server.


▪ In Network Node, enter the fully qualified domain name (FQDN) for the primary
server in the collective.
▪ Click OK.
c. Click Save to close PI Connection Manager.
3. Verify communication between collective members.


Collective Manager shows a diagram of collective members. An icon represents each server
in the collective. A green check mark on the icon indicates that the server is communicating
correctly. A red X indicates that the server is not communicating correctly.

4. If a server is not communicating correctly, you can:


◦ Wait a few moments. The status of the secondary server might update at the next
attempt to synchronize.
◦ Try to reinitialize that server (select Edit > Reinitialize Secondary Server).

Manage a PI Data Archive server collective with command-line tools

If your PI System is in an environment without Windows file access and fully qualified domain-
name resolution to all servers in a PI Data Archive collective, you must use command-line tools
to manage your collective rather than Collective Manager.

Topics in this section


• Create a collective manually
• Reinitialize a secondary server manually in PI Data Archive
• Force synchronization with piartool
• Set synchronization and communication frequencies manually
• Remove a secondary server from a collective

Create a collective manually


You can use command-line tools, specifically the piconfig command, to create a PI Data
Archive collective manually. If you create your collective this way, you must also initialize
each secondary server manually; Collective Manager performs this initialization automatically
when you use it to create a collective.
When creating a PI Data Archive collective, do not make configuration changes to the PI Data
Archive server configuration databases, such as changes to PI points. If necessary, put PI Data
Archive in stand-alone mode.
Note:
For PI Data Archive 2017, the version of the primary member must match the version of
the secondary member.

Procedure
1. Determine the server ID of the existing PI Data Archive server.
2. Force an archive shift on the primary server.
3. Verify that the snapshot queue is empty.


4. Flush the archive write cache.


5. Configure the PICOLLECTIVE and PISERVER tables on the primary server.
6. Register certificates on the primary server.
7. Back up the PI Server configuration database and archives.
8. Copy backup files to each secondary server.
9. Create a non-computer-specific license file.
10. Start secondary servers.
11. Register certificates on the secondary server.
12. Verify PI Data Archive collective communication with piconfig.

Determine the server ID of the existing PI Data Archive server


If you are adding an existing PI Data Archive server to a collective, set the name and ID of the PI
Data Archive collective to match the name and ID of the existing server. Doing so will allow
clients to continue connecting to the collective without any changes.

Procedure
1. Open a command prompt window.
2. Navigate to the ..\PI\adm directory.
3. Enter: piconfig < pisysdump.dif

Results
The display shows configuration output.
For example, before creating a collective, output looks similar to:
Collective Configuration
Name, CollectiveID, Description
--------------------------------------------------------------

Member Server Configuration


Name, IsCurrentServer, ServerID, Collective, Description, FQDN, Role
-------------------------------------+------------------------
uc-s1,1,08675309-0007-0007-0007-000000001001,,UC 2006 Demo Server 1,uc-
s1.osisoft.int,0

Collective Status
Name, Status
--------------------------------------------------------------

Member Server Status


Name,IsAvailable,CommStatus,SyncStatus,LastSyncRecordID,
LastCommTime,LastSyncTime,SyncFailReason,UnavailableReason
--------------------------------------------------------------
uc-s1,1,0,0,0,6-Nov-06 17:17:18,31-Dec-69 16:00:00,,

In this example, uc-s1 is the server name and 08675309-0007-0007-0007-000000001001 is
the server ID.


Note:
When you create a collective or specify new secondary servers, you can either explicitly
specify a UID, or you can have the creation process generate one automatically.

Force an archive shift on the primary server


OSIsoft recommends that you force an archive shift if the primary archive is nearly full and will
shift soon. Forcing an archive shift on the primary server before starting n-way buffering and
the replication service will allow all servers in the collective to have similar archive shifts.

Procedure
1. Open a command window.
2. Navigate to the ..\PI\adm directory.
3. Enter: piartool -fs

Verify that the snapshot queue is empty


Procedure
1. Open a command prompt window.
2. Navigate to the ..\PI\adm directory.
3. Enter: piartool -ss

Results
The display shows counter output:
Counters for 8-Sep-06 11:51:44
Point Count: 364 0
Snapshot Events: 518157 0
Out of Order Snapshot Events: 0 0
Snapshot Event Reads: 154276 0
Events Sent to Queue: 308873 0
Events in Queue: 0 0
Number of Overflow Queues: 0 0
Total Overflow Events: 0 0
Estimated Remaining Capacity: 2590182 0

The display updates periodically. As you monitor the Events in Queue parameter, you may
occasionally see the value grow greater than 0 and then return to 0. This indicates that the
queue is receiving time-series events from the snapshot subsystem and that the archive
subsystem is able to send the data to the archives.

Flush the archive write cache


Flush the archive write cache to write any data in memory to the disk.


Procedure
1. Open a command window.
2. Navigate to the ..\PI\adm directory.
3. Enter:
piartool -flush

Configure the PICOLLECTIVE and PISERVER tables on the primary server

Use piconfig to create one record in the PICOLLECTIVE table for the collective and to create a
record for each server in the collective in the PISERVER table. You only need to specify
identifying fields.

• PICOLLECTIVE identifying fields:
◦ Name: Name of the PI collective. Use the host name of the PI Server selected to become
the primary member. Example: uc-s1
◦ Description: Text describing the collective. Example: UC 2006 Demo Collective
◦ CollectiveID: UID of the collective. Use the SID of the existing PI Server selected to
become the primary member. Example: 08675309-0007-0007-0007-000000001001
• PISERVER identifying fields:
◦ Name: Host name of the machine running PI Server. Example: uc-s1
◦ Description: Text describing the PI Server. Example: UC 2006 Demo Server 1
◦ Collective: Name of the PI collective containing the server. Must match Name in the
PICOLLECTIVE table. Example: uc-s1
◦ FQDN: Fully qualified domain name of the host machine. Example: uc-s1.osisoft.int
◦ Role: Role of the server in the collective (0 = not replicated, 1 = primary server,
2 = secondary server). Example: 1
◦ ServerID: SID of the machine hosting PI Server. Example:
08675309-0007-0007-0007-000000001001

OSIsoft recommends creating a command file and using piconfig to run the commands in
that file.


Procedure
1. Create a text file, such as collective_create_uc.txt, in the ..\PI\adm directory.
2. Copy the following text into the file.
* Collective information
*
@tabl picollective
@mode create,t
@istr name,Description,CollectiveID
uc-s1,UC 2006 Demo Collective,08675309-0007-0007-0007-000000001001
*
* Individual server member information
*
* valid values for Role include:
* 0 NotReplicated
* 1 Primary
* 2 Secondary
*
@tabl piserver
@mode create,t
@istr name,Description,Collective,FQDN,Role,ServerID
uc-s1,UC 2006 Demo Server 1,uc-s1,uc-s1.osisoft.int,1,08675309-0007-0007-0007-000000001001
uc-s2,UC 2006 Demo Server 2,uc-s1,uc-s2.osisoft.int,2,08675309-0007-0007-0007-000000001002
3. Edit the text to specify the information for your collective and servers. If necessary, add
additional lines for additional servers in your collective.
4. Open a command window.
5. Navigate to the ..\PI\adm directory.
6. Enter: piconfig < collective_create_uc.txt

Register certificates on the primary server


Starting with PI Data Archive 2017, PI Data Archive collectives support certificate-based
authentication for each member. With this release, each secondary PI Data Archive server, as
well as the primary PI Data Archive server, can have its own unique certificate to use for
authentication purposes with the primary server.
To support this authentication mechanism, servers within the collective must register their
certificates with each other.
The piartool utility has registration functionality through the -registerhacert
--updatePublicCertOnPrimary option to register certificates among the collective members
that cannot run Collective Manager to do this automatically.
In addition, the piartool utility has reporting functionality through the -registerhacert
--reportInfoOnTarget localhost option to assist with troubleshooting certificate-related
issues.
See the PI Data Archive topic "Options for the piartools command" in Live Library (https://
livelibrary.osisoft.com).
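For example, to report certificate information for the local member while troubleshooting, run
the reporting option described above from the server's adm directory:
piartool -registerhacert --reportInfoOnTarget localhost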


Note:
If you want to use your own certificate on a primary or secondary member, open PI
Collective Manager on that computer and use the Import Certificate option. All imported
certificates must meet the following requirements:
• Have a private key
• Be configured for both client authentication and server authentication
• Have the key usage options for digital signature and key encipherment enabled

Before you start


Ensure that a PI mapping exists on the primary server that allows the Windows user
performing this procedure to connect to the primary server and to write to the server table.
Procedure
1. On the primary server, open the command window.
2. From the command prompt, change the directory to the \pi\adm path.
3. Run the command piartool -registerhacert -u:
piartool -registerhacert -u

The following message appears:


Updating the public certificate on the primary.

Back up the PI Server configuration database and archives


Use the pibackup.bat command file. The following example backs up the PI Server database,
with up to 9,999 archives, into the c:\temp\pibackup directory.

Procedure
1. Open a command window on the computer that hosts the primary server.
2. Navigate to the ..\PI\adm directory.
3. Enter: pibackup c:\temp\pibackup 9999 "01-jan-70"

Copy backup files to each secondary server


You must copy the files to the proper directories on the secondary server. PI Server must not be
running during the copy process.
Note:
Stop the secondary server by entering: pisrvstop.
Note:
If the secondary server stores the primary archive in a different directory than the
primary server does, you will need to register the primary archive on the
secondary server.
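The following is a minimal sketch of the copy step, assuming the backup was written to
c:\temp\pibackup on primary member uc-s1 and that the secondary server uses the default
directory layout; all names and paths here are examples only:
rem Run on the secondary server after stopping it with pisrvstop
xcopy /r /y \\uc-s1\c$\temp\pibackup\dat\*.* "%PISERVER%\dat"
xcopy /r /y \\uc-s1\c$\temp\pibackup\arc\*.* "%PISERVER%\dat"
xcopy /r /y \\uc-s1\c$\temp\pibackup\adm\*.* "%PISERVER%\adm"
The same xcopy pattern appears in the pirestore.bat script shown later in this guide.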


Create a non-computer-specific license file


If appropriate, create a non-computer-specific license file. If you copied the license file from the
primary server to the secondary server and if you previously ran that primary server outside of
a collective, then you have a computer-specific license file. In that case, you must edit a registry
key and delete a file to enable the secondary server to use the license.

Procedure
1. Create a text file named PI_Set_ServerRole_2.reg.
2. Insert the following text into the file:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\PISystem\PI]
"ServerRole"="2"
3. Double-click the file in Windows Explorer to merge it into the registry with Registry Editor.
4. Click Yes.
Windows updates the registry key so that PI Data Archive server can start and read the
configuration database.
5. Delete the pilicense.ud file from the ..\PI\dat directory.

Start secondary servers


Procedure
1. Start each secondary PI Data Archive server in the collective.
Run the pisrvstart.bat file in the ..\PI\adm directory. When the secondary server
starts, it connects to the primary server, reports its status, receives the status of other
servers in the collective, and retrieves configuration changes made on the primary server.
2. If needed, register the archives. You only need to register the primary archive if the
secondary server stores the primary archive in a different directory than the primary server.
a. Open a command prompt window on the computer that hosts the secondary server.
b. Navigate to the ..\PI\adm directory.
c. Type:
piartool -ar \pathname\filename

where pathname is the path to the archive file and filename is the name of the archive
file.
3. Set tuning and firewall parameters.
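As a concrete sketch of the archive registration in step 2, if the secondary member keeps its
archives in d:\pi\arc, you would register a copied archive like this (the path and file name are
hypothetical):
piartool -ar d:\pi\arc\piarch001.dat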

Register certificates on the secondary server


Starting with PI Data Archive 2017, PI Data Archive collectives support certificate-based
authentication for each member. With this release, each secondary PI Data Archive server can
have its own unique certificate to use for authentication purposes with the primary server.
To support this authentication mechanism, servers within the collective must register their
certificates with each other.


The piartool utility has registration functionality through the -registerhacert
--updatePublicCertOnPrimary option to register certificates among the collective members
that cannot run Collective Manager to do this automatically.
In addition, the piartool utility has reporting functionality through the -registerhacert
--reportInfoOnTarget localhost option to assist with troubleshooting certificate-related
issues.
See the PI Data Archive topic "Options for the piartools command" in Live Library (https://
livelibrary.osisoft.com).
Note:
If you want to use your own certificate on a primary or secondary member, open PI
Collective Manager on that computer and use the Import Certificate option. All imported
certificates must meet the following requirements:
• Have a private key
• Be configured for both client authentication and server authentication
• Have the key usage options for digital signature and key encipherment enabled

Before you start


Ensure that a PI mapping exists on the primary server that allows the Windows user
performing this procedure to connect to the primary server and to write to the server table.

Procedure
1. On a secondary server, open the command window.
2. From a command prompt, change directory to the \pi\adm directory.
3. Run the command piartool -registerhacert -u:
piartool -registerhacert -u

The following message appears:


Updating the public certificate on the primary.
4. Repeat steps 1 through 3 for each secondary server in the collective.

Verify PI Data Archive collective communication with piconfig


If you cannot use Collective Manager, you can manually verify that PI Data Archive collective
members are properly communicating. Use piconfig to check the values of CommStatus,
SyncStatus, LastSyncRecordID, LastCommTime, and LastSyncTime in the PISERVER table.
When collective members are properly communicating, these values are the same for all
servers.

Procedure
1. Open a command prompt window.
2. Navigate to the ..\PI\adm directory.
3. Enter: piconfig < pisysdump.dif
The display shows configuration output. For example, for a primary PI Data Archive server,
output looks similar to:


Collective Configuration
Name, CollectiveID, Description
-----------------------------------------------------------------
uc-s1,08675309-0007-0007-0007-000000001001,UC 2006 Demo Collective

Member Server Configuration


Name, IsCurrentServer, ServerID, Collective, Description, FQDN, Role
-----------------------------------------------------------------
uc-s1,1,08675309-0007-0007-0007-000000001001,uc-s1,UC 2006 Demo Server 1,uc-
s1.osisoft.int,1
uc-s2,0,08675309-0007-0007-0007-000000001002,uc-s1,UC 2006 Demo Server 2,uc-
s2.osisoft.int,2

Collective Status
Name, Status
-----------------------------------------------------------------
uc-s1,0

Member Server Status


Name,IsAvailable,CommStatus,SyncStatus,LastSyncRecordID,LastCommTime,LastSyncTi
me,SyncFailReason,UnavailableReason
-----------------------------------------------------------------
uc-s1,1,0,0,468,6-Nov-06 17:17:18,25-Oct-06 17:33:42,,
uc-s2,1,0,0,468,6-Nov-06 17:17:14,6-Nov-06 17:17:04,,
4. Examine the values of CommStatus, SyncStatus, LastSyncRecordID, LastCommTime, and
LastSyncTime under Member Server Status.
If the collective is communicating properly, then:
◦ CommStatus and SyncStatus will be 0.
◦ The value of LastSyncRecordID will match for all servers.
◦ LastCommTime (the last time the secondary server shared its status with the primary
server) will contain a relatively recent time stamp.
◦ LastSyncTime (the last time the server synchronized with the primary server) will
contain similar time stamps at all secondary servers.

Reinitialize a secondary server manually in PI Data Archive


The procedure to reinitialize a secondary server manually depends on the PI Data Archive
version.

Topics in this section


• Reinitialize a secondary server manually
• Reinitialize a secondary server manually (PI Data Archive 2010 R3 and earlier)

Reinitialize a secondary server manually


If you cannot use Collective Manager in your PI System, you can manually initialize or
reinitialize each secondary server in your PI Data Archive collective.
Note:
For PI Data Archive versions 2017 and later, the version of the primary member must
match the version of the secondary member.


Before you start


Ensure that a valid PI mapping exists on the primary server. A valid mapping allows you to use
the piartool command to register a secondary server certificate if it has been modified.
For more information on the piartool command, see the PI Data Archive topic "piartool
command-line options" in Live Library (https://livelibrary.osisoft.com).

Procedure
1. Stop the secondary PI Data Archive server.
a. From a command prompt, change directory to the \pi\adm directory.
b. Stop the PI Data Archive server with the command: pisrvstop.bat.
2. On the primary PI Data Archive server:
a. From a command prompt, change directory to the \pi\adm directory.
b. Run the command piartool -registerhacert -u:
piartool -registerhacert -u
c. In the \pi\adm directory, rename primarybackup.bat.ManualCollectiveReinit to
primarybackup.bat.
d. Initialize a secondary PI Data Archive server.
If you are initializing a secondary server for the first time, run the command:
primarybackup.bat -init NUM

If you are reinitializing a secondary server, run the command:


primarybackup.bat -reinit NUM

For both commands, NUM is the number of archives to include.


3. On the secondary member of the collective:
a. Copy the backup from the primary to the secondary node.
b. In the \pi\adm directory, rename
secondaryrestore.bat.ManualCollectiveReinit to secondaryrestore.bat.
c. From a command prompt, change directory to the \pi\adm directory.
d. Use the secondaryrestore.bat command to restore the backup.
secondaryrestore.bat -source c:\myprimarybackup -arc MYARCDIR

where MYARCDIR is the archive directory on your secondary node.


e. If archives on the secondary member are located at a different location than archives on
the primary member, create a new archive registration file using the following command.
If archives are located at the same location on the secondary member as the primary
member, skip this step.
pidiag -ar <path_to_primary_archive>

The archive registration file is called piarstat.dat.
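For example, if the restored primary archive resides at D:\PIarchives\piarch.001 on the secondary node (an illustrative path), you would run:

pidiag -ar D:\PIarchives\piarch.001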


f. Start the secondary server with the pisrvstart.bat command.
g. Register the certificates between the secondary and the primary server by running the
following command:

piartool -registerhacert -u
h. Manually register any archives that are not registered after reinitialization. Use the
piartool -ar command to manually register those archives.
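For example, to register an unregistered archive at an illustrative path:

piartool -ar D:\PIarchives\piarch.002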

Reinitialize a secondary server manually (PI Data Archive 2010 R3 and earlier)
If you cannot use Collective Manager in your PI System, you can manually reinitialize each
secondary server in your PI Data Archive collective when necessary.

Procedure
1. Back up the primary PI Data Archive server using the Backups tool in PI System
Management Tools (Operation > Backups).
2. Manually copy all of the backup files from the primary PI Data Archive server to a
temporary directory on the secondary PI Data Archive server.
3. Delete the following files from the copy of the backup on the secondary PI Data Archive
server:
◦ pitimeout.dat
◦ pibackuphistory.dat
4. If the installation directory on the secondary server differs from the primary server, delete
the pisubsys.cfg file from the dat directory in the temporary directory that contains the
backup files.
5. Shut down the secondary PI Data Archive server.
6. Restore the secondary PI Data Archive server:
a. Create a command file, pirestore.bat, in the ..\PI\adm directory:
@rem Restore PI files
@rem $Workfile: pirestore.bat $ $Revision: 1 $
@rem
@setlocal
@rem default source: current directory
@set pi_s_dir=%cd%
@rem default destination based on PISERVER symbol
@set pi_d_dir=%PISERVER%
@rem default archive destination set later based on pi_d_dir
@set pi_arc=
@
@if [%1] == [] (goto usage)
@goto loop
@:shift3_loop
@shift
@:shift2_loop
@shift
@:shift1_loop
@shift
@:loop
@if [%1] == [-source] set pi_s_dir=%2%
@if [%1] == [-source] goto shift2_loop
@if [%1] == [-dest] set pi_d_dir=%2%
@if [%1] == [-dest] goto shift2_loop
@if [%1] == [-arc] set pi_arc=%2%
@if [%1] == [-arc] goto shift2_loop
@if [%1] == [-go] goto shift1_loop
@if [%1] == [-?] goto usage
@if [%1] == [?] goto usage
@if [%1] == [] goto loop_end
@echo Unrecognized argument "%1%"
@goto usage
@
@:loop_end
@if [%pi_d_dir%] == [] echo Specify argument -dest or set environment variable PISERVER
@if [%pi_d_dir%] == [] (goto usage)
@
@set pi_adm=%pi_d_dir%\adm
@set pi_bin=%pi_d_dir%\bin
@set pi_dat=%pi_d_dir%\dat
@
@if [%pi_arc%] == [] set pi_arc=%pi_d_dir%\dat
@
@
@echo Copying the files to the target directories
xcopy /r /y "%pi_s_dir%\adm\*.*" "%pi_adm%"
xcopy /r /y "%pi_s_dir%\bin\*.*" "%pi_bin%"
xcopy /r /y "%pi_s_dir%\dat\*.*" "%pi_dat%"
xcopy /r /y "%pi_s_dir%\arc\*.*" "%pi_arc%"
@
@goto bat_end
@
@:usage
@echo. usage: pirestore.bat [-source s_dir][-dest d_dir][-arc a_dir][-go]
@echo.
@echo. Delete from d_dir\log\
@echo. message log files pimsg_*.dat
@echo. audit files pi*ssAudit.dat
@echo. copy archive files from s_dir\arc to a_dir
@echo. copy other files from s_dir\* to d_dir
@echo.
@echo. s_dir source directory. default %%cd%%
@echo. d_dir destination directory. default %%PISERVER%%
@echo. a_dir archive destination directory. default d_dir\dat
@echo. -go prevents accidental execution with no arguments
@echo.
@:bat_end
b. Open a command window on the computer that hosts the secondary PI Data Archive
server.
c. Navigate to the ..\PI\adm directory.
d. Enter: pirestore -source backupdirname -arc destinationarcdirname
where backupdirname specifies the temporary directory containing the backup files and
destinationarcdirname specifies the archive directory where the files will be restored.
For example, if you copied the backup files to the C:\temp\pibackup directory, and will
restore them to C:\PI\dat, then you would enter C:\temp\pibackup in place of
backupdirname and C:\PI\dat in place of destinationarcdirname.
7. If the location of the archive files on the secondary PI Data Archive server differs from the
primary PI Data Archive server, recreate the archive manager data file (piarstat.dat):
a. In the command window, navigate to the ..\PI\adm directory.
b. Enter: pidiag -ar path
where path is the full path to the primary archive file.
c. Use the Archives tool in PI System Management Tools (Operation > Archives) to register
the new primary archive on the secondary PI Data Archive server.

8. Restart the secondary server.


9. If you recreated the archive manager data file, use the piartool -ar command to re-register
any secondary archive files:
a. At the command prompt, navigate to the ..\PI\adm directory.
b. Enter: piartool -ar path
where path is the path to the secondary archive file you want to re-register.

Force synchronization with piartool


If you cannot use Collective Manager, you can use piartool to force synchronization between
a primary and secondary server in a PI Data Archive collective.

Procedure
1. Open a command window on the computer that hosts the secondary server.
2. Navigate to the ..\PI\adm directory.
3. Enter: piartool -sys -sync
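To confirm that the forced synchronization succeeded, you can list each member's synchronization status with piconfig; this is a minimal sketch modeled on the PISERVER listing shown later in this guide (the attribute selection is illustrative):

piconfig
@tabl pisys,piserver
@mode list
@ostr name,syncstatus,lastsynctime
@sele name=*
@ends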

Set synchronization and communication frequencies manually


Use piconfig to manually set the frequencies that a secondary server communicates with and
synchronizes with its primary server.

Procedure
1. Open a command prompt on the computer that hosts the primary server.
2. Navigate to the ..\PI\adm directory.
3. To set synchronization frequency, type:
piconfig
@tabl piserver
@mode ed
@istr name, syncperiod
ServerName, x

where ServerName is the name of the secondary server and x is the new synchronization
frequency.
4. To set communication frequency, type:
piconfig
@tabl piserver
@mode ed
@istr name, commperiod
ServerName, x

where ServerName is the name of the secondary server and x is the new communication
frequency.
5. Type Ctrl+C to exit.
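For example, assuming a secondary server named uc-s2 (the server name and values here are illustrative), the following input sets a 10-second synchronization period and a 20-second communication period:

piconfig
@tabl piserver
@mode ed
@istr name, syncperiod
uc-s2, 10
@istr name, commperiod
uc-s2, 20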

Remove a secondary server from a collective


Use piartool to manually remove a secondary server from a PI Data Archive collective.

Before you start


Review the buffering configuration at interfaces and servers and the host server specification
at each interface to ensure that they do not refer to the server that you want to remove.

Procedure
1. Open a command prompt on the primary server.
2. Navigate to the ..\PI\adm directory.
3. Enter:
piartool -sys -drop ServerName

where ServerName is the name of the secondary server that you want to remove (see the
example following this procedure). If you specify a primary server, the command changes
the server's role from primary to non-replicated. (Note that to change the primary server's
role to non-replicated, you must first put the server in stand-alone mode.)
4. Clear the known servers table at each client connected to the collective.
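For example, to remove a hypothetical secondary member named uc-s2 in step 3:

piartool -sys -drop uc-s2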

See also:
Control PI Data Archive stand-alone mode
Clear the known servers table

PI Data Archive collective reference topics


This section contains detailed reference material useful when working with PI Data Archive
collectives.

Topics in this section


• PI Data Archive collective configuration tables
• Replicated tables
• Non-replicated tables
• Message logs

PI Data Archive collective configuration tables


Each PI Data Archive includes the PICOLLECTIVE table and the PISERVER table. These tables
contain PI Data Archive collective configuration and status information. If you edit these tables
on the primary PI Data Archive server, the replication service replicates the changes to
secondary servers. Other read-only fields reflect statistics or run-time status.
When editing these tables, you cannot:

• Rename or remove the current server.
• Rename or remove an available server (this prevents removing an online secondary
server).
• Remove the current collective while the server is available.
• Promote another server to primary while the current primary server is available.
• Remove an available primary server from the collective.

Topics in this section


• Collective table (PICOLLECTIVE)
• Server table (PISERVER)
• PICOLLECTIVE or PISERVER table status values
• Update the PICOLLECTIVE and PISERVER tables on the primary server

Collective table (PICOLLECTIVE)


The PICOLLECTIVE table contains information about the PI Data Archive collective that the
server belongs to, including the collective's name, description, and status. Use this table along
with the PISERVER table to determine configuration and status information for each server in a
PI Data Archive collective.
Normally, the table contains one row for the PI Data Archive collective that the server belongs
to. The Name attribute links entries in the PISERVER table to entries in the PICOLLECTIVE
table. PI Data Archive presents the CollectiveID as its server identifier to client applications
through PI SDK. This allows client applications to connect to any server in a collective without
changing displays.
The primary key is NAME.
The PICOLLECTIVE table contains the following attributes:

• Name (String; editable on the primary server; example "uc-s1"). Name of the collective to
which the server belongs. Must match the collective name defined in the PISERVER table.
• CollectiveID (String; editable on the primary server; example
"08675309-0007-0007-0007-000000001001"). UID that uniquely represents the PI Data
Archive collective.
• Description (String; editable on the primary server; example "UC 2006 Demo Collective").
Optional description of the collective.
• LastCollectiveConfigChangeTime (TimeStamp; read-only; example 12-Apr-06 14:00:17).
Time stamp of the last change to the collective configuration.
• Status (Int32; read-only; example 0). Overall status of the collective (0 = good). Use the
pidiag -e command to look up the status as an error code. If the status is not good, at
least one member of the collective has a bad CommStatus or SyncStatus in the PISERVER
table.
• NewName (String). Used to rename an existing PI Data Archive collective.
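To inspect this table from the command line, you can list it with piconfig; this is a minimal sketch modeled on the PISERVER listing in the next topic (the attribute selection is illustrative):

piconfig
@tabl picollective
@mode list
@ostr name,collectiveid,status
@sele name=*
@ends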

Server table (PISERVER)


This table contains PI Data Archive configuration and status information. If the PI Data Archive
server is part of a PI Data Archive collective, the table contains one row for each server in the
collective.
The primary key is NAME. The value in Collective must match the value of Name in the
PICOLLECTIVE table.
This example displays the name, FQDN and server ID:
* (Ls - ) PIconfig> @mode list
* (Ls - ) PIconfig> @tabl pisys,piserver
* (Ls - PISYS,PISERVER) PIconfig> @ostr name,fqdn,serverid
* (Ls - PISYS,PISERVER) PIconfig> @sele name=*
* (Ls - PISYS,PISERVER) PIconfig> @ends
PiServer1,PiServer1.osisoft.dom,12db4fde-963d-4f47-bc43-65f9e026502c
* (Ls - PISYS,PISERVER) PIconfig>

The PISERVER table contains the following attributes:

• Name (String; editable on the primary server; example uc-s2). Computer host name (non-
qualified). Unique key in the PISERVER table. Each server uses this to find its own entry in
the table.
• Collective (String; editable on the primary server; example uc-s1). Name of the collective
that the server belongs to. Must match the collective name defined in the PICOLLECTIVE
table.
• CommPeriod (Int32; editable on the primary server; example 20). Frequency (in seconds)
at which a secondary server checks that it can communicate with the primary server.
Default value is 5.
• CommStatus (Int32; read-only; example 0). Status of the last secondary PI Data Archive
communication with the primary server (0 is good).
• Description (String; editable on the primary server; example UC 2006 Demo Server 2).
Optional description for the server.
• FQDN (String; editable on the primary server; example uc-s2.mycomp.com). FQDN or IP
address used to connect to collective servers.
• IsAvailable (BYTE; read-only; example 1). 1 if available for client access, 0 otherwise.
Derived from all other status fields in the table.
• IsConnectedToPrimary (BYTE; read-only; example 1). 1 indicates that the secondary PI
Data Archive server is connected to the primary PI Data Archive server. Always 1 on the
primary PI Data Archive server.
• IsCurrentServer (BYTE; read-only; example 1). 1 on the responding PI Data Archive server,
0 for all others.
• IsTCPListenerOpen (BYTE; read-only; example 1). 1 indicates that this PI Data Archive
server's TCP/IP listener is open.
• LastCommTime (TimeStamp; read-only; example 12-Apr-06 14:00:17). Last time the
secondary PI Data Archive server communicated status to the primary PI Data Archive
server.
• LastSyncRecordID (Uint64; read-only; example 68). Number of changes each PI Data
Archive server applied to the replicated tables.
• LastSyncTime (TimeStamp; read-only; example 12-Apr-06 14:00:17). Last time
synchronization succeeded on any secondary PI Data Archive server.
• NumConnections (Uint32; read-only; example 11). Total number of connections on the
specific PI Data Archive server.
• PIVersion (String; read-only; example 3.4.375.29). Version of PI Base Subsystem.
• Port (Uint32; read-only; example 5450). TCP/IP port number for communicating with PI
Data Archive.
• Role (Int32; editable on the primary server; example 2). 0 for non-replicated; 1 for
primary; 2 for secondary.
• ServerID (String; editable on the primary server; example
08675309-0007-0007-0007-000000001002). A UID representing the unique PI Data
Archive server identification.
• SyncFailReason (String; read-only). Reason that synchronization did not succeed.
• SyncPeriod (Int32; editable on the primary server; example 10). Frequency (in seconds) at
which a secondary server checks for configuration updates from the primary server. 0
indicates no automatic synchronization.
• SyncStatus (Int32; read-only; example 0). On secondary servers: the status of the last
synchronization attempt (0 is good).
• UnavailableReason (String; read-only). Reason that the server is unavailable.
• NewName (String). Used to rename an existing server.

PICOLLECTIVE or PISERVER table status values


Each entry below lists the status code, its error text, and a description.

• 0: Success.
• -10407 (No Access - Secure Object): Primary and secondary certificates are not properly
configured. Run the Fix certificate issues utility in Collective Manager. If you cannot run
Collective Manager, run piartool -registerhacert -u on the primary, then on the
secondary.
• -10420 (Direct write operations disallowed on secondary server of collective): Cannot
change configuration on a secondary server.
• -10773 (No certificate is available on the server for TLS authentication): Primary and
secondary certificates are not properly configured, or are missing from the OSIsoft LLC
Certificates store. Ensure that all servers in the collective are upgraded to PI Data Archive
2017. If there is no certificate in the OSIsoft LLC Certificates store, regenerate or import
one. To generate a new certificate, use the New-SelfSignedCertificate command in
PowerShell 5.0 or later, or the Microsoft MakeCert tool. Afterwards, run the Fix Certificate
Issues option in Collective Manager to register the certificate. If you cannot run Collective
Manager, run piartool -registerhacert -u on the primary, then on the secondary.
To import a custom certificate, use the Import Certificate option in PI Collective Manager.
• -10774 (No certificate found matching the specified parameters): No matching certificate
for this server exists in the OSIsoft LLC Certificates store. Generate or import a certificate.
To generate a new certificate, use the New-SelfSignedCertificate command in
PowerShell 5.0 or later, or the Microsoft MakeCert tool. Afterwards, run the Fix Certificate
Issues option in Collective Manager to register the certificate. If you cannot run Collective
Manager, run piartool -registerhacert -u on the primary, then on the secondary.
To import a custom certificate, use the Import Certificate option in PI Collective Manager.
• -16030 (Batch database access disabled on secondary server of PI collective): The Batch
database on the secondary server does not have batches unless archives are moved.
• -18000 (Server has not communicated its status): The primary server never connected to
this secondary.
• -18001 (Unable to process further change records; missing update in sequence): Change
record sequence error.
• -18002 (Error processing update): Change record processing error.
• -18003 (No producer found on the primary server): The primary server cannot register
with PI Update Manager to produce change records.
• -18005 (Server misconfigured; no local server defined): The server cannot find its own
host name in the table.
• -18006 (Server misconfigured; no primary server defined): The server cannot find the
primary server in the table.
• -18007 (Unable to connect to primary server): The primary server is not reachable.
• -18008 (Unable to sign up for changes on the primary server): The primary server is not
responding.
• -18009 (Unable to unregister for changes on the primary server): The primary server does
not acknowledge the request.
• -18010 (Unable to disconnect from the primary server): The primary server does not
acknowledge the disconnect.
• -18011 (Server misconfigured; collective not found): The server cannot find the collective
name.
• -18012 (Invalid role defined for this server): The server role is not valid.
• -18013 (Member server is in error): The collective status shows this when a server is in
error.
• -18014 (Error reading change record from the primary server): Error reading status
information from the primary server.
• -18015 (Unable to send updates to secondary servers): The primary server is unable to
send configuration changes to the secondary servers.
• -18016 (Unable to send status update to the primary server): The secondary server is
unable to send a status update to the primary server.
• -18017 (Unable to get change records from the primary server): The secondary server is
unable to get configuration changes from the primary.
• -18018 (Server has not communicated): The secondary server has not communicated
with the primary server within the designated CommFailureGracePeriod and will be
marked unavailable.
• -18019 (Member has been removed from the PI Sys Database): A status indicating that
the server has been marked for deletion from the collective.
• -18020 (Member clock has drifted too far from the primary's clock): The secondary
server's system clock has drifted too far from the primary server's clock, as defined by
ClockDiffLimit. Replication is temporarily halted.
• -18021 (Failed to enable replication): Unable to enable replication.
• -18022 (Server has failed to remain synchronized): The secondary server has failed to
stay in sync past the designated SyncFailureGracePeriod and will be marked unavailable.
By default, this grace period is disabled.
• -18023 (Unable to save unprocessed configuration changes to disk): The secondary server
was unable to save its unprocessed change records to disk.
• -18024 (Unable to load unprocessed configuration changes from disk): The secondary
server could not read unprocessed configuration changes from disk.
• -18025 (Server is not a member of a collective): The server is a standalone server and is
not part of a collective.
• -18026 (Promotion is not supported on the primary server): Promotion of another server
is not supported on the primary server.
• -18027 (Server is not ready to process change records): The secondary server is not ready
to process changes. This usually indicates that one of the core subsystems is not running.
• -18028 (Unable to sign up all secondary servers to receive configuration changes): The
primary server is unable to sign up every secondary server to receive configuration
change records, potentially breaking replication with that secondary.
• -18029 (Unable to sign up all secondary servers to receive configuration changes):
Attempted to set an attribute specific to replication for a non-replicating server.
• -18030 (Name cannot contain spaces or other invalid characters): Server name contains
invalid characters.
• -18031 (Localhost is not a valid name): Server name cannot be "localhost."
• -18032 (FQDN is invalid): The FQDN must contain only letters, numbers, dashes,
underscores, and dots.
• -18033 (Two servers have the same FQDN or they resolve to the same IP address): Two
servers' FQDNs resolve to the same IP address.
• -18034 (Unable to produce update for secondary servers): The primary server is unable
to send a change to the secondary servers.
• -18035 (Error reading update): The secondary server cannot read configuration changes
sent by the primary server.
• -18036 (Unable to rename the current server): The current server cannot be renamed
while running.
• -18037 (Unable to rename a server while it's online): Renaming a server is not supported
while that server is marked available.
• -18038 (Unable to remove the current server): The current server cannot be removed
while running.
• -18039 (Unable to remove a server while it's online): Removing a server is not supported
while that server is marked available.
• -18040 (Unsupported replication change record version; upgrade may be required): The
secondary server must be upgraded to be able to continue processing configuration
changes from the primary server.
• -18041 (Two servers have the same server ID): The collective reports this error if two
servers have the same server ID.
• -18045 (License has expired): A server reports this error and becomes unavailable if its
license expires.
• -18046 (Unable to resolve the FQDN): A server's FQDN cannot be resolved.
• -18047 (Unable to resolve the IP Address): A server's IP address cannot be resolved.
• -18048 (Unable to remove collective): The collective cannot be removed from the PI Data
Archive table while the server is running as a member of that collective.
• -18049 (Unable to get change records from the primary server): The secondary server
cannot read configuration changes sent by the primary server.
• -18050 (Unable to initialize cryptography): A server reports this error if it cannot
initialize the cryptographic certificate.
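For example, to translate one of these status codes into its error text, run pidiag -e from the \PI\adm directory (the code shown is illustrative):

pidiag -e -18007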

Update the PICOLLECTIVE and PISERVER tables on the primary server


Use piconfig to create one record in the PICOLLECTIVE table for the collective and to create
a record for each server in the collective in the PISERVER table. You need to specify identifying
fields for both tables. OSIsoft recommends that you create a command file and use piconfig
to run the commands in that file.
For the PICOLLECTIVE table, specify:

• Name: Name of the PI Data Archive collective. Use the host name of an existing PI Data
Archive server. Example: uc-s1
• Description: Text describing the collective. Example: UC 2006 Demo Collective
• CollectiveID: UID of the collective. Use the SID of the existing PI Data Archive server.
Example: 08675309-0007-0007-0007-000000001001

For the PISERVER table, specify:

• Name: Host name of the machine running PI Data Archive. Example: uc-s1
• Description: Text that describes the PI Data Archive server. Example: UC 2012 Demo
Server 1
• Collective: Name of the PI Data Archive collective containing the server. Must match the
Name field in the PICOLLECTIVE table. Example: uc-s1
• FQDN: Fully qualified domain name of the host machine. Example: uc-s1.osisoft.int
• Role: Role of the server in the collective (0 = not replicated; 1 = primary; 2 = secondary).
Example: 1
• ServerID: SID of the machine that hosts PI Data Archive. Example:
08675309-0007-0007-0007-000000001001

Procedure
1. Create a text file, such as collective_create_uc.txt, in the ..\PI\adm directory.
2. Copy the following text into the file:
* Collective information
*
@tabl picollective
@mode create,t
@istr name, Description, CollectiveID
uc-s1,UC 2012 Demo Collective,08675309-0007-0007-0007-000000001001
*
* Individual server member information
*
* valid values for Role include:
* 0 NotReplicated
* 1 Primary
* 2 Secondary
*
@tabl piserver
@mode create,t
@istr name,Description,Collective,FQDN,Role,ServerID
uc-s1,UC 2012 Demo Server 1,uc-s1,uc-s1.osisoft.int,1,08675309-0007-0007-0007-000000001001
uc-s2,UC 2012 Demo Server 2,uc-s1,uc-s2.osisoft.int,2,08675309-0007-0007-0007-000000001002

3. Edit the text to specify the information for your collective and servers. If necessary, add
additional lines for additional servers in your collective.
4. Open a command window.
5. Navigate to the ..\PI\adm directory.
6. Enter:
piconfig < collective_create_uc.txt

Replicated tables
The replication service in a PI Data Archive collective replicates the key values of critical tables.
By replicating the key values, the service ensures that the values are the same on all servers in
the collective. With identical key values on all the servers, clients and interfaces can connect to
any server to write or access data. For example, because the point tables on the servers share
the association for Tag, PointID, and RecNo, interfaces can send identical time-series data to all
servers; interfaces need not track different PointID values for each server. Similarly, clients can
efficiently retrieve identical time-series data from each server without changing any
configuration.
You can only change values of replicated tables at the primary server.
Each entry below lists the table's primary or unique keys, identity keys, and foreign keys,
whether the table replicates to secondary servers, and whether it can be configured on a
secondary server:

• DBSECURITY: primary/unique key DBName; foreign keys UserID, GroupID; replicates to
secondary: yes; configurable on secondary: no.
• PIAFLINK: replicates to secondary: no; configurable on secondary: no.
• PIATRSET: primary/unique key Set; replicates: no; configurable: no.
• PIBAALIAS: primary/unique key Alias; replicates: no; configurable: yes, but not
recommended.
• PIBAUNIT: primary/unique key UnitName; identity key UnitID; foreign keys PointID,
UserID, GroupID; replicates: no; configurable: yes, but not recommended.
• PICOLLECTIVE: primary/unique key Name; replicates: yes; configurable: no.
• PIDS: primary/unique key Set; identity key SetNo; replicates: yes; configurable: no.
• PIFIREWALL: primary/unique key Hostmask; replicates: no; configurable: yes.
• PIGROUP: primary/unique key Group; identity key GroupID; replicates: yes; configurable:
no.
• PIIDENTITY: primary/unique key Ident; identity key IdentID; replicates: yes;
configurable: no.
• PIIDENTMAP: primary/unique key IdentMap; identity key IdentMapID; foreign key
PIIdent; replicates: yes; configurable: no.
• PIMAPPING: replicates: yes; configurable: no.
• PIMODULES: primary/unique key UniqueID; foreign keys PointID, UserID, GroupID;
replicates: yes; configurable: no.
• PIPOINT: primary/unique key Tag; identity keys PointID, RecNo; foreign keys SetNo,
UserID, GroupID; replicates: yes; configurable: no.
• PIPTCLS: primary/unique key Class; replicates: no; configurable: no.
• PISERVER: primary/unique key Name; foreign key PICollective.Name; replicates: yes;
configurable: no.
• PITIMEOUT: primary/unique key Name; replicates: no; configurable: yes.
• PITRUST: primary/unique key Trust; foreign keys UserID, GroupID; replicates: yes;
configurable: no.
• PIUSER: primary/unique key User; identity key UserID; foreign key GroupID; replicates:
yes; configurable: no.

Non-replicated tables
The replication service does not replicate data for several tables:

• Tables storing meta-information about the attributes in the PIPOINT table:


◦ PIPTCLS — Contains information about point classes
◦ PIATRSET — Contains information about attribute sets
• Batch subsystem tables:
◦ PIBAUNIT
◦ PIBAALIAS
See the Batch Subsystem chapter in the PI Data Archive Applications User Guide for more
details. OSIsoft recommends the Batch Database (BDB) for new batch applications because
it builds on the Module Database (MDB), which is replicated.
• Machine-specific configuration tables:
◦ PI_GEN
◦ PITIMEOUT
◦ PIFIREWALL
Use PI SMT or piconfig to change these tables to accommodate hardware-specific or
network-specific conditions.
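For example, because PITIMEOUT is not replicated, you can tune a timeout parameter on one member without affecting the others. A minimal piconfig sketch, assuming the PITIMEOUT table's name and value fields (the parameter name and value here are illustrative):

piconfig
@tabl pitimeout
@mode ed
@istr name, value
ArcMaxCollect, 1500000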

Message logs
The following table lists some of the messages you might find in the message log pertaining to
replication. You can use PI SMT or pigetmsg to search for these messages.
Replication-related messages include:

• Source pirepl, reported by the primary server: online status, cryptography status, and
replication errors. Replication errors refer to failures to produce messages for the
secondary servers.
• Source pirepl, reported by a secondary server: online status, cryptography status,
connectivity status, and replication status.
• Source is the secondary server name, reported by the primary server: replication queue
status.
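For example, to review the most recent hour of messages from the command line and scan for the pirepl source (the time range here is illustrative), run the following from the \PI\adm directory:

pigetmsg -st "*-1h"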



PI AF high availability administration
You can configure high availability features for PI AF in many ways, including load-balanced PI
AF servers, SQL Server mirroring, SQL Server replication, Microsoft Cluster Service (MSCS), or
combinations of these methods. For general information, see PI AF high-availability solutions.
For information about PI AF installation, see the PI Server Installation topic "PI AF server
installation and upgrade" in Live Library (https://livelibrary.osisoft.com).



PI System data collection interfaces and high
availability
Interfaces are the components of the PI System that collect time-series data from data sources
and send the data to PI Data Archive for storage. To implement HA, configure interfaces to
support failover and n-way buffering. Failover ensures that time-series data reaches PI Data
Archive even if one interface fails; n-way buffering ensures that identical time-series data
reaches each PI Data Archive server in a collective. To support failover, install a redundant copy
of an interface on a separate computer. When one interface is unavailable, the redundant
interface automatically starts collecting, buffering, and sending data to PI Data Archive. To
support n-way buffering, configure the buffering service on interface computers to queue data
independently to each PI Data Archive server in a collective.
In some deployments, interfaces send output points (that is, data from PI Data Archive) to the
data source. With proper configuration, failover considers the availability of PI Data Archive for
output points in addition to the availability of the interface.

Topics in this section


• Interface failover
• N-way buffering for PI interfaces

Interface failover
With interface failover, you configure redundant interfaces—that is, you configure interface
software on two different computers to record data from a single data source. If one computer
fails, the redundant computer takes over. With redundant interfaces, you minimize data loss by
ensuring that there is no single point of failure.
There are three types of interface failover: hot, warm, and cold.

• Hot failover
Both interfaces collect data from a source but only one interface reports that data to PI Data
Archive. If one interface fails, the redundant interface immediately begins sending data to PI
Data Archive without any data loss. Because the data source is connected and sending data
to two interfaces, this type of failover requires the most computing resources.

• Warm failover
The redundant interface maintains a connection with the data source but does not collect
data. If the primary interface fails, the redundant interface begins collecting and sending
data to PI Data Archive. Minimal data loss might occur while the data-collection process
starts.

• Cold failover
The redundant interface only connects with the data source after the primary interface fails.
Some data loss might occur while the connection process initiates (including tag loading)
and while the data collection process starts. Because connections only occur when needed,
this type of failover requires the least computing resources.

Most PI interfaces use the UniInt (Universal Interface) Failover service to manage failover. For
more information on interface failover, see UniInt Interface User Manual.

Topics in this section


• Output points and interface failover
• Interface failover configuration approaches
• Configure interface failover using shared-file synchronization

Output points and interface failover


If you have a PI Data Archive collective, and PI Data Archive sends output points to the interface
in your deployment, you can use interface failover to ensure the availability of the PI Data
Archive server that provides output points.
Each interface receives output points from a specific PI Data Archive server or collective
member. If that PI Data Archive server becomes unavailable, the interface will no longer receive
output points.
However, you can configure each interface to receive output points from a different collective
member. If you are using hot or warm failover and the PI Data Archive server connected to the
primary interface fails, the redundant interface takes over and will receive output points from
its collective member and report time-series data to the collective. Note that PI Data Archive-
induced failover only occurs if the redundant interface remains connected to the data source.

Interface failover configuration approaches


There are two approaches to configuring interface failover: synchronization through the data
source (phase 1 failover), and synchronization through a shared file (phase 2 failover). Phase 1
failover is now deprecated and is not recommended. This guide tells you how to configure
phase 2 failover.

Shared-file synchronization (phase 2 failover)


With shared-file synchronization, UniInt Failover writes information to a shared file to
communicate status and to synchronize operations between two interfaces. With this method,
UniInt Failover can provide hot, warm, or cold failover. With hot failover, no data loss occurs
when an interface fails. With warm or cold failover, however, some data loss might occur when
an interface fails.

You must choose a location for the shared file. OSIsoft recommends the following best
practices:

• Store the shared file on a file-server computer that has no other role in the data-collection
process. Do not store the file on the PI Data Archive server or interface computers.
• Exclude the location of the shared file from virus scanning.

Configure interface failover using shared-file synchronization


Before you start

• Stop your interfaces.


• Choose a location for the shared file.
• Select a unique failover ID number for each interface.

Procedure
1. Configure the shared file.
a. Choose a location for the shared file.
b. Create a shared file folder and assign permissions that allow both the primary and
redundant interfaces to read and write files in the folder.
c. Exclude the folder from virus scanning.
2. On each interface computer, open PI Interface Configuration Utility and select the interface:
a. Click Start > All Programs > PI System > PI Interface Configuration Utility.
b. Select the interface.
3. If you have a PI Data Archive collective and PI Data Archive sends output points to this
interface, point each interface to a different collective member:

a. In the page tree, select General.


b. Under PI Host Information, set SDK Member to a collective member for the interface.
This property sets which PI Data Archive server in the collective sends configuration
data and output points to the interface. If you set each interface to a different collective
member, you enable failover when the PI Data Archive server that sends output points
becomes unavailable.
c. Set API Hostname to the host of the selected SDK Member.
The interface uses this information to connect to the PI Data Archive server that provides
configuration data. The drop-down list shows the host specified in various formats. You
can specify the host as an IP address, a path, or a host name. However, if you enable
buffering, you must specify the buffered server names in the same format, otherwise
buffering will not work.
4. Configure the failover parameters at each interface.
a. In the page tree under UniInt, select Failover.
b. Select the Enable UniInt Failover check box to enable the properties on this page.
c. Select Phase 2 to indicate shared-file synchronization.
d. In Synchronization File Path, specify the directory and file name of the synchronization
file (click Browse to select the directory and use the default file name).
e. In UFO Type, select the failover type.
f. In Failover ID# for this interface, enter the unique failover ID you have selected for this
interface.
g. In Failover ID# for the other interface, enter the unique failover ID you have selected for
the alternate interface and specify the path to that interface (click Browse to select the
interface).
5. From the interface connected to the primary server, create the digital state tags to support
failover.
a. On the UniInt Failover page, right-click a tag and choose Create UFO_State Digital Set on
Server XXX where XXX is the name of the PI Data Archive collective or server.
b. Click OK.
c. Right-click a tag and choose Create all points (UFO Phase 2). PI ICU creates the tags on PI
Data Archive.
6. Check that the user from each interface has permission to write to the shared file.
a. In the page tree, select Service.
b. Verify that the user name assigned in Log on as can read and write to the folder that will
store the shared file.
7. Click Apply.
8. Restart each interface.
The first interface that starts will create the shared file for synchronization.
9. Check the pipc.log file for any errors or problems.
The digital state tags that you created should be generating data.

N-way buffering for PI interfaces


You can use a buffering service to control the time-series data flowing from a PI interface to PI
Data Archive. When PI Data Archive is not available, the buffering service temporarily stores
the PI interface data. Once the server is available, the buffering service sends the data to PI
Data Archive in the proper order. To support PI Data Archive HA, you configure the buffering
service to use n-way buffering. With n-way buffering, the buffering service queues data
independently to each server in a PI Data Archive collective.

Topics in this section


• Buffering services
• PI Buffer Subsystem configuration
• Buffering from an interface on the PI Data Archive computer
• Batch interfaces and buffering

Buffering services
The PI System offers two services to implement buffering at interfaces. Only one of them, PI
Buffer Subsystem, supports buffering for clients.

• PI Buffer Subsystem (pibufss)


PI Buffer Subsystem is the best option for most environments, particularly if you use
version 4.3 or later. Starting with version 4.3, PI Buffer Subsystem supports buffering to
multiple servers and collectives.

• API Buffer Server (bufserv)


API Buffer Server is needed only by those with unusual configurations, for example, those
connecting to older versions of PI Data Archive.

PI Buffer Subsystem configuration


Use PI ICU to configure an interface to use PI Buffer Subsystem and n-way buffering. For
interfaces that were already configured to use PI Buffer Subsystem before you added a PI Data
Archive collective, the collective-creation process automatically configures n-way buffering.
You can verify proper n-way buffering configuration.

Topics in this section


• Configure n-way buffering for interfaces with PI Buffer Subsystem
• Buffering configuration when you add a PI Data Archive server to a collective and additional
steps for using interface failover
• Configure PI Buffer Subsystem to send data to select collective members
• Verify that buffered data is being sent to PI Data Archive

Configure n-way buffering for interfaces with PI Buffer Subsystem


The procedure in this section describes how you can use PI Interface Configuration Utility (PI
ICU) to configure PI Buffer Subsystem and n-way buffering for an interface that sends data to a
PI Data Archive collective.
Note:
If you have configured interface failover, you must configure n-way buffering at both the
primary interface and the redundant interface. Use PI ICU to ensure that each interface is
dependent on PI Buffer Subsystem.

Procedure
1. Click Start > All Programs > PI System > PI Interface Configuration Utility.
2. In Interface, select the interface.
3. In the page tree, select General.
4. Depending which version of PI Buffer Subsystem is installed on this computer, refer to the
appropriate instructions:
◦ Start buffering (PI Buffer Subsystem version 4.3 or later)
◦ Start buffering (PI Buffer Subsystem versions earlier than 4.3)
5. Verify that buffering is working as expected by doing one of the following:
◦ If you are using PI Buffer Subsystem 4.3 or later, view the Buffering Manager dashboard.
◦ If you are using an earlier version, see Verify that buffered data is being sent to PI Data
Archive.


Start buffering (PI Buffer Subsystem version 4.3 or later)


PI ICU assists in configuring and running buffering.

Before you start


These instructions apply only if PI Buffer Subsystem version 4.3 or later is installed on this
computer. If version 3.4.380.79 or earlier is installed, refer to Start buffering (PI Buffer
Subsystem versions earlier than 4.3).
Note:
The version of PI Buffer Subsystem currently installed determines the process you follow
to start buffering. It does not matter whether buffering is configured on this computer, or
whether you use PI Buffer Subsystem or API Buffer Server.

Procedure
1. Click Tools > Buffering.
2. When prompted, confirm that you want to configure PI Buffer Subsystem. If you currently
use API Buffer Server (bufserv), you may need to confirm more than one prompt.
The Buffering Manager window opens. This indicates one of two things:
◦ This computer is configured to buffer data using API Buffer Server (bufserv). In this case,
before you continue, review the information in the Buffering Manager window regarding
upgrades from API Buffer Server.
◦ This computer is not configured to use any form of buffering.
3. To configure buffering, follow the instructions in the Buffering Manager window.
4. After you finish, return to the PI Interface Configuration Utility window. Select each interface,
and on the General page under PI Host Information, look at the Buffering Status setting:
◦ If Buffering Status is On, the buffering configuration for the server to which this interface
sends data is complete. (This is the server specified in the API Hostname field for this
interface.)
◦ If Buffering Status is Off, you need to configure the server specified in the API Hostname
field to receive buffered data from this interface. To add the server to the buffering
configuration, click the Enable button and follow the instructions on the Buffering
Manager screen.

Start buffering (PI Buffer Subsystem versions earlier than 4.3)


PI ICU assists in configuring and running buffering (API Buffer Server or PI Buffer Subsystem).

Before you start


These instructions apply only if PI Buffer Subsystem version 3.4.380.79 or earlier is installed
on this computer. If version 4.3 or later is installed, refer to Start buffering (PI Buffer
Subsystem version 4.3 or later).
Note:
The version of PI Buffer Subsystem currently installed determines the process you follow
to start buffering. It does not matter whether buffering is configured on this computer, or
whether you use PI Buffer Subsystem or API Buffer Server.

Procedure
1. Choose Tools > Buffering to display the Buffering dialog box.
2. Use the Choose Buffer Type page to select a buffering option:
◦ Disable buffering
◦ Enable PI Buffer Subsystem
◦ Enable API Buffer Server
3. Use the Buffering Settings page to change default settings.

4. Use the Buffered Servers page to select one or more servers that you want to buffer data to.
5. Use the API Buffer Server Service and PI Buffer Subsystem pages to configure and control the
buffering services.

Buffering configuration when you add a PI Data Archive server to a collective and additional
steps for using interface failover
PI Buffer Subsystem automatically uses n-way buffering and sends data to all servers in a PI
Data Archive collective. In many cases, you only need to verify PI Buffer Subsystem operation
after you add a server to a PI Data Archive collective.

Additional steps for those using interface failover


If you have configured interface failover and PI Data Archive sends output points to the
interface, you should also point a redundant interface to a secondary collective member.
Note:
Restart PI ICU after upgrading to a PI Data Archive collective.

Point a redundant interface to a secondary collective member


If you use PI Buffer Subsystem 4.3 or later, follow steps 1 through 4 below. If you use PI Buffer
Subsystem versions 3.4.380 and earlier, follow all steps below.

Before you start


If you just upgraded from a single PI Data Archive server to a collective, restart PI ICU.

Procedure
1. In the PI ICU page tree, select General.
2. Under PI Host Information, set SDK Member to a secondary collective member.
This property sets which PI Data Archive server in the collective sends the interface
configuration information and output points. If you set each interface to a different
collective member, you enable failover when the PI Data Archive server that sends output
points becomes unavailable.
3. Set API Hostname to match.
The interface uses this information to connect to the PI Data Archive server that provides
configuration data. The drop-down list shows the host specified in various formats. You can
specify the host as an IP address, a path, or a host name. However, when you configure the
buffered server list, you must specify the buffered server names in the same format,
otherwise buffering will not work.
4. Click Apply.
Note:
Follow the remaining steps only if you are using PI Buffer Subsystem 3.4.380 or
earlier.
5. Select Tools > Buffering.
6. In the Buffering dialog box task list, select Buffered Servers.

7. Verify that the Replicate data to all collective member nodes check box is selected and that
the server list contains the server and format specified in API Hostname.
8. If necessary, click the appropriate entry under Buffered Server Names to change the format.
9. Click Yes at the prompt to restart PI Buffer Subsystem and dependent interfaces.

Configure PI Buffer Subsystem to send data to select collective members


Use these instructions only for PI Buffer Subsystem 4.3 or later.
For certain configurations, you may not want to send data to all collective members. For
example, if collective members are in different networks, you may want to send data only to
servers on the local network.

Before you start


Configure PI Buffer Subsystem to send data to all servers in a collective.

Procedure
1. Click Start > All Programs > PI System > PI Interface Configuration Utility.
2. Select Tools > Buffering.
3. In the Buffering Manager window, click the Settings link.
4. In the Buffering Settings window, select the collective member to which you do not want PI
Buffer Subsystem to send data.
5. In the Buffering list, select Disallowed.
6. Click Save.

Results
PI Buffer Subsystem no longer sends data to the selected server. To send data to this server, you
can configure a PI to PI interface.

Verify that buffered data is being sent to PI Data Archive


Use these instructions only for PI Buffer Subsystem versions 3.4.380 and earlier. For later
versions of PI Buffer Subsystem, use Buffering Manager to verify buffering status.

Procedure
1. In a command window, navigate to the \PIPC\bin directory.
2. Enter: pibufss -cfg
3. In the resulting display, note the number of total events sent.
4. Wait a few seconds, then enter pibufss -cfg again.
You may want to repeat this step one or two more times. If buffering is working properly,
the number of total events sent increases each time. The number of queued events
should remain at or near zero.

Buffering from an interface on the PI Data Archive computer


You might install some interfaces, such as a Performance Monitor interface or a PI to PI
interface, on the PI Data Archive computer. You can use a buffering service to send data from
that interface to all the servers in a collective or to specific other servers. You can use either
buffering service for interfaces on the PI Data Archive computer, but procedures differ for each
service.
Use caution when selecting the servers to which you send buffered data. For performance monitoring
interfaces, which collect data about a particular server, you might want to send data to all
servers in a collective, including the server hosting the interface. On the other hand, for a PI to
PI interface that collects data stored on the host PI Data Archive server, you do not want to
send the data back to that server; instead you probably only want to send the data to other
servers in its collective.

Topics in this section


• Configure PI Buffer Subsystem 3.4.380 or later on a PI Data Archive computer
• Configure PI Buffer Subsystem 3.4.375 on a PI Data Archive computer

Configure PI Buffer Subsystem 3.4.380 or later on a PI Data Archive computer
For an interface installed on a PI Data Archive computer, you can use PI Buffer Subsystem to
send interface data to the server or servers in its collective.
If you have PI ICU version 1.4.9 or later and PI Buffer Subsystem version 3.4.380 or later, you
can use PI ICU to configure PI Buffer Subsystem on a PI Data Archive computer.

Before you start


On the PI Data Archive computer, install and configure the interface for which data will be
buffered.
Note:
Do not use localhost as the server name in the interface configuration. Instead, use the
actual server name. Buffering to localhost can cause problems for PI Performance
Equation Scheduler, PI Totalizer, and PI Alarm.

Procedure
• Use PI ICU to configure PI Buffer Subsystem as you would on an interface computer, but as
mentioned above, use caution when selecting buffered servers.
See Configure n-way buffering for interfaces with PI Buffer Subsystem for instructions.

Configure PI Buffer Subsystem 3.4.375 on a PI Data Archive computer


For an interface installed on a PI Data Archive computer, you can use PI Buffer Subsystem to
send interface data to the server or to servers in its collective.
If you have a version of PI ICU older than 1.4.9 and PI Buffer Subsystem 3.4.375, you need to
manually configure PI Buffer Subsystem on a PI Data Archive computer as described in the
procedure below. In this case you must install PI Buffer Subsystem, specify the servers in the
initialization file, and set service dependencies.

Before you start


On the PI Data Archive computer, install and configure the interface for which data will be
buffered.
Note:
Do not use localhost as the server name in the interface configuration. Instead, use the
actual server name. Buffering to localhost can cause problems for PI Performance
Equation Scheduler, PI Totalizer, and PI Alarm.

Procedure
1. Install PI Buffer Subsystem on the PI Data Archive computer where the interface is installed.
2. Configure PI Buffer Subsystem by editing the piclient.ini file.
a. Open the piclient.ini file, found in the \PIPC\DAT\ directory.
b. Edit the file to include the RUNSONSERVER parameter in the PIBUFSS section and to list
the servers that you are sending data to in the BUFFEREDSERVERLIST section.
For example, if you are sending data from the interface to two servers:
[APIBUFFER]
BUFFERING=1
[PIBUFSS]
BUFFERING=1
AUTOCONFIG=1
RUNSONSERVER=1
[BUFFEREDSERVERLIST]
BUFSERV1= MyPIDataArchiveServer1
BUFSERV2= MyPIDataArchiveServer2
c. Save the file.
3. Use PI ICU to add PI Buffer Subsystem as a dependency and to start the interface.
a. Click Start > All Programs > PI System > PI Interface Configuration Utility.
b. In Interface, select the interface you want to buffer.
c. In the page tree of PI ICU, click Service.
d. At the prompt to add a dependency on API Buffer Server, click No.
e. Under Installed services, select PIBufss and move it to the Dependencies list.
f. Click Apply.
g. Start or restart the interface.
h. At the prompt to start the PIBufss service, click Yes.
PI ICU starts the PI Buffer Subsystem service and then starts the interface service.

Batch interfaces and buffering


Batch interfaces write data only to the primary server in a PI Data Archive collective. While
data can be buffered to collective members, it will not function as it does on the primary.

Therefore, OSIsoft does not recommend buffering for computers that run only batch interfaces.
For computers running both a batch interface and another interface that can be buffered,
buffering is recommended. Refer to the documentation for your interface for instructions on
configuring the interface without buffering.

PI System clients and high availability
Clients are the components of the PI System used to access and view data on PI Data Archive
and to use features such as tag search and performance equations. Implementing high
availability for client connections allows you to minimize the disruption to clients if the PI
Data Archive server goes down.
High availability is implemented for PI Data Archive through a PI Data Archive collective. A PI
Data Archive collective is a configuration of multiple servers that act as a logical PI Data
Archive server in your PI System to provide high availability, disaster recovery, load
distribution, increased scalability, and connection balancing. Each server in a collective is
called a member of the collective. See PI Data Archive high availability administration.
To implement high availability between the client and PI Data Archive, you must first configure
multiple PI Data Archive servers into a collective. Then, configure the client to connect to any PI
Data Archive server in that collective and seamlessly switch to another server within the
collective in the event of a failure or disruption of the primary server.
There are different types of client connections available to the PI Data Archive server,
including:

• AF SDK
Microsoft .NET assembly that provides access to objects and features of PI Asset
Framework. PI AF SDK is available for both 32-bit and 64-bit Windows operating systems.
Some AF SDK clients include PI System Explorer and PI Vision.
Note:
This client connection requires PI AF Client 2018 or later.

• PI SDK
The COM-based software development kit for PI System applications. PI SDK is a set of
programming libraries for development of Microsoft Windows client programs or interfaces
that can communicate with most PI Data Archive versions (PI Server 3.2.357 and up) on any
supported operating system.
You may also want to implement buffering to protect against data loss if PI Data Archive
becomes unavailable. For details, see N-way buffering for PI clients.

Topics in this section


• Client failover
• Client failback
• Client connection balancing
• Configure connection for AF SDK clients
• Configure connection for PI SDK clients
• N-way buffering for PI clients

Client failover
When the client connection to the PI Data Archive server is configured for high availability,
clients automatically connect to another PI Data Archive server within the collective if the
current server becomes unavailable. This behavior is known as client failover. When this
occurs, failover automatically switches the client over to another PI Data Archive server of the
collective to minimize the effects of the disruption. In addition, when the original server comes
back online and becomes visible to the client, the client switches back automatically.
Regardless of the connection type, the client attempts to connect to another PI Data Archive
server within the collective based upon a set of factors such as connection preference set for
each client application and connection priority set across all applications on the client
machine. The application-specific connection preference is the first factor considered by
failover, and can be set to require the primary PI Data Archive server, to prefer the primary
server, or to allow a connection to any member of the collective. If the connection preference is
set to Any, failover considers the connection priority values, which are set for all the
applications on the client machine. In this scenario, the client fails over to the server with the
highest priority value.

Procedure
1. Configure failover.

Configure failover
Failover is enabled by default. However, the way failover occurs depends upon how your client
connection is configured.

Procedure
• Set the connection priority values for the client machine to configure the failover behavior
when a disruption occurs.
◦ Specify connection priority for AF SDK clients
◦ Specify the connection priority for PI SDK clients
The connection priority values are set for all client applications on the client machine.

Client failback
Depending on how the client connection is configured, failback automatically switches the
client back to either the primary server of the PI Data Archive collective or a server with a
higher connection priority value.
Failover occurs when the primary PI Data Archive server for the client becomes unavailable.
When this occurs, the client automatically connects to another server within the collective to
minimize the disruption for the client.
Failback attempts to restore the client connection to the server that was in use before the
disruption and failover occurred, depending on the configuration set for the connection. If the
client connection is set to prefer the primary server, it checks for the primary server to become
available and switches to that server. If the connection preference is set to any, the client
connection checks for any server with a higher connection priority value available and
switches to that server.

Configure failback
Failback is enabled by default, but only for PI AF Client 2018 and later, and only with AF SDK
client connections.
Note:
Failback is not supported for PI SDK client connections.

Procedure
1. Check the version of PI AF Client for your client application.
◦ If your PI AF Client version is PI AF Client 2018 or later, proceed to the next step.
◦ If your PI AF Client version is 2017 R2 or earlier, you must upgrade to version PI AF
Client 2018 or later for your client application. See the PI Server Installation topic
"Upgrade PI AF Client" in Live Library (https://livelibrary.osisoft.com).
2. Set the connection preference to the failback behavior you want for your client
application. See Specify connection preferences for AF SDK clients. The connection
preference is set at the client application level.
3. If you set the connection preference to Any (in scenarios where you want connection
balancing between servers of the collective), set the connection priority values for the client
machine. See Specify connection priority for AF SDK clients. The connection priority
values are set for all client applications on the client machine.

Client connection balancing


Connection balancing spreads out the overhead associated with client connections across the
PI Data Archive collective, distributing client connections among the various servers in the
collective. Spreading connections across more than one server avoids overloading a single
dedicated server.
Consider using connection balancing in scenarios where the servers in the collective have
similar host computer specifications and network configurations.
It is important to differentiate connection balancing from load balancing. Connection balancing
distributes requests from the client randomly across multiple available servers in the
collective. Load balancers control traffic from the data source and distribute that traffic to the
best available server in the collective. AF SDK only enables connection balancing and does NOT
perform load balancing.

Configure connection balancing


PI Server 2018 introduces connection balancing for AF SDK clients.
Note:
This feature is available only for AF SDK applications. For a list of products that use AF
SDK, see the Buffering topic "PI products and buffering programs" in Live Library
(https://livelibrary.osisoft.com).

Procedure
1. Check the version of PI AF Client for your client application.

◦ If your PI AF Client version is PI AF Client 2018 or later, proceed to the next step.
◦ If your PI AF Client version is 2017 R2 or earlier, you must upgrade to version PI AF
Client 2018 or later for your client application. See the PI Server Installation topic
"Upgrade PI AF Client" in Live Library (https://livelibrary.osisoft.com).
2. Set the connection preference to Any for the client application(s). See Specify connection
preferences for AF SDK clients.
For every connection that participates in connection balancing, you must change the
connection preference from the default value Prefer Primary to the value Any. This is
necessary because Prefer Primary interferes with the ability to distribute the
connections equally among the servers within the collective.
3. Change the connection preference of any other AF SDK client application to Any as well.
Note:
Connection preferences are set on each AF SDK client application (it is an application-
specific setting). This is in contrast to connection priority values which are set on the
AF SDK client machine, and applies to all client applications connecting on the AF SDK
connection.
4. Set the connection priority of each server within the collective that you want participating
in connection balancing to the same numerical value. See Specify connection priority for AF
SDK clients. The connection priority values apply to all AF SDK client applications on
the client machine.

Configure connection for AF SDK clients


You can configure the client connection settings of your AF SDK client (connection preference
and connection priority values) to affect the way that certain HA features behave when there is
a disruption (see Client failover) and after a disruption has been resolved (see Client failback).
AF SDK clients can connect to an independent PI Data Archive server or to a PI Data Archive
collective. The AF SDK client considers a PI Data Archive collective to be a single data source. If
connected to a collective, AF SDK selects a server to provide data based upon the connection
preference and connection priority values. If the connected server becomes unavailable, AF
SDK connects to another server in the collective based upon the connection priority values of
each server.
Additionally, you can spread out client connections across the servers of a PI Data Archive
collective to distribute the associated overhead across the collective (see Client connection
balancing).
Use PI System Explorer to configure the connection priority values for your AF SDK client
application(s).
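
For developers writing custom AF SDK applications, the connection can also be made in code.
The following minimal C# sketch connects to a collective and lists its members; it assumes a
collective named MyCollective is already registered in the known servers table, and the types
and properties shown should be verified against your AF SDK version reference.

    using System;
    using OSIsoft.AF.PI; // requires a reference to the AF SDK assembly, OSIsoft.AFSDK.dll

    class CollectiveInfo
    {
        static void Main()
        {
            // "MyCollective" is a placeholder for a PI Data Archive collective
            // already registered in the known servers table on this computer.
            PIServers piServers = new PIServers();
            PIServer server = piServers["MyCollective"];

            // Connect using the configured connection preference and priorities.
            server.Connect();

            if (server.Collective != null)
            {
                // Enumerate the members of the collective.
                foreach (PICollectiveMember member in server.Collective.Members)
                {
                    Console.WriteLine("Member: " + member.Name);
                }
            }
            else
            {
                Console.WriteLine(server.Name + " is a standalone PI Data Archive.");
            }
        }
    }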

Specify connection preferences for AF SDK clients


AF SDK clients can connect to any PI Data Archive server in a collective. However, you can
require the AF SDK connection to use the primary server, or prefer the primary server over
secondary servers. You can also configure the AF SDK connection to have no member-type
preference. You do this by specifying the connection preference for each AF SDK client
application that uses this connection.

Keep in mind that some of these clients write configuration data to PI Data Archive. These
clients must connect to the primary server. In a collective, you make configuration changes on
the primary server, which sends those changes to all secondary servers.
Client-specific connection preferences override any default connection preference values.

Procedure
1. Open PI System Explorer on the host computer and select Tools > Options...
2. In the Server Options tab, locate the Connection preference drop-down list in the PI Data
Archive Connection Settings in PI System Explorer field.
Caution:
Take care not to mistakenly use the Connection preference drop-down list in the
PI AF Server Connection Settings in PI System Explorer field.
3. From the Connection preference drop-down list, select the preference you want for this
client application (in this case, PI System Explorer).

◦ Require Primary
Client application must connect to the primary server.

◦ Prefer Primary
Client application prefers to connect to the primary server. With this setting, the client
application (here, PI System Explorer) always attempts to connect to the primary server
first, but will connect to a secondary server if the primary server is unavailable.

◦ Any
Client application can connect to any server.
4. Click OK.

Specify connection priority for AF SDK clients


Use PI System Explorer to specify the connection priority value for all servers of the PI Data
Archive collective. This connection priority value helps determine the order that AF SDK
connects to specific servers in the collective. You specify the connection priority with a
numerical priority value. AF SDK attempts to connect to the server with the highest priority
value. By default, all of the PI Data Archive servers within the collective are assigned a priority
of 1.

Procedure
1. Open PI System Explorer on the host computer for the client application and select File >
Connections.
2. Right-click on the PI Data Archive collective and select Properties.
3. In the Collectives tab, specify priority values for each of the servers in the collective.
4. Click OK.

Note:
Connection priorities are set for all AF SDK client applications on the host computer
(machine-specific setting). This is in contrast to connection preferences, which are set
for the specific AF SDK client application (application-specific setting). Hence, you
only need to set connection priority once on the client machine and it will apply to all
AF SDK client applications running on the machine.
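
AF SDK also exposes the connection preference programmatically through the
AFConnectionPreference enumeration (Any, PreferPrimary, RequirePrimary), which mirrors
the choices shown in PI System Explorer. A minimal sketch follows; the Connect overload that
accepts a preference is an assumption based on typical AF SDK usage, so verify the exact
signature against your AF SDK reference.

    using OSIsoft.AF.PI;

    class PreferenceExample
    {
        static void Main()
        {
            PIServers piServers = new PIServers();
            PIServer collective = piServers["MyCollective"]; // placeholder name

            // AFConnectionPreference.Any lets AF SDK pick a member based on the
            // machine-wide connection priority values. The overload below
            // (forceReconnect, preference) is an assumption; check your AF SDK
            // reference for the exact signature.
            collective.Connect(false, AFConnectionPreference.Any);
        }
    }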

Configure connection for PI SDK clients


While some PI clients send data to PI Data Archive through AF SDK, there are still deployment
scenarios where clients send data through PI SDK. PI SDK is installed with clients that use it to
send data to PI Data Archive.
Use PI Connection Manager to view which server is providing data, to switch to a different
server, and to change the order that a client attempts to connect to a PI Data Archive collective
server. You configure connections with PI Connection Manager at each host computer where
you install a PI SDK client application. PI Connection Manager, which is installed with PI SDK,
provides a user interface that shows servers to which PI SDK is connected and sending data. PI
SDK can connect to an independent PI Data Archive server or to a PI Data Archive collective. PI
SDK considers a PI Data Archive collective to be a single data source.
If connected to a collective, PI SDK selects a server to provide data. If the connected server
becomes unavailable, PI SDK connects to an alternate server.
Note:
To benefit from all of the high availability features associated with client connections, you
will need both PI SDK 1.3.4 or later and PI Server 3.4.375 or later.
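
Because PI SDK is COM-based, custom applications typically drive it through COM interop. The
sketch below is a minimal C# example that assumes a COM reference to the PISDK type library
and a placeholder collective name; once the connection is open, PI SDK applies the default
connection preference and priority values configured in PI Connection Manager.

    using System;

    class PISdkConnect
    {
        static void Main()
        {
            // Requires a COM reference to the PISDK type library (pisdk.dll).
            PISDK.PISDK sdk = new PISDK.PISDK();

            // "MyCollective" is a placeholder; PI SDK treats a collective as a
            // single entry in the known servers table.
            PISDK.Server server = sdk.Servers["MyCollective"];

            // Open with an empty connection string. If the connected member
            // becomes unavailable, PI SDK fails over to another member according
            // to the configured priority values.
            server.Open("");

            Console.WriteLine("Connected to: " + server.Name);
        }
    }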

Topics in this section


• View the PI Data Archive server providing data to a client
• Switch to the primary server in a PI Data Archive collective
• Switch to a secondary PI Data Archive collective member
• Specify the connection preference for PI SDK clients
• Specify the connection priority for PI SDK clients
• Clear the known servers table

View the PI Data Archive server providing data to a client


Procedure
1. Open PI Connection Manager. From most clients, choose File > Connections.
The PI Connection Manager dialog box shows the list of possible PI Data Archive collectives
or individual servers. A check mark indicates which collective or individual server currently
provides data.
2. Double-click the collective.
The Collective Member Information dialog box lists the servers in the collective. Bold type
and a green dot next to the icon indicate which server currently provides data.

Switch to the primary server in a PI Data Archive collective


Use PI Connection Manager to connect your PI client to the primary server in a PI Data Archive
collective.

Procedure
1. Open PI Connection Manager. From most clients, choose File > Connections.
2. In the list of servers, select the collective.
3. Choose Server > Connect to Primary.
PI SDK connects to the primary server in the collective.
4. To verify, double-click the collective and check the connected server on the Collective
Member Information dialog box.

Switch to a secondary PI Data Archive collective member


Procedure
1. Open PI Connection Manager.
From most clients, choose File > Connections.
2. Choose Server > Switch Member.
PI SDK disconnects from the currently selected PI Data Archive server in the collective and
switches to the next available server, attempting servers in the order specified.
3. To verify, double-click the collective and check the connected server on the Collective
Member Information dialog box.

Specify the connection preference for PI SDK clients


Most PI SDK clients can connect to any server in a collective. However, some of these clients
write configuration data to PI Data Archive. These clients must connect to a primary server. (In
a collective, you make configuration changes to the primary server, which sends those changes
to all secondary servers.) When you configure a client, you can specify server-type connection
preferences:

• Require Primary
Client must connect to primary server. With this setting, PI SDK returns an error if the
primary server is unavailable when trying to connect.
• Prefer Primary
Client prefers to connect to primary server; if connected to a secondary server, some
features may not be available. With this setting, PI SDK always attempts to connect to a
primary server first, but will connect to a secondary server if the primary server is
unavailable.

• Any
Client can connect to any server.

Client-specific connection preferences to require or prefer the primary server override default
connection preferences specified in PI Connection Manager. See the client documentation for
information about specifying a client-specific connection preference.

Procedure
1. Default connection preferences.

Default connection preferences


PI Connection Manager specifies a preference for the order in which PI SDK connects to
collective servers from a particular workstation. You can configure connection order to balance
loads or for other reasons. For example, you might restrict workstations on the business
network to connect only to servers located on the business network, or you might prefer
workstations to connect to servers located in the same building. You can configure a particular
client to override the default connection preference with client-specific connection
preferences.
For information about setting default connection preferences, see Specify the connection
priority for PI SDK clients.

Specify the connection priority for PI SDK clients


Use PI Connection Manager to specify default connection preferences, which determine the
order that PI SDK connects to servers in a PI Data Archive collective. You specify this
preference with a priority value. PI SDK attempts to connect to servers in the order specified
by the priority value. By default, all PI Data Archive collective members are assigned a
priority of 1.
You can set different priority values at different workstations to distribute work among the
servers in a collective. Give the server with the highest priority a value of 1 and other servers
incremental values (2, 3, 4, and so on). Set the priority to -1 to prevent connections to a server.
For example, you can force workstations to connect to only local servers by setting the priority
value to -1 for remote servers.
You can also set client-specific connection preferences, which override these default
connection preferences. See Specify the connection preference for PI SDK clients.

Procedure
1. Open PI Connection Manager. From most clients, choose File > Connections.
2. Double-click the collective name to open the Collective Member Information dialog box.
3. Click a server to view its properties.
4. In Priority, specify the desired connection order for the selected server.
PI SDK attempts to connect to servers in the order specified. PI SDK never connects to
servers with a priority of -1. By default, the primary PI Data Archive server has a priority of
1.
5. Click Save.
6. Click Close to close the Collective Member Information dialog box.

Clear the known servers table


The known servers table contains the list of servers that PI SDK knows about. This table can
list each server one time, either as a PI Data Archive collective member or an independent
server. Occasionally, you might need to clear entries in this table. For example, if you remove a
server from a collective and then want to connect to the server as an independent server, you
must remove the old collective and then add the independent server.
If the known servers table contains more than one server, you can simply remove a server
using PI Connection Manager. However, the procedure for clearing table entries differs if the
server you want to remove is your only connection. In that case you must add a temporary
server to the table before you remove a server connection.

Procedure
1. Open PI Connection Manager. From most clients, choose File > Connections.
2. If you only have one connection, you must first add a temporary or placeholder server. If you
have multiple connections, skip this step.
a. Choose Server > Add Server.
b. In Network Node, type a temporary name, such as TempServer.
c. Clear the Confirm check box.
When Confirm is selected, PI Connection Manager attempts to connect to the specified
server.
d. Click OK.
3. Remove the server you want to clear.
a. Select the server.
b. Choose Server > Remove Selected Server.
If you have multiple connections, the procedure is now complete.
4. If you do not have multiple connections, you can now add the new server.
a. Choose Server > Add Server.
b. In Network Node, type the server host name.
c. Enter the connection credentials.
d. Click OK.
5. Remove the temporary server.
a. Select the temporary server in the list of servers.
b. Choose Server > Remove Selected Server.

N-way buffering for PI clients


You can use PI Buffer Subsystem to control the PI point data written by a PI client to PI Data
Archive. When PI Data Archive is not available, the buffering service temporarily stores the PI
client data. Once the server is available, the buffering service sends the data to PI Data Archive.

To support PI Data Archive HA for PI clients, you configure PI Buffer Subsystem to use n-way
buffering. With n-way buffering, the buffering service fans data to all PI Data Archive collective
members.
There are some important differences between PI client buffering and PI interface buffering:

• You can buffer PI client data only with PI Buffer Subsystem. API Buffer Server cannot buffer
client data.
• If your PI clients write data using PI SDK, use PI SDK Utility to configure buffering.
• If your PI clients write data using AF SDK, use PI System Explorer to configure buffering.

Configure n-way buffering for AF SDK clients


By default, once you have successfully configured buffering, AF SDK data is buffered if possible.
This means that if the buffering service is running, security is properly configured, and the
target PI Data Archive server is configured for buffering, then data sent by AF SDK to PI Data
Archive is buffered. If the target is a PI Data Archive collective, data is sent to all collective
members (or fanned). No additional configuration is required.
AF SDK applications can override the default buffering behavior, either to bypass buffering or
require buffering.
If buffering is not configured, or if the configuration is incomplete, then AF SDK data is written
directly to PI Data Archive and data sent to PI Data Archive collectives is not fanned. If the
server becomes unavailable, data loss occurs.
If needed, you can modify the default buffering behavior for AF SDK data. You can either turn
off AF SDK buffering, or you can require buffering, which means data will not be written if the
buffering service becomes unavailable. Use PI System Explorer to modify the default setting for
AF SDK buffering.
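
For applications that write data through AF SDK, the per-call equivalent of this setting is the
AFBufferOption enumeration (Buffer, BufferIfPossible, DoNotBuffer). The following minimal
C# sketch requires buffering for a single write; the point name Sinusoid is an assumed example,
and the overload shown should be verified against your AF SDK version.

    using OSIsoft.AF.Asset;
    using OSIsoft.AF.Data;
    using OSIsoft.AF.PI;
    using OSIsoft.AF.Time;

    class BufferedWrite
    {
        static void Main()
        {
            PIServer server = new PIServers().DefaultPIServer;
            server.Connect();

            // "Sinusoid" is an assumed example point name.
            PIPoint point = PIPoint.FindPIPoint(server, "Sinusoid");

            AFValue value = new AFValue(42.0, AFTime.Now);

            // AFBufferOption.Buffer requires buffering: if the buffering service
            // is unavailable, the write fails instead of bypassing the buffer.
            // Use BufferIfPossible for the default behavior, or DoNotBuffer to
            // write directly to PI Data Archive.
            point.UpdateValue(value, AFUpdateOption.Replace, AFBufferOption.Buffer);
        }
    }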

Before you start


Install PI Asset Framework 2.6.1 or later. These instructions assume that you have already
configured one or more PI Data Archive computers to receive buffered data from this computer.
If not, start PI System Explorer and click Tools > Buffering Manager to configure buffering, then
follow these instructions.

Procedure
1. To change the default configuration for AF SDK buffering, start PI System Explorer and click
Tools > Buffering Manager.
Alternatively, you can click File > Connections, and then click Buffering Manager.
2. In the Buffering Settings window, click Show advanced global configuration.
3. In the AF SDK Buffering list, select the setting you want:
◦ To turn off buffering for AF SDK data, select Do not buffer.
◦ To require buffering to send AF SDK data to PI Data Archive, select Always buffer.

Caution:
Use the Always buffer option with care. If data cannot be buffered for any reason, it
will not be sent to the target PI Data Archive server or collective. Since it cannot be
buffered, the data will be lost.
4. Click Save.

Configure n-way buffering for PI SDK clients


To buffer data from PI clients that send data using PI SDK, use PI SDK Utility to enable PI SDK
buffering.

Before you start


Install PI SDK 1.4.4 or later. These instructions assume that you have already configured one or
more PI Data Archive computers to receive buffered data from this computer. If not, start PI
SDK Utility and click Buffering > Buffering Manager to configure buffering, then follow these
instructions.

Procedure
1. On the computer sending the data to be buffered, run PI SDK Utility.
2. Click Buffering > PI SDK Buffering Configuration.
3. Select the Enable PI SDK Buffering check box.
4. Click Save.
A message in the status bar shows the current buffering status.
5. Restart the PI client applications to ensure that their data is buffered.

Results
All PI SDK data from this computer is sent to all PI Data Archive computers that have been
configured to receive data from PI Buffer Subsystem. To add servers, use Buffering Manager.

Buffering configuration when you add a PI Data Archive server to a collective


PI Buffer Subsystem versions 4.3 and later automatically use n-way buffering and send data to
all servers in a PI Data Archive collective. For PI clients, you only need to verify PI Buffer
Subsystem operation after you add a server to a PI Data Archive collective.

Change the buffered server or collective


When using PI Buffer Subsystem versions earlier than 4.3, the steps you follow to change the
buffered PI Data Archive server or PI Data Archive collective depend on the type of data you are
buffering.

Topics in this section


• Make sure all queued events have been processed
• When buffering PI SDK data only
• When buffering both PI API and PI SDK data or PI API data only

• Effect on buffering-related files

Make sure all queued events have been processed


To verify that PI Buffer Subsystem has sent all queued events to PI Data Archive, use the
pibufss -cfg command to view the current buffer sessions.

Procedure
1. At the command prompt, type pibufss -cfg.
2. In the resulting output, look for the line that starts with total events sent.
At the end of that line, you should see queued events: 0. This indicates that all events in
the buffer queue have been sent to PI Data Archive.
3. If the number of queued events is greater than 0, and you want the events to be sent to PI
Data Archive, do the following:
a. In the pibufss -cfg command output, look for the line that begins with a number
followed by the server ID, for example:
1 [YourServerID] state: SendingData, successful connections: 1
b. Make sure the state is SendingData as shown above. If it is not, check the connection
between PI Buffer Subsystem and PI Data Archive.
See pibufss buffer session states for more information.
c. Issue the pibufss -cfg command a few more times until the output shows queued
events: 0.
This indicates that all queued events have been sent to PI Data Archive. The time
required to process all events depends on the number of queued events, network
performance, and PI Data Archive load.
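
If you want to automate this check (for example, before a planned change to the buffered
server), you can run pibufss -cfg from a small program and look for the queued-events counter
in its output. A rough C# sketch follows; it assumes pibufss is on the system PATH (it is
installed under \PIPC\bin) and parses the "queued events:" text shown above.

    using System;
    using System.Diagnostics;

    class QueueCheck
    {
        static void Main()
        {
            // Run "pibufss -cfg" and capture its console output.
            var startInfo = new ProcessStartInfo("pibufss", "-cfg")
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };

            using (Process process = Process.Start(startInfo))
            {
                string output = process.StandardOutput.ReadToEnd();
                process.WaitForExit();

                // "queued events: 0" indicates the buffer queue has been drained.
                bool drained = output.Contains("queued events: 0");
                Console.WriteLine(drained
                    ? "All queued events have been sent."
                    : "Events are still queued; check again later.");
            }
        }
    }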

pibufss buffer session states


For each PI Data Archive server that will receive buffered data, PI Buffer Subsystem creates a
separate buffer session. When PI Buffer Subsystem starts a buffer session, the session first
connects to the PI Data Archive server (Connected state), and then registers with PI Snapshot
Subsystem (Registered state). It can then start buffering data as needed (SendingData state).
Under normal circumstances, the state for each buffer session is either Registered or
SendingData.
Buffer states and their meanings are described below.

• Connected
The buffer session is connected to PI Data Archive, but cannot register with PI Snapshot
Subsystem. This is usually a temporary state during registration. If this state persists long
enough to be visible, it may indicate that PI Snapshot Subsystem is unavailable.
• Disconnected
The buffer session is not connected to PI Data Archive. This may indicate that PI Data
Archive is unavailable.
• Dismounted
The buffer session is dismounted because a user on the buffered node issued the
pibufss -bc dismount command.
• NotPosting
The buffer session is not posting data to PI Data Archive because a user on the buffered
node issued the pibufss -bc stop command.
• Offline
The PI Buffer Subsystem service is not running.
• Registered
The buffer session is connected to PI Data Archive and registered with PI Snapshot
Subsystem, but has no data to buffer.
• SendError
The most recent attempt to post buffered data to PI Data Archive failed. If the problem
persists longer than the time period specified by the RETRYRATE parameter, the buffer
session state changes to Disconnected.
• SendingData
The most recent attempt to post buffered data to PI Data Archive succeeded. When PI
Buffer Subsystem is receiving data from PI API or PI SDK, this is the normal buffer session
state.
• QueueError
PI Buffer Subsystem cannot read data from the buffer queue. This indicates a problem with
the buffer queue. For assistance, visit the OSIsoft Customer Portal
(https://my.osisoft.com/).

When buffering PI SDK data only


When you are buffering PI SDK data only (usually data from PI clients), use PI SDK Utility to
change the server or collective receiving the buffered data.

Before you start


Make sure all queued events have been processed

Procedure
1. In PI SDK Utility > PI SDK Buffering > PI SDK Buffering Configuration, use the Buffered
Server/Collective list to select a different server or collective.
2. In the PI SDK Buffering Configuration dialog box, click Tools > Service Configuration.
3. On the General tab, click Stop, then Start to restart the PI Buffer Subsystem service.

After you finish


Verify that PI Buffer Subsystem is buffering data.

When buffering both PI API and PI SDK data or PI API data only
When you are buffering both PI API and PI SDK data (usually from both PI interfaces and PI
clients), use PI Interface Configuration Utility (PI ICU) to change the server or collective
receiving the buffered data.

Before you start


Make sure all queued events have been processed

Procedure
1. In PI ICU > Tools > Buffering, select Buffered Servers and then select a server or collective
on the Buffering to collective/server list.
Note:
Make sure the selection under Buffered Server Names (Path, Name, or IP address)
uses the same format as the API Hostname specified for the interface. For example, if
you use a path for API Hostname, you must also use a path (not a name or IP address)
for Buffered Server Names.

2. Click OK.
3. When prompted, restart PI Buffer Subsystem and its dependent interfaces.

After you finish


Verify that PI Buffer Subsystem is buffering data.

Effect on buffering-related files


When you change the buffered PI Data Archive server or PI Data Archive collective, the change
affects the following files in PIHOME\DAT.
• pibufmem.dat (point and snapshot cache file)
This file is renamed. The file currently in use is named pibufmem.dat. The file used by the
original buffered server is renamed with a numeric file extension, for example,
pibufmem.1303779431.
• pibufq_buffered_server_name.dat or APIBUF_buffered_server_name.dat (buffer queue files)
A new queue file is created for the new buffered server. The file currently in use shows the
current buffered server as its buffered_server_name. Files that show the names of previous
buffered servers are used as follows:
◦ If there is data in the queue after changing the buffered server, that data will not be sent
to PI Data Archive.
◦ If you later start buffering to this server again from the same node, the data will be sent
to PI Data Archive.

Verify that buffered data is being sent to PI Data Archive


Use these instructions only for PI Buffer Subsystem versions 3.4.380 and earlier. For later
versions of PI Buffer Subsystem, use Buffering Manager to verify buffering status.

Procedure
1. In a command window, navigate to the \PIPC\bin directory.
2. Enter: pibufss -cfg
3. In the resulting display, note the number of total events sent.
4. Wait a few seconds, then enter pibufss -cfg again.
You may want to repeat this step one or two more times. If buffering is working properly,
the number of total events sent increases each time. The number of queued events
should remain at or near zero.

Special cases for high availability
This information is for sites with unusual high availability requirements.
For example, if an interface cannot connect to all members of a PI Data Archive collective, and
you cannot use n-way buffering to distribute the data among collective members, you can use
PI to PI to copy time-series data from the primary server to a secondary server. If you have
multiple collectives, you can use PI to PI to copy data between collectives and aggregate that
data.
There are also special cases for those who can use buffering to distribute data. If you need to
buffer to a PI Server older than version 3.4.375, buffer to multiple PI Data Archive servers, or
buffer data from interfaces that run on a non-Windows platform, you need to use API Buffer
Server. See the Buffering topic "Special cases for buffering" in Live Library (https://
livelibrary.osisoft.com).
Additionally, you may be required to implement PI AF in a PI AF collective because the other PI
AF HA deployments (Windows Cluster and Network Load Balancer) that are recommended by
OSIsoft are not supported for your deployment.

Topics in this section


• PI to PI interface and high availability
• PI AF collective installation and upgrade

PI to PI interface and high availability


The PI to PI interface copies time-series data from one PI Data Archive to another PI Data
Archive. This interface moves data in only one way—from a source server to a target server. In
a PI Data Archive collective, if an interface cannot connect to all collective members and you
cannot use n-way buffering to distribute the data, you might use PI to PI to copy time-series
data from the primary server to a secondary server.
If you have multiple collectives, you can use PI to PI to copy data between collectives and
aggregate that data. For example, you might have a collective that collects data at each plant,
and have a separate collective at your headquarters that gathers key indicators from the plants.
For more details about the PI to PI interface, see the interface manual: PI to PI TCP/IP Interface
to the PI System.

Topics in this section


• PI to PI interface configuration considerations
• Data transfer between PI Data Archive collective members
• Data aggregation between PI Data Archive collectives

PI to PI interface configuration considerations


Before configuring a PI to PI interface, you need to determine where you are going to install the
interface and where you want to gather data from, either the source server's archive or
snapshot. You must configure your points to support PI to PI. When using PI to PI in a PI Data
Archive collective, you must pay special attention to buffering, startup, and history recovery.

Topics in this section


• PI to PI installation location
• PI to PI source data
• PI to PI point definition
• PI to PI buffering
• PI to PI startup
• PI to PI history recovery

PI to PI installation location
You can install PI to PI on a PI Data Archive computer or on a separate computer.
• For the most robust configuration, install PI to PI on a different computer than your PI Data
Archive so that you can:
◦ Enable source PI Data Archive failover, which allows PI to PI to connect to an alternate
source server if the main source server becomes unavailable.
◦ Use n-way buffering to write data to any number of target servers in a PI Data Archive
collective.
◦ Install a redundant interface to support interface failover.
• If you install PI to PI on a PI Data Archive computer, you can install PI to PI on the target
server, such that the target server pulls data from the source server. If the connection
between the servers breaks, the target PI Data Archive can request data from the proper
time point upon restoration of the connection.
Alternatively, you can install PI to PI on the source server, such that the source server
pushes data to the target server. In this case, you must set up a buffering service to control
the data flow and send data in case of a lost connection.
• If you are using PI to PI to copy data to multiple servers in a PI Data Archive collective, you
must use n-way buffering and you must push data from the source server to the collective's
servers.

PI to PI source data
PI to PI can gather data from the snapshot at the source server or from the archive at the
source server. You must select the source most appropriate for your needs and expectations.
Gather data from the snapshot if your system requires frequent updates and current data.
Because snapshot data is not compressed, archives might vary slightly among servers. Gather
data from the archive if your system requires identical data at all servers, such as for detailed
analysis. A point's scan class determines the method used.
If PI to PI gathers snapshot data, choose a fast scan rate. PI to PI requests updates like any
other client. The large amounts of data typically requested by PI to PI can overwhelm PI Update
Manager. A faster scan rate clears the subsystem and avoids memory issues. Also, if PI to PI
gathers snapshot data, then you must configure compression at the target server.

If PI to PI gathers archive data, choose a longer scan rate, such as hourly or daily. Set the scan
rate such that there is at least one value in the archive. A longer scan rate avoids clogging PI
Archive Subsystem with many smaller queries. Also, because data in the archives has been
compressed, you can set compression to zero or turn it off on the target server.

PI to PI point definition
The PI to PI target server must contain defined points to receive data from each unique point
on the source server. Each point's scan-class setting determines whether the point receives
archive data or snapshot data (that is, exception data that has not been compressed) from the
source server. By default, points assigned to the first scan class receive snapshot data, and
points assigned to any other defined scan class receive archive data. You can configure an
alternate scan class to receive snapshot data (that is, exception data).

PI to PI buffering
If buffering data with PI to PI in a PI Data Archive collective, take care not to send data back to
the source server. By default, PI Buffer Subsystem uses n-way buffering to send data to all
servers in a collective. Therefore, if the source server and target servers are in the same
collective and you are using PI Buffer Subsystem, you must disable data copying to all collective
members and explicitly select the servers to which you want data sent.
Caution:
OSIsoft recommends that you do not use PI Buffer Subsystem version 3.4.375.38 with PI
to PI. This version does not support all archive write options and can lead to data loss.
Instead, use a later version of PI Buffer Subsystem.

Caution:
OSIsoft does not recommend using both PI to PI and PI SDK buffering to replicate data
between collective members. This may cause errors, duplicate events, or both.

PI to PI startup
The PI to PI interface is not aware of PI Data Archive collectives. PI to PI only knows about
servers specified in its startup file. You specify a source (and possibly an alternate source) in
addition to the target PI Data Archive (the host). Each of these servers must be available when
PI to PI starts so that PI to PI can initialize its point list.

PI to PI history recovery
You can use history recovery to recover data for time periods when PI to PI was not running or
could not collect data. You can configure the history-recovery period. The default value is eight
hours. You can also specify a start time and end time to recover history from a specific time
range. You use this technique to transfer data from one server in a PI Data Archive collective to
another server in the collective when interfaces cannot send data directly.
If you use n-way buffering to write data from the PI to PI interface to a PI Data Archive
collective, history recovery requires that all target servers be in the same initial state. Upon
startup, PI to PI checks the snapshot value on the target PI Data Archive server for each tag in
its tag list. PI to PI uses the snapshot value to determine the starting time point for history
recovery. However, PI to PI only checks the snapshot value at the target server specified in its
startup file (the host server). If the values are not the same at other servers in the collective,
the single starting time point will result in either a data gap or a data overlap. To avoid this,
initialize each PI Data Archive in the target collective with the same set of data before
implementing the PI to PI interface.

Data transfer between PI Data Archive collective members


In a PI Data Archive collective, you can use PI to PI to transfer time-series data from one
collective member to another when the interface node cannot send that data directly. By
design, n-way buffering distributes data directly from a data source to each PI Data Archive
server in a collective, but system architecture and security restrictions can preclude this
technique in some cases. Note, however, that deployments using PI to PI are not as robust as
those using n-way buffering. For example, if the primary server becomes unavailable, the
secondary server that receives data from PI to PI can only access historic data, not real-time
data.
To use PI to PI to transfer data in a PI Data Archive collective, you must enable tag-attribute
override parameters. Collectives require that each server have identical point definitions. If the
primary server has points configured to receive data directly from the interface node, then
each secondary server must have identical points configured to receive data directly from the
interface node. Normally, PI to PI requires tags configured to receive data explicitly from PI to
PI. However, after you enable tag-attribute override parameters, PI to PI can collect data for
tags not configured explicitly for the PI to PI interface.
You can configure PI to PI on a PI Data Archive computer or you can configure PI to PI on
separate computers. In the most basic configuration, you might configure PI to PI on a PI Data
Archive computer to copy time-series data from the primary server to a single secondary
server.

Configuring PI to PI on separate computers offers a more robust configuration. In a more
complex configuration, you might configure PI to PI to copy time-series data from one or more
servers in a control network to multiple servers in a business network. This configuration
might include interface failover (that is, a redundant copy of the PI to PI interface) to ensure
that a PI to PI interface is always running and copying data. This configuration must use N-way
buffering to ensure that PI to PI copies identical data to all the servers in the business network.
Finally, to ensure that source data is always available to the PI to PI interface, you configure
source-server failover (a failover mechanism specific to the PI to PI interface).

Configure PI to PI to copy data between PI Data Archive collective members
This topic describes a basic procedure to configure PI to PI to copy data from a primary server
to a secondary server in a PI Data Archive collective. Installing the interface on the secondary
server and pulling data from the primary server can improve PI to PI performance if the

secondary server is located in a high-latency business network. For more detailed information,
see the interface manual, PI to PI TCP/IP Interface to the PI System.

Procedure
1. Install the PI to PI interface on the secondary server.
2. Click Start > All Programs > PI System > PI Interface Configuration Utility.
3. Create a new instance of the interface.
a. Select Interface > New Windows Interface Instance from BAT File.
b. Navigate to the PItoPI directory.
c. Select PItoPI.bat_new and click Open.
d. In Select the host PI Server/Collective, select the collective and click OK. PI ICU creates a
new instance of the interface.
4. Select IO Rate in the page tree and clear the Enable IORates for this interface check box.
5. Select General in the page tree and set the following properties:
a. In Point Source(s), add the point source identifier for each interface that sends data to
the primary server.
b. Set SDK Member to the secondary server.
c. Set API Hostname to the secondary server.
d. Click Apply. You might also consider editing the existing scan class and reducing the scan
frequency.
6. Select PItoPI in the page tree and click the Required tab.
◦ In Source host, type the name of the primary server.
7. Click the Location tab.
◦ Select the Override location 1 check box.
◦ Select the Override location 2 check box and select 0 in the corresponding drop-down
list.
◦ Select the Override location 3 check box and select 3 in the corresponding drop-down
list.
◦ Select the Override location 4 check box and select Sign up for exceptions.
◦ Select the Override location 5 check box and select 0 in the corresponding drop-down
list.
8. Click the Optional tab.
◦ Select the Source tag definition attribute check box.
◦ Select the option Use TagName on both (Ignoring Exdesc and InstrumentTag point
attributes).
9. Select Service in the page tree, and click Create.
10. Click the Start interface service button to start the service.

Data aggregation between PI Data Archive collectives


You can use the PI to PI interface to aggregate data between PI Data Archive collectives. For
example, you might have a collective that collects data at each plant, and have a separate
collective at your headquarters that gathers key indicators from the plants.

PI AF collective installation and upgrade


A PI AF collective is an option for implementing high availability for your PI AF server
deployment. PI AF collectives use SQL Server replication to copy data from the primary PI AF
SQL database computer to each of the secondary PI AF SQL database computers.
OSIsoft recommends using the Failover Cluster or NLB options instead for high availability
deployments of PI AF. Use a PI AF collective only if the other HA options are not supported for
your deployment.

Limitation of PI AF collectives
Because secondary PI AF collective members are read-only, applications that require writes to
the PI AF Configuration database (such as asset analytics and notifications), or applications
that write event frames, will not work when the PI AF collective primary server is unavailable.

PI AF collective setup and configuration


PI AF collectives use SQL Server replication to copy data from the primary PI AF SQL database
computer (publisher) to each of the secondary PI AF SQL database computers.
Each secondary server communicates with the primary server through a Windows
Communication Foundation (WCF) connection and reports its status information. The server
authenticates the WCF connection using a Windows certificate that the PI AF server generates
when it is started. SQL Server replication transmits the primary PI AF server’s certificate to
each secondary server. After the secondary server receives the primary server’s certificate, it
can communicate its status to the primary server.
When PI AF data is changed on the primary PI AF server, the log reader agent pushes changes
to the SQL Server instance on each secondary server. If the secondary server is not reachable
(for example, if there is a network problem or the computer is offline), the agent retries later.
Follow these procedures to create and configure a PI AF collective.

Procedure
1. AF Collective Manager.
2. Prepare to create a PI AF collective.
3. Create a PI AF collective.
4. Configure PI AF collective properties.
5. Check PI AF collective status.
6. Add a secondary server to a PI AF collective.
7. Connect or switch to a specific member of a PI AF collective.
8. Remove a secondary server from a PI AF collective.

9. Stop or start PI AF collective replication.
10. Reinitialize a PI AF collective member.
11. Configure folder permissions on the PI AF collective primary server.

AF Collective Manager
Starting with PI Server 2018, PI AF collective creation has been moved out of PI System
Explorer and into the AF Collective Manager. AF Collective Manager provides a graphical user
interface for creating, editing, and managing PI AF collectives.
AF Collective Manager is available for installation with the PI Server install kit and PI AF Client
install kit.

Accessing AF Collective Manager

Procedure
1. Select Start > All Programs > PI System > AF Collective Manager. A message appears
informing you that OSIsoft no longer recommends using PI AF collectives as a High
Availability option. See the OSIsoft Knowledge Base article High Availability (HA) options
for PI Asset Framework (PI AF) (https://customers.osisoft.com/s/knowledgearticle?
knowledgeArticleUrl=KB00634).
2. To start the AF Collective Manager:
◦ Click No to start the AF Collective Manager tool.
◦ Click Yes to read the KB article.
The AF Collective Manager window opens.

Prepare to create a PI AF collective


Before you begin creating a PI AF collective, follow these steps:

Procedure
1. Make sure that you meet all general collective creation requirements. See Configuration
requirements for PI AF collectives.
2. Make sure that you meet all SQL Server requirements. See SQL Server requirements for PI
AF collectives.
3. Make sure that you meet all security requirements. See Security requirements for PI AF
collectives.
4. A single instance of PI AF server consists of the PI AF application service and the PI AF SQL
database. These components may be installed on separate machines. Make sure that PI AF
server is installed on each member of the collective. This means that at least two complete
PI AF server systems must be installed. This could be two machines (PI AF application
service and PI AF SQL database installed on both machines), or four machines (two
machines with PI AF application service only, and two machines with PI AF SQL database
only).
5. Make a full backup of the PI AF SQL Server database, typically named PIFD.

OSIsoft highly recommends that you make regular backups of SQL Server data, especially on
the primary server. The PI AF installation process creates a SQL Server backup job that is
scheduled to run by SQL Server Agent. Make sure you copy these backups to media other
than the media that contains the data.
6. Verify that TCP/IP and Named Pipes are enabled on all SQL Server computers for the correct
instance. Run SQL Server Configuration Manager, choose your instance, and verify that the
correct protocols are enabled.
7. Make sure the SQL Agent service is running on the primary SQL Server computer.
8. All computers upon which the PI AF application service runs must be in a domain. Check the
domain for each computer:
a. Click Start and right-click Computer.
b. Select Properties to view workgroup and domain settings.

Topics in this section


• Configuration requirements for PI AF collectives
• SQL Server requirements for PI AF collectives
• Security requirements for PI AF collectives

Configuration requirements for PI AF collectives


PI AF collectives have the following configuration requirements:
• PI AF collectives are supported for PI AF 2.1 or later.
• The PI AF application service computers must be in a domain; workgroups are not allowed.
• The PI AF application service version must be the same on all PI AF collective computers.
• The PI AF collective consists of at least two PI AF servers (machines hosting the PI AF
application service). The PI AF client is not required on either PI AF server, but if you install
it, your work with PI AF will be more convenient.
• The PI AF SQL Server database on the primary and secondary servers must have the same
name. By default its name is PIFD.
• The Named Pipes and TCP/IP protocols must be enabled for the instances where the PI AF
SQL Server databases are installed.
• Using a clustered PI AF server as the primary collective member can cause issues with the
Host value. The AF cluster name assigned as the Host value is not persisted. Ultimately, this
may cause connection issues. To correct the issue, change the value assigned to the
"reportIPAddress" setting in the AFService.exe.config file from its default setting of "false" to
the AF cluster's IP address. For example:
Default value for reportIPAddress:
<add key="reportIPAddress" value="false"/>
Using the AF cluster's IP address for reportIPAddress:
<add key="reportIPAddress" value="192.168.255.255"/>

SQL Server requirements for PI AF collectives


PI AF collectives have these SQL Server requirements:

• Two SQL Server instances are required, each on separate physical hardware.
• The PI AF SQL database computers can be in a workgroup or a domain. If the PI AF SQL
database computers are in a workgroup, see PI AF collectives in a domain or workgroup.
• The primary PI AF server requires a non-Express Edition of a supported version of SQL
Server. (Review the PI AF Release Notes for supported SQL Server Versions and Editions.)
• The secondary SQL Server computer can use the SQL Express edition, with limitations. Refer
to Microsoft's web site for details.
• SQL Server Compact edition is not supported.
• It is not necessary to have the same SQL Server edition and version for all members of a
collective, but it is recommended.
• SQL Server Agent must be running on the primary SQL Server computer.
• SQL Server Replication must be installed on the primary SQL Server computer; it is not
required on the secondary collective members. If replication is subsequently added or
installed, you must restart SQL Server Agent to prevent errors.
• When the SQL Agent is run under a domain account and the primary AF database server is
64-bit SQL Server, you must configure the C:\Program Files\Microsoft SQL Server
\100\COM\ folder on the primary AF database server to allow read/write access to the SQL
Agent domain account.

Security requirements for PI AF collectives


For security, the following accounts (or users) in a PI AF collective require a reduced level of
permissions:
• SQL Server Database Engine service
• SQL Server Agent service
• PI AF application service
• PI AF collective creator user
• AFServers local group
For more information about minimum privilege levels required for replication, see the
following Microsoft articles:
• Replication Agent Security Model (https://docs.microsoft.com/en-us/previous-
versions/sql/sql-server-2008-r2/ms151868(v=sql.105))
• Security Role Requirements for Replication (https://docs.microsoft.com/en-us/previous-
versions/sql/sql-server-2008-r2/ms152528(v=sql.105))
Each PI AF collective account has the following access requirements.

SQL Server Database Engine

Permissions
• Run as a low-privileged account.
• Do not run the SQL Server Database Engine service under an account with local or domain
administrative privileges.

SQL Server Agent

Permissions
• Run as a low-privileged account.
• Do not run as NetworkService.

Primary PI AF server
No action required.

Secondary PI AF servers
No action required.

Primary PI AF SQL database
• If it does not already exist, create a login in SQL Server for the account under which the
SQL Server Agent service runs.
◦ Assign the db_owner database role on the PI AF SQL Server database to this account.
◦ Do not grant the sysadmin server role to this account.
• Assign write permission to the \repldata folder. Sample path:
C:\Program Files\Microsoft SQL Server\MSSQL10_50.TEST\MSSQL\repldata
For more information, refer to Configure folder permissions on the PI AF collective primary
server.

Secondary PI AF SQL databases
• If it does not already exist, create a login in SQL Server for the account under which the
SQL Agent service runs on the primary.
◦ Assign the db_owner database role on the PI AF SQL Server database to this account.
◦ Do not grant the sysadmin server role to this account.

PI AF application service
Beginning with PI AF 2.7, by default the PI AF application service is run under a virtual
account, NT SERVICE\AFService. Do not run it under the Local System account. The best
practice is to use a low-privileged domain account, as this account does not require special
access to the PI AF SQL database. The PI AF application service account is added to a local
Windows security group, which is assigned the appropriate access in the PI AF SQL database.
Permissions
• Run as a low-privileged account.
• Do not run as Local System.

Primary PI AF server
No action required.

Secondary PI AF servers
No action required.

Primary PI AF SQL database
• In Windows, add the domain account under which the PI AF application service runs to the
local AFServers group.
• Do not create an SQL Server login for the PI AF application service account.
• Do not assign the db_owner database role on the PI AF SQL Server database to the PI AF
application service account.
• Do not grant the sysadmin server role to the PI AF application service account.

Secondary PI AF SQL databases
• In Windows, add the domain account under which the PI AF application service runs to the
local AFServers group.
• Do not create an SQL Server login for the PI AF application service account.
• Do not assign the db_owner database role on the PI AF SQL Server database to the PI AF
application service account.
• Do not grant the sysadmin server role to the PI AF application service account.

PI AF collective creator
A domain user whose Windows credentials are authenticated by PI AF, Windows, and SQL Server runs the AF Collective Manager client that is used to create the PI AF collective.
Permissions:
  The credentials that are used to create the PI AF collective are needed only once, to create the collective. After you create the PI AF collective, you can remove the special permissions.
Primary PI AF server:
  Add the credentials used to create the PI AF collective in AF Collective Manager to the local Administrators group.
Secondary PI AF servers:
  Add the credentials used to create the PI AF collective in AF Collective Manager to the local Administrators group.
Primary PI AF SQL database:
  • If it does not already exist, create a login in SQL Server for the PI AF collective creator's domain account.
  • Add the credentials used to create the PI AF collective in AF Collective Manager to the local Administrators group.
  • Grant the sysadmin server role to this account.
Secondary PI AF SQL databases:
  • If it does not already exist, create a login in SQL Server for the PI AF collective creator's domain account.
  • Grant the sysadmin server role to this account.

AFServers local group


The only account that should exist in the AFServers local Windows group is the account under
which the PI AF application service runs.
Note:
The AFServers local Windows group is typically created during the installation of the PI
AF SQL database. If you use SQL scripts to install the PI AF SQL Server database, however,
you need to set up this user group manually.
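If you need to create the group manually, a minimal sketch from an elevated command prompt follows; CONTOSO\AFServiceAcct is a placeholder for the PI AF application service account.

rem Sketch only. CONTOSO\AFServiceAcct is a placeholder for the
rem PI AF application service account.
net localgroup AFServers /add
net localgroup AFServers CONTOSO\AFServiceAcct /add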

Permissions:
  This group should never be given local or domain administrator privileges.
Primary PI AF server:
  No action required.
Secondary PI AF servers:
  No action required.
Primary PI AF SQL database:
  • If it does not already exist, create a login in SQL Server for the AFServers local group.
  • Grant the db_AFServer database role on the PI AF SQL Server database to this account.
  • Do not assign the db_owner database role on the PI AF SQL Server database to this account.
  • Do not grant the sysadmin server role to this account.
Secondary PI AF SQL databases:
  • If it does not already exist, create a login in SQL Server for the AFServers local group.
  • Grant the db_AFServer database role on the PI AF SQL Server database to this account.
  • Do not assign the db_owner database role on the PI AF SQL Server database to this account.
  • Do not grant the sysadmin server role to this account.
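Scripted, the same assignments look like the following minimal T-SQL sketch; SERVERNAME\AFServers and PIFD are placeholders for the local group on the SQL Server computer and the PI AF SQL Server database name.

-- Sketch only. Placeholders: SERVERNAME\AFServers is the local AFServers
-- group on the SQL Server computer; PIFD is the PI AF SQL Server database.
USE master;
CREATE LOGIN [SERVERNAME\AFServers] FROM WINDOWS;

USE PIFD;
CREATE USER [SERVERNAME\AFServers] FOR LOGIN [SERVERNAME\AFServers];
EXEC sp_addrolemember 'db_AFServer', 'SERVERNAME\AFServers';
-- Deliberately omitted: db_owner and sysadmin rights for this group.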

PI AF collectives in a domain or workgroup


Any PI AF server (a computer where the PI AF application service is installed) in a PI AF
collective must be in a domain; workgroups are not supported.
The PI AF SQL database computers can be in a workgroup or a domain.
If the PI AF SQL database computers are in a workgroup, you must create the collective with a local Windows account. The same account (matching user name and password) must exist on the computer where AF Collective Manager is run and on each SQL Server computer, must be in the local Windows Administrators group on all of those computers, and must be a member of the SQL Server sysadmin role. This local account is used to run AF Collective Manager and create the PI AF collective.
Note:
If you run AF Collective Manager as a domain account that is mapped to sysadmin in SQL
Server but your SQL Server is in a workgroup, you will get this error: cannot open
service control manager on computer '172.30.86.10'. This
operation might require other privileges. Do you wish to
continue?

Check security credentials and connections for PI AF collectives


To ensure that you have the required access permissions and that you can connect to each SQL
Server in the collective, follow these steps:

Procedure
1. Using the Windows credentials that you will use to create the collective, log in to the workstation from which you will create the collective (do not do this on the SQL Server computer) and connect to each PI AF server that will be part of the collective.
2. On the same workstation, verify that you can perform a simple file share access to each SQL
Server:
a. Select Start > Run.
b. Enter \\SQL_Server_computer_name for each SQL server.
This ensures that your credentials authenticate to each SQL Server at the Windows level.
3. Establish a connection to each SQL Server via SQL Server Management Studio (SSMS) or
sqlcmd.exe.
4. Once connected, run the following query:
SELECT IS_SRVROLEMEMBER ('sysadmin') "is sysadmin",
       CURRENT_USER "connected as",
       SYSTEM_USER "login user";

where
"is sysadmin" returns 1=true, 0=false
"connected as" returns "dbo"
"login user" returns the user's Windows user principal
Do not proceed until the connection and query succeed for each SQL Server that will be part of your PI AF collective.
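For example, assuming an instance named SQL01 (a placeholder), steps 3 and 4 can be combined into a single sqlcmd call that uses your Windows credentials (-E); the column aliases are written without quotation marks here to avoid shell escaping:

sqlcmd -S SQL01 -E -Q "SELECT IS_SRVROLEMEMBER('sysadmin') AS is_sysadmin, CURRENT_USER AS connected_as, SYSTEM_USER AS login_user;"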

Create a PI AF collective
Before you start
Perform all the steps in Prepare to create a PI AF collective.

Procedure
1. Start the SQL Server Agent Service.
SQL Server replication depends on the SQL Server Agent service. If it is not running, when
you attempt to set up a PI AF collective, the setup fails without warning. The only way to
recover is to delete the collective, start the SQL Server Agent service, then set up the
collective.
2. In AF Collective Manager, right-click on a PI AF server that you want in the collective and
select Create Collective.
3. In the Create New Collective - Verify Backup Completed window, select the I have verified my
backups are valid check box and click Next.
4. In the Create New Collective - Select Primary window, choose your primary server.
5. Click Next.
6. From the Server list in the Create New Collective - Select Secondary Servers window, select a
PI AF server to add to the collective as a secondary server and click Add. Repeat to add
additional secondary servers. If you want to create the collective without adding a
secondary, then skip this step.

You can add secondary servers after the collective is created. See Add a secondary server to
a PI AF collective.
7. Click Next.
The Create New Collective – Verify Selections window opens.
8. Optional. Click Advanced Options. See Configure PI AF collective properties for a
description of the advanced option fields.
9. Click Next.
The collective is created and the Create New Collective – Finishing window opens.
10. Click OK to begin the replication process.

◦ If you click Exit before the secondary servers are listed in the lower area of the window,
the replication process stops on any secondary servers in the collective. A message
appears that indicates the replication process is not complete. You will need to start the
replication process on any secondary servers that currently belong to the collective.
◦ If you click Finish before the replication is complete, a message appears indicating the
replication is not complete, and where to look for the current replication status.

Results
When the replication process is complete, the status for the first row (the snapshot creation)
shows Succeeded. The status for the second row (the replication process as it relates to the
primary server) shows Idle. The status for the third row and subsequent rows (the replication
process as it relates to the secondary servers) shows Idle. For details about the collective
status, see PI AF collective status details.

Configure distributor database security


When you create a PI AF collective, a distributor database is created to allow for SQL Server
replication. That database requires some configuration.
The distributor database is named <PIFD>_distribution, where <PIFD> is the name of the
PI AF SQL Server database. By default the name of the PI AF SQL Server database is PIFD.
The AFServers group must have the db_AFServer role for the <PIFD>_distribution
database. This role is automatically assigned to the local AFServers group during the PI AF
collective creation. However, if you are installing a PI AF collective on a SQL Server cluster, the
local AFServers group does not exist; it was replaced with a domain group as part of the
process of installing PI AF on a SQL Server cluster. If the AFServers domain group does not
have the db_AFServer role for the <PIFD>_distribution database, the collective creation
will fail with an error message:
Waiting on a (Good) SyncStatus .. Current SyncStatus(Snapshot Not Ready)

This error can be corrected during the PI AF collective creation process; it is not necessary to
exit the Create New Collective window. The PI AF collective creation process will continue
normally after the following steps are completed.

Procedure
1. Open Microsoft SQL Server Management Studio, and connect to the SQL Server instance for
the primary server in the PI AF collective.
2. Under the SQL Server cluster instance, expand Security > Logins.

3. Right-click the login created for the AFServers domain group and select Properties.
4. Select the User Mapping page.
5. Under Users mapped to this login, select the Map check box for the <PIFD>_distribution
database row.
6. Ensure the User column for the <PIFD>_distribution row is set to the domain user group
(YourDomain\YourAFDomainGroup).
7. With the <PIFD>_distribution row selected, select the db_AFServer role check box under
Database role membership for: <PIFD>_distribution. The public role should be selected by
default; if it is not, select its check box.
8. Click OK to save the SQL Server login.
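If you prefer to make the same change with T-SQL instead of SQL Server Management Studio, a minimal sketch follows; it assumes the default database name PIFD (so the distribution database is PIFD_distribution) and uses the placeholder group name from step 6.

-- Sketch only. Assumes the default PIFD database name and the
-- YourDomain\YourAFDomainGroup placeholder used in the steps above.
USE PIFD_distribution;
CREATE USER [YourDomain\YourAFDomainGroup] FOR LOGIN [YourDomain\YourAFDomainGroup];
EXEC sp_addrolemember 'db_AFServer', 'YourDomain\YourAFDomainGroup';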

Configure PI AF collective properties


Procedure
1. In AF Collective Manager, right-click on a PI AF collective and click the Properties button.
2. In the PI AF Server Properties window, click the Collective tab.
3. Select a collective member and edit the following settings:

◦ Timeout
The number of seconds allowed for an operation on the PI AF server to finish before it times out.

◦ Priority
The priority order for selecting the collective member on the current computer. You can
modify this value for each collective member.

◦ Period
The frequency, in seconds, at which a collective member checks the status of the remaining collective members.

◦ Grace
The time, in seconds, that is allowed before the communication status is set to
TimedOutOnPrimary when there is no communication with the primary server.
Note:
The Port, Account, Role, and Status settings on the Collective tab are read-only. See
the descriptions of these settings for information on how each one is set.

◦ Port
The port through which the PI AF server communicates. This value is set in the
configuration of the PI AF server, before the server became a collective member.

◦ Account
The account under which the PI AF application service is running. This value is set in the
configuration of the PI AF server, before the server became a collective member.

◦ Role
The role within the collective of the selected collective member, primary or secondary.
This value is set when the PI AF server is added to the collective.

◦ Status
The status of the selected collective member, including the last time communication was verified with the primary server, the last time the collective member was synchronized, the current synchronization status, and the current communication status.
4. Click More to display the Collective Status Details window. See PI AF collective status details.

Check PI AF collective status


You can check the status of a PI AF collective member in either AF Collective Manager or PI
System Explorer.

Procedure
1. Choose one of the following actions:
To check the status of a collective member in AF Collective Manager:
  Right-click a collective member and click Show Collective Status.
To check the status of a collective member in PI System Explorer:
  a. Select File > Connections.
  b. In the Servers window, right-click a collective member and click Show Collective Status.

The status of the selected member is displayed in the Collective Status Details window. Click
Refresh as needed to update status.
2. Choose one of the following actions:
To review errors for secondary servers only:
  a. Select the Show Errors Only check box.
  b. Click Refresh.
To specify how much detail you want to see for secondary servers:
  a. In the Max. Secondary Details field, select One per Secondary or specify a number from zero to 100.
  b. Click Refresh.

3. Click Close to exit the Collective Status Details window.


Caution:
If you are currently adding a secondary server, do not click Close before its status is
displayed in the Collective Status Details window. Otherwise, the replication process
stops on the secondary server and a message is displayed that indicates the
replication process is not complete. You will then need to restart the replication
process on the newly added secondary server.

PI AF collective status details


The Collective Status Details window shows the most recent status messages for the primary
and secondary servers in a PI AF collective. Scroll horizontally to review the content of every
column in the Details grid.

• The first row shows the status of the snapshot creation process. This row is always
displayed.
• The second row shows the status of the replication process between primary server and
secondary servers. This row is always displayed.
• The third and ensuing rows show the latest replication status messages for the secondary
servers. The level of detail depends on the settings you have selected for Show Errors Only
and Max. Secondary Details.
If there is no current activity, the Details area is empty.

Details grid
The Details grid contains the following columns:

• Name
The name of the collective member.

• Time Stamp
The time stamp from the SQL call to obtain the replication status, displayed in five-minute
intervals.

• Commands Delivered
The number of commands being sent from the primary server to the secondary server.

• Status
The synchronization status between the server members in the collective; that is, the status of the replication process from the primary server to the secondary servers.

• Comment
The current stage of the replication process.

• Error Code
If an error occurs, the associated error code.

• Error Message
If an error occurs, the associated error message.

Add a secondary server to a PI AF collective


You can add a secondary server to a PI AF collective when you create the collective, or after you
create it. When you add a secondary PI AF server to a collective:

• A push subscription is set up in the PIFD_distribution database.


• A push subscription agent is started for each secondary server added to the collective.
The push subscription agent pushes the current snapshot to the secondary servers to
initialize them. All the tables that are marked for replication are pushed to the secondary
server. The existing snapshot data is replicated from the primary server to the newly added
secondary server. Any pre-existing data on the secondary server is lost.
Note:
You must ensure that the Audit Trail feature is disabled on a secondary server that you are adding to a PI AF collective. If you see a message on the PI AF client indicating that the feature needs to be disabled, run the AFDiag utility on the secondary server to disable the Audit Trail feature. Then return to the PI AF client and click OK in response to the message.

Procedure
1. In AF Collective Manager, right-click the primary PI AF server and select Add Server to Collective.
The Adding Secondaries – Select Secondary Servers window opens.
2. From the Server list, select the PI AF server to add to the collective as a secondary server.
3. Click Add to add the PI AF server to the list.
4. Click Next.
The Adding Secondaries - Verify Selections window opens.
5. Click Next. The secondary server is added to the collective.
The Adding Secondaries – Finishing window appears. The process of replicating data to the
secondary server begins and the window displays collective status details during the
process. When the replication process is complete on the secondary server, the Status for
the third and subsequent rows display Idle. For more on status details, see PI AF collective
status details.
Note:
If you click Exit before the window lists the newly added secondary server, the
replication process stops on that secondary server. A message appears that indicates
the replication process is not complete. You will need to start the replication process
on any secondary servers that currently belong to the collective.

Connect or switch to a specific member of a PI AF collective


When you connect to a PI AF collective, PI AF automatically connects you to the collective
member with the highest priority (lowest number). However, you can use the Switch
Collective Member option to select the next collective member based on its assigned
priority, or the Connect to Collective Member option to select a specific member of the
collective.

Procedure
1. To connect to a specific collective member, choose one of the following actions:

To select a collective member in AF Collective Manager:
  Right-click a collective member and click Connect to Collective Member.
To select a collective member in PI System Explorer:
  a. Select File > Connections.
  b. In the Servers window, right-click a collective member and click Connect to Collective Member.

2. In the Choose Collective Member window, select the collective member to which you want to
connect from the Collective Member list.
3. Click OK.
You are now connected to the selected collective member.

Remove a secondary server from a PI AF collective


When you remove a secondary server from a collective, the subscription is dropped on both
ends (primary server and secondary server), the push agent for the secondary server is
stopped, and the secondary server is deleted from the collective.
Caution:
If you remove a primary PI AF server from a collective, the entire collective is removed.
The subscription is dropped on both ends (primary server and secondary server). All
agents are stopped. The PIFD_distribution database is deleted. All replication is halted
and cannot be restarted. The primary server is available as a stand-alone PI AF server.

Procedure
1. In AF Collective Manager, select the PI AF Collective that contains the secondary server to be
removed and click the Properties button.
2. Click the Collective tab.
3. Right-click the secondary server and select Delete.

Stop or start PI AF collective replication


There is no pause or resume option for replication; replication is either running or stopped.
Perform these procedures in AF Collective Manager.
When you stop replication, the subscription is dropped on both ends (primary server and
secondary server). The push agent for the secondary server is stopped. All agents are stopped,
and all replication is halted.

Topics in this section


• Stop replication on a PI AF collective secondary server
• Stop replication on the PI AF collective primary server
• Start replication on a PI AF collective server

Stop replication on a PI AF collective secondary server

Procedure
1. In AF Collective Manager, right-click the PI AF Collective that contains the secondary server
on which you want to stop replication and click the Properties button.
2. Click the Collective tab.
3. Right-click the secondary server and select Stop Replication.
Replication is stopped on the secondary server. As long as the server is a member of the
collective, you can start replication at a later time.

Stop replication on the PI AF collective primary server

Procedure
1. In AF Collective Manager, right-click the PI AF Collective that contains the primary server on
which you want to stop replication and click the Properties button.
2. Click the Collective tab.
3. Right-click the primary server and select Stop Replication.
Replication is stopped on the primary server and all secondary servers. As long as the
collective still exists, you can start replication on the primary server at a later time; you will
need to start replication on each secondary server, too.

Start replication on a PI AF collective server


If you have stopped replication on a collective member, it does not restart automatically. If you
want the collective member to be involved in replication, you must start the replication on that
member.

Procedure
1. In AF Collective Manager, right-click the PI AF Collective that contains the servers on which
you want to start replication and click the Properties button.
2. Click the Collective tab.
3. Right-click the server and select Start Replication. If this is the primary server, you also
need to start replication on each secondary server.

Reinitialize a PI AF collective member


You can force a new snapshot of the database on the primary PI AF server to be created and
pushed out to a secondary server by reinitializing the secondary server. If you have multiple
secondary servers, you must reinitialize each individually.
When a secondary server is reinitialized, a new snapshot is created on the primary PI AF
server. An agent pushes the snapshot to the secondary servers to initialize them. All the tables
that are marked for replication are pushed to the secondary servers. Any pre-existing data on
the secondary servers is lost.

Procedure
1. In AF Collective Manager, right-click the PI AF Collective that contains the server you want
to reinitialize and click the Properties button.
2. Click the Collective tab.
3. Right-click the server and select Reinitialize Replication.

Configure folder permissions on the PI AF collective primary server


On the primary PI AF SQL database computer, configure permissions on the replication folder
to enable the SQL Server Agent service account to have access.

Procedure
1. On the primary PI AF SQL database computer, open Windows Explorer.
2. Navigate to the \repldata folder for the SQL Server instance where the PI AF SQL database
is installed.
3. Right-click the \repldata folder and select Properties.
4. Click the Security tab and click Edit.
5. In the Permissions for repldata window, click Add.
6. In the Select Users, Computers, or Groups window, check that the From this location: field
shows the correct domain. If not, click Location and navigate to and select the correct
domain.
7. In the Enter the object names to select field, enter the name of the domain account under
which the SQL Server Agent service runs.
8. Click OK.
9. In the Permissions for [SQL Agent Account Name] area of the Permissions for repldata
window, select the Modify check box and ensure that all check boxes except Full control and
Special permissions are selected.
10. Click OK.
11. Click OK to return to Windows Explorer.

Troubleshoot PI AF collectives
Use the topics in this section to troubleshoot issues with PI AF collectives.

Topics in this section


• Status details indicate no configured subscriber
• PI AF collective creation fails due to login failure
• Snapshot creation fails due to access error
• PI AF collective cannot be created when SQL Server Agent is not running

Status details indicate no configured subscriber


This message indicates no secondary server has been configured for replication. If a secondary
server has already been added to the collective, the error could indicate there is a
communication problem between the primary PI AF server and secondary server, or between
the secondary PI AF server and the secondary PI AF SQL database.
If the failure was due to a problem between the primary and secondary PI AF server, review the
PI AF event log on the secondary server for possible causes of the error. Verify the user account
used in AF Collective Manager has the proper access to the PI AF server.
If the failure was due to a problem between the secondary PI AF server and the secondary PI AF SQL database, review the event log on the secondary PI AF SQL database computer for possible causes of the error. Verify the user account used in AF Collective Manager has the proper access to the PI AF SQL database.

PI AF collective creation fails due to login failure


When creating a collective, the Create New Collective – Finishing window displays the following
message in the top section:
Login failed for user ‘[DOMAIN]\[UserName]’.

This message indicates that the logged-on user is unable to access one of the servers included in the collective, most likely because the logged-on user does not have the correct permissions on the primary PI AF SQL database computer.
Review the Application event logs on the PI AF server and PI AF SQL database computers,
beginning with the primary PI AF server, to determine which computer is receiving the
connection error.
Be sure that the login account is given sysadmin privileges to SQL Server on the AF SQL
database computer.
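For example, a minimal T-SQL sketch for granting that access, with CONTOSO\afadmin as a placeholder for the collective creator's account:

-- Sketch only; CONTOSO\afadmin is a placeholder for the collective creator.
USE master;
CREATE LOGIN [CONTOSO\afadmin] FROM WINDOWS;
EXEC sp_addsrvrolemember 'CONTOSO\afadmin', 'sysadmin';
-- After the collective is created, this elevated access can be removed.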

Snapshot creation fails due to access error


During creation of a PI AF collective, the Create New Collective – Finishing window displays the
following message in the middle section:
Current SyncStatus(Snapshot not ready).

In the SnapShot status row (the first row in the bottom section), the message displays:
Access to the path '[..\repldata\...]' is denied.

This message indicates that the SQL Server Agent service account does not have Write access
to the \repldata folder for the SQL Server instance into which the primary PI AF SQL
database was installed. See Configure folder permissions on the PI AF collective primary
server.
After setting the proper security permissions on the \repldata folder, exit the Create New
Collective – Finishing window. A message displays, indicating the primary server’s replication
has not finished.
Click OK and return to the Collective tab in the AF Server Properties window. Delete the
collective, then recreate the collective, and the snapshot is created correctly.

PI AF collective cannot be created when SQL Server Agent is not running

You attempt to create a collective by right-clicking a PI AF server in AF Collective Manager, and
selecting Create Collective.
If the SQL Server Agent service for the selected PI AF server is not running, a message displays,
indicating the SQL Server Agent is not running on the PI AF SQL database computer.
Click OK to return to the list of PI AF servers. Start the SQL Server Agent service on the primary
server, then create the new collective.
You attempt to create a collective by right-clicking in AF Collective Manager, and an error
window opens, along with the Create New Collective – Finishing window, indicating:
SQL Server Agent is not running.

Click OK to exit the error window. In the Create New Collective – Finishing window the same
message appears. Click Cancel to exit the window. The collective was not created. Start the SQL
Server Agent service on the primary server, then create the new collective.

Hardware load balancers and PI System products
Hardware load balancers provide advanced capabilities for ensuring your infrastructure is
highly available. You can set up a hardware load balancer to monitor PI Vision application
servers and AF servers and adjust load balancing accordingly. Load balancing provides several
benefits:

• Server offload
Functionality for handling high availability is moved from the server to an application
network infrastructure device.
Hardware load balancing is usually managed by an application delivery controller or an
application accelerator. These devices relieve the load on servers by handling such functions
as SSL termination and acceleration, TCP multiplexing, cookie encryption and decryption,
compression, caching, URI rewriting, and application security.

• Highly configurable failover and failback
Gives you precise control of when a server is placed into or taken out of service.

• Alerts
You can configure the load balancer to notify you when a server is placed into or taken out
of service.

• Change control
You can take a server out of service for maintenance without affecting the availability of
your application.

Configure your hardware load balancer to monitor your system


There are various methods to assess the functioning of your system. You can configure your
hardware load balancer to use some of these methods to monitor the system and make
adjustments based on that information.
Methods for monitoring PI Vision
ping
  Description: Basic check to determine if a server is available.
  Monitors: Server available; operating system functioning; network functioning.
  Usage: Use ping with the server name or IP address.

telnet
  Description: Determines if a server is available by attempting a connection over TCP/IP.
  Monitors: Server available; firewall rules open; Internet Information Services (IIS) responding.
  Usage: Telnet to port 80 or port 443 to see if you can successfully connect.

HTTP status code 200
  Description: If you successfully access content on a server that is running IIS, a status code 200 is returned.
  Monitors: Server available; IIS responding; web content being returned.
  Usage: Create an http or https connection to the PI Vision application server and verify status code 200 is returned. See Check TCP response for HTTP status code 200 for more information.

HTTP content
  Description: Parameters in HTTP content can provide information that determines if the page was successfully accessed or an error occurred.
  Monitors: Server available; IIS responding; PI Vision service running; SQL Server running.
  Usage: Create an http or https request to a test PI Vision display, and verify that the Title tag contains the display name. See Check HTTP content to verify PI Vision application server and SQL Server availability for more information.

Methods for monitoring AF


ping
  Description: Basic check to determine if a server is available.
  Monitors: Server available; operating system functioning; network functioning.
  Usage: Use ping with the server name or IP address.

telnet
  Description: Basic check to see if the port is responding.
  Monitors: Verifies that the AF service is running, but because there is no response on port 5457, does not verify that the service is functioning correctly.
  Usage: Telnet to port 5457.

HTTP status code 200
  Description: If you successfully access content on a server that is running IIS, a status code 200 is returned.
  Monitors: Server available; IIS responding; web content being returned.
  Usage: Create an http or https connection to an AF server and verify status code 200 is returned. See Check TCP response for HTTP status code 200 for more information.

AF Health Check counter
  Description: The AF server includes a Windows PerfMon counter called AF Health Check. Load balancers typically do not read this counter directly, but you can access the information it provides. This is the preferred method to check the availability of the AF application service and SQL Server.
  Monitors: Server is available; IIS is running; AF service is running; SQL Server is running.
  Usage: Allows you to monitor the AF server like a typical web server. See Monitor the AF Health Check counter for more information.
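Where telnet is not available, the port check can also be run manually in PowerShell on Windows Server 2012 or later. This is a sketch only; afserver01 is a placeholder for your AF server name.

# Sketch only; afserver01 is a placeholder for your AF server name.
# Returns True when a TCP connection to port 5457 can be opened, False otherwise.
Test-NetConnection -ComputerName afserver01 -Port 5457 -InformationLevel Quiet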

Check TCP response for HTTP status code 200


If you successfully access content on a server that is running IIS, IIS will return a status code of
200 as part of the TCP response. Therefore, to monitor if a server is accessible and IIS is
running, you can configure your hardware load balancer to periodically access the server and
check the TCP response for this status code. IIS also records the status code in its log.
This check does not verify the availability of the AF application service or the PI Vision service.
Note:
IIS is not installed by default when you install AF; therefore, to use this method with AF, you must install IIS manually. If you use the Acknowledgement Web Page functionality of PI Notifications, IIS is installed automatically.
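Load balancers implement this check with their own monitor configuration, but you can spot-check it manually. The following PowerShell sketch (webServer is a placeholder) reports the status code returned by IIS:

# Sketch only; webServer is a placeholder for your PI Vision or AF web server.
# Invoke-WebRequest throws on non-success status codes, so catch and report.
try {
    $response = Invoke-WebRequest -Uri "http://webServer/" -UseBasicParsing
    "Status code: $($response.StatusCode)"   # 200 means IIS returned content
} catch {
    "Request failed: $($_.Exception.Message)"
}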

Check HTTP content to verify PI Vision application server and SQL Server
availability
Parameters in HTTP content can provide useful information. For example, the HTML Title tag
that is returned when you access a URL can show you if the page was successfully accessed or
an error occurred.
To monitor if the PI Vision application server and SQL Server are available:

Procedure
1. Create a test display.
2. Periodically call the test display directly by name, for example: http://webServer/pivision/#/Displays/7/YourTestDisplay, where webServer is the name of your PI Vision web server. If SQL Server is available, you should see the name of the display returned in the Title tag of the HTML code. If SQL Server is down, an error page is shown instead.
To examine the contents of the Title tag:

a. Choose Tools > F12 developer tools.


b. In the HTML tab at lower left, expand the HTML and head elements to find the Title tag.
3. Configure your load balancer to access the display URL, for example, every 10 seconds.
Parse the text in the Title tag to find the name of the display. For example, if the text is not
correctly returned two times out of three, you could set a flag showing that there is a
problem, which could be with SQL Server or the PI Vision application service.
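A hardware load balancer performs this monitor with its own configuration, but the logic can be prototyped in PowerShell. In this sketch the URL and display name are the placeholders used above, and it assumes the display name appears in the title element of the raw HTML returned by the server:

# Sketch only; webServer and YourTestDisplay are the placeholders used above.
$uri = "http://webServer/pivision/#/Displays/7/YourTestDisplay"
try {
    $page = Invoke-WebRequest -Uri $uri -UseBasicParsing
    # Look for the display name inside the <title> element of the returned HTML
    if ($page.Content -match "<title>([^<]*)</title>" -and $Matches[1] -like "*YourTestDisplay*") {
        "OK: display title returned"
    } else {
        "PROBLEM: title missing or incorrect (PI Vision service or SQL Server may be down)"
    }
} catch {
    "PROBLEM: request failed: $($_.Exception.Message)"
}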

Monitor the AF Health Check counter


You can use a Windows PerfMon counter called AF Health Check to determine if both the AF
application service and the SQL Server are running and responding. When they are responding,
the counter returns a value of 1. Load balancers typically do not read this counter directly, but
you can access the information it provides.

Note:
This procedure requires the Windows Performance Monitor (perfmon) utility. It does not
require the PI Performance Monitor interface.

Procedure
1. Run Internet Information Services (IIS) on the AF server.
2. Create a page that will show the value of the perfmon counter (see the code sample below).
3. Configure the hardware load balancer to read the content from this web page to determine
the availability of AF.

Sample code to show the value of the perfmon counter


The following sample checks the value of the AF Health Check performance counter and then
displays the status of the AF server on a web page.
<%@ Import Namespace = "System.Diagnostics" %>

<script runat="server">
sub Page_Load(sender as Object, e as EventArgs)
    ' Sample the AF Health Check counter once so the tests and display agree
    Dim perfAFHealth as New PerformanceCounter("PI AF Server", "Health")
    Dim healthValue as Single = perfAFHealth.NextValue()

    If healthValue = 0 Then
        lblperfAFHealth.Text = "DOWN"
    ElseIf healthValue = 1 Then
        lblperfAFHealth.Text = "UP"
    Else
        lblperfAFHealth.Text = "INVALID"
    End If
end sub
</script>

<!DOCTYPE html>
<html>
<head>
<title>AF Health Check</title>
<meta http-equiv="refresh" content="5" />
</head>
<body>

<form id="Form1" runat="server">


AF Health Status :
<asp:Label id="lblperfAFHealth" runat="server" />
</form>

</body>
</html>

Use PowerShell scripts to monitor AF Health Check counter


If you are unable to run Internet Information Services (IIS) on your system, you can use Windows PowerShell scripts to create a listener that monitors the AF Server Health counter, as demonstrated by the following example code fragments.

Procedure
1. Create a listener. For example:
# $port is the TCP port your load balancer will probe (defined elsewhere in the script)
$endpoint = new-object System.Net.IPEndPoint([system.net.ipaddress]::any, $port)
$listener = new-object System.Net.Sockets.TcpListener $endpoint
$listener.start()
2. Read the AF Server Health tag. For example:
$counter_path = "\PI AF Server\Health"
$counters_value = get-counter -counter $counter_path |
    Select-Object -ExpandProperty CounterSamples |
    Select-Object CookedValue
3. Parse the response. For example:
# $up and $down hold the response strings your load balancer expects (defined elsewhere)
$Pass = Select-String -inputObject $counters_value -pattern "CookedValue=1" -quiet
If ($Pass) {$responseString = $up} else {$responseString = $down}
4. Send the response to the listener. For example:
# $stream is the network stream of an accepted client connection; $CR and $LF
# hold carriage-return and line-feed bytes (defined elsewhere in the script)
$sendBytes = [System.Text.Encoding]::ASCII.GetBytes($responseString) + $CR + $LF
$stream.Write($sendBytes,0,$sendBytes.Length)

Maintain server affinity to the PI Vision server or AF server


It is recommended to configure your hardware load balancer to maintain server affinity to the
PI Vision application server or to the AF server. When you maintain server affinity, each client
returns to the same server, and does not bounce between the servers in the load-balancing
pool.
The first time the hardware load balancer encounters a client, it allocates the client to a server
depending on current load. The hardware load balancer issues a cookie to the client that
records to which server the client was allocated. On subsequent visits, the hardware load
balancer "asks" the client which server it should go to by checking the cookie.
Server affinity provides the best performance, because the server cache has the most recent
content, whereas a new server connection would need to load content. Additionally, switching servers would require signing up for updates on the new server, and the old server would be left with disconnected sessions that need to time out.
PI Vision and AF do not employ a user session, so the transactions are stateless. Therefore,
session affinity does not play a significant role in performance.

Recommendations for using AF with a hardware load balancer


If you are using a hardware load balancer with AF servers, OSIsoft recommends you set up
your system as follows:
• Run IIS on each AF server (this enables you to perform the HTTP status code 200 check; see Check TCP response for HTTP status code 200)
• Monitor the AF Health Check PerfMon counter (for more information, see Monitor the AF
Health Check counter)
• Balance traffic between the two AF servers

Note that providing more than two AF servers can help with high availability, but does not
increase scalability.
• Use a common SQL Server for the AF servers to share

Simple network load balancing configuration

Hardware and software requirements


To support this configuration, you need to ensure that:
• Your hardware load balancer supports the configuration whereby AF server traffic flows
through port 5457, and you monitor the site on a different port, for example, port 80.

Ideally, your hardware load balancer monitors the AF Health Check on one port and if the
AF service is up and running, it load balances the traffic on port 5457.
• Clients connect to AF servers by using the virtual IP address (VIP) of the hardware load
balancer.
Clients should not connect directly to AF servers without going through the load balancer,
because the PIFD database will return the same AFSystemID value for each AF server, which
causes errors for the AF SDK.

Failure handling
In the event of a failure, configure the following actions to occur:
• If one AF server is taken out of service, direct traffic to the other AF server.
• If the SQL Server fails, take both AF servers out of service.
• If no AF server is available, inform users that the site is down or under maintenance.

Load balancing with mirrored SQL Servers and with PI Notifications


It is common practice to deploy AF servers so that each server uses a separate SQL Server, and
mirroring is set up between the SQL Servers, as shown in the figure below. The mirrored SQL
Server is initially configured as read only. If the primary SQL Server fails, the "witness" server
can automatically place the secondary SQL Server into write mode.
Note also that all load-balanced AF servers must connect to the same SQL Server. Upon failover,
you can use the witness server to ensure that all AF servers successfully fail over to the same
SQL Server.

[Figure: Typical configuration using SQL mirroring]

PI Notifications should not be configured to run under the load balancer. PI Notifications has its
own heartbeat check to determine which servers are active, making it unnecessary to run under a load balancer. You can run PI Notifications on a separate server, or on the AF servers.
Only one PI Notifications server can be active at a time; the active server is registered in the AF
database on the SQL Server.

PI Notifications Acknowledgment Web Page (AWP)


You can configure the PI Notifications Acknowledgment Web Page (AWP) to reside on the AF server and load balance it with the AF server, so that the AWP fails over with the AF server. The AF server should be running IIS, and you should configure the AWP to use the virtual IP address (VIP) of the AF server and the VIP of the PI Notifications primary server.

Technical support and other resources
For technical assistance, contact OSIsoft Technical Support at +1 510-297-5828 or through the
OSIsoft Customer Portal Contact Us page (https://customers.osisoft.com/s/contactus). The
Contact Us page offers additional contact options for customers outside of the United States.
When you contact OSIsoft Technical Support, be prepared to provide this information:
• Product name, version, and build numbers
• Details about your computer platform (CPU type, operating system, and version number)
• Time that the difficulty started
• Log files at that time
• Details of any environment changes prior to the start of the issue
• Summary of the issue, including any relevant log files during the time the issue occurred
To ask questions of others who use OSIsoft software, join the OSIsoft user community,
PI Square (https://pisquare.osisoft.com). Members of the community can request advice and
share ideas about the PI System. The PI Developers Club space within PI Square offers
resources to help you with the programming and integration of OSIsoft products.
