Administration Guide
• Reliability
With high availability, data has multiple paths from the source to the end user. If one
component fails, data can traverse an alternate path. Therefore, you can eliminate single
points of failure, protect against potential data loss, ensure access to current data, and
decrease downtime.
System upgrades, such as new server hardware, can be implemented during normal hours.
The server can be configured, then introduced into the collective. A collective is a set of PI
Data Archive servers that act as the logical PI Data Archive server for your system. From
there, it can be fully tested and qualified before being made available to users. Because the PI Data Archive servers in a collective do not have to share machine or operating system specifications, you can introduce new hardware, such as 64-bit machines.
Unplanned outages can be dealt with during normal working hours. Recovering a system during the weekend is extremely disruptive, and the resources required for an efficient fix may not be available. Outages during normal working hours can also be addressed on a schedule, allowing activities in progress to be completed rather than disrupted and wasted.
• Maintainability
PI Data Archive server maintenance is easier because you can bring down a collective
member with no impact to the other collective members. PI can be more easily patched or
upgraded without having to schedule downtime. With high availability, you can perform
scheduled maintenance with minimal impact on your user applications. You can
troubleshoot a secondary server offline, giving you time to analyze and diagnose problems
without adversely affecting users.
• Workload balancing
You can automatically direct client requests to the server with the most workload capacity.
Client applications can start on any server. Applications are not required to be aware of any
particular server. You can distribute connections and workloads among servers, reducing
demands on individual servers.
• Security
You can configure all components in a highly available PI system to be secure. Network
traffic is secure between primary and secondary servers, and traffic is secure between client
applications and all servers.
collection or not having history recovery functionality available. The data is often lost forever.
All customers want to avoid data loss.
Lack of data availability means that PI data is not available for consumption by a display,
report, or application at that time (but will be available for consumption at a later time). For
example, if a non-high-availability PI Data Archive server is down, the data is not available to PI ProcessBook, but the PI Interfaces would still be collecting and buffering data (to forward later to the PI Data Archive server), ensuring that there is no data loss.
The following table summarizes some of the differences between data loss and data
availability.
Who is concerned?
• Data loss: Everyone is concerned about data loss.
• Data availability: Many are concerned about data availability.
Drivers for concern
• Data loss: No one ever wants to lose data. Loss of data has potential regulatory issues, and it may impact the perceived integrity of a controlled or regulated system.
• Data availability: Availability concerns are driven by your use of the data and how much it is integrated into your business processes.
Questions to ask
• Data loss: If the PI Interface, a PI Data Archive server, or other PI System component goes down, will I lose data?
• Data availability: If the PI Data Archive server goes down, can my end users wait [4 hours] to see their data? What is the business impact of this?
Risk mitigation technologies
• Data loss: Interface buffering, interface failover (redundancy), interface history recovery.
• Data availability: Interface failover (redundancy); PI System component redundancy and high availability (PI, Asset Framework, ACE, Notifications, etc.).
For distributed systems with large workloads and PI point counts, and with multiple PI Data
Archive servers or PI Data Archive collectives that link to a central PI AF database, OSIsoft
recommends that you install PI Data Archive collectives and Microsoft SQL Server on separate,
redundant computers to achieve the best level of performance and scalability.
High availability capabilities are available for all components in the PI System:
• Data sources
Data sources can be configured to support redundant, replicated nodes.
• Interfaces
A primary interface node and one or more secondary interface nodes ensure failover so
that time-series data reaches PI Data Archive even if one interface fails. Buffering ensures
that identical time-series data reaches each PI Data Archive server in a collective. When one
interface is unavailable, the redundant interface automatically starts collecting, buffering,
and sending data to PI Data Archive.
• Asset Framework
To implement HA for PI AF, you can configure multiple instances of PI AF application service
in a Windows Failover Cluster or Network Load Balancer deployment. In addition, you can
configure Microsoft SQL Servers in an AlwaysOn Availability Group, Mirrored SQL Server
System, or as a Failover Cluster. See the PI Server topic "PI AF server installation and
upgrade" in Live Library (https://livelibrary.osisoft.com).
• PI Data Access
The PI Data Access products PI OLEDB Enterprise, PI OLEDB Provider and PI Web Services
support high availability. PI OLEDB Enterprise supports connection failover to servers in a
PI collective when used with PI Asset Framework 2010 and later.
PI Web Services retrieves data from either the primary or a secondary member of the PI collective,
using connection information from its host machine. PI OLEDB Enterprise and PI OLEDB
Provider clients connect to collectives according to connection preference settings; you can
also use PI System utilities to select another server in the collective.
If a server in the collective becomes unavailable, SQL statements that are in progress might
fail. This occurs if a PI OLEDB Enterprise or PI OLEDB Provider client cannot connect to an
unavailable server, or reconnect to another collective member, within the time set for the
Command Timeout. To avoid this timeout, increase the Command Timeout property in the
OLE DB client, which is by default set to 60 seconds. For more information, see the user
guides for PI OLEDB Enterprise or PI OLEDB Provider, which are available on the OSIsoft
Customer Portal (https://my.osisoft.com/).
• Client applications
To implement high availability at the PI client layer, configure clients to connect to any
server in a PI Data Archive collective and switch to another server if necessary, without
requiring any user intervention to fail over from one server to another. Clients can be
configured to support redundant, replicated nodes.
OSIsoft recommends that SQL Server Standard and SQL Server Enterprise be used for most PI
Server installations, but you can consider using SQL Server Express for systems with few assets
(10,000 assets or less) and low-to-moderate workloads (25,000 PI points or fewer). However,
because SQL Server Express imposes limitations on CPU, memory, and disk usage, you must
also factor in object sizes, concurrent load, and usage patterns of PI AF clients.
To assess whether you can use SQL Server Express, see the OSIsoft Knowledge Base article
KB00309 - Is the SQL Server Express edition sufficient for running PI AF 2.x (https://
customers.osisoft.com/s/knowledgearticle?knowledgeArticleUrl=KB00309).
Note:
If you use SQL Server Standard or SQL Server Enterprise, you should install it on a
different computer from PI Data Archive to ensure that the performance of PI Data
Archive is not degraded.
secondary PI Data Archive server send configuration information to interfaces at the remote
center.
You can also use the PI to PI interface to aggregate data between PI Data Archive collectives.
For example, you might have a collective that collects data at each plant, and have a separate
collective at your headquarters that gathers key indicators from the plants.
PI AF architecture
PI AF uses a multi-tiered architecture. A minimal system consists of three tiers: the PI AF client, the PI AF application service, and the PI AF SQL Server database.
PI AF deployment options
Depending on your needs and goals, you have various options for deploying PI AF server,
ranging from a simple deployment that uses one computer to a complex mirrored collective
that uses multiple computers. Carefully consider which deployment option is best for your
needs and resource constraints before installation.
Simple PI AF deployment
For systems with few assets (10,000 or less) and low to moderate workloads (25,000 PI points
or fewer), OSIsoft recommends that you follow these guidelines:
• If using SQL Server Express, install PI Data Archive, PI AF server, and SQL Server on the
same computer.
• If using SQL Server Standard or Enterprise, consider installing SQL Server on a different
computer from the PI Data Archive computer. Installing SQL Server Standard or Enterprise
edition on the same computer as the PI Data Archive computer can significantly degrade PI
Data Archive performance.
Note:
Review the PI AF Release Notes for a current list of SQL Server Versions and Editions that
are supported for the PI AF Server.
Possible deployment scenarios include:
• Deploy the PI AF Application Service and PI AF SQL Server database on the same computer,
and deploy a PI AF client on the same computer or on a different computer.
• Deploy the PI AF Application Service and PI AF SQL Server database on separate computers,
and deploy a PI AF client on one of these computers or on a different computer.
• Deploy the PI AF Application Service on multiple computers that point to a single PI AF SQL
Server database, and deploy a network load balancer between the PI AF client and the PI AF
Application Services.
Deployment considerations
Depending on your needs and goals, you have various options for deploying PI AF server,
ranging from a simple deployment that uses one computer to a complex mirrored collective
that uses multiple computers. Carefully consider which deployment option is best for your
needs and resource constraints before installation.
The main components of a PI Server are PI AF and PI Data Archive. Microsoft SQL Server is not actually part of the PI Server, but it is a dependency. OSIsoft recommends that you use these guidelines to deploy PI AF within a PI Server:
• If the PI Data Archive host computer is heavily loaded, move SQL Server to a different
computer.
• It is acceptable to use a shared SQL Server that contains databases for other non-OSIsoft
applications. Often these are already running on a cluster.
• Base hardware sizing on workload, not on PI AF object count, because the two do not
correlate. RAM is the most important hardware sizing consideration for implementing PI
AF, mainly because SQL Server tends to consume a considerable amount of system
resources. This consideration applies to deployments where PI AF server and SQL Server
are on the same computer.
• As I/O workload increases, it is important to consider whether the disk subsystem can
handle the I/O count as well as the storage requirements. Specifications to consider
include the number of disk spindles, solid-state drives, and so on. For very large PI AF
systems, where you are planning on more than 10,000 assets and moderate-to-high
workloads and point counts (more than 25,000 PI points), use drive arrays that can
sustain at least 3,000 random read I/O operations per second (IOPS).
• Adding SQL Server RAM improves SQL Server read and write performance and is the
variable that most affects PI AF performance. In particular, for a very large PI AF
system, specify SQL Server RAM equal to 60-65 percent of the database size.
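As a rough illustration of the sizing guideline above, the sketch below computes the recommended RAM range for a hypothetical database size. The 60-65 percent figure comes from this guide; the 200 GB database size is an assumed example, not a recommendation.

```python
# Sketch: SQL Server RAM sizing for a very large PI AF system,
# using the 60-65 percent of database size guideline from this guide.
# The 200 GB database size below is a hypothetical example.

def recommended_ram_gb(db_size_gb: float) -> tuple[float, float]:
    """Return the (low, high) recommended SQL Server RAM in GB."""
    return (db_size_gb * 0.60, db_size_gb * 0.65)

low, high = recommended_ram_gb(200)
print(f"Recommended SQL Server RAM: {low:.0f}-{high:.0f} GB")
```

For a 200 GB PI AF database, this yields a 120-130 GB RAM range.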
Is any specific collation required?
Yes. The collation is required to be case insensitive. Although the installation procedure does not specify any particular collation, SQL_Latin1_General_CP1_CI_AS has had the most testing.
Does PI AF expect SQL Server to listen on a specific port?
No.
Is MS-DTC required?
No.
Is it necessary to enable remote database connections?
Yes, if the PI AF Application Service is not installed on the database server system.
PI AF high-availability solutions
To implement high availability for PI AF, OSIsoft recommends an approach based on network
load balancing and Microsoft high availability technologies. However, there are many other
possible solutions to achieve high availability that you can choose based on your own
requirements.
For detailed information about high-availability options, refer to the OSIsoft Knowledge Base
article High Availability (HA) options for PI Asset Framework (AF) (https://
customers.osisoft.com/s/knowledgearticle?knowledgeArticleUrl=KB00634). That article
provides a list of the advantages and disadvantages of various high availability technologies.
• High availability using an Always On availability group and a network load balancer:
◦ Deploy the PI AF application service on multiple computers and the PI AF SQL Server
database on another set of two or more computers. The PI AF application service should
be configured to run under a domain account.
◦ Configure the PI AF SQL Server database computers as an Always On availability group.
◦ Set up a network load balancer that manages all communication between PI AF clients
and the PI AF application service tier.
Note:
OSIsoft assumes you are familiar with the configuration and operation of network load
balancers, Windows failover clusters, and the cluster administration tools provided with
the Windows operating system. For an overview of Microsoft high availability solutions,
see the Microsoft article Business continuity and database recovery - SQL Server
(https://docs.microsoft.com/en-us/sql/database-engine/sql-server-business-continuity-
dr?view=sql-server-2017).
• High availability using Clustered SQL Servers and a network load balancer:
◦ Deploy the PI AF application service on multiple computers and the PI AF SQL Server
database on another set of two or more computers. The PI AF application service should
be configured to run under a domain account.
◦ Configure the PI AF SQL Server database computers as a Clustered SQL Server.
◦ Point all instances of the PI AF application service toward the Clustered SQL Server.
◦ Deploy a network load balancer between the PI AF client and the PI AF application
service.
◦ Install the PI AF client on separate computers. Direct the PI AF clients toward the
network load balancer.
• High availability using only Windows Failover Clusters:
◦ Deploy the PI AF application service on multiple computers and the PI AF SQL Server
database on another set of two or more computers. The PI AF application service should
be configured to run under a domain account.
◦ Set up a Windows Failover Cluster for all instances of the PI AF application service and
another Windows Failover Cluster for the Clustered SQL Servers. Then create a SQL
Server Cluster for the PI AF SQL Server database computers.
◦ Install the PI AF client on separate computers. Direct the PI AF clients toward the name
of the Windows Failover Cluster used for the PI AF application service.
• High availability using Windows Failover Clusters and a Microsoft Always On availability
group but no load balancer:
◦ Deploy the PI AF application service on multiple computers and the PI AF SQL Server
database on another set of two or more computers. The PI AF application service should
be configured to run under a domain account.
◦ Configure all instances of the PI AF application service as a Windows Failover Cluster.
◦ Configure the PI AF SQL Server databases as a Microsoft Always On availability group.
◦ Install the PI AF client on separate computers. Direct the PI AF clients toward the PI AF
application Service configured as a Windows Failover Cluster.
• High availability using SQL Server mirroring and an optional load balancer:
◦ Deploy the PI AF application service and the PI AF SQL Server database on separate
computers.
◦ Set up the PI AF SQL Server database on a mirrored SQL Server.
Note:
Although SQL Server mirroring is still available, Microsoft has deprecated that
functionality. For more information about deprecated capabilities, see the Microsoft
article Deprecated Database Engine Features in SQL Server 2016 (https://
docs.microsoft.com/en-us/sql/database-engine/deprecated-database-engine-
features-in-sql-server-2016?view=sql-server-2017).
◦ Deploy the PI AF client on a different computer. Optionally, you can deploy a network
load balancer between the PI AF client and the PI AF application service.
In a standard configuration, where all collective members are in the same security
environment and you are using AD, you configure security on the collective’s primary server
just as you would configure a single PI Data Archive server. The collective’s PI Data Archive
replication service copies the configuration to all secondary servers in the collective. This
replication process requires that all collective members be in a single domain or in fully trusted domains.
You must use a custom security configuration if:
• Collective members are not contained in a homogeneous security environment, such as
when members are on different non-trusted domains or on no domain.
• You do not have access to AD and must configure authentication through local Windows
security on the primary and secondary servers.
Custom configuration in collective servers can affect PI applications and users when accessing
PI Data Archive information. If the same mappings are not available on all collective members,
applications might fail to connect or might receive different permissions on failovers. OSIsoft
recommends avoiding custom configurations whenever possible. Custom configurations are
more complex. To set up and maintain a custom configuration, you must consider who needs
access to each collective member, and who will need to fail over. Visit the OSIsoft Customer
Portal (https://my.osisoft.com/) if you need help.
You do not need this access if you are creating or modifying the collective manually.
Note:
These access permissions are valid for PI Data Archive version 3.4.380 and later. Earlier versions do not include the PIBACKUP entry in database security, so piadmin access is required for PI Collective Manager in those versions. PI Data Archive collectives were introduced in version 3.4.375.
Procedure
1. Click Start > All Programs > PI System > PI System Management Tools.
2. Under Collectives and Servers, select the PI Data Archive server where you want to enable
the tuning parameter.
3. Under System Management Tools, select Operation > Tuning Parameters.
4. Click the New Parameter button.
5. In Parameter name, type:
Base_AllowSIDLookupFailureForMapping
6. In Value, type:
1
7. Click OK.
8. Restart the server’s PI Base Subsystem.
Procedure
1. Click Start > All Programs > PI System > PI System Management Tools.
2. Under Collectives and Servers, select the secondary server that needs the security mapping.
3. Under System Management Tools, select Security > Mappings and Trusts.
4. Find the SID on the Mappings tab.
◦ If a mapping based on the desired Windows group already exists:
▪ Right-click the mapping and choose Properties.
▪ View the Windows SID on the Mapping Properties dialog box.
◦
If a mapping based on the desired Windows group does not exist:
▪ Click New to open the Add New Mapping dialog box.
▪ In Windows Account, specify the Windows group.
▪ View the SID in Windows SID.
▪ Click Cancel.
Procedure
1. At a command prompt, navigate to the ..\PI\adm directory.
2. Type: piconfig
3. Update the PI Identity Mapping table (PIIDENTMAP). You must set at least three attributes:
◦ IdentMap
Name of the PI identity mapping
◦ Principal
SID of the Windows group you want to map to the specified PI identity
◦ PIIdent
Name of the PI identity to which the Windows group will be mapped
You can also specify other table attributes, if desired.
For example, to create a new mapping called My_Mapping that maps the Windows group
specified by SID S-1-5-21-1234567890-1234567890-1234567890-12345 to the PI
group, piadmins, you would enter the following commands at the piconfig prompts:
@table PIIdentmap
@mode create
@istr IdentMap,Principal,PIIdent
My_Mapping,S-1-5-21-1234567890-1234567890-1234567890-12345,piadmins
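Because these piconfig commands are often run in bulk from a command file, a small script can generate that file for several mappings at once. This is a sketch: the @table/@mode/@istr directives are taken from the example above, while the mapping names, SIDs, and identities below are hypothetical.

```python
# Sketch: generate a piconfig command file that creates several
# PI identity mappings at once. The directives match the example
# in this guide; mapping names, SIDs, and identities are hypothetical.

def build_identmap_commands(mappings):
    """mappings: list of (ident_map, principal_sid, pi_ident) tuples."""
    lines = [
        "@table PIIdentmap",
        "@mode create",
        "@istr IdentMap,Principal,PIIdent",
    ]
    for ident_map, sid, pi_ident in mappings:
        lines.append(f"{ident_map},{sid},{pi_ident}")
    return "\n".join(lines) + "\n"

script = build_identmap_commands([
    ("Ops_Mapping", "S-1-5-21-1111111111-2222222222-3333333333-1001", "piadmins"),
    ("Eng_Mapping", "S-1-5-21-1111111111-2222222222-3333333333-1002", "PIEngineers"),
])
print(script)
```

Save the output to a text file and run it with piconfig, as in the piconfig procedures elsewhere in this guide (piconfig < filename).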
PIIDENTMAP attributes
The following table lists all attributes in the PIIDENTMAP table. You can specify any of these
attributes when you create a mapping.
Attribute Description
IdentMap The name of the PI mapping. This must be unique,
but is not case-sensitive. This field is required to
create a new mapping.
Desc Optional text describing the mapping. There are no
restrictions on the contents of this field.
Flags Bit flags that specify optional behavior for the
mapping. There are two options:
• 0x01 = Mapping is inactive and will not be used
during authentication.
• 0x00 = (Default value). Mapping is active and
will be used during authentication after initial
setup.
IdentMapID A unique integer that corresponds to the identity
mapping. The system will automatically generate
the value upon creation. Value will not change for
the life of the identity mapping.
PIIdent Name of the PI identity to which the security
principal specified by Principal will be mapped.
The contents of this field must match Ident in an
existing entry in the PIIDENT table. The target
identity must not be flagged as Disabled or
MappingDisabled. Multiple IdentMap entries
can map to the same PIIdent entry.
This field is required to create a new identity
mapping.
Principal The name of the security principal (domain user or
group) that is to be mapped to the identity
named in PIIdent.
For principals defined in an Active Directory
domain, the format of input to this field can be any
of the following:
• Fully qualified account name (my_domain
\principal_name)
• Fully qualified DNS name (my_domain.com
\principal_name)
• User principal name (UPN)
(principal_name@my_domain.com)
• SID (S-1-5-21-nnnnnnnnnn-…-nnnn).
For security principals defined as local users or groups, only the fully qualified account name (computer_name\principal_name) or SID formats may be used. Output from piconfig for this field will always be in SID format, regardless of which input format was used.
This field is required to create a new identity
mapping.
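The SID input format described above can be sanity-checked before it is entered into a mapping. The sketch below validates the general S-1-... shape with a regular expression; the pattern is a general assumption about SID string syntax, not an OSIsoft-documented rule.

```python
import re

# Sketch: rough validation of a Windows SID string (S-1-<authority>-<subauths...>)
# before using it as the Principal of a PI identity mapping.
# The pattern is an assumption about SID syntax, not an OSIsoft rule.
SID_PATTERN = re.compile(r"^S-1-\d+(-\d+)+$")

def looks_like_sid(value: str) -> bool:
    return bool(SID_PATTERN.match(value))

print(looks_like_sid("S-1-5-21-1234567890-1234567890-1234567890-12345"))  # True
print(looks_like_sid("my_domain\\principal_name"))                        # False
```

A value that fails this check can still be entered in one of the account-name formats listed above; piconfig resolves it and outputs the SID form.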
Note:
The PI Performance Monitor is not installed by default with the PI Data Archive server.
For more information about installing the PI Performance Monitor interface, see
"Overview of PI interfaces" in Live Library (https://livelibrary.osisoft.com).
Procedure
1. Open PI ICU.
2. Import the interface.
a. Choose Interface > New Windows Interface Instance from BAT File.
b. Navigate to the PIPerfMon directory.
c. Select PIPerfMon.bat_new and click Open.
3. On the General page:
a. Set API Hostname to the host server. (Do not set to localhost.)
b. Set Point Source to a unique string.
Note:
Each installed PerfMon interface must have a unique Point Source.
c. Click Apply.
4. On the Service page, click Create to create a service for the interface.
Procedure
1. Open PI SMT on the primary server.
2. Under System Management Tools, select IT Points > Performance Counters.
3. Select the Build Tags tab.
4. In the counter list, expand PI Server Statistics and select the check boxes next to the PI
points you want to add:
◦ IsAvailable
◦ IsCommunicating
◦ IsInSync
◦ LastSyncRecordID
5. Under Build Tags, select Write tags to CSV File.
6. Click Create Tags.
7. Specify the directory and file name for the spreadsheet, and click Save.
8. Open the spreadsheet in Microsoft Excel.
9. Set the pointsource field to the value you set for the interface.
10. Create a copy of all the PI points for each secondary server.
11. Edit the copied points to create server-specific points.
Check the Tag, Descriptor, Exdesc, Location1, Location4, and Pointsource fields.
12. Save the spreadsheet.
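Steps 9 through 11 above can also be scripted. The sketch below duplicates each exported point row once per secondary server and gives each copy a server-specific tag name. The Tag and Pointsource column names come from this guide; the tag-naming convention, server names, and point source value are hypothetical.

```python
import csv, io

# Sketch: duplicate exported performance-counter point rows for each
# secondary server in a collective (steps 9-11 above). Column names
# Tag and Pointsource are from this guide; the naming convention and
# server names below are hypothetical.

def duplicate_for_secondaries(csv_text, secondaries, pointsource):
    """Return one row for the primary plus one copy per secondary server."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    out = []
    for row in rows:
        row = dict(row, Pointsource=pointsource)
        out.append(row)                      # row for the primary server
        for server in secondaries:           # one server-specific copy each
            copy = dict(row)
            copy["Tag"] = f'{row["Tag"]}.{server}'
            out.append(copy)
    return out

sample = "Tag,Pointsource\nPrimary.IsAvailable,OLD\n"
result = duplicate_for_secondaries(sample, ["Secondary1"], "PERF")
for r in result:
    print(r["Tag"], r["Pointsource"])
```

In practice you would also adjust the Descriptor, Exdesc, Location1, and Location4 fields mentioned in step 11 before saving the spreadsheet.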
Procedure
1. Open the Excel file where you created the performance points.
2. Under Data Server, select the primary server.
3. Click Select All.
4. Click Publish.
PI Builder creates the PI points on the primary PI Data Archive server, which replicates the
tags to the secondary servers in the collective.
Procedure
1. In PI ICU, select the Service page.
2. Click the Start interface service button.
A PI Data Archive collective does not synchronize archive shifts on different servers. Shifts will
occur at different times on each PI Data Archive server. This can increase the availability of
archive data. For example, if the shift takes a long time or fails on one PI Data Archive server,
other servers can still receive and retrieve data. However, before moving an archive from one
server to another server, you must reprocess the archive to change the start and end times to
match the destination.
Procedure
1. Log on to the primary server computer.
2. In Collective Manager, in the Collectives list, select the collective where you want to add a
member server.
3. Select Edit > Add Server to Collective to open the wizard.
4. In Server, select a server that you want to add to the collective as a secondary server. The
following options are available at the prompt to verify selections:
◦ You can choose to copy PI message logs into the PI\log directory. By default, the
message logs are not copied. Click Advanced Options to make this change.
◦ You can set an alternative directory for archive files on the secondary server. To do this,
click Advanced Options and under Member Servers, select the secondary server that you
want to set. The default value is the directory that stores archives on the primary server.
If you set a different directory, the replication process automatically registers archives to
this directory.
Note:
You cannot change the Advanced Options settings for a secondary server after you
add the server to the collective.
5. To add a server to the collective:
a. To the right of the server selection menu, click to open the PI Connection Manager
window.
b. Select Server > Add Server to open the Add Server window.
c. In the Network Node text field, enter the fully qualified domain name (FQDN) of the
server.
d. Enter a Default User Name.
e. Click OK.
f. Click Close.
Procedure
1. Log on to the primary PI Data Archive computer.
2. Click Start > All Programs > PI System > Collective Manager.
3. Under Collectives, select the collective you want to edit.
4. In the diagram of collective members, select the secondary server you want to remove.
5. Choose Edit > Remove Server from Collective.
6. Click Yes at the confirmation prompt.
Collective Manager removes the server from the collective and updates the display.
7. Clear the known servers table at each client connected to the PI Data Archive collective.
See Clear the known servers table.
Procedure
1. Log on to the computer of the primary server.
2. Open Collective Manager. Click Start > All Programs > PI System > Collective Manager.
3. Under Collectives, select the collective.
4. In the diagram of collective members, select the secondary server you want to reinitialize.
5. Choose Edit > Reinitialize Secondary Server.
6. Follow the wizard prompts to indicate which archives to copy to the secondary server, and
file locations.
The wizard stops the secondary server, backs up the primary server, copies that data to the
secondary server, and restarts the secondary server.
7. Click Finish.
• Firewall parameters
If networks change, you must change these non-replicated parameters at all members in the
PI Data Archive collective.
Procedure
1. Open a command window on the computer that hosts the secondary server.
2. Navigate to the ..\PI\adm directory.
3. Enter: piartool -sys -sync
Procedure
1. Prepare the new PI Data Archive machine.
a. Generate a license activation file for the new machine.
b. Install PI Data Archive software on the new machine.
2. Prepare the source PI Data Archive machine.
a. Force an archive shift.
b. Stop PI Data Archive.
3. Move files from the source PI Data Archive server to the target PI Data Archive server.
These files include current data files, queue files, and archive files.
4. Start the new PI Data Archive server.
Procedure
1. Review the buffering configuration at interfaces and servers and the host server
specification at each interface.
The configuration and specification must refer to the server that you want to promote.
See Buffering configuration when you add a PI Data Archive server to a collective and
additional steps for using interface failover and the Buffering topic "Upgrade to n-way
buffering for interfaces with API Buffer Server" in Live Library (https://
livelibrary.osisoft.com).
2. If applicable, shut down the existing source primary server.
a. Verify that no updates are pending for a secondary server. In Collective Manager, check
that LastSyncRecordID contains the same value at the primary server and all secondary
servers.
b. Shut down the primary server.
c. If the primary or independent PI AF server is installed on the same machine as this
primary PI Data Archive server, then start PI AF server.
3. Update the collective definition on the secondary server that you want to promote.
a. Open a command window on the secondary server that you want to promote.
b. Navigate to the ..\PI\adm directory.
c. To drop the primary server from the collective, type:
piartool -sys -drop OldPrimaryServerName
d. To promote the secondary server to primary server, type:
piartool -sys -promote SecondaryServerName
4. Synchronize the new primary server with the PI AF server (see Synchronize the new
primary PI Data Archive server with PI AF server). If you do not need to synchronize with
the PI AF Server, you must still restart the PI Base Subsystem service on the new primary
server.
5. Reinitialize any other secondary servers in the collective. See Reinitialize a secondary server
with Collective Manager.
6. Generate a new machine signature file and a new license activation file on the computer that
will host the new primary PI Data Archive server.
7. Copy the new license activation file to all members of the PI Data Archive collective.
8. Clear the known servers table at each client connected to the collective.
See Clear the known servers table.
c. If the aflink user is a domain account, then add the aflink user as a member of this local
Windows group.
d. If the aflink user is Network Service, then add the machine account name of the new
primary PI Data Archive server as a member of this local Windows group.
e. If the aflink user is a local user on the new primary PI Data Archive machine, then create
the same user with the same password on the AF Server machine and add that local user
to the local group on the AF Server machine.
6. If AF Server is on the same machine as the new primary PI Data Archive server:
a. Find the local Windows group with the name AF Link to PI-Old Server where Old Server
is the name of the old primary PI Data Archive server.
b. Rename this local Windows group to AF Link to PI-New Server where New Server is the
name of the new primary PI Data Archive server.
c. Add the aflink user as a member of this local Windows group.
7. Restart the PI Base Subsystem service on the new primary PI Data Archive server.
8. Start the PI AF Link Subsystem service on the new primary PI Data Archive server.
Procedure
1. Open a command window on the computer that hosts the server.
2. Navigate to the ..\PI\adm directory.
3. Enter:
piartool -sys -standalone query
Procedure
1. Click Start > All Programs > PI System > Collective Manager.
2. Under Collectives, select the collective.
If the collective does not appear, you must enable communication between Collective
Manager and the collective.
Collective Manager shows a diagram of collective members. An icon represents each server
in the collective. A green check mark on the icon indicates that the server is communicating
correctly. A red X indicates that the server is not communicating correctly.
Procedure
1. Determine the server ID of the existing PI Data Archive server.
2. Force an archive shift on the primary server.
3. Verify that the snapshot queue is empty.
Procedure
1. Open a command prompt window.
2. Navigate to the ..\PI\adm directory.
3. Enter: piconfig < pisysdump.dif
Results
The display shows configuration output.
For example, before creating a collective, output looks similar to:
Collective Configuration
Name, CollectiveID, Description
--------------------------------------------------------------
Collective Status
Name, Status
--------------------------------------------------------------
Note:
When you create a collective or specify new secondary servers, you can either explicitly
specify a UID, or you can have the creation process generate one automatically.
Procedure
1. Open a command window.
2. Navigate to the ..\PI\adm directory.
3. Enter: piartool -fs
Results
The display shows counter output:
Counters for 8-Sep-06 11:51:44
Point Count: 364 0
Snapshot Events: 518157 0
Out of Order Snapshot Events: 0 0
Snapshot Event Reads: 154276 0
Events Sent to Queue: 308873 0
Events in Queue: 0 0
Number of Overflow Queues: 0 0
Total Overflow Events: 0 0
Estimated Remaining Capacity: 2590182 0
The display updates periodically. As you monitor the Events in Queue parameter, you may
occasionally see the value grow greater than 0 and then become 0. This indicates that the
queue is receiving time-series events from the snapshot subsystem, and that the archive
subsystem is able to send the data to the archives.
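As a rough illustration of that check, the counter output above can be parsed and watched with a short script. This is a minimal sketch in Python, not a PI tool: it only parses text in the format `piartool -fs` prints, using the sample values shown above.

```python
# Hypothetical helper: parse "piartool -fs" style counter output and report
# whether the event queue has drained. The counter names and sample values
# come from the output shown above; this is plain text parsing, not a PI API.

SAMPLE = """\
Counters for 8-Sep-06 11:51:44
Point Count: 364 0
Snapshot Events: 518157 0
Out of Order Snapshot Events: 0 0
Events in Queue: 0 0
Total Overflow Events: 0 0
"""

def parse_counters(text: str) -> dict:
    """Return {counter name: current value} from piartool -fs style output."""
    counters = {}
    for line in text.splitlines():
        # Skip the timestamp header and any line without a counter
        if ":" not in line or line.startswith("Counters for"):
            continue
        name, _, values = line.partition(":")
        fields = values.split()
        if fields:
            counters[name.strip()] = int(fields[0])
    return counters

counters = parse_counters(SAMPLE)
# A queue that is empty (or returns to empty) means the archive subsystem
# is keeping up with the snapshot subsystem.
print("queue drained:", counters["Events in Queue"] == 0)
```

In practice you would feed this the live output of `piartool -fs` rather than a hard-coded sample.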
Procedure
1. Open a command window.
2. Navigate to the ..\PI\adm directory.
3. Enter:
piartool -flush
OSIsoft recommends creating a command file and using piconfig to run the commands in
that file.
Procedure
1. Create a text file, such as collective_create_uc.txt, in the ..\PI\adm directory.
2. Copy the following text into the file.
* Collective information
*
@tabl picollective
@mode create,t
@istr name, Description, CollectiveID
uc-s1,UC 2006 Demo Collective,08675309-0007-0007-0007-000000001001
*
* Individual server member information
*
* valid values for Role include:
* 0 NotReplicated
* 1 Primary
* 2 Secondary
*
@tabl piserver
@mode create,t
@istr name,Description,Collective,FQDN,Role,ServerID
uc-s1,UC 2006 Demo Server 1,uc-s1,uc-s1.osisoft.int,1,08675309-0007-0007-0007-000000001001
uc-s2,UC 2006 Demo Server 2,uc-s1,uc-s2.osisoft.int,2,08675309-0007-0007-0007-000000001002
3. Edit the text to specify the information for your collective and servers. If necessary, add
additional lines for additional servers in your collective.
4. Open a command window.
5. Navigate to the ..\PI\adm directory.
6. Enter: piconfig < collective_create_uc.txt
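As the Note above says, you can either specify a UID explicitly or let the creation process generate one. A minimal sketch of generating a piconfig input file like the one above, using Python's uuid module for the collective and server IDs (the names and FQDNs here are placeholders, not values from your system):

```python
# Sketch: generate a piconfig input file for creating a collective.
# IDs are generated with uuid.uuid4(); names/FQDNs are hypothetical.
import uuid

def collective_script(collective, servers):
    """servers: list of (name, description, fqdn, role) tuples.
    Valid roles: 0 NotReplicated, 1 Primary, 2 Secondary."""
    lines = [
        "@tabl picollective",
        "@mode create,t",
        "@istr name, Description, CollectiveID",
        f"{collective},Demo Collective,{uuid.uuid4()}",
        "@tabl piserver",
        "@mode create,t",
        "@istr name,Description,Collective,FQDN,Role,ServerID",
    ]
    for name, desc, fqdn, role in servers:
        lines.append(f"{name},{desc},{collective},{fqdn},{role},{uuid.uuid4()}")
    return "\n".join(lines)

script = collective_script("uc-s1", [
    ("uc-s1", "Demo Server 1", "uc-s1.example.int", 1),
    ("uc-s2", "Demo Server 2", "uc-s2.example.int", 2),
])
print(script)
```

You would write the result to a text file in the ..\PI\adm directory and run it with `piconfig < filename`, as in the procedure above.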
Note:
If you want to use your own certificate on a primary or secondary member, open PI
Collective Manager on that computer and use the Import Certificate option. All imported
certificates must meet the following requirements:
• Have a private key
• Be configured for both client authentication and server authentication
• Have the key usage options for digital signature and key encipherment enabled
Procedure
1. On the primary server, open the command window.
2. From the command prompt, change the directory to the \pi\adm path.
3. Run the command piartool -registerhacert -u:
piartool -registerhacert -u
Procedure
1. Open a command window on the computer that hosts the primary server.
2. Navigate to the ..\PI\adm directory.
3. Enter: pibackup c:\temp\pibackup 9999 "01-jan-70"
Procedure
1. Create a text file named PI_Set_ServerRole_2.reg.
2. Insert the following text into the file:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\PISystem\PI]
"ServerRole"="2"
3. Double-click the file in Windows Explorer to run regedit.exe.
4. Click Yes.
Windows updates the registry key so that PI Data Archive server can start and read the
configuration database.
5. Delete the pilicense.ud file from the ..\PI\dat directory.
where pathname is the path to the archive file and filename is the name of the archive
file.
3. Set tuning and firewall parameters.
Procedure
1. On a secondary server, open the command window.
2. From a command prompt, change directory to the \pi\adm directory.
3. Run the command piartool -registerhacert -u:
piartool -registerhacert -u
Procedure
1. Open a command prompt window.
2. Navigate to the ..\PI\adm directory.
3. Enter: piconfig < pisysdump.dif
The display shows configuration output. For example, for a primary PI Data Archive server,
output looks similar to:
Collective Configuration
Name, CollectiveID, Description
-----------------------------------------------------------------
uc-s1,08675309-0007-0007-0007-000000001001,UC 2006 Demo Collective
Collective Status
Name, Status
-----------------------------------------------------------------
uc-s1,0
Procedure
1. Stop the secondary PI Data Archive server.
a. From a command prompt, change directory to the \pi\adm directory.
b. Stop the PI Data Archive server with the command: pisrvstop.bat.
2. On the primary PI Data Archive server:
a. From a command prompt, change directory to the \pi\adm directory.
b. Run the command piartool -registerhacert -u:
piartool -registerhacert -u
c. In the \pi\adm directory, rename primarybackup.bat.ManualCollectiveReinit to
primarybackup.bat.
d. Initialize a secondary PI Data Archive server.
If you are initializing a secondary server for the first time, run the command:
primarybackup.bat -init NUM
piartool -registerhacert -u
h. Manually register any archives that are not registered after reinitialization. Use the
piartool -ar command to manually register those archives.
Procedure
1. Back up the primary PI Data Archive server using the Backups tool in PI System
Management Tools (Operation > Backups).
2. Manually copy all of the backup files from the primary PI Data Archive server to a
temporary directory on the secondary PI Data Archive server.
3. Delete the following files from the copy of the backup on the secondary PI Data Archive
server:
◦ pitimeout.dat
◦ pibackuphistory.dat
4. If the installation directory on the secondary server differs from the primary server, delete
the pisubsys.cfg file from the dat directory in the temporary directory that contains the
backup files.
5. Shut down the secondary PI Data Archive server.
6. Restore the secondary PI Data Archive server:
a. Create a command file, pirestore.bat, in the ..\PI\adm directory:
@rem Restore PI files
@rem $Workfile: pirestore.bat $ $Revision: 1 $
@rem
@setlocal
@rem default source: current directory
@set pi_s_dir=%cd%
@rem default destination based on PISERVER symbol
@set pi_d_dir=%PISERVER%
@rem default archive destination set later based on pi_d_dir
@set pi_arc=
@
@if [%1] == [] (goto usage)
@goto loop
@:shift3_loop
@shift
@:shift2_loop
@shift
@:shift1_loop
@shift
@:loop
@if [%1] == [-source] set pi_s_dir=%2%
@if [%1] == [-source] goto shift2_loop
@if [%1] == [-dest] set pi_d_dir=%2%
@if [%1] == [-dest] goto shift2_loop
@if [%1] == [-arc] set pi_arc=%2%
@if [%1] == [-arc] goto shift2_loop
@if [%1] == [-go] goto shift1_loop
@if [%1] == [-?] goto usage
@if [%1] == [?] goto usage
Procedure
1. Open a command window on the computer that hosts the secondary server.
2. Navigate to the ..\PI\adm directory.
3. Enter: piartool -sys -sync
Procedure
1. Open a command prompt on the computer that hosts the primary server.
2. Navigate to the ..\PI\adm directory.
3. To set synchronization frequency, type:
piconfig
@tabl piserver
@mode ed
@istr name, syncperiod
ServerName, x
where ServerName is the name of the secondary server and x is the new synchronization
frequency.
4. To set communication frequency, type:
piconfig
@tabl piserver
@mode ed
@istr name, commperiod
ServerName, x
where ServerName is the name of the secondary server and x is the new communication
frequency.
5. Type Ctrl+C to exit.
Procedure
1. Open a command prompt on the primary server.
2. Navigate to the ..\PI\adm directory.
3. Enter:
piartool -sys -drop ServerName
where ServerName is the name of a secondary server you want to remove. If you specify a
primary server, the command changes the server's role from primary to non-replicated.
(Note that to change the primary server's role to non-replicated, you must first put the
server in stand-alone mode.)
4. Clear the known servers table at each client connected to the collective.
See also:
Control PI Data Archive stand-alone mode
Clear the known servers table
Procedure
1. Create a text file, such as collective_create_uc.txt, in the ..\PI\adm directory.
2. Copy the following text into the file:
* Collective information
*
@tabl picollective
@mode create,t
@istr name, Description, CollectiveID
uc-s1,UC 2012 Demo Collective,08675309-0007-0007-0007-000000001001
*
* Individual server member information
*
* valid values for Role include:
* 0 NotReplicated
* 1 Primary
* 2 Secondary
*
@tabl piserver
@mode create,t
@istr name,Description,Collective,FQDN,Role,ServerID
uc-s1,UC 2012 Demo Server 1,uc-s1,uc-s1.osisoft.int,1,08675309-0007-0007-0007-000000001001
uc-s2,UC 2012 Demo Server 2,uc-s1,uc-s2.osisoft.int,2,08675309-0007-0007-0007-000000001002
3. Edit the text to specify the information for your collective and servers. If necessary, add
additional lines for additional servers in your collective.
4. Open a command window.
5. Navigate to the ..\PI\adm directory.
6. Enter:
piconfig < collective_create_uc.txt
Replicated tables
The replication service in a PI Data Archive collective replicates the key values of critical tables.
By replicating the key values, the service ensures that the values are the same on all servers in
the collective. With identical key values on all the servers, clients and interfaces can connect to
any server to write or access data. For example, because the point tables on the servers share
the association for Tag, PointID, and RecNo, interfaces can send identical time-series data to all
servers; interfaces need not track different PointID values for each server. Similarly, clients can
efficiently retrieve identical time-series data from each server without changing any
configuration.
You can only change values of replicated tables at the primary server.
PI Table      Primary,      Identity Keys    Foreign Keys        Replicates to   Can Configure
              Unique Keys                                        Secondary?      on Secondary?
------------- ------------- ---------------- ------------------- --------------- ------------------------
DBSECURITY    DBName                         UserID, GroupID     yes             no
PIAFLINK                                                         no              no
PIATRSET      Set                                                no              no
PIBAALIAS     Alias                                              no              yes, but not recommended
PIBAUNIT      UnitName      UnitID           PointID, UserID,    no              yes, but not recommended
                                             GroupID
PICOLLECTIVE  Name                                               yes             no
PIDS          Set           SetNo                                yes             no
PIFIREWALL    Hostmask                                           no              yes
PIGROUP       Group         GroupID                              yes             no
PIIDENTITY    Ident         IdentID                              yes             no
PIIDENTMAP    IdentMap      IdentMapID       PIIdent             yes             no
PIMAPPING                                                        yes             no
PIMODULES     UniqueID                       PointID, UserID,    yes             no
                                             GroupID
PIPOINT       Tag           PointID, RecNo   SetNo, UserID,      yes             no
                                             GroupID
PIPTCLS       Class                                              no              no
PISERVER      Name                           PICollective.name   yes             no
PITIMEOUT     Name                                               no              yes
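The point-table guarantee described above (identical Tag, PointID, and RecNo associations on every member) can be illustrated with a short consistency check. This is plain Python over hypothetical exported data, not a PI API:

```python
# Illustration of the replication guarantee: because the Tag/PointID/RecNo
# association in the point table replicates to every member, all members
# should expose the same mapping. Data below is hypothetical.

primary_points   = {"sinusoid": (1, 1), "cdt158": (2, 2)}   # Tag -> (PointID, RecNo)
secondary_points = {"sinusoid": (1, 1), "cdt158": (2, 2)}

def mismatched_tags(a: dict, b: dict) -> list:
    """Tags whose PointID/RecNo association differs between two members."""
    return sorted(t for t in set(a) | set(b) if a.get(t) != b.get(t))

print(mismatched_tags(primary_points, secondary_points))  # [] means consistent
```

An empty result is what lets interfaces send identical time-series data to all members without tracking per-server PointID values.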
Non-replicated tables
The replication service does not replicate data for several tables:
Message logs
The following table lists some of the messages you might find in the message log pertaining to
replication. You can use PI SMT or pigetmsg to search for these messages.
Source of the message   PI Data Archive server       Message
                        that reports the message
----------------------- ---------------------------- ------------------------------------------
pirepl                  Primary                      Online status, cryptography status,
                                                     replication errors. Replication errors
                                                     refer to failures to produce messages
                                                     for the secondary servers.
pirepl                  Secondary                    Online status, cryptography status,
                                                     connectivity status, replication status
Secondary server name   Primary                      Replication queue status
Interface failover
With interface failover, you configure redundant interfaces—that is, you configure interface
software on two different computers to record data from a single data source. If one computer
fails, the redundant computer takes over. With redundant interfaces, you minimize data loss by
ensuring that there is no single point of failure.
There are three types of interface failover: hot, warm, and cold.
• Hot failover
Both interfaces collect data from a source but only one interface reports that data to PI Data
Archive. If one interface fails, the redundant interface immediately begins sending data to PI
Data Archive without any data loss. Because the data source is connected and sending data
to two interfaces, this type of failover requires the most computing resources.
• Warm failover
The redundant interface maintains a connection with the data source but does not collect
data. If the primary interface fails, the redundant interface begins collecting and sending
data to PI Data Archive. Minimal data loss might occur while the data-collection process
starts.
• Cold failover
The redundant interface only connects with the data source after the primary interface fails.
Some data loss might occur while the connection process initiates (including tag loading)
and while the data collection process starts. Because connections only occur when needed,
this type of failover requires the least computing resources.
Most PI interfaces use the UniInt (Universal Interface) Failover service to manage failover. For
more information on interface failover, see UniInt Interface User Manual.
You must choose a location for the shared file. OSIsoft recommends the following best
practices:
• Store the shared file on a file-server computer that has no other role in the data-collection
process. Do not store the file on the PI Data Archive server or interface computers.
• Exclude the location of the shared file from virus scanning.
Procedure
1. Configure the shared file.
a. Choose a location for the shared file.
b. Create a shared file folder and assign permissions that allow both the primary and
redundant interfaces to read and write files in the folder.
c. Exclude the folder from virus scanning.
2. On each interface computer, open PI Interface Configuration Utility and select the interface.
a. Click Start > All Programs > PI System > PI Interface Configuration Utility.
b. Select the interface.
3. If you have a PI Data Archive collective and PI Data Archive sends output points to this
interface, point each interface to a different collective member:
Buffering services
The PI System offers two services to implement buffering at interfaces. Only one of them, PI
Buffer Subsystem, supports buffering for clients.
Procedure
1. Click Start > All Programs > PI System > PI Interface Configuration Utility.
2. In Interface, select the interface.
3. In the page tree, select General.
4. Depending on which version of PI Buffer Subsystem is installed on this computer, refer to the
appropriate instructions:
◦ Start buffering (PI Buffer Subsystem version 4.3 or later)
◦ Start buffering (PI Buffer Subsystem versions earlier than 4.3)
5. Verify that buffering is working as expected by doing one of the following:
◦ If you are using PI Buffer Subsystem 4.3 or later, view the Buffering Manager dashboard.
◦ If you are using an earlier version, see Verify that buffered data is being sent to PI Data
Archive.
Procedure
1. Start buffering (PI Buffer Subsystem version 4.3 or later).
2. Start buffering (PI Buffer Subsystem versions earlier than 4.3).
Procedure
1. Click Tools > Buffering.
2. When prompted, confirm that you want to configure PI Buffer Subsystem. If you currently
use API Buffer Server (bufserv), you may need to confirm more than one prompt.
The Buffering Manager window opens. This indicates one of two things:
◦ This computer is configured to buffer data using API Buffer Server (bufserv). In this case,
before you continue, review the information in the Buffering Manager window regarding
upgrades from API Buffer Server.
◦ This computer is not configured to use any form of buffering.
3. To configure buffering, follow the instructions in the Buffering Manager window.
4. After you finish, return to the PI Interface Configuration Utility window. Select each interface,
and on the General page under PI Host Information, look at the Buffering Status setting:
◦ If Buffering Status is On, the buffering configuration for the server to which this interface
sends data is complete. (This is the server specified in the API Hostname field for this
interface.)
◦ If Buffering Status is Off, you need to configure the server specified in the API Hostname
field to receive buffered data from this interface. To add the server to the buffering
configuration, click the Enable button and follow the instructions on the Buffering
Manager screen.
Procedure
1. Choose Tools > Buffering to display the Buffering dialog box.
2. Use the Choose Buffer Type page to select a buffering option:
◦ Disable buffering
◦ Enable PI Buffer Subsystem
◦ Enable API Buffer Server
3. Use the Buffering Settings page to change default settings.
4. Use the Buffered Servers page to select one or more servers that you want to buffer data to.
5. Use the API Buffer Server Service and PI Buffer Subsystem pages to configure and control the
buffering services.
Procedure
1. In the PI ICU page tree, select General.
2. Under PI Host Information, set SDK Member to a secondary collective member.
This property sets which PI Data Archive server in the collective sends the interface
configuration information and output points. If you set each interface to a different
collective member, you enable failover when the PI Data Archive server that sends output
points becomes unavailable.
3. Set API Hostname to match.
The interface uses this information to connect to the PI Data Archive server that provides
configuration data. The drop-down list shows the host specified in various formats. You can
specify the host as an IP address, a path, or a host name. However, when you configure the
buffered server list, you must specify the buffered server names in the same format,
otherwise buffering will not work.
4. Click Apply.
Note:
Follow the remaining steps only if you are using PI Buffer Subsystem 3.4.380 or
earlier.
5. Select Tools > Buffering.
6. In the Buffering dialog box task list, select Buffered Servers.
7. Verify that the Replicate data to all collective member nodes check box is selected and that
the server list contains the server and format specified in API Hostname.
8. If necessary, click the appropriate entry under Buffered Server Names to change the format.
9. Click Yes at the prompt to restart PI Buffer Subsystem and dependent interfaces.
Procedure
1. Click Start > All Programs > PI System > PI Interface Configuration Utility.
2. Select Tools > Buffering.
3. In the Buffering Manager window, click the Settings link.
4. In the Buffering Settings window, select the collective member to which you do not want PI
Buffer Subsystem to send data.
5. In the Buffering list, select Disallowed.
6. Click Save.
Results
PI Buffer Subsystem no longer sends data to the selected server. To send data to this server, you
can configure a PI to PI interface.
Procedure
1. In a command window, navigate to the \PIPC\bin directory.
2. Enter: pibufss -cfg
3. In the resulting display, note the number of total events sent.
4. Wait a few seconds, then enter pibufss -cfg again.
You may want to repeat this step one or two more times. If buffering is working properly,
the number of total events sent increases each time. The number of queued events
should remain at or near zero.
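The health check in steps 3 and 4 can be sketched as a comparison of two counter readings. This is a minimal illustration, not a PI tool: the field names mirror the description above (total events sent should grow between readings; queued events should stay at or near zero), and the sample numbers are invented.

```python
# Sketch: compare two readings of the counters that "pibufss -cfg" reports.
# Buffering is healthy if total events sent increased and queued events
# remain near zero. Values here are hypothetical.

def buffering_healthy(first: dict, second: dict, queue_tolerance: int = 10) -> bool:
    """True if total events sent increased and queued events remain near zero."""
    return (second["total events sent"] > first["total events sent"]
            and second["queued events"] <= queue_tolerance)

first  = {"total events sent": 308873, "queued events": 0}
second = {"total events sent": 309125, "queued events": 2}
print(buffering_healthy(first, second))
```

Repeating the reading a few times, as the procedure suggests, guards against a momentarily idle interface producing a false negative.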
Procedure
• Use PI ICU to configure PI Buffer Subsystem as you would on an interface computer, but as
mentioned above, use caution when selecting buffered servers.
See Configure n-way buffering for interfaces with PI Buffer Subsystem for instructions.
procedure below. In this case you must install PI Buffer Subsystem, specify the servers in the
initialization file, and set service dependencies.
Procedure
1. Install PI Buffer Subsystem on the PI Data Archive computer where the interface is installed.
2. Configure PI Buffer Subsystem by editing the piclient.ini file.
a. Open the piclient.ini file, found in the \PIPC\DAT\ directory.
b. Edit the file to include the RUNSONSERVER parameter in the PIBUFSS section and to list
the servers that you are sending data to in the BUFFEREDSERVERLIST section.
For example, if you are sending data from the interface to two servers:
[APIBUFFER]
BUFFERING=1
[PIBUFSS]
BUFFERING=1
AUTOCONFIG=1
RUNSONSERVER=1
[BUFFEREDSERVERLIST]
BUFSERV1= MyPIDataArchiveServer1
BUFSERV2= MyPIDataArchiveServer2
c. Save the file.
3. Use PI ICU to add PI Buffer Subsystem as a dependency and to start the interface.
a. Click Start > All Programs > PI System > PI Interface Configuration Utility.
b. In Interface, select the interface you want to buffer.
c. In the page tree of PI ICU, click Service.
d. At the prompt to add a dependency on API Buffer Server, click No.
e. Under Installed services, select PIBufss and click to move the service to the
Dependencies list.
f. Click Apply.
g. Start or restart the interface.
h. At the prompt to start the PIBufss service, click Yes.
PI ICU starts the PI Buffer Subsystem service and then starts the interface service.
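One quick way to sanity-check the piclient.ini edits from step 2 is to parse the file and confirm the expected settings. A sketch using Python's configparser, with the file contents mirroring the example above (server names are placeholders):

```python
# Sketch: validate a piclient.ini edited for n-way buffering. Confirms
# RUNSONSERVER is set in [PIBUFSS] and [BUFFEREDSERVERLIST] is populated.
# The INI text mirrors the example above; server names are hypothetical.
import configparser

INI_TEXT = """\
[APIBUFFER]
BUFFERING=1
[PIBUFSS]
BUFFERING=1
AUTOCONFIG=1
RUNSONSERVER=1
[BUFFEREDSERVERLIST]
BUFSERV1=MyPIDataArchiveServer1
BUFSERV2=MyPIDataArchiveServer2
"""

config = configparser.ConfigParser()
config.read_string(INI_TEXT)

runs_on_server = config["PIBUFSS"].get("RUNSONSERVER") == "1"
buffered = [value for _, value in config.items("BUFFEREDSERVERLIST")]
print(runs_on_server, buffered)
```

On a real system you would read \PIPC\DAT\piclient.ini instead of a hard-coded string.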
Therefore, OSIsoft does not recommend buffering for computers that run only batch interfaces.
For computers running both a batch interface and another interface that can be buffered,
buffering is recommended. Refer to the documentation for your interface for instructions on
configuring the interface without buffering.
• AF SDK
Microsoft .NET assembly that provides access to objects and features of PI Asset
Framework. PI AF SDK is available for both 32-bit and 64-bit Windows operating systems.
Some AF SDK clients include PI System Explorer and PI Vision.
Note:
This client connection requires PI AF Client 2018 or later.
• PI SDK
The COM-based software development kit for PI System applications. PI SDK is a set of
programming libraries for development of Microsoft Windows client programs or interfaces
that can communicate with most PI Data Archive versions (PI Server 3.2.357 and up) on any
supported operating system.
You may also want to implement buffering to protect against data loss if PI Data Archive
becomes unavailable. For details, see N-way buffering for PI clients.
Client failover
When the client connection to the PI Data Archive server is configured for high availability,
clients automatically connect to another PI Data Archive server within the collective if the
current server becomes unavailable. This behavior is known as client failover, and it
minimizes the effects of the disruption. When the original server comes back online and
becomes visible to the client, the client switches back automatically.
Regardless of the connection type, the client attempts to connect to another PI Data Archive
server within the collective based on factors such as the connection preference set for each
client application and the connection priority set across all applications on the client
machine. The application-specific connection preference is the first factor considered by
failover; it can require the primary PI Data Archive server, prefer the primary server, or
allow any member of the collective. If the connection preference is set to any, failover
considers the connection priority values, which are set for all the applications on the client
machine. In this scenario, the client fails over to the server with the highest priority value.
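The selection logic described above can be sketched in a few lines. This is an illustration with hypothetical data structures, not the AF SDK API: the connection preference is checked first, and when it allows any member, the reachable member with the highest priority value wins.

```python
# Sketch of failover target selection: preference first, then priority.
# Members and priorities are hypothetical; this is not a PI/AF SDK call.

def failover_target(members, preference):
    """members: list of (name, is_primary, priority, available) tuples."""
    up = [m for m in members if m[3]]
    primaries = [m for m in up if m[1]]
    if preference == "require_primary":
        # Connection fails outright if the primary is unavailable
        return primaries[0][0] if primaries else None
    if preference == "prefer_primary" and primaries:
        return primaries[0][0]
    # "any" (or preferred primary unavailable): highest priority value wins
    return max(up, key=lambda m: m[2])[0] if up else None

members = [
    ("uc-s1", True,  1, False),   # primary, currently unavailable
    ("uc-s2", False, 3, True),
    ("uc-s3", False, 2, True),
]
print(failover_target(members, "prefer_primary"))  # falls back to a secondary
```

Note the asymmetry: require_primary yields no connection when the primary is down, while prefer_primary degrades gracefully to the highest-priority secondary.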
Procedure
1. Configure failover.
Configure failover
Failover is enabled by default. However, the way failover occurs depends upon how your client
connection is configured.
Procedure
• Set the connection priority values for the client machine to configure the failover behavior
when a disruption occurs.
◦ Specify connection priority for AF SDK clients
◦ Specify the connection priority for PI SDK clients
The connection priority values are set for all client applications on the client machine.
Client failback
Depending on how the client connection is configured, failback automatically switches the
client back to either the primary server of the PI Data Archive collective or a server with a
higher connection priority value.
Failover occurs when the primary PI Data Archive server for the client becomes unavailable.
When this occurs, the client automatically connects to another server within the collective to
minimize the disruption for the client.
Failback attempts to restore the client connection to the server that was in use before the
disruption and failover occurred, depending on how the connection is configured. If the
client connection is set to prefer the primary server, the client checks for the primary server
to become available and switches to that server. If the connection preference is set to any,
the client checks for an available server with a higher connection priority value and
switches to that server.
Configure failback
Failback is enabled by default for PI AF Client 2018 and later only, and only with AF SDK client
connections.
Note:
Failback is not supported for PI SDK client connections.
Procedure
1. Check the version of PI AF Client for your client application.
◦ If your PI AF Client version is PI AF Client 2018 or later, proceed to the next step.
◦ If your PI AF Client version is 2017 R2 or earlier, you must upgrade your client
application to PI AF Client 2018 or later. See the PI Server Installation topic
"Upgrade PI AF Client" in Live Library (https://livelibrary.osisoft.com).
2. Configure connection preference to the failback behavior you want for your client
application. See Specify connection preference for AF SDK clients. The connection
preference is set on a client application level.
3. If you set the connection preference to any (in scenarios where you want connection
balancing between servers of the collective), set the connection priority values for the client
machine. See Specify connection priority for AF SDK clients. The connection priority
values are set for all client applications on the client machine.
Procedure
1. Check the version of PI AF Client for your client application.
◦ If your PI AF Client version is PI AF Client 2018 or later, proceed to the next step.
◦ If your PI AF Client version is 2017 R2 or earlier, you must upgrade your client
application to PI AF Client 2018 or later. See the PI Server Installation topic
"Upgrade PI AF Client" in Live Library (https://livelibrary.osisoft.com).
2. Configure connection preference to any for the client application(s). See Specify connection
preference for AF SDK clients.
For every PI Data Archive server participating in connection balancing, you must change
the connection preference from the default value Prefer Primary to the value Any. This is
necessary because Prefer Primary interferes with the ability to distribute connections
equally among the servers within the collective.
3. Change the connection preference of any AF SDK client application to Any as well.
Note:
Connection preferences are set on each AF SDK client application (it is an application-
specific setting). This is in contrast to connection priority values which are set on the
AF SDK client machine, and applies to all client applications connecting on the AF SDK
connection.
4. Set the connection priority of each server within the collective that you want participating
in connection balancing to the same numerical value. See Specify connection priority for
AF SDK clients. The connection priority values apply to all AF SDK client applications on
the client machine.
Keep in mind that some of these clients write configuration data to PI Data Archive. These
clients must connect to a primary server. In a collective, you make configuration changes on
the primary server, which sends those changes to all secondary servers.
Client-specific connection preferences override any default connection preference values.
Procedure
1. Open PI System Explorer on the host computer and select Tools > Options...
2. In the Server Options tab, locate the Connection preference drop-down list in the PI Data
Archive Connection Settings in PI System Explorer field.
Caution:
Ensure that you are not erroneously locating the Connection preference drop-down
list in the PI AF Server Connection Settings in PI System Explorer field.
3. From the Connection preference drop-down list, select the preference you want for this
client application (in this case, we are specifying it for the PSE client application).
◦ Require Primary
Client application must connect to primary server.
◦ Prefer Primary
Client application prefers to connect to primary server. With this setting, the client
application (in this case it is the PSE) always attempts to connect to a primary server
first, but will connect to a secondary server if the primary server is unavailable.
◦ Any
Client application can connect to any server.
4. Click OK.
Procedure
1. Open PI System Explorer on the host computer for the client application and select File >
Connections.
2. Right-click on the PI Data Archive collective and select Properties.
3. In the Collectives tab, specify priority values for each of the servers in the collective.
4. Click OK.
Note:
Connection priorities are set for all AF SDK client applications on the host computer
(machine-specific setting). This is in contrast to connection preferences, which are set
for the specific AF SDK client application (application-specific setting). Hence, you
only need to set connection priority once on the client machine and it will apply to all
AF SDK client applications running on the machine.
Procedure
1. Open PI Connection Manager. From most clients, choose File > Connections.
2. In the list of servers, select the collective.
3. Choose Server > Connect to Primary.
PI SDK connects to the primary server in the collective.
4. To verify, double-click the collective and check the connected server on the Collective
Member Information dialog box.
• Require Primary
Client must connect to primary server. With this setting, PI SDK returns an error if the
primary server is unavailable when trying to connect.
• Prefer Primary
Client prefers to connect to primary server; if connected to a secondary server, some
features may not be available. With this setting, PI SDK always attempts to connect to a
primary server first, but will connect to a secondary server if the primary server is
unavailable.
• Any
Client can connect to any server.
Client-specific connection preferences to require or prefer the primary server override default
connection preferences specified in PI Connection Manager. See the client documentation for
information about specifying a client-specific connection preference.
Procedure
1. Default connection preferences.
Procedure
1. Open PI Connection Manager. From most clients, choose File > Connections.
2. Double-click the collective name to open the Collective Member Information dialog box.
3. Click a server to view its properties.
4. In Priority, specify the desired connection order for the selected server.
PI SDK attempts to connect to servers in the order specified. PI SDK never connects to
servers with a priority of -1. By default, the primary PI Data Archive server has a priority of
1.
5. Click Save.
6. Click Close to close the Collective Member Information dialog box.
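The ordering rule from step 4 can be sketched briefly. This is an illustration with hypothetical data, not the PI SDK API: servers are tried in ascending priority order, and servers with a priority of -1 are never tried.

```python
# Sketch of the PI SDK connection-order rule: ascending priority, with -1
# meaning "never connect". Server names and priorities are hypothetical.

def connection_order(priorities: dict) -> list:
    """priorities: {server name: priority}. Returns the order of connection attempts."""
    eligible = {s: p for s, p in priorities.items() if p != -1}
    return sorted(eligible, key=lambda s: eligible[s])

order = connection_order({"uc-s1": 1, "uc-s2": 2, "uc-s3": -1})
print(order)  # uc-s3 is excluded because its priority is -1
```

With the default configuration, the primary server has priority 1, so it is tried first.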
Procedure
1. Open PI Connection Manager. From most clients, choose File > Connections.
2. If you only have one connection, you must first add a temporary or placeholder server. If you
have multiple connections, skip this step.
a. Choose Server > Add Server.
b. In Network Node, type a temporary name, such as TempServer.
c. Clear the Confirm check box.
When Confirm is selected, PI Connection Manager attempts to connect to the specified
server.
d. Click OK.
3. Remove the server you want to clear.
a. Select the server.
b. Choose Server > Remove Selected Server.
If you have multiple connections, the procedure is now complete.
4. If you do not have multiple connections, you can now add the new server.
a. Choose Server > Add Server.
b. In Network Node, type the server host name.
c. Enter the connection credentials.
d. Click OK.
5. Remove the temporary server.
a. Select the temporary server in the list of servers.
b. Choose Server > Remove Selected Server.
To support PI Data Archive HA for PI clients, you configure PI Buffer Subsystem to use n-way
buffering. With n-way buffering, the buffering service fans data to all PI Data Archive collective
members.
There are some important differences between PI client buffering and PI interface buffering:
• You can buffer PI client data only with PI Buffer Subsystem. API Buffer Server cannot buffer
client data.
• If your PI clients write data using PI SDK, use PI SDK Utility to configure buffering.
• If your PI clients write data using AF SDK, use PI System Explorer to configure buffering.
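Conceptually, n-way buffering delivers every buffered event to each collective member. The sketch below illustrates only that fan-out idea; member names are hypothetical, and the real service also queues and retries delivery per member:

```python
def fan_out(event, members):
    """n-way buffering, conceptually: each event is delivered to
    every collective member so all archives receive the same data."""
    return {member: event for member in members}

delivered = fan_out(
    {"tag": "Temp01", "value": 42.0, "ts": "2024-01-01T00:00:00Z"},
    ["PrimaryPI", "SecondaryPI"],
)
```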
Procedure
1. To change the default configuration for AF SDK buffering, start PI System Explorer and click
Tools > Buffering Manager.
Alternatively, you can click File > Connections, and then click Buffering Manager.
2. In the Buffering Settings window, click Show advanced global configuration.
3. In the AF SDK Buffering list, select the setting you want:
◦ To turn off buffering for AF SDK data, select Do not buffer.
◦ To require buffering to send AF SDK data to PI Data Archive, select Always buffer.
Caution:
Use the Always buffer option with care. If data cannot be buffered for any reason, it
will not be sent to the target PI Data Archive server or collective. Since it cannot be
buffered, the data will be lost.
4. Click Save.
Procedure
1. On the computer sending the data to be buffered, run PI SDK Utility.
2. Click Buffering > PI SDK Buffering Configuration.
3. Select the Enable PI SDK Buffering check box.
4. Click Save.
A message in the status bar shows the current buffering status.
5. Restart the PI client applications to ensure that their data is buffered.
Results
All PI SDK data from this computer is sent to all PI Data Archive computers that have been
configured to receive data from PI Buffer Subsystem. To add servers, use Buffering Manager.
Procedure
1. At the command prompt, type pibufss –cfg.
2. In the resulting output, look for the line that starts with total events sent.
At the end of that line, you should see queued events: 0. This indicates that all events in
the buffer queue have been sent to PI Data Archive.
3. If the number of queued events is greater than 0, and you want the events to be sent to PI
Data Archive, do the following:
a. In the pibufss –cfg command output, look for the line that begins with a number
followed by the server ID, for example:
1 [YourServerID] state: SendingData, successful connections: 1
b. Make sure the state is SendingData as shown above. If it is not, check the connection
between PI Buffer Subsystem and PI Data Archive.
See pibufss buffer session states for more information.
c. Issue the pibufss -cfg command a few more times until the output shows queued
events: 0.
This indicates that all queued events have been sent to PI Data Archive. The time
required to process all events depends on the number of queued events, network
performance, and PI Data Archive load.
• Dismounted: The buffer session is dismounted because a user on the buffered node issued the
pibufss -bc dismount command.
• NotPosting: The buffer session is not posting data to PI Data Archive because a user on the
buffered node issued the pibufss -bc stop command.
• Offline: The PI Buffer Subsystem service is not running.
• Registered: The buffer session is connected to PI Data Archive and registered with PI Snapshot
Subsystem, but has no data to buffer.
• SendError: The most recent attempt to post buffered data to PI Data Archive failed. If the
problem persists longer than the time period specified by the RETRYRATE parameter, the buffer
session state changes to Disconnected.
• SendingData: The most recent attempt to post buffered data to PI Data Archive succeeded.
When PI Buffer Subsystem is receiving data from PI API or PI SDK, this is the normal buffer
session state.
• QueueError: PI Buffer Subsystem cannot read data from the buffer queue. This indicates a
problem with the buffer queue. For assistance, visit the OSIsoft Customer Portal
(https://my.osisoft.com/).
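One of the transitions above, SendError becoming Disconnected after the RETRYRATE period, can be sketched as follows. This is a conceptual illustration only, not PI Buffer Subsystem's internal logic:

```python
def next_state(state, seconds_failing, retry_rate):
    """Sketch of one documented transition: a session in SendError
    moves to Disconnected once posting has failed for longer than
    the RETRYRATE period (in seconds)."""
    if state == "SendError" and seconds_failing > retry_rate:
        return "Disconnected"
    return state
```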
Procedure
1. In PI SDK Utility > PI SDK Buffering > PI SDK Buffering Configuration, use the Buffered
Server/Collective list to select a different server or collective.
2. In the PI SDK Buffering Configuration dialog box, click Tools > Service Configuration.
3. On the General tab, click Stop, then Start to restart the PI Buffer Subsystem service.
When buffering both PI API and PI SDK data or PI API data only
When you are buffering both PI API and PI SDK data (usually from both PI interfaces and PI
clients), use PI Interface Configuration Utility (PI ICU) to change the server or collective
receiving the buffered data.
Procedure
1. In PI ICU > Tools > Buffering, select Buffered Servers and then select a server or collective
on the Buffering to collective/server list.
Note:
Make sure the selection under Buffered Server Names (Path, Name, or IP address)
uses the same format as the API Hostname specified for the interface. For example, if
you use a path for API Hostname, you must also use a path (not a name or IP address)
for Buffered Server Names.
2. Click OK.
3. When prompted, restart PI Buffer Subsystem and its dependent interfaces.
pibufq_buffered_server_name.dat or APIBUF_buffered_server_name.dat (buffer queue files):
A new queue file is created for the new buffered server. The file currently in use shows the
current buffered server as its buffered_server_name.
Files that show the names of previous buffered servers are used as follows:
• If there is data in the queue after changing the buffered server, that data will not be sent
to PI Data Archive.
• If you later start buffering to this server again from the same node, the data will be sent
to PI Data Archive.
Procedure
1. In a command window, navigate to the \PIPC\bin directory.
2. Enter: pibufss -cfg
3. In the resulting display, note the number of total events sent.
4. Wait a few seconds, then enter pibufss -cfg again.
You may want to repeat this step one or two more times. If buffering is working properly,
the number of total events sent increases each time. The number of queued events
should remain at or near zero.
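The health check described above can be expressed programmatically: take several readings of the counters a few seconds apart and confirm the sent count rises while the queue stays near zero. A hedged sketch; the threshold value is an illustrative assumption, not a documented limit:

```python
def buffering_healthy(readings, queue_threshold=10):
    """readings: list of (total_events_sent, queued_events) samples
    taken a few seconds apart. Healthy buffering means the sent count
    keeps increasing and the queue stays at or near zero."""
    sent = [total for total, _ in readings]
    increasing = all(later > earlier for earlier, later in zip(sent, sent[1:]))
    small_queue = all(queued <= queue_threshold for _, queued in readings)
    return increasing and small_queue
```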
snapshot. You must configure your points to support PI to PI. When using PI to PI in a PI Data
Archive collective, you must pay special attention to buffering, startup, and history recovery.
PI to PI installation location
You can install PI to PI on a PI Data Archive computer or on a separate computer.
• For the most robust configuration, install PI to PI on a different computer than your PI Data
Archive so that you can:
◦ Enable source PI Data Archive failover, which allows PI to PI to connect to an alternate
source server if the main source server becomes unavailable.
◦ Use n-way buffering to write data to any number of target servers in a PI Data Archive
collective.
◦ Install a redundant interface to support interface failover.
• If you install PI to PI on a PI Data Archive computer, you can install PI to PI on the target
server, such that the target server pulls data from the source server. If the connection
between the servers breaks, the target PI Data Archive can request data from the proper
time point upon restoration of the connection.
Alternatively, you can install PI to PI on the source server, such that the source server
pushes data to the target server. In this case, you must set up a buffering service to control
the data flow and send data in case of a lost connection.
• If you are using PI to PI to copy data to multiple servers in a PI Data Archive collective, you
must use n-way buffering and you must push data from the source server to the collective's
servers.
PI to PI source data
PI to PI can gather data from the snapshot at the source server or from the archive at the
source server. You must select the source most appropriate for your needs and expectations.
Gather data from the snapshot if your system requires frequent updates and current data.
Because snapshot data is not compressed, archives might vary slightly among servers. Gather
data from the archive if your system requires identical data at all servers, such as for detailed
analysis. A point's scan class determines the method used.
If PI to PI gathers snapshot data, choose a fast scan rate. PI to PI requests updates like any
other client. The large amounts of data typically requested by PI to PI can overwhelm PI Update
Manager. A faster scan rate clears the subsystem and avoids memory issues. Also, if PI to PI
gathers snapshot data, then you must configure compression at the target server.
If PI to PI gathers archive data, choose a longer scan rate, such as hourly or daily. Set the scan
rate such that there is at least one value in the archive. A longer scan rate avoids clogging PI
Archive Subsystem with many smaller queries. Also, because data in the archives has been
compressed, you can set compression to zero or turn it off on the target server.
PI to PI point definition
The PI to PI target server must contain defined points to receive data from each unique point
on the source server. Each point's scan-class setting determines whether the point receives
archive data or snapshot data (that is, exception data that has not been compressed) from the
source server. By default, points assigned to the first scan class receive snapshot data, and
points assigned to any other defined scan class receive archive data. You can configure an
alternate scan class to receive snapshot data (that is, exception data).
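The default scan-class mapping can be summarized in a short sketch. This is an illustration of the rule stated above, not interface code; the alternate snapshot class is the configurable option mentioned in the text:

```python
def pi_to_pi_source(scan_class, snapshot_class=1):
    """By default, points in the first scan class receive snapshot
    (exception) data; points in any other scan class receive archive
    data. snapshot_class models a configured alternate snapshot class."""
    return "snapshot" if scan_class == snapshot_class else "archive"
```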
PI to PI buffering
If buffering data with PI to PI in a PI Data Archive collective, use care not to send data back to
the source server. By default, PI Buffer Subsystem uses N-way buffering to send data to all
servers in a collective. Therefore, if the source server and target servers are in the same
collective and you are using PI Buffer Subsystem, you must disable data copying to all collective
members and explicitly select the servers to which you want the data sent.
Caution:
OSIsoft recommends that you do not use PI Buffer Subsystem version 3.4.375.38 with PI
to PI. This version does not support all archive write options and can lead to data loss.
Instead, use a later version of PI Buffer Subsystem.
Caution:
OSIsoft does not recommend using both PI to PI and PI SDK buffering to replicate data
between collective members. This may cause errors, duplicate events, or both.
PI to PI startup
The PI to PI interface is not aware of PI Data Archive collectives. PI to PI only knows about
servers specified in its startup file. You specify a source (and possibly an alternate source) in
addition to the target PI Data Archive (the host). Each of these servers must be available when
PI to PI starts so that PI to PI can initialize its point list.
PI to PI history recovery
You can use history recovery to recover data for time periods when PI to PI was not running or
could not collect data. You can configure the history-recovery period. The default value is eight
hours. You can also specify a start time and end time to recover history from a specific time
range. You use this technique to transfer data from one server in a PI Data Archive collective to
another server in the collective when interfaces cannot send data directly.
If you use n-way buffering to write data from the PI to PI interface to a PI Data Archive
collective, history recovery requires that all target servers be in the same initial state. Upon
startup, PI to PI checks the snapshot value on the target PI Data Archive server for each tag in
its tag list. PI to PI uses the snapshot value to determine the starting time point for history
recovery. However, PI to PI only checks the snapshot value at the target server specified in its
startup file (the host server). If the values are not the same at other servers in the collective,
the single starting time point will result in either a data gap or a data overlap. To avoid this,
initialize each PI Data Archive in the target collective with the same set of data before
implementing the PI to PI interface.
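The gap-or-overlap outcome described above follows directly from using a single starting point. A conceptual sketch, comparing the host server's snapshot time (the recovery start point) with another member's snapshot time:

```python
from datetime import datetime, timedelta

def recovery_outcome(host_snapshot, member_snapshot):
    """PI to PI starts history recovery at the host server's snapshot
    time. A member whose snapshot differs gets a gap (missing data
    before the start point) or an overlap (data it already has)."""
    if member_snapshot < host_snapshot:
        return "gap"
    if member_snapshot > host_snapshot:
        return "overlap"
    return "aligned"

t = datetime(2024, 1, 1, 12, 0)
```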
secondary server is located in a high-latency business network. For more detailed information,
see the interface manual, PI to PI TCP/IP Interface to the PI System.
Procedure
1. Install the PI to PI interface on the secondary server.
2. Click Start > All Programs > PI System > PI Interface Configuration Utility.
3. Create a new instance of the interface.
a. Select Interface > New Windows Interface Instance from BAT File.
b. Navigate to the PItoPI directory.
c. Select PItoPI.bat_new and click Open.
d. In Select the host PI Server/Collective, select the collective and click OK. PI ICU creates a
new instance of the interface.
4. Select IO Rate in the page tree and clear the Enable IORates for this interface check box.
5. Select General in the page tree and set the following properties:
a. In Point Source(s), add the point source identifier for each interface that sends data to
the primary server.
b. Set SDK Member to secondary server.
c. Set API Hostname to secondary server.
d. Click Apply. You might also consider editing the existing scan class and reducing the scan
frequency.
6. Select PItoPI in the page tree and click the Required tab.
◦ In Source host, type the name of the primary server.
7. Click the Location tab.
◦ Select the Override location 1 check box.
◦ Select the Override location 2 check box and select 0 in the corresponding drop-down
list.
◦ Select the Override location 3 check box and select 3 in the corresponding drop-down list.
◦ Select the Override location 4 check box and select Sign up for exceptions.
◦ Select the Override location 5 check box and select 0 in the corresponding drop-down list.
8. Click the Optional tab.
◦ Select the Source tag definition attribute check box.
◦ Select the option Use TagName on both (Ignoring Exdesc and InstrumentTag point
attributes).
9. Select Service in the page tree and click Create.
10. Click the Start interface service button to start the service.
Limitation of PI AF collectives
Because secondary PI AF collective members are read-only, applications that require writes to
the PI AF Configuration database (such as asset analytics and notifications), or applications
that write event frames, will not work when the PI AF collective primary server is unavailable.
Procedure
1. AF Collective Manager.
2. Prepare to create a PI AF collective.
3. Create a PI AF collective.
4. Configure PI AF collective properties.
5. Check PI AF collective status.
6. Add a secondary server to a PI AF collective.
7. Connect or switch to a specific member of a PI AF collective.
8. Remove a secondary server from a PI AF collective.
AF Collective Manager
Starting with PI Server 2018, PI AF collective creation has been moved out of PI System
Explorer and into the AF Collective Manager. AF Collective Manager provides a graphical user
interface for creating, editing, and managing PI AF collectives.
AF Collective Manager is available for installation with the PI Server install kit and PI AF Client
install kit.
Procedure
1. Select Start > All Programs > PI System > AF Collective Manager. A message appears
informing you that OSIsoft no longer recommends using PI AF collectives as a High
Availability option. See the OSIsoft Knowledge Base article High Availability (HA) options
for PI Asset Framework (PI AF) (https://customers.osisoft.com/s/knowledgearticle?
knowledgeArticleUrl=KB00634).
2. In the message window, choose one of the following actions:
◦ Click No to start the AF Collective Manager tool.
◦ Click Yes to read the KB article.
The AF Collective Manager window opens.
Procedure
1. Make sure that you meet all general collective creation requirements. See Configuration
requirements for PI AF collectives.
2. Make sure that you meet all SQL Server requirements. See SQL Server requirements for PI
AF collectives.
3. Make sure that you meet all security requirements. See Security requirements for PI AF
collectives.
4. A single instance of PI AF server consists of the PI AF application service and the PI AF SQL
database. These components may be installed on separate machines. Make sure that PI AF
server is installed on each member of the collective. This means that at least two complete
PI AF server systems must be installed. This could be two machines (PI AF application
service and PI AF SQL database installed on both machines), or four machines (two
machines with PI AF application service only, and two machines with PI AF SQL database
only).
5. Make a full backup of the PI AF SQL Server database, typically named PIFD.
OSIsoft highly recommends that you make regular backups of SQL Server data, especially on
the primary server. The PI AF installation process creates a SQL Server backup job that is
scheduled to run by SQL Server Agent. Make sure you copy these backups to media other
than the media that contains the data.
6. Verify that TCP/IP and Named Pipes are enabled on all SQL Server computers for the correct
instance. Run SQL Server Configuration Manager, choose your instance, and verify that the
correct protocols are enabled.
7. Make sure the SQL Agent service is running on the primary SQL Server computer.
8. All computers upon which the PI AF application service runs must be in a domain. Check the
domain for each computer:
a. Click Start and right-click Computer.
b. Select Properties to view workgroup and domain settings.
• Two SQL Server instances are required, each on separate physical hardware.
• The PI AF SQL database computers can be in a workgroup or a domain. If the PI AF SQL
database computers are in a workgroup, see PI AF collectives in a domain or workgroup.
• The primary PI AF server requires a non-Express Edition of a supported version of SQL
Server. (Review the PI AF Release Notes for supported SQL Server Versions and Editions.)
• The secondary SQL Server computer can use the SQL Express edition, with limitations. Refer
to Microsoft's web site for details.
• SQL Server Compact edition is not supported.
• It is not necessary to have the same SQL Server edition and version for all members of a
collective, but it is recommended.
• SQL Server Agent must be running on the primary SQL Server computer.
• SQL Server Replication must be installed on the primary SQL Server computer; it is not
required on the secondary collective members. If replication is subsequently added or
installed, you must restart SQL Server Agent to prevent errors.
• When the SQL Agent is run under a domain account and the primary AF database server is
64-bit SQL Server, you must configure the C:\Program Files\Microsoft SQL Server
\100\COM\ folder on the primary AF database server to allow read/write access to the SQL
Agent domain account.
PI AF application service
Beginning with PI AF 2.7, by default the PI AF application service is run under a virtual
account, NT SERVICE\AFService. Do not run it under the Local System account. The best
practice is to use a low-privileged domain account, as this account does not require special
access to the PI AF SQL database. The PI AF application service account is added to a local
Windows security group, which is assigned the appropriate access in the PI AF SQL database.
• Permissions: Run as a low-privileged account. Do not run as Local System.
• Primary PI AF server: No action required.
• Secondary PI AF servers: No action required.
PI AF collective creator
A domain user, with Windows credentials that are authenticated by PI AF, Windows, and SQL
Server, runs the AF Collective Manager client that is used to create the PI AF collective.
• Permissions: The credentials used to create the PI AF collective are used only once, to create
the collective. After you create the PI AF collective, you can remove the special permissions.
• Primary PI AF server: Add the credentials used to create the PI AF Collective in AF Collective
Manager to the Local Administrators group.
• Secondary PI AF servers: Add the credentials used to create the PI AF Collective in AF
Collective Manager to the Local Administrators group.
• Primary PI AF SQL database:
◦ If it does not already exist, create a login in SQL Server for the PI AF collective creator's
domain account.
◦ Add the credentials used to create the PI AF Collective in AF Collective Manager to the
Local Administrators group.
◦ Grant the sysadmin server role to this account.
• Secondary PI AF SQL databases:
◦ If it does not already exist, create a login in SQL Server for the PI AF collective creator's
domain account.
◦ Grant the sysadmin server role to this account.
Procedure
1. Using the Windows credentials that you will use to create the collective, log in to the
workstation from which you will create the collective (do not do this on the SQL Server
computer) and connect to each PI AF server that will be part of the collective.
2. On the same workstation, verify that you can perform a simple file share access to each SQL
Server:
a. Select Start > Run.
b. Enter \\SQL_Server_computer_name for each SQL server.
This ensures that your credentials authenticate to each SQL Server at the Windows level.
3. Establish a connection to each SQL Server via SQL Server Management Studio (SSMS) or
sqlcmd.exe.
4. Once connected, run the following query:
SELECT IS_SRVROLEMEMBER('sysadmin') AS "is sysadmin",
       CURRENT_USER AS "connected as",
       SYSTEM_USER AS "login user";
where:
• "is sysadmin" returns 1 = true, 0 = false
• "connected as" returns "dbo"
• "login user" returns the user's Windows user principal
Do not proceed until the connection and query succeed for each SQL Server that will be part
of your PI AF collective.
Create a PI AF collective
Before you start
Perform all the steps in Prepare to create a PI AF collective.
Procedure
1. Start the SQL Server Agent Service.
SQL Server replication depends on the SQL Server Agent service. If it is not running, when
you attempt to set up a PI AF collective, the setup fails without warning. The only way to
recover is to delete the collective, start the SQL Server Agent service, then set up the
collective.
2. In AF Collective Manager, right-click on a PI AF server that you want in the collective and
select Create Collective.
3. In the Create New Collective - Verify Backup Completed window, select the I have verified my
backups are valid check box and click Next.
4. In the Create New Collective - Select Primary window, choose your primary server.
5. Click Next.
6. From the Server list in the Create New Collective - Select Secondary Servers window, select a
PI AF server to add to the collective as a secondary server and click Add. Repeat to add
additional secondary servers. If you want to create the collective without adding a
secondary, then skip this step.
You can add secondary servers after the collective is created. See Add a secondary server to
a PI AF collective.
7. Click Next.
The Create New Collective – Verify Selections window opens.
8. Optional. Click Advanced Options. See Configure PI AF collective properties for a
description of the advanced option fields.
9. Click Next.
The collective is created and the Create New Collective – Finishing window opens.
10. Click OK to begin the replication process.
◦ If you click Exit before the secondary servers are listed in the lower area of the window,
the replication process stops on any secondary servers in the collective. A message
appears that indicates the replication process is not complete. You will need to start the
replication process on any secondary servers that currently belong to the collective.
◦ If you click Finish before the replication is complete, a message appears indicating the
replication is not complete, and where to look for the current replication status.
Results
When the replication process is complete, the status for the first row (the snapshot creation)
shows Succeeded. The status for the second row (the replication process as it relates to the
primary server) shows Idle. The status for the third row and subsequent rows (the replication
process as it relates to the secondary servers) shows Idle. For details about the collective
status, see PI AF collective status details.
This error can be corrected during the PI AF collective creation process; it is not necessary to
exit the Create New Collective window. The PI AF collective creation process will continue
normally after the following steps are completed.
Procedure
1. Open Microsoft SQL Server Management Studio, and connect to the SQL Server instance for
the primary server in the PI AF collective.
2. Under the SQL Server cluster instance, expand Security > Logins.
3. Right-click the login created for the AFServers domain group and select Properties.
4. Select the User Mapping page.
5. Under Users mapped to this login, select the Map check box for the <PIFD>_distribution
database row.
6. Ensure the User column for the <PIFD>_distribution row is set to the domain user group
(YourDomain\YourAFDomainGroup).
7. With the <PIFD>_distribution row selected, select the db_AFServer role check box under
Database role membership for: <PIFD>_distribution. The public role should be selected by
default; if it is not, select its check box.
8. Click OK to save the SQL Server login.
◦ Timeout
The number of seconds allowed for an operation on the PI AF server to finish.
◦ Priority
The priority order for selecting the collective member on the current computer. You can
modify this value for each collective member.
◦ Period
The frequency, in seconds, at which a collective member checks the status of the
remaining collective members.
◦ Grace
The time, in seconds, that is allowed before the communication status is set to
TimedOutOnPrimary when there is no communication with the primary server.
Note:
The Port, Account, Role, and Status settings on the Collective tab are read-only. See
the descriptions of these settings for information on how each one is set.
◦ Port
The port through which the PI AF server communicates. This value is set in the
configuration of the PI AF server, before the server became a collective member.
◦ Account
The account under which the PI AF application service is running. This value is set in the
configuration of the PI AF server, before the server became a collective member.
◦ Status
The status of the selected collective member, including the last time communication was
verified with the primary server, the last time the collective member was synchronized,
the current synchronization status, and the current communication status.
4. Click More to display the Collective Status Details window. See PI AF collective status details.
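The Grace setting above implies a simple rule: once the time without communication from the primary exceeds the grace period, the status changes to TimedOutOnPrimary. A conceptual sketch of that rule, not PI AF's actual implementation:

```python
def communication_status(seconds_since_contact, grace):
    """A collective member reports TimedOutOnPrimary once the time
    without communication from the primary exceeds the grace period
    (both values in seconds)."""
    return "TimedOutOnPrimary" if seconds_since_contact > grace else "Good"
```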
Procedure
1. Choose one of the following actions:
◦ In AF Collective Manager: Right-click a collective member and click Show Collective
Status.
◦ In PI System Explorer: Select File > Connections. In the Servers window, right-click a
collective member and click Show Collective Status.
The status of the selected member is displayed in the Collective Status Details window. Click
Refresh as needed to update the status.
2. Choose one of the following actions:
◦ To review errors for secondary servers only: Select the Show Errors Only check box, then
click Refresh.
◦ To specify how much detail you want to see for secondary servers: In the Max. Secondary
Details field, select One per Secondary or specify a number from zero to 100, then click
Refresh.
• The first row shows the status of the snapshot creation process. This row is always
displayed.
• The second row shows the status of the replication process between primary server and
secondary servers. This row is always displayed.
• The third and ensuing rows show the latest replication status messages for the secondary
servers. The level of detail depends on the settings you have selected for Show Errors Only
and Max. Secondary Details.
If there is no current activity, the Details area is empty.
Details grid
The Details grid contains the following columns:
• Name
The name of the collective member.
• Time Stamp
The time stamp from the SQL call to obtain the replication status, displayed in five-minute
intervals.
• Commands Delivered
The number of commands being sent from the primary server to the secondary server.
• Status
The synchronization status between the server members in the collective; that is, the
status of the replication process from the primary server to the secondary servers.
• Comment
The current stage of the replication process.
• Error Code
If an error occurs, the associated error code.
• Error Message
If an error occurs, the associated error message.
Procedure
1. In AF Collective Manager, right-click the primary PI AF server and select Add Server to
Collective. The Adding Secondaries – Select Secondary Servers window opens.
2. From the Server list, select the PI AF server to add to the collective as a secondary server.
3. Click Add to add the PI AF server to the list.
4. Click Next.
The Adding Secondaries - Verify Selections window opens.
5. Click Next. The secondary server is added to the collective.
The Adding Secondaries – Finishing window appears. The process of replicating data to the
secondary server begins and the window displays collective status details during the
process. When the replication process is complete on the secondary server, the Status for
the third and subsequent rows display Idle. For more on status details, see PI AF collective
status details.
Note:
If you click Exit before the window lists the newly added secondary server, the
replication process stops on that secondary server. A message appears that indicates
the replication process is not complete. You will need to start the replication process
on any secondary servers that currently belong to the collective.
Procedure
1. To connect to a specific collective member, choose one of the following actions:
2. In the Choose Collective Member window, select the collective member to which you want to
connect from the Collective Member list.
3. Click OK.
You are now connected to the selected collective member.
Procedure
1. In AF Collective Manager, select the PI AF Collective that contains the secondary server to be
removed and click the Properties button.
2. Click the Collective tab.
3. Right-click the secondary server and select Delete.
Procedure
1. In AF Collective Manager, right-click the PI AF Collective that contains the secondary server
on which you want to stop replication and click the Properties button.
2. Click the Collective tab.
3. Right-click the secondary server and select Stop Replication.
Replication is stopped on the secondary server. As long as the server is a member of the
collective, you can start replication at a later time.
Procedure
1. In AF Collective Manager, right-click the PI AF Collective that contains the primary server on
which you want to stop replication and click the Properties button.
2. Click the Collective tab.
3. Right-click the primary server and select Stop Replication.
Replication is stopped on the primary server and all secondary servers. As long as the
collective still exists, you can start replication on the primary server at a later time; you will
need to start replication on each secondary server, too.
Procedure
1. In AF Collective Manager, right-click the PI AF Collective that contains the servers on which
you want to start replication and click the Properties button.
2. Click the Collective tab.
3. Right-click the server and select Start Replication. If this is the primary server, you also
need to start replication on each secondary server.
Procedure
1. In AF Collective Manager, right-click the PI AF Collective that contains the server you want
to reinitialize and click the Properties button.
2. Click the Collective tab.
3. Right-click the server and select Reinitialize Replication.
Procedure
1. On the primary PI AF SQL database computer, open Windows Explorer.
2. Navigate to the \repldata folder for the SQL Server instance where the PI AF SQL database
is installed.
3. Right-click the \repldata folder and select Properties.
4. Click the Security tab and click Edit.
5. In the Permissions for repldata window, click Add.
6. In the Select Users, Computers, or Groups window, check that the From this location: field
shows the correct domain. If not, click Location and navigate to and select the correct
domain.
7. In the Enter the object names to select field, enter the name of the domain account under
which the SQL Server Agent service runs.
8. Click OK.
9. In the Permissions for [SQL Agent Account Name] area of the Permissions for repldata
window, select the Modify check box and ensure that all check boxes except Full control and
Special permissions are selected.
10. Click OK.
11. Click OK to return to Windows Explorer.
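The permission change above can also be applied from a command line. A hedged sketch using icacls, assuming a default SQL Server instance path and a placeholder agent account name (both are assumptions; substitute the actual values for your environment):

```shell
# (OI)(CI)M = Modify, inherited by subfolders (CI) and files (OI).
# The path and "DOMAIN\sqlagentsvc" are placeholders for your environment.
icacls "C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\repldata" /grant "DOMAIN\sqlagentsvc:(OI)(CI)M"
```

Running icacls on the folder afterward (with no switches) lets you confirm the granted rights.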
Troubleshoot PI AF collectives
Use the topics in this section to troubleshoot issues with PI AF collectives.
This message indicates that the logged-on user is unable to access one of the servers included
in the collective. The error is most likely related to the fact that the logged-on user does not
have the correct permissions on the primary PI AF SQL database computer.
Review the Application event logs on the PI AF server and PI AF SQL database computers,
beginning with the primary PI AF server, to determine which computer is receiving the
connection error.
Be sure that the login account is given sysadmin privileges to SQL Server on the AF SQL
database computer.
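The sysadmin grant can be applied from a command prompt on the AF SQL database computer. A hedged sketch, assuming the sqlcmd utility and SQL Server 2012 or later; SQLHOST and DOMAIN\afadmin are placeholders, not names from this guide:

```shell
# Placeholder instance and login; substitute the actual SQL Server
# instance name and the login account that needs sysadmin privileges.
sqlcmd -S SQLHOST -Q "ALTER SERVER ROLE [sysadmin] ADD MEMBER [DOMAIN\afadmin];"
```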
In the SnapShot status row (the first row in the bottom section), the message displays:
Access to the path '[..\repldata\...]' is denied.
This message indicates that the SQL Server Agent service account does not have Write access
to the \repldata folder for the SQL Server instance into which the primary PI AF SQL
database was installed. See Configure folder permissions on the PI AF collective primary
server.
After setting the proper security permissions on the \repldata folder, exit the Create New
Collective – Finishing window. A message displays, indicating that replication on the primary
server has not finished.
Click OK and return to the Collective tab in the AF Server Properties window. Delete the
collective, then recreate it; the snapshot is then created correctly.
Click OK to exit the error window. In the Create New Collective – Finishing window, the same
message appears. Click Cancel to exit the window. The collective was not created. Start the SQL
Server Agent service on the primary server, and then create the new collective.
• Server offload
Functionality for handling high availability is moved from the server to an application
network infrastructure device.
Hardware load balancing is usually managed by an application delivery controller or an
application accelerator. These devices relieve the load on servers by handling such functions
as SSL termination and acceleration, TCP multiplexing, cookie encryption and decryption,
compression, caching, URI rewriting, and application security.
• Alerts
You can configure the load balancer to notify you when a server is placed into or taken out
of service.
• Change control
You can take a server out of service for maintenance without affecting the availability of
your application.
Check HTTP content to verify PI Vision application server and SQL Server
availability
Parameters in HTTP content can provide useful information. For example, the HTML Title tag
that is returned when you access a URL can show you whether the page was accessed
successfully or an error occurred.
To monitor whether the PI Vision application server and SQL Server are available:
Procedure
1. Create a test display.
2. Periodically call the test display directly by name, for example: http://webServer/
pivision/#/Displays/7/YourTestDisplay, where webServer is the name of your
PI Vision web server. If SQL Server is available, you should see the name of the display
returned in the Title tag of the HTML code. If SQL Server is down, an error page is shown
instead.
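The title-tag check in step 2 can be automated. A minimal shell sketch: the HTTP response is simulated here so the parsing logic is self-contained; in practice you would fetch the page with curl against your own PI Vision URL (the URL below is the example from this procedure, not a guaranteed endpoint):

```shell
# In practice, fetch the page, for example:
#   response=$(curl -s "http://webServer/pivision/#/Displays/7/YourTestDisplay")
# Here we simulate a healthy response so the script is self-contained.
response='<html><head><title>YourTestDisplay</title></head><body>...</body></html>'

# Extract the contents of the Title tag
title=$(printf '%s' "$response" | sed -n 's:.*<title>\(.*\)</title>.*:\1:p')

if [ "$title" = "YourTestDisplay" ]; then
    echo "PI Vision and SQL Server reachable"
else
    echo "Check failed: got title '$title'"
fi
```

If SQL Server is down, the Title tag of the returned error page will not match the display name, and the check fails.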
To examine the contents of the Title tag:
Note:
This procedure requires the Windows Performance Monitor (perfmon) utility. It does not
require the PI Performance Monitor interface.
Procedure
1. Run Internet Information Services (IIS) on the AF server.
2. Create a page that will show the value of the perfmon counter (see code sample, later).
3. Configure the hardware load balancer to read the content from this web page to determine
the availability of AF.
<%@ Page Language="VB" %>
<%@ Import Namespace="System.Diagnostics" %>
<script runat="server">
Sub Page_Load(sender As Object, e As EventArgs)
    ' Sample the counter once so a single reading drives the result
    Dim perfAFHealth As New PerformanceCounter("PI AF Server", "Health")
    Dim health As Single = perfAFHealth.NextValue()
    If health = 0 Then
        lblperfAFHealth.Text = "DOWN"
    ElseIf health = 1 Then
        lblperfAFHealth.Text = "UP"
    Else
        lblperfAFHealth.Text = "INVALID"
    End If
End Sub
</script>
<!DOCTYPE html>
<html>
<head>
<title>AF Health Check</title>
<meta http-equiv="refresh" content="5" />
</head>
<body>
<form runat="server">
<asp:Label id="lblperfAFHealth" runat="server" />
</form>
</body>
</html>
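A load-balancer monitor typically just looks for the UP string in the returned page body. A minimal sketch of that decision logic; the URL afserver/afhealth.aspx is an assumption (name the page whatever you deployed in step 2), and the response is simulated here so the script is self-contained:

```shell
# In practice: body=$(curl -s "http://afserver/afhealth.aspx")
# Simulated label output from the health-check page:
body='<span id="lblperfAFHealth">UP</span>'

# Decide availability from the label text
case "$body" in
    *\>UP\<*) status="in service" ;;
    *)        status="out of service" ;;
esac
echo "AF server: $status"
```

Configure the load balancer's HTTP monitor with the same match string so the server is taken out of rotation when the page reports DOWN or INVALID.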
Procedure
1. Create a listener. For example:
$endpoint = new-object System.Net.IPEndPoint([System.Net.IPAddress]::Any, $port)
$listener = new-object System.Net.Sockets.TcpListener $endpoint
$listener.Start()
Note that providing more than two AF servers can help with high availability, but does not
increase scalability.
• Use a common SQL Server for the AF servers to share
Ideally, your hardware load balancer monitors the AF Health Check on one port and if the
AF service is up and running, it load balances the traffic on port 5457.
• Clients connect to AF servers by using the virtual IP address (VIP) of the hardware load
balancer.
Clients should not connect directly to AF servers without going through the load balancer,
because the PIFD database will return the same AFSystemID value for each AF server, which
causes errors for the AF SDK.
Failure handling
In the event of a failure, configure the following actions to occur:
• If one AF server is taken out of service, direct traffic to the other AF server.
• If the SQL Server fails, take both AF servers out of service.
• If no AF server is available, inform users that the site is down or under maintenance.
PI Notifications should not be configured to run under the load balancer. PI Notifications has its
own heartbeat check to determine which servers are active, making it unnecessary to run
under a load balancer. You can run PI Notifications on a separate server or on the AF servers.
Only one PI Notifications server can be active at a time; the active server is registered in the AF
database on the SQL Server.