FalconStor
CDP/NSS
ADMINISTRATION GUIDE
FalconStor CDP/NSS Administration Guide
Version 7
FalconStor Software, Inc.
2 Huntington Quadrangle, Suite 2S01
Melville, NY 11747
Phone: 631-777-5188
Fax: 631-501-7633
Web site: www.falconstor.com
Copyright 2001-2011 FalconStor Software. All Rights Reserved.
FalconStor Software, IPStor, DynaPath, HotZone, SafeCache, TimeMark, TimeView, and ZeroImpact are either registered
trademarks or trademarks of FalconStor Software, Inc. in the United States and other countries.
Linux is a registered trademark of Linus Torvalds.
Windows is a registered trademark of Microsoft Corporation.
All other brand and product names are trademarks or registered trademarks of their respective owners.
FalconStor Software reserves the right to make changes in the information contained in this publication without prior notice. The
reader should in all cases consult FalconStor Software to determine whether any such changes have been made.
This product is protected by United States Patents Nos. 7,093,127 B2; 6,715,098; 7,058,788 B2; 7,330,960 B2; 7,165,145 B2;
7,155,585 B2; 7,231,502 B2; 7,469,337; 7,467,259; 7,418,416 B2; 7,406,575 B2, and additional patents pending.
122211.7.0
CDP/NSS Administration Guide
Contents
Introduction
Network Storage Server (NSS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .14
Continuous Data Protector (CDP) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .18
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20
Web Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25
Getting started with CDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26
Getting started with NSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
FalconStor Management Console
Launch the console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28
Connect to your storage server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .29
Configure your server using the configuration wizard . . . . . . . . . . . . . . . . . . . . . . . . . . .30
Step 1: Enter license keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30
Step 2: Setup network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30
Step 3: Set hostname . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31
FalconStor Management Console user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .32
Discover storage servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33
Protect your storage server's configuration . . . . . . . . . . . . . . . . . . . . . . .33
Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .34
Set Server properties (updated 12/21/11) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35
Manage accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .41
Change the root user's password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .44
Check connectivity between the server and console . . . . . . . . . . . . . . . . . . . . . . . .45
Add an iSCSI User or Mutual CHAP User . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
Apply software patch updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .46
System maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
Physical Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .50
Physical resource icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .51
Prepare devices to become logical resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . .52
Rename a physical device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
Use IDE drives with CDP/NSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
Rescan adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .53
Import a disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .55
Test physical device throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .56
SCSI aliasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .56
Repair paths to a device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .56
Logical Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .58
Logical resource icons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .59
Write caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
SAN Clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .61
Add a client from the FalconStor Management Console . . . . . . . . . . . . . . . . . . . . .61
Add a client for FalconStor host applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .62
Change the ACSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .63
Grant access to a SAN Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .63
Console options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .64
Create a custom menu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .65
Storage Pools
Manage storage pools and the devices within storage pools . . . . . . . . . . . . . . . . . . . . .66
Create storage pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .67
Set properties for a storage pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .68
Logical Resources
Types of SAN resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .72
Virtual devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .72
Check the status of a thin disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .74
Service-Enabled Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .75
Create SAN resources - Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76
Prepare devices to become SAN resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76
Create a virtual device SAN resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .76
Create a Service-Enabled Device SAN resource . . . . . . . . . . . . . . . . . . . . . . . . . .83
Assign a SAN resource to one or more clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .86
After client assignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
Windows clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
Solaris clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
Expand a virtual device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .92
Service-Enabled Device (SED) expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .94
Grant access to a SAN resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
Unassign a SAN resource from a client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
Delete a SAN resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
CDP/NSS Appliances
Start the CDP/NSS appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96
Stop the CDP/NSS appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .96
Log into the CDP/NSS appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98
Telnet access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98
Check the IPStor Server processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .99
Check physical resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .100
Check activity statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101
Remove a physical storage device from a storage server . . . . . . . . . . . . . . . . . . . . . .102
Configure iSCSI storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102
Configuring iSCSI software initiator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102
Configuring iSCSI hardware HBA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
Uninstall a storage server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
iSCSI Clients
Supported platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .106
Requirements for iSCSI clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .106
Configuring iSCSI clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
Enabling iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
Configure your iSCSI initiator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .107
Add your iSCSI client in the FalconStor Management Console . . . . . . . . . . . . . . . . . .108
Create storage targets for the iSCSI client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .113
Restart the iSCSI initiator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .114
Windows iSCSI clients and failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .114
Disable iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .114
Logs and Reports
Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115
Sort information in the Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .116
Filter information stored in the Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .116
Refresh the Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .117
Print/Export Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .117
Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118
Before you begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .118
Create an individual report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .119
View a report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .123
Export data from a report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .123
Schedule a report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .124
E-mail a scheduled report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .125
Report types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .125
Client Throughput Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .125
Delta Replication Status Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .126
Disk Space Usage Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .128
Disk Usage History Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .129
Fibre Channel Configuration Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .132
Physical Resources Configuration Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133
Physical Resources Allocation Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .134
Physical Resource Allocation Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135
Resource IO Activity Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .135
SCSI Channel Throughput Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .137
SCSI Device Throughput Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .139
SAN Client Usage Distribution Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .140
SAN Client/Resources Allocation Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .141
SAN Resources Allocation Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .142
SAN Resource Usage Distribution Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .143
Server Throughput and Filtered Server Throughput Report . . . . . . . . . . . . . . . . .143
Storage Pool Configuration Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .146
User Quota Usage Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .147
Report types - Global replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .148
Create a global replication report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .148
View global report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .148
Fibre Channel Target Mode
Supported platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .149
Fibre Channel Target Mode - Configuration overview . . . . . . . . . . . . . . . . . . . . . . . . .150
Configure Fibre Channel hardware on server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .151
Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .151
Downstream Persistent binding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .151
VSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .151
Zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .152
Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .153
QLogic HBAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .153
Configure Fibre Channel hardware on clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .155
NetWare clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .156
Solaris clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .156
HBA failover settings for FC client connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .160
Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .160
HP-UX 10, 11, and 11i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .162
AIX 4.3 and higher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .163
Linux (all versions) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .163
Solaris 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .164
NetWare (all versions) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .164
Enable Fibre Channel target mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .165
Disable Fibre Channel target mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .165
Verify the Fibre Channel WWPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .165
Set QLogic ports to target mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .166
Fibre Channel over Ethernet (FCoE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167
QLogic NPIV HBAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167
Set NPIV ports to target mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167
Set up your failover configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .168
Install and run client software and/or manually add Fibre Channel clients . . . . . . . . . .169
Associate World Wide Port Names (WWPN) with clients . . . . . . . . . . . . . . . . . . . . . . .170
Assign virtualized resources to Fibre Channel Clients . . . . . . . . . . . . . . . . . . . . . . . . .171
View new devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .174
Install and configure DynaPath on your Client machines . . . . . . . . . . . . . . . . . . . . . . .174
Spoofing an HBA WWPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .175
SAN Clients
Add a client from the FalconStor Management Console . . . . . . . . . . . . . . . . . . . . . . .176
Add a client for FalconStor host applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .177
Security
System management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .178
Data access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .178
Account management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .179
Security recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .179
Storage network topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .180
Physical security of machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .180
Disable ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .180
Failover
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .181
Shared storage failover sample configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . .184
Failover requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .185
General failover requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .185
General failover requirements for iSCSI clients . . . . . . . . . . . . . . . . . . . . . . . . . . .186
Shared storage failover requirements . . . . . . . . . . . . . . . . . . . . . . . . . .186
FC-based Asymmetric failover requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . .187
Pre-flight checklist for failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .188
Connectivity failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .188
Default failover behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .189
Storage device path failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .190
Storage device failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .190
Storage server failure (including storage device failure) . . . . . . . . . . . . . . . . . . . . . . . .191
Failover restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .192
Failover setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .192
Recreate the configuration repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .203
Power Control options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .203
Check Failover status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
Failover Information report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
Failover network failure status report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .207
After failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .207
Manual recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .207
Auto recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .209
Fix a failed server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .209
Recover from a cross-mirror disk failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .210
Re-synchronize Cross mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
Remove Cross mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
Check resources and swap if possible . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
Verify and repair a cross mirror configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
Modify failover configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .216
Make changes to the servers in your failover configuration . . . . . . . . . . . . . . . . . .216
Convert a failover configuration into a mutual failover configuration . . . . . . . . . . .217
Exclude physical devices from health checking . . . . . . . . . . . . . . . . . . . . . . . . . . .217
Change your failover intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .218
Verify physical devices match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .218
Start/stop failover or recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .219
Force a takeover by a secondary server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .219
Manually start a server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .219
Manually initiate a recovery to your primary server . . . . . . . . . . . . . . . . . . . . . . . .219
Suspend/resume failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .220
Remove a failover configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .221
Mirroring and Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
TimeMark/CDP and Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
Throttle and Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
HotZone and Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .222
Enable HotZone using local storage with failover . . . . . . . . . . . . . . . . . . . . . . . . .223
Performance
SafeCache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .225
Configure SafeCache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .226
Create a cache resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .226
Global Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .230
SafeCache for groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .231
Check the status of your SafeCache resource . . . . . . . . . . . . . . . . . . . . . . . . . . .231
SafeCache properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .231
Disable your SafeCache resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .231
HotZone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .232
Read Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .232
Prefetch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .232
Configure HotZone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .233
Check the status of HotZone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .237
HotZone Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .239
Disable HotZone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .239
Mirroring
Synchronous mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .240
Asynchronous mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .241
Mirror requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .242
Mirror setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .242
Create cache resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .249
Check mirroring status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .250
Swap the primary disk with the mirrored copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .250
Promote the mirrored copy to become an independent virtual drive . . . . . . . . . . . . . .250
Recover from a mirroring hardware failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .252
Replace a disk that is part of an active mirror configuration . . . . . . . . . . . . . . . . . . . . .252
Replace a failed physical disk without rebooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . .253
Expand the primary disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .254
Manually synchronize a mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .254
Set mirror throttle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .255
Set alternative read mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .256
Set mirror resynchronization priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .256
Rebuild a mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .258
Suspend/resume mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .258
Change your mirroring configuration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .259
Set global mirroring options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .259
Remove a mirror configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .260
Mirroring and failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .260
Snapshot Resource
Create a Snapshot Resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .261
Check status of a Snapshot Resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .268
Protect your Snapshot Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .269
Options for Snapshot Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .269
Snapshot Resource shrink and reclamation policies . . . . . . . . . . . . . . . . . . . . . . . . . .270
Enable Reclamation Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .270
Global reclamation policy and retention schedule . . . . . . . . . . . . . . . . . . . . . . . . .272
Disable Reclamation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .273
Check reclamation status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .274
Shrink Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .274
Shrink a snapshot resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .276
Use Snapshot to copy a SAN resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .276
Check Snapshot Copy status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .280
Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .281
Create a group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .281
Groups with TimeMark/CDP enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .282
Groups with SafeCache enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .282
Groups with replication enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .282
Grant access to a group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .283
Add resources to a group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .283
Remove resources from a group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .285
TimeMarks and CDP
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .287
Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .288
Check TimeMark status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .294
Check CDP journal status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .295
Protect your CDP journal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .296
Add a tag to the CDP journal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .296
Add a comment or change priority of an existing TimeMark . . . . . . . . . . . . . . . . . . . . .296
Manually create a TimeMark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .297
Copy a TimeMark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .298
Recover data using the TimeView feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .300
Remap a TimeView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .307
Delete a TimeView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .307
Remove TimeView Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .308
Set TimeView Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .309
Rollback or roll forward a drive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .310
Change your TimeMark/CDP policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .311
TimeMark retention policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .312
Delete TimeViews in batch mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .314
Suspend/resume CDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .314
Delete TimeMarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .315
Disable TimeMark and CDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .315
Replication and TimeMark/CDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .315
NIC Port Bonding
Enable NIC Port Bonding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .316
Remove NIC Port Bonding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .319
Change IP address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .319
Replication
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .320
Remote replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .320
Local replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .320
How replication works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .321
Delta replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .321
Continuous replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .321
Replication configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .322
Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .322
Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .322
Create a Continuous Replication Resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .331
Check replication status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .333
Replication tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .333
Event Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .334
Replication object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .334
Delta Replication Status Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .335
Replication performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
Set global replication options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
Tune replication parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .336
Assign clients to the replica disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .337
Switch clients to the replica disk when the primary disk fails . . . . . . . . . . . . . . . . . . . .337
Recreate your original replication configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .338
Use TimeMark/TimeView to recover files from your replica . . . . . . . . . . . . . . . . . . . . .339
Change your replication configuration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .339
Suspend/resume replication schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .341
Stop a replication in progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .341
Manually start the replication process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .341
Set the replication throttle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .342
Add a Target Site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .343
Manage Throttle windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .345
Manage Link Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .347
Add link types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .348
Edit link types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .348
Delete link types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .348
Set replication synchronization priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .349
Reverse a replication configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .349
Reverse a replica when the primary is not available . . . . . . . . . . . . . . . . . . . . . . . . . . .350
Forceful role reversal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .350
Repair a replica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .351
Relocate a replica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .351
Remove a replication configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .352
Expand the size of the primary disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .352
Replication with other CDP or NSS features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .353
Replication and TimeMark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .353
Replication and Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .353
Replication and Mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .353
Replication and Thin Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .353
Near-line Mirroring
Near-line mirroring requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .355
Near-line mirroring setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .355
Enable Near-line Mirroring on multiple resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . .363
What's next? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .363
Check near-line mirroring status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .364
Near-line recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .365
Recover data from a near-line mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .365
Recover data from a near-line replica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .367
Recover from a near-line replica TimeMark using forceful role reversal . . . . . . . . . . . .370
Swap the primary disk with the near-line mirrored copy . . . . . . . . . . . . . . . . . . . . . . . .373
Manually synchronize a near-line mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .373
Rebuild a near-line mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .373
Expand a near-line mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .374
Expand a service-enabled disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .376
Suspend/resume near-line mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .377
Change your mirroring configuration options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .377
Set global mirroring options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .377
Remove a near-line mirror configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .378
Recover from a near-line mirroring hardware failure . . . . . . . . . . . . . . . . . . . . . . . . . .379
Replace a disk that is part of an active near-line mirror . . . . . . . . . . . . . . . . . . . . . . . .380
Replace a failed physical disk without rebooting your storage server . . . . . . . . . . . . .380
Set Recovery Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .381
ZeroImpact Backup
Configure ZeroImpact backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .382
Back up a CDP/NSS logical resource using dd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .385
Restore a volume backed up using ZeroImpact Backup Enabler . . . . . . . . . . . . . . . . .386
Multipathing
Load distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .388
Preferred paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .388
Path management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .389
Command Line Interface
Installation and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .391
Using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .391
Common arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .392
Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .393
Command Line Interface (CLI) error codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .407
SNMP Integration
SNMP Traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .430
Implement SNMP support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .433
Microsoft System Center Operations Manager (SCOM) . . . . . . . . . . . . . . . . . . . . . . . .434
HP Network Node Manager (NNM) i9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .435
HP OpenView Network Node Manager 7.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .436
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .436
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .436
Statistics in NNM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .437
CA Unicenter TNG 2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .438
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .438
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .438
View traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .439
Statistics in TNG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .439
Launch the FalconStor Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . .439
IBM Tivoli NetView 6.0.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .440
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .440
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .440
Statistics in Tivoli . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .441
BMC Patrol 3.4.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .442
Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .442
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .442
View traps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .443
Statistics in Patrol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .443
Advanced topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .444
The snmpd.conf file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .444
Use an SNMP configuration for multiple storage servers . . . . . . . . . . . . . . . . . . . . . . .444
IPSTOR-MIB tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .445
Email Alerts
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .465
Modifying Email Alerts properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .476
Email format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .477
Limiting repetitive Emails . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .477
Script/program trigger information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .477
BootIP
BootIP setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .480
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .481
Creating a boot image for a diskless client computer . . . . . . . . . . . . . . . . . . . . . . . . . .482
Initializing the configuration of the storage server . . . . . . . . . . . . . . . . . . . . . .483
Enabling the BootIP from the FalconStor Management Console . . . . . . . . . . . . . . . . .483
Using DiskSafe to clone a boot image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .483
Setting the BootIP properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .484
Setting the Recovery Password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .484
Set the Recovery password from the iSCSI user management . . . . . . . . . . . . . .484
Set the authentication and Recovery password from iSCSI client properties . . . .484
Remote boot the diskless computer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .485
For Windows 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .485
For Windows Vista/2008 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .485
Using the Sysprep tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .486
For Windows 2003: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .486
Using the Setup Manager tool to create the Sysprep.inf answer file . . . . . . . . . . .487
For Windows Vista/2008 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .488
Creating a TimeMark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .489
Creating a TimeView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .490
Assigning a TimeView to a diskless client computer . . . . . . . . . . . . . . . . . . . . . . . . . .490
Adding a SAN Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .490
Assigning a TimeView to the SAN Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .491
Recovering Data via Remote boot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .491
Remotely booting the Linux Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .493
Remotely installing CentOS to an iSCSI disk . . . . . . . . . . . . . . . . . . . . . . . . . . . .493
Remote boot from the FalconStor Management Console . . . . . . . . . . . . . . . . . . .493
Remote boot from the Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .494
BootIP and DiskSafe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .494
Remote boot and DiskSafe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .494
Troubleshooting / FAQs
Frequently Asked Questions (FAQ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .496
NIC Port Bonding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .497
SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .497
Event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .497
Virtual devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .497
Multipathing method: MPIO vs. MC/S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .498
BootIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .500
SCSI adapters and devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .501
Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .503
Fibre Channel Target Mode and storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .503
Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .505
FalconStor Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .506
iSCSI Downstream Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .506
Power control option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .508
Protecting data in a Windows environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .508
Protecting data in a Linux environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .508
Protecting data in an AIX environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .509
Protecting data in an HP-UX environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .509
Logical resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .510
Network connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .510
Jumbo frames support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .512
Diagnosing client connectivity issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .512
Windows Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .513
Windows client debug information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .513
Clients with iSCSI protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .515
Clients with Fibre Channel protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .516
Linux SAN Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .516
NetWare SAN Client . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .517
Storage Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .518
Storage server X-ray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .518
Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .520
Cross-mirror failover on a virtual appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .521
Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .522
TimeMark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .523
SafeCache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .523
Command line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .523
Service-Enabled Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .524
Error codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .525
Port Usage
SMI-S Integration
SMI-S Terms and concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .607
Using the SMI-S Provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .608
Launch the Command Central Storage console . . . . . . . . . . . . . . . . . . . . . . . . . .608
Add FalconStor Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .608
View FalconStor Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .609
View Storage Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .609
View LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .609
View Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .609
View Masking Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .610
Enable SMI-S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .610
RAID Management for VS-Series Appliances (Updated 12/1/11)
Prepare for RAID management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .612
Preconfigured storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .613
Unconfigured storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .613
Launch the RAID Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .615
Discover storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .615
Future storage discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .617
Display a storage profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .618
Rename storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .619
Refresh the display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .619
Configure controller connection settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .620
View enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .621
Individual enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .621
Manage controller modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .623
Individual controller modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .624
Manage disk drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .625
Interactive enclosure images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .625
Individual disk drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .627
Configure a hot spare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .628
Remove a hot spare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .628
Manage RAID arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .629
Create a RAID array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .630
Create a Logical Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .631
Individual RAID arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .632
Rename the array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .634
Delete the array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .634
Check RAID array actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .635
Replace a physical disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .635
Logical Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .638
Define LUN mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .638
Remove LUN mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .640
Rename LU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .640
Delete Logical Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .641
Logical Unit Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .642
Unmapped Logical Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .642
Mapped Logical Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .644
Upgrade RAID controller firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .645
Event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646
Filter the event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646
Clear the event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .646
Monitor storage from the FalconStor Management console . . . . . . . . . . . . . . . . . . . . .647
Storage information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .647
Server information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .648
Index
Introduction
As business IT operations grow in size and complexity, many computing
environments are stressed in attempting to keep up with the demand to store and
access data. Information and the effective management of the corresponding
storage infrastructure are critical to a company's success. Reliability, availability, and
disaster recovery capabilities are all key factors in the successful management and
protection of data.
FalconStor Continuous Data Protector (CDP) and Network Storage Server (NSS)
solutions address the growing need for data management, protection, preservation,
and integrity.
Network Storage Server (NSS)
FalconStor Network Storage Server (NSS) enables storage virtualization,
optimization, and efficiency across heterogeneous storage from any storage system,
providing consolidation, business continuity, and automated disaster recovery (DR).
NSS enables high availability and data assurance, provides instant recovery with
data integrity for all applications, and protects investments across mixed storage
environments.
Business continuity is a critical aspect of operations and revenue generation. As
businesses evolve, storage infrastructures increase in complexity. Some resources
remain underutilized while others are over-utilized - an inefficient use of power,
capacity, and money. FalconStor solutions allow organizations to consolidate
storage resources for simple and centralized management with high availability.
Complete application awareness provides quick and 100% transactionally
consistent data recovery. Automated DR technology simplifies operations. An open,
flexible architecture enables organizations to leverage existing IT resources to
create an integrated, multi-tiered, cost-efficient storage environment that ensures
business continuity.
FalconStor NSS includes FalconStor TimeMark snapshots that work with Snapshot
Agents for databases and messaging applications, providing 100% transactional
integrity for instant recovery to known points in time, helping IT meet recovery point
objectives (RPO) and recovery time objectives (RTO). Data managed by FalconStor
NSS may be efficiently replicated via IP using the FalconStor Replication option for
real-time disaster recovery (DR) protection. Thin Provisioning helps automate
storage resource allocation and capacity management while virtualization provides
centralized management for large, heterogeneous storage environments.
Continuous Data Protector (CDP)
FalconStor Continuous Data Protector (CDP) advanced data protection solutions
allows organizations to customize and define protection policies per business
application, maximizing IT business operations and profitability.
Protecting data from loss or corruption requires thorough, effective planning.
Comprehensive data protection solutions from FalconStor provide unified backup
and disaster recovery (DR) for continuous data availability. Organizations can
recover emails, files, applications, and entire systems within minutes, locally and
remotely. Application-level integration ensures quick, 100% transactionally
consistent recovery to any point in time. WAN-optimized replication maximizes
network efficiency. By fully automating the resumption of servers, storage, networks,
and applications in a pre-determined, coordinated process, embedded DR
automation technology stages the recovery of complete services - thus facilitating
service-oriented disaster recovery.
CDP automates disaster recovery for physical and virtual servers, allows rapid
recovery of files, databases, systems, and entire sites while reducing the cost and
complexity associated with recovery.
Architecture
NSS
FalconStor NSS is available in multiple form factors. Appliances with internal
storage are available in various sizes for easy deployment to remote sites or offices.
Two FalconStor NSS devices can be interconnected for mirroring and active/active
failover, ensuring HA operations. FalconStor NSS gateway appliances can be
connected to any external storage arrays, allowing you to leverage the storage
systems you have in place. FalconStor NSS can also be purchased as software
(software appliance kits) to install on servers.
CDP
FalconStor CDP can be deployed in several ways to best fit your organization's
needs. FalconStor CDP is available in multiple configurations suitable for remote
offices, branch offices, data centers, and remote DR sites. Appliances with internal
storage for both physical and virtual servers are available in various sizes for easy
deployment to remote sites or offices. Gateway appliances can be connected to any
existing external storage array, allowing you to use and reuse the storage systems
you already have in place.
FalconStor CDP can also be purchased as a software appliance kit to install on
servers or as a virtual appliance that integrates with virtual server technology.
FalconStor CDP can use both a host-based approach and a fabric-based approach
to capture and track data changes. For a host-based model, a FalconStor DiskSafe
Agent runs on the application server to capture block-level changes made to a
system or data disk without impacting application performance. It mirrors the data to
a back-end FalconStor CDP appliance, which handles all of the data protection
operations. All journaling, snapshot processing, mirroring, and replication occur on
the out-of-band FalconStor CDP appliance, so that primary storage I/O remains
unaffected.
In the fabric-based model, a pair of FalconStor CDP Connector write-splitting
appliances are placed into a FC or iSCSI SAN fabric. FalconStor CDP gateway
appliances function similarly to switches: they split data writes off to one or more
out-of-band FalconStor CDP appliances that provide data protection functionality.
The pair of FalconStor connector appliances is always configured in a high
availability (HA) cluster to provide fault tolerance.
Components
The primary components of the CDP/NSS Storage Network are the storage server,
SAN clients, and the FalconStor Management Console. These components all sit on
the same network segment, the storage network.
Server
The storage server is a dedicated network storage server. The storage server is
attached to the physical SCSI and/or Fibre Channel storage devices on one or more
SCSI or Fibre Channel busses.
The job of the storage server is to communicate data requests between the clients
and the logical (SAN) resources (logically mapped storage devices on the storage
network) via Fibre Channel or iSCSI.
SAN Clients
SAN Clients are the actual file and application servers. They are sometimes referred
to as IPStor SAN Clients because they utilize the storage resources via the storage
server.
You can have iSCSI or Fibre Channel SAN Clients on your storage network. SAN
Clients access their storage resources via iSCSI initiators (for iSCSI) or HBAs (for
Fibre Channel or iSCSI). The storage resources appear as locally attached devices
to the SAN Clients' operating systems (Windows, Linux, Solaris, etc.) even though
the SCSI devices are actually located at the storage server.
Console
The FalconStor Management Console is the administration tool for the storage
network. It is a Java application that can be used on a variety of platforms and
allows IPStor administrators to create, configure, manage, and monitor the storage
resources and services on the storage network.
Physical
Resource
Physical resources are the actual devices attached to this storage server. These can
be hard disks, tape drives, device libraries, and RAID cabinets.
Logical
Resource
All resources defined on the storage server, including physical SAN Resources
(virtual drives, and Service-Enabled Devices), Replica Resources, and Snapshot
Groups.
Clients do not gain access to physical resources; they only have access to logical
resources. This means that an administrator must configure each physical resource
to one or more logical resources so that they can be assigned to the clients.
Logical Resources consist of sets of storage blocks from one or more physical hard
disk drives. This allows the creation of Logical Resources that contain a portion of a
larger physical disk device or an aggregation of multiple physical disk devices.
Understanding how to create and manage Logical Resources is critical to a
successful storage network. See Logical Resources for more information.
Acronyms
ACL - Access Control List
ACSL - Adaptor, Channel, SCSI ID, LUN
API - Application Programming Interface
BDC - Backup Domain Controller
BMR - Bare Metal Recovery
CCM - Central Client Manager
CCS - Command Central Storage
CDP - Continuous Data Protector
CDR - Continuous Data Replication
CHAP - Challenge Handshake Authentication Protocol
CIFS - Common Internet File System
CLI - Command Line Interface
DAS - Direct Attached Storage
FC - Fibre Channel
FCoE - Fibre Channel over Ethernet
GUI - Graphical User Interface
GUID - Globally Unique Identifier
HBA - Host Bus Adapter
HCA - Host Channel Adapter
IMA - Intelligent Management Administrator
I/O - Input / Output
IPMI - Intelligent Platform Management Interface
iSCSI - Internet Small Computer System Interface
JBOD - Just a Bunch Of Disks
LAN - Local Area Network
LUN - Logical Unit Number
MIB - Management Information Base
NFS - Network File System
NIC - Network Interface Card
NPIV - N_Port ID Virtualization
NSS - Network Storage Server
NTFS - NT File System
NVRAM - Non-volatile Random Access Memory
OID - Object Identifier
PDC - Primary Domain Controller
POSIX - Portable Operating System Interface
RAID - Redundant Array of Independent Disks
RAS - Reliability, Availability, Service
RPC - Remote Procedure Call
SAN - Storage Area Network
SCSI - Small Computer System Interface
SDM - SAN Disk Manager
SED - Service-Enabled Device
SMI-S - Storage Management Initiative Specification
SNMP - Simple Network Management Protocol
SRA - Snapshot Resource Area
SSD - Solid State Disk
VAAI - vStorage APIs for Array Integration
VLAN - Virtual Local Area Network
VSS - Volume Shadow Copy Service
WWNN - World Wide Node Number
WWPN - World Wide Port Number
Terminology
Appliance-Based protection
Appliance-based protection, or in-band protection, refers to a storage server that is
placed in the data path between an application host and its storage. This allows
CDP/NSS to provision the disk back to the application host while allowing data
protection services.
Bare Metal Recovery (BMR)
The process of rebuilding a computer after a catastrophic failure. The normal bare
metal restoration process is: install the operating system from the product disks,
install the backup software (so you can restore your data), and then restore your
data.
Central Client Manager (CCM)
A Java console that provides central management of client-side applications
(DiskSafe, snapshot agents) and monitors client storage. CCM allows you to
manage your clients in groups, enhancing accuracy and consistency of policies
across grouped servers. For example, Exchange groups, SharePoint groups. For
additional information, refer to the Central Client Manager User Guide.
CDP Gateway
Additional term for a storage server/CDP appliance that is providing Continuous
Data Protection.
Command Line Interface
The Command Line Interface (CLI) is a simple interface that allows client machines
to perform some of the more common functions currently performed by the
FalconStor Management Console. Administrators can use the CLI to automate
many tasks, as well as integrate CDP/NSS with their existing management tools.
The CLI is installed as part of the CDP/NSS Client installation. Once installed, a path
must be set up for Windows clients in order to be able to use the CLI. Refer to the
Command Line Interface section for details.
Cross-Mirror Failover
(For virtual appliances only). A non-shared storage failover option that provides high
availability without the need for shared storage. Used with virtual appliances
containing internal storage. Mirroring is facilitated over a dedicated, direct IP
connection. This option removes the requirements of shared storage between two
partner storage server nodes. For additional information on using this feature for
your virtual appliances, refer to Cross-mirror failover requirements.
DiskSafe Agent
The CDP DiskSafe Agent is host-based replication software that delivers block-level
data protection with centralized management for Microsoft Windows-based servers
as part of the CDP solution. The DiskSafe Agent delivers real-time and periodic
mirroring for both DAS and SAN storage to complement the CDP Journaling feature,
TimeMark Snapshots, and Replication.
DynaPath
DynaPath is a load balancing/path redundancy application that ensures constant
data availability and peak performance across the SAN by performing Fibre Channel
load-balancing, transparent failover, and fail-back services. DynaPath creates
parallel active storage paths that transparently reroute server traffic without
interruption in the event of a storage network problem. For additional information,
refer to the DynaPath User Guide.
E-mail Alerts
Using pre-configured scripts (called triggers), Email Alerts monitors a set of predefined, critical system components (SCSI drive errors, offline device, etc.) so that
system administrators are able to take corrective measures within the shortest
amount of time, ensuring optimum service uptime and IT efficiency. For additional
information, refer to the Email Alerts section.
FalconStor Management Console
Comprehensive, graphical administration tool to configure all data protection
services, set properties, and manage storage. For more information, refer to the
FalconStor Management Console section.
FileSafe
FileSafe is a software application that protects your data by backing up files and
folders to another location. Data is backed up to a location called a repository. The
repository can be local (on your computer or on a USB device), remote (on a shared
network server or NAS resource), or on a storage server where the FileSafe Server
option is licensed and enabled. For more information, see the FileSafe User Guide.
GUID
The Globally Unique Identifier (GUID) is a unique 128-bit number that is used to
identify a particular component, application, file, database entry, and/or user.
Host Zone
Usually a Fibre Channel zone that is comprised of an application server's initiator
port and a CDP/NSS target port. For more information, refer to Zoning.
Host-based protection
Host-based protection refers to DiskSafe and FileSafe, where the locally
attached disk is mirrored to a CDP-provisioned disk with data protection services.
HotZone
A CDP/NSS option that automatically re-maps data from frequently used areas of
disks to higher performance storage devices in the infrastructure, resulting in
enhanced read performance for the application accessing the storage. This feature
is not available for CDP connector appliances. For additional information, refer to
the HotZone section.
HyperTrac
The HyperTrac Backup Accelerator (HyperTrac) works in conjunction with CDP and
NSS to increase tape backup speed, eliminate backup windows, and off load
processing from application servers.
HyperTrac for VMware enhances the functionality of VMware Consolidated Backup
(VCB) by allowing TimeViews of the production virtual disk to be used as the source of
the VCB snapshot. Unlike the traditional HyperTrac model, the TimeViews are not
mounted directly to the storage server.
HyperTrac for Hyper-V enables mounting production TimeViews for backup via
Microsoft Hyper-V machines. For more information, refer to the HyperTrac User
Guide.
IPMI
Intelligent Platform Management Interface (IPMI) is a hardware-level interface that
monitors various hardware functions on a server.
iSCSI Client
iSCSI clients are the file and application servers that access CDP/NSS SAN
Resources using the iSCSI protocol.
iSCSI Target
A storage target for the client.
Logical Resources
Logically mapped devices on the storage server. They are comprised of physical
storage devices, known as Physical Resources.
MIB
A Management Information Base (MIB) is an ASCII text file that describes SNMP
network elements as a list of data objects. It is a database of information, laid out in
a tree structure, with MIB objects as the leaf nodes, that you can query from an
SNMP agent. The purpose of the MIB is to translate numerical strings into human-readable text. When an SNMP device sends a Trap, it identifies each data object in
the message with a number string called an object identifier (OID). Refer to the
SNMP Integration section for additional information.
MicroScan
FalconStor MicroScan is a patented de-duplication technology that minimizes the
amount of data transferred during replication by eliminating inefficiencies at the
application and file system layer. Data changes are replicated at the
smallest possible level of granularity, reducing bandwidth and associated storage
costs for disaster recovery (DR), or any time data is replicated from one source to
another. MicroScan is an integral part of the replication option for CDP and NSS
solutions.
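As a rough illustration of this idea only (not FalconStor's actual algorithm), the following Python sketch finds which fixed-size sections of a block have changed, so that only those sections would need to be transmitted; the section size is an arbitrary example value:
# Illustration only: locate changed sections within a block so that only
# those sections need to be sent. SECTION size is an arbitrary example.
SECTION = 512

def changed_sections(old_block: bytes, new_block: bytes):
    changed = []
    for offset in range(0, len(new_block), SECTION):
        if new_block[offset:offset + SECTION] != old_block[offset:offset + SECTION]:
            changed.append(offset)
    return changed

old = bytes(4096)
new = bytearray(old)
new[1024:1029] = b"delta"
print(changed_sections(old, bytes(new)))   # [1024]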
NPIV
N_Port ID Virtualization (NPIV) allows multiple N_Port IDs to share a single physical
N_Port. This allows an initiator, a target, and a standby to occupy the same physical
port. NPIV is not supported when using a non-NPIV driver.
NIC Port Bonding
NIC Port Bonding is a load-balancing/path-redundancy feature (available for Linux)
that enables your storage server to load-balance network traffic across two or more
network connections creating redundant data paths throughout the network.
OID
The Object Identifier (OID) is a unique number written as a sequence of
sub-identifiers in decimal notation. For example, 1.3.6.1.4.1.2681.1.2.102. It uniquely
identifies data objects that are the subjects of an SNMP message. When your
SNMP device sends a Trap or a GetResponse, it transmits a series of OIDs, paired
with their current values.
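As a small illustration (not part of the product), an OID string such as the example above can be split into its numeric sub-identifiers, for instance in Python:
# Illustration only: split an OID string into its numeric sub-identifiers.
def parse_oid(oid: str) -> list:
    return [int(part) for part in oid.strip(".").split(".")]

print(parse_oid("1.3.6.1.4.1.2681.1.2.102"))
# [1, 3, 6, 1, 4, 1, 2681, 1, 2, 102]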
Prefetch
A feature that enables pre-fetching of data for clients. This allows clients to read
ahead consecutively, which can result in improved performance because the
storage server will have the data ready from the anticipatory read as soon as the
next request is received from the client. This will reduce the latency of the command
and improve the sequential read benchmarks in most cases. For additional
information, refer to the Prefetch section.
Read Cache
An intelligent, policy-driven, disk-based staging mechanism that automatically
remaps "hot" (frequently used) areas of disks to high-speed storage devices, such
as RAM disks, NVRAM, or Solid State Disks (SSDs). For additional information,
refer to the Read Cache section.
RecoverTrac
FalconStor RecoverTrac is a disaster recovery tool that maps servers, applications,
networking, storage, and failover procedures from source sites to recovery sites,
automating the logistics involved in resuming business operations at the recovery
site. While RecoverTrac extends the functionality of FalconStor CDP/NSS solutions,
the application operates in all environments, independent of server, network,
application, or storage vendor.
Recovery Agents
FalconStor recovery agents (available from http://fscs.falconstor.com) offer recovery
solutions for your database and messaging systems. FalconStor Message
Recovery for Microsoft Exchange (MRE) and Message Recovery for Lotus Notes/
Domino (MRN) expedite mailbox/message recovery by enabling IT administrators to
quickly recover individual mailboxes from point-in-time snapshot images of their
messaging server. FalconStor Database Recovery for Microsoft SQL Server
expedites database recovery by enabling IT administrators to quickly recover a
database from point-in-time snapshot images of their SQL database. For details,
refer to the Recovery Agents User Guide.
Replication
The process by which a SAN Resource maintains a copy of itself either locally or at
a remote site. The data is copied, distributed, and then synchronized to ensure
consistency between the redundant resources. The SAN Resource being replicated
is known as the primary disk. The changed data is transmitted from the primary to
the replica disk so that they are synchronized. Under normal operation, clients do
not have access to the replica disk.
The replication option works with both CDP and NSS solutions to replicate data over
any existing infrastructure. In addition, it can be used for site migration, remote site
consolidation for backup, and similar tasks. Using a TOTALLY Open storage-centric
approach, replication is configured and managed independently of servers,
so it integrates with any operating system or application for cost-effective disaster
recovery (DR). For additional information, refer to the Replication section.
Replication Scan
A scan comparing the primary and replica disk for differences. If the primary and
replica disk are known to have similar data (bit by bit, not file by file), then a manual
scan is recommended. The initial scan is automatically triggered and all subsequent
scans must be manually triggered (right-click on a device and select Replication >
Scan).
Retention
TimeMark retention allows you to set TimeMark preservation patterns. The
TimeMark retention schedule can be set by right-clicking on the server and
selecting Properties --> TimeMark Maintenance tab.
SafeCache
This option offers improved performance by using high-speed storage devices as a
persistent (non-volatile) read/write cache. The persistent cache can be mirrored for
added protection. This option is not available for CDP connector appliances. For
additional information, refer to the SafeCache section.
SAN Resource
Provides storage for file and application servers (called SAN Clients). When a SAN
Resource is assigned to a SAN client, a virtual adapter is defined for that client. The
SAN Resource is assigned a virtual SCSI ID on the virtual adapter. This mimics the
configuration of actual SCSI storage devices and adapters, allowing the operating
system and applications to treat them like any other SCSI device. For information on
creating a SAN resource, refer to the Create SAN resources - Procedures section.
Service-Enabled Device
Service-Enabled Devices are hard drives or RAID LUNs with existing data that can
be accessed by CDP or NSS to make use of all key CDP/NSS storage services
(mirroring, snapshot, etc.). This can be done without any migration/copying, without
any modification of data, and with minimal downtime. Service-Enabled Devices are
used to migrate existing drives into the SAN.
SMI-S
The FalconStor Storage Management Initiative Specification (SMI-S) Provider for
CDP and NSS storage enables CDP and NSS users to have central management of
multi-vendor storage networks for more efficient utilization. CDP and NSS solutions
use the SMI-S standard to expose the storage systems they manage to the SMI-S
Client. A typical SMI-S Client can discover FalconStor devices through this interface.
It utilizes CIM-XML, which is a WBEM protocol that uses XML over HTTP to
exchange Common Information Model (CIM) information. For additional information,
refer to the SMI-S Integration section.
Snapshot
A snapshot of an entire device allows us to capture data at any given moment in
time and move it to either tape or another storage medium, while allowing data to be
written to the device. You can perform a snapshot to capture a point-in-time image of
your data volumes (virtual drives) using minimal storage space. For additional
information, refer to the Snapshot Resource section.
Snapshot Agent
Application-aware Snapshot Agents provide complete data protection for active
databases such as Microsoft SQL Server, Oracle, Sybase, and DB2, and messaging
applications such as Microsoft Exchange and Lotus Notes. These agents work with
both CDP and NSS to ensure that snapshots are taken with full transactional
integrity. For details, refer to the Snapshot Agents User Guide.
SNMP
Simple Network Management Protocol (SNMP) is an Internet-standard protocol for
managing devices on IP networks. For additional information, refer to the SNMP
Integration section.
Storage Cluster Interlink Port
A physical connection between two servers. Version 7.0 and later requires a Storage
Cluster Interlink Port for failover setup. For additional information regarding the
Storage Cluster Interlink, refer to the Failover section.
Thin Provisioning
For virtual resources, Thin Provisioning allows you to use your storage space more
efficiently by allocating a minimum amount of space for the virtual resource. Then,
when usage thresholds are met, additional storage is allocated as necessary. Thin
Provisioning may be applied to primary storage, replica storage (at the disaster
recovery [DR] site), and mirrored storage. For additional information, refer to the
Thin Provisioning section.
TimeMark
TimeMark technology works with CDP and NSS to enable you to create scheduled
and on-demand point-in-time delta snapshot copies of data volumes. TimeMark
includes the FalconStor TimeView feature, which creates an accessible, mountable
image of any snapshot. This provides a tool to freely create multiple and
instantaneous virtual copies of an active data set. The TimeView images can be
assigned to multiple application servers with read/write access for concurrent,
independent processing, while the original data set is actively accessed and
updated by the primary application server. For additional information, refer to the
TimeMarks and CDP section.
TimeView
An extension of the TimeMark option that allows you to mount a virtual drive as of a
specific point in time. For additional information, refer to the Recover data using the
TimeView feature section.
Trap
Asynchronous notification from agent to manager. Includes current sysUpTime
value, an OID identifying the type of trap and optional variable bindings. Destination
addressing for traps is determined in an application specific manner typically
through trap configuration variables in the MIB. For additional information, refer to
the SNMPTraps section.
Trigger
An event that tells your CDP/NSS-enabled application when it is time to perform a
snapshot of a virtual device. FalconStor's Replication, TimeMark/CDP, Snapshot
Copy, and ZeroImpact Backup options all trigger snapshots.
VAAI
A VAAI-aware storage device is able to understand commands from hypervisor
resources and perform storage functions.
WWN Zoning
Zoning that uses the WWPN in the configuration. The WWPN remains the same in
the zoning configuration regardless of the port location. If a port fails, you can simply
move the cable from the failed port to another valid port without having to
reconfigure the zoning.
ZeroImpact Backup Enabler
Allows you to perform a local raw device tape backup/restore of your virtual drives.
This eliminates the need for the application server to play a role in backup and
restore operations.
Web Setup
Once you have physically connected the appliance, powered it on, and performed
the following steps via the Web Setup installation and server setup, you are ready
to begin using your CDP or NSS storage server.
These steps may have already been completed for you. Refer to the Software Quick
Start Guide for details regarding each of the following steps:
1. Configure the Appliance
The first time you connect, you will be asked to:
- Select a language. (If the wrong language is selected, click your browser back
button or go to http://10.0.0.2/language.php to return to the language selection
page.)
- Read and agree to the FalconStor End User License Agreement.
- (Storage appliances only) Configure your RAID system.
- Enter the network configuration for your appliance.
2. Manage License Keys
Enter the server license keys.
3. Check for Software Updates
Click the Check for Updates button to check for updated agent software.
Click the Download Updates button to download the selected client
software.
4. Install Management Software and Guides
5. Install Client Software and Guides
6. Configure Advanced Features
Advanced features allow you to add storage capacity via Fibre Channel or iSCSI
or disable web services if your business policy requires web services to be
disabled.
If you encounter any problems while configuring your appliance, contact FalconStor
technical support via the web at: www.falconstor.com/supportrequest.
Getting started with CDP
Once you have connected your CDP hardware to your network and set your network
configuration via Web Setup, you are ready to protect your data.
The Host-based CDP method uses a host-based device driver to mirror existing
user volumes/LUNs to the CDP appliance. For information on protecting your data in
a Windows or Linux environment, refer to the DiskSafe User Guide.
For Unix platforms, such as HP-UX, the native OS volume manager is used to mirror
data to the CDP appliance. For information on protecting your data in an AIX, HP-UX, or Solaris environment, refer to the appropriate section in the Protecting
Systems with Volume Managers User Guide, which can be downloaded from
FalconStor.com.
Protection can also be set using the FalconStor Management Console. Refer to the
FalconStor Management Console section.
On the CDP appliance, TimeMark and CDP journaling can be configured to create
recovery points to protect the mirrored disk. Replication can also be used for
disaster recovery protection. FalconStor Snapshot Agents are installed on the host
machines to ensure transactional level integrity of each snapshot or replica.
Getting started with NSS
The storage appliance is the central component of the network. It is the storage
device that connects to hosts via industry standard iSCSI (or Fibre Channel)
protocols.
Before you undertake the activities described in this guide, make sure the appliance
has already been racked, connected, and the initial power-on instructions have been
completed for the appliance according to the FalconStor Hardware QuickStart
Guide that was shipped with the appliance.
Also make sure Web Setup has been completed according to the instructions in the
FalconStor NSS Software QuickStart Guide, which was also shipped with the
appliance.
Once you have connected your NSS hardware, you can discover all storage servers
on your storage subnet by selecting Tools --> Discover. For details, refer to Connect
to your storage server in the FalconStor Management Console section.
FalconStor Management Console
The FalconStor Management Console is the administration tool for the storage
network. It is a Java application that can be used on a variety of platforms and
allows administrators to create, configure, manage, and monitor the storage
resources and services on the storage server network as well as run/view reports,
enter licensing information, and add/delete administrators.
The FalconStor Management Console software can be installed on each machine
connected to a storage server. The console is also available via download from your
storage server appliance. If you cannot install the FalconStor Management Console
on every client, you can launch a web-based version of the console from your
browser and enter the IP address of the CDP/NSS server.
Launch the console
To launch an installed version of the console in a Microsoft Windows environment,
select Start --> Programs --> FalconStor --> IPStor --> IPStor Console.
In Linux and other UNIX environments, execute the following:
cd /usr/local/ipstorconsole
./ipstorconsole
Notes:
- If your screen resolution is 640 x 480, the splash screen may be cut off while the
console loads.
- The console might not launch on certain systems with display settings configured
to use 16 colors.
- The console needs to be run from a directory with write access. Otherwise, the
host name information and message log file retrieved from the storage server
cannot be saved to the local directory. As a result, the console will display event
messages as numbers and console options cannot be saved.
- You must be signed on as the local administrator of the machine on which you are
installing the Windows console package.
To launch a web-based version of the console, open a browser from any machine
and enter the IP address of the CDP/NSS server (for example: http://10.0.0.2) and
the console will launch. If you have Web Setup, select the Go button next to Install
Management Software and Guides and click the Launch Console link.
In the future, to skip going through Web Setup, open a browser from any machine
and enter the IP address of the CDP/NSS server followed by :81 (for example:
http://10.0.0.2:81/) to launch the console. The computer running the browser must
have Java Runtime Environment (JRE) version 1.6 installed.
Connect to your storage server
1. Discover all storage servers on your storage subnet by selecting Tools -->
Discover.
2. Connect to a storage server.
You can connect to an existing storage server by right-clicking on it and
selecting Connect. Enter a valid user name and password (both are case
sensitive).
To connect to a server that is not listed, right-click on the Storage Servers object,
select Add, and enter the name of the server, a valid user name, and password.
When you connect to a server for the first time, a configuration wizard is
launched to guide you through the set up process.
You may see a dialog box notifying you of new devices attached to the server.
Here, you will see all devices that are either unassigned or reserved. At this point
you can either prepare the device (reserve it for a virtual or Service-Enabled
Device) and/or create a logical resource.
Once you are connected to a server, the server icon will change to show that you
are connected.
If you connect to a server that is part of a failover configuration, you will
automatically be connected to both servers.
Note: The FalconStor Management Console remembers the servers to which
the console has successfully connected. When you close and restart the console, the servers display in the tree but you are not automatically connected
to them.
Configure your server using the configuration wizard
The configuration wizard guides you through entering license keycodes and setting
up your network configuration. If this is the first time you are connecting to your CDP
or NSS server, the configuration wizard guides you through the steps below. You
will only see step 4 if IPStor detected IPMI when the server booted up.
Step 1: Enter license keys
Click the Add button and enter your keycodes.
Be sure to enter keycodes for any options you have purchased. Each FalconStor
option requires that a keycode be entered before the option can be configured and
used. Refer to Licensing for more information.
Note: After completing the configuration wizard, if you need to add license
keycodes, you can right-click on your CDP/NSS appliance and select License.
Step 2: Setup network
Enter information about your network configuration.
If you need to change storage server IP addresses, you must make these changes
using System Maintenance --> Network Configuration in the console. Using YaST or
other third-party utilities will not update the information correctly.
Refer to Network configuration for more information.
Note: After completing the configuration wizard, if you need to change these
settings, you can right-click on your CDP/NSS appliance and select System
Maintenance --> Network Configuration.
Step 3: Set hostname
Enter a valid name for your storage appliance.
Valid characters are letters, numbers, underscore, or dash.
You will need to restart the server if you change the hostname.
Note: Do not change the hostname if you are using block devices. If you do, all
block devices claimed by CDP/NSS will be marked offline and seen as foreign
devices.
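The character rule above (letters, numbers, underscore, or dash) can be pictured with a small validation sketch; this check is an illustration of the stated rule, not part of the product:
import re

# Illustration only: accept letters, digits, underscore, or dash, per the
# hostname rule described above.
HOSTNAME_RE = re.compile(r"^[A-Za-z0-9_-]+$")

def is_valid_hostname(name: str) -> bool:
    return bool(HOSTNAME_RE.match(name))

print(is_valid_hostname("nss-server_01"))   # True
print(is_valid_hostname("bad name!"))       # False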
FalconStor Management Console user interface
The FalconStor Management Console displays the configuration for the storage
servers on your storage network. The information is organized in a familiar Explorer-like tree view.
The tree allows you to navigate the various storage servers and their configuration
objects. You can expand or collapse the display to show only the information that
you wish to view. To expand an item that is collapsed, click on the plus (+) symbol
next to the item. To collapse an item, click on the minus (-) symbol next to the item.
Double-clicking on the item will also toggle the expanded/collapsed view of the item.
You need to connect to a server before you can expand it.
When you highlight an object in the tree, the right-hand pane contains detailed
information about the object. You can select one of the tabs for more information.
The console log located at the bottom of the window displays information about the
local version of the console. The log features a drop-down box that allows you to
see activity from this console session.
Search for objects in the tree
The console has a search feature that helps you find any physical device, virtual
device, or client on any storage server. To search:
1. Highlight a storage server in the tree.
2. Select Edit menu --> Find.
3. Select the type of object to search for and the search criteria.
Once you select an object type, a list of existing objects appears. If you highlight
one, you will be taken directly to that object in the tree.
Alternatively, you can type the full name, ID, ACSL (adapter, channel, SCSI,
LUN), or GUID (Globally Unique Identifier). Once you click the Search button,
you will be taken directly to that object in the tree.
Storage server status and configuration
The console displays the configuration and status of the storage server.
Configuration information includes the version of the CDP or NSS software and
base operating system, the type and number of processors, amount of physical and
swappable memory, supported protocols, and network adapter information.
The Event Log tab displays system events and errors.
Alerts
The console displays all critical alerts upon login to the server. Select the Display
only the new alerts next time option if you only want to see new critical alerts the next time
you log in. Selecting this option indicates acknowledgement of the alerts.
Discover storage servers
CDP/NSS can automatically discover all storage servers on your storage subnet.
Storage servers running CDP or NSS will be recognized as storage servers. To
discover the servers:
1. Select Tools --> Discover
2. Enter your network criteria.
Protect your storage servers configuration
FalconStor provides several convenient ways to protect your CDP or NSS
configuration. This is useful for disaster recovery purposes, such as if a storage
server is out of commission but you have the storage disks and want to use them to
build a new storage server. You should create a configuration repository even on a
standalone server.
Continuously save configuration
You can create a configuration repository that maintains a continuously updated
version of your storage system configuration. The status of the configuration
repository is displayed on the console under the General tab. In the case of a failure
of the configuration repository, the console displays the time of the failure along with
the last successful update. This feature works seamlessly with the FalconStor
Failover option to provide business continuity in the event that a storage server fails.
For additional redundancy, the configuration repository can be mirrored to another
disk.
To create a configuration repository, make sure there is at least 10 GB of available space.
1. Highlight a storage server in the tree.
2. Right-click on the server and select Options --> Enable Configuration
Repository.
3. Select the physical device(s) for the Configuration Repository resource.
4. Confirm all information and click Finish to create the repository.
You will now see a Configuration Repository object in the tree under Logical
Resources.
To mirror the repository, right-click on it and select Mirror --> Add.
If you are using the FalconStor Failover option and a storage server fails, the
secondary server will automatically use the configuration repository to maintain
business continuity and service clients.
If you are not using the Failover option, you will need to contact technical support
when preparing your new storage server.
Auto save configuration
You can set your system to automatically replicate your system configuration to an
FTP server on a regular basis. Auto Save takes a point-in-time snapshot of the
storage server configuration prior to replication. To use Auto Save:
1. Right-click on the server and select Properties.
2. Select the Auto Save Config tab and enter information for automatically saving
your storage server system configuration.
For detailed information about this dialog, refer to the Auto Save Config section.
Licensing
To license CDP/NSS and its options, make sure you have obtained your CDP/NSS
keycode(s) from FalconStor or its representatives. Once you have the license
keycodes, follow the steps below:
1. In the console, right-click on the server and select License.
The License Summary window is informational only and displays a list of the
options supported for this server. You can enter keycodes for your purchased
options on the Keycodes Detail window.
2. Press the Add button on the Keycodes Detail window to enter each keycode.
Note: If multiple administrators are logged into a storage server at the same
time, license changes made from one console will take effect in the other consoles
only when the administrator disconnects and then reconnects to the server.
3. If your licenses have not been registered yet, click the Register button on the
Keycodes Detail window.
You can register online if you have an Internet connection.
To register offline, you must save the registration information to a file on your
hard drive and then email it to FalconStor's registration server. When you
receive a reply, save the attachment to your hard drive and send it to the
registration server to complete the registration.
Note: Registration information file names can only use alphanumeric
characters and must have a .dat extension. You cannot use a single digit as
the name. For example, company1.dat is valid (1.dat is not valid).
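As an illustration of this naming rule only (alphanumeric characters, a .dat extension, and not a single digit as the name), a file name could be checked as follows; this helper is not part of the product:
import re

# Illustration only: alphanumeric base name, .dat extension, and the base
# name may not be a single digit (e.g. "1.dat" is rejected).
def is_valid_registration_name(filename: str) -> bool:
    match = re.fullmatch(r"([A-Za-z0-9]+)\.dat", filename)
    if not match:
        return False
    base = match.group(1)
    return not (len(base) == 1 and base.isdigit())

print(is_valid_registration_name("company1.dat"))   # True
print(is_valid_registration_name("1.dat"))          # False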
Set Server properties (updated 12/21/11)
To set properties for a specific server:
1. Right-click on the server and select Properties.
The tabs you see will depend upon your storage server configuration.
2. If you have multiple NICs (network interface cards) in your server, enter the IP
addresses using the Server IP Addresses tab.
If the first IP address stops responding, the CDP/NSS clients will attempt to
communicate with the server using the other IP addresses you have entered in
the order they are listed.
Notes:
In order for the clients to successfully use an alternate IP address,
your subnet must be set properly so that the subnet itself can redirect
traffic to the proper alternate adapter.
You cannot assign two or more NICs within the same subnet.
The client becomes aware of the multiple IP addresses when it
initially connects to the server. Therefore, if you add additional IP
addresses in the console while the client is running, you must rescan
devices (Windows clients) or restart the client (Linux/Unix clients) to
make the client aware of these IP addresses.
3. On the Activity Database Maintenance tab, indicate how often the SAN data
should be purged.
The Activity Log is a database that tracks all system activity, including all data
read, data written, number of read commands, write commands, number of
errors etc. This information is used to generate SAN information for the CDP/
NSS reports.
4. On the SNMP Maintenance tab, indicate which types of messages should be
sent as traps to your SNMP manager.
Five levels are available:
None (Default) - No messages will be sent.
Critical - Only critical errors that stop the system from operating properly
will be sent.
Error - Errors (failures, such as a resource not being available or an operation
having failed) and critical errors will be sent.
Warning - Warnings (something occurred that may require maintenance
or corrective action), errors, and critical errors will be sent.
Informational - Informational messages, errors, warnings, and critical error
messages will be sent.
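The inclusion rule behind these levels can be summarized with a small sketch (ours, not the product's code): each level forwards its own message type plus everything more severe.
# Illustration only: which message severities are forwarded as traps for
# each maintenance level described above.
LEVELS = {
    "None":          [],
    "Critical":      ["critical"],
    "Error":         ["critical", "error"],
    "Warning":       ["critical", "error", "warning"],
    "Informational": ["critical", "error", "warning", "informational"],
}

def should_send(level, severity):
    return severity in LEVELS[level]

print(should_send("Error", "warning"))    # False
print(should_send("Warning", "error"))    # True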
5. On the iSCSI tab, set the iSCSI portal that your system should use as default
when creating an iSCSI target.
If you have multiple NICs, when you create an iSCSI target, this IP address will
be selected by default for you.
6. If necessary, change settings for mirror resynchronization and replication on the
Performance tab.
The settings on this tab affect system performance. The defaults should be
optimal for most configurations. You should only need to change the settings for
special situations, such as if your mirror is remotely located.
Mirror Synchronization Throttle - Set the default value for the individual mirror
device to use (since throttle is disabled by default for individual mirror devices).
Each mirror device will be able to synchronize up to the value set here (in KB per
second). If you select 0 (zero), all mirror devices will use their own throttle value
(if set), otherwise there is no limit for the device.
Select the Start initial synchronization when mirror is added option to have
synchronization begin immediately for newly created mirrors. The synchronize
out-of-sync mirror policy does not apply in this case. If the Start initial
synchronization when mirror is added option is not selected, the mirror begins
synchronization based on the policy configured.
Synchronize Out-of-Sync Mirrors - Determine how often the system should
check and attempt to resynchronize active out-of-sync mirrors, how often it
should retry synchronization if it fails to complete, and whether or not to include
replica mirrors. These settings will only be used for active mirrors. If a mirror is
suspended because the lag time exceeds the acceptable limit, that
resynchronization policy will apply instead. This is the mirror policy that applies
to all individual mirrors that contain the following settings (a scheduling sketch follows this list):
Check and synchronize out-of-sync mirrors every [n][unit] - Check the
mirror status at this interval and trigger a mirror synchronization when
the mirror is not synchronized.
Up to [n] mirrors at each interval - Indicate the number of mirrors that
can be synchronized concurrently. This rule does not apply to user-initiated operations, such as synchronize, resume, and rebuild. This rule
also does not apply when the Start initial synchronization when mirror is
added option is enabled.
Retry synchronization for each resource up to [n] times when
synchronization failed - Indicate the number of times that an out-of-sync
mirror will retry to synchronize the mirror at the interval set by the Check
and synchronize out-of-sync mirrors every rule. Once the mirror fails to
synchronize the number of times specified, a manual synchronization
will be required to initiate the mirror synchronization again.
Include replica mirrors in the automatic synchronization process - Enable this option to include replica mirrors in the automatic
synchronization process. This option is disabled by default, which
means the mirror policy will not apply to any replica device with mirror on
the server. In this case, a manual synchronization is required to re-sync
the replica mirror. When this option is enabled, then the mirror policies
will apply to the replica mirror.
Replication Throttle - Click the Configure Throttle button to launch the Configure
Target Throttle screen, allowing you to set, modify, or delete replication throttle
settings. Refer to Set the replication throttle for additional information.
Enable MicroScan - MicroScan analyzes each replication block on-the-fly during
replication and transmits only the changed sections on the block. This is
beneficial if the network transport speed is slow and the client makes small
random updates to the disk. The global MicroScan option sets a default in all
replication setup wizards. MicroScan can still be enabled/disabled for each
individual replication via the wizard regardless of the global MicroScan setting.
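The scheduling sketch referenced above (hypothetical, not the product's implementation) shows how the out-of-sync mirror policy parameters interact: at each check interval, up to a configured number of out-of-sync mirrors are resynchronized, and each mirror is retried only a limited number of times before a manual synchronization is required.
import random
from dataclasses import dataclass

@dataclass
class Mirror:
    # Illustration only: a stand-in for a mirrored resource.
    name: str
    in_sync: bool = False
    retries: int = 0

    def synchronize(self) -> bool:
        # Hypothetical stand-in for the real resynchronization call.
        self.in_sync = random.random() > 0.5
        return self.in_sync

def resync_pass(mirrors, max_concurrent, max_retries):
    # One check interval: pick up to max_concurrent out-of-sync mirrors that
    # have not exhausted their retries ("Up to [n] mirrors at each interval").
    candidates = [m for m in mirrors if not m.in_sync and m.retries < max_retries]
    for mirror in candidates[:max_concurrent]:
        if not mirror.synchronize():
            mirror.retries += 1   # once exhausted, a manual sync is required

mirrors = [Mirror("lun1"), Mirror("lun2"), Mirror("lun3")]
resync_pass(mirrors, max_concurrent=2, max_retries=3)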
7. Select the Auto Save Config tab and enter information for automatically saving your
storage server system configuration.
You can set your system to automatically replicate your system configuration to
an FTP server on a regular basis. Auto Save takes a point-in-time snapshot of
the storage server configuration prior to replication.
The target server you specify in the FTP Server Name field must have an FTP server
installed and enabled.
The Target Directory is the directory on the FTP server where the files will be
stored. The directory name you enter here (such as ipstorconfig) is a directory
on the FTP server (for example ftp\ipstorconfig). You should not enter an
absolute path like c:\ipstorconfig.
The Username is the user that the system will log in as. You must create this
user on the FTP site. This user must have read/write access to the directory
named here.
In the Interval field, determine how often to replicate the configuration.
Depending upon how frequently you make configuration changes to CDP/NSS,
set the interval accordingly. You can always save manually in between if needed.
To do this, highlight your storage server in the tree, select File menu --> Save
Configuration.
In the Number of Copies field, enter the maximum copies to keep. The oldest
copy will be deleted as each new copy is added.
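As a rough illustration of what each Auto Save interval does, the following Python sketch uploads a configuration snapshot over FTP and prunes the oldest copies. The host, directory, credentials, and file naming are placeholders; this is not the actual CDP/NSS implementation:

import ftplib
from datetime import datetime

def auto_save_config(snapshot_path, host, user, password,
                     target_dir="ipstorconfig", copies_to_keep=5):
    """Upload a point-in-time config snapshot and prune the oldest copies (sketch)."""
    name = "ipstor-config-%s.tar.gz" % datetime.now().strftime("%Y%m%d%H%M%S")
    with ftplib.FTP(host, user, password) as ftp:
        ftp.cwd(target_dir)                    # relative directory, not an absolute path
        with open(snapshot_path, "rb") as f:
            ftp.storbinary("STOR " + name, f)  # requires read/write access for this user
        copies = sorted(n for n in ftp.nlst() if n.startswith("ipstor-config-"))
        for old in copies[:-copies_to_keep]:   # delete the oldest copies beyond the limit
            ftp.delete(old)

# Hypothetical usage:
# auto_save_config("/tmp/config-snapshot.tar.gz", "ftp.example.com", "ipstor", "secret")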
8. On the Location tab, you can enter a specific physical location of the machine.
You can also select an image (smaller than 500 KB) to identify the server
location. Once the location information is saved, the new tab displays in the
FalconStor Management Console for that server.
9. On the TimeMark Maintenance tab, you can set a global reclamation policy.
Manage accounts
Only the root user can manage users and groups or reset passwords. You will need
to add an account for each person who will have administrative rights in CDP/NSS.
You will also need to add a user account for clients that will be accessing storage
resources from a host-based application (such as FalconStor DiskSafe or FileSafe).
To make account management easier, users can be grouped together and handled
simultaneously.
To manage users and groups:
1. Right-click on the server and select Accounts.
All existing users and administrators are listed on the Users tab and all existing
groups are listed on the Groups tab.
2. Select the appropriate option.
Note: You cannot manage accounts or reset a password when a server is in
failover state.
Add a user
To add a user:
1. Click the Add button.
2. Enter the name for this user.
The username must adhere to the naming convention of the operating system
running on your storage server. Refer to your operating system's documentation
for naming restrictions.
3. Enter a password for this user and then re-enter it in the Confirm Password field.
For iSCSI clients and host-based applications, the password must be between
12 and 16 characters. The password is case sensitive.
4. Specify the type of account.
Users and administrators have different levels of permissions in CDP/NSS.
IPStor Admins can perform any CDP/NSS operation other than managing
accounts. They are also authorized for CDP/NSS client authentication.
IPStor Users can manage virtual devices assigned to them and can
allocate space from the storage pool(s) assigned to them. They can also
create new SAN resources, clients, and groups as well as assign
resources to clients, and join resources to groups, as long as they are
authorized. IPStor Users can only view resources to which they are
assigned. IPStor Users are also authorized for CDP/NSS client
authentication. Any time an IPStor User creates a new SAN resource,
client, or group, access rights will automatically be granted for the user to
that object.
5. (IPStor Users only) If desired, specify a quota.
Quotas enable the administrator to place manageable restrictions on storage
usage as well as storage used by groups, users, and/or hosts.
A user quota limits how much space is allocated to this user for auto-expansion.
Resources managed by this user can only auto-expand if the user's quota has
not been reached. The quota also limits how much space a host-based
application, such as DiskSafe, can allocate.
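As a simple model, an auto-expansion request succeeds only while the user's (or overriding group's) total allocation stays within the quota. A minimal sketch with made-up numbers:

def can_auto_expand(current_allocated_gb, requested_gb, user_quota_gb,
                    group_quota_gb=None):
    """Return True if an auto-expansion fits within the effective quota (sketch)."""
    # A group quota, when set, overrides the individual user quota.
    effective_quota = group_quota_gb if group_quota_gb is not None else user_quota_gb
    return current_allocated_gb + requested_gb <= effective_quota

# A user with a 100 GB quota who has already allocated 95 GB
# cannot auto-expand a resource by another 10 GB.
print(can_auto_expand(95, 10, 100))          # False
print(can_auto_expand(95, 10, 100, 200))     # True: a 200 GB group quota applies instead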
6. Click OK to save the information.
Add a group
To add a group:
1. Select the Groups tab.
2. Click the Add button.
3. Enter a name for the group.
You cannot have any spaces or special characters in the name.
4. If desired, specify a quota.
The quota limits how much space is allocated to each user in this group. The
group quota overrides any individual user quota that may be set.
5. Click OK to save the information.
Add users to groups
Each user can belong to only one group.
You can add users to groups on both the Users and Groups tabs.
On the Users tab, you can highlight a single user and click the Membership button to
add the user to a group.
On the Groups tab, you can highlight a group and click the Membership button to
add multiple users to that group.
The dialog that appears differs slightly depending on whether you open it from the
Users tab or from the Groups tab.
Set a quota
You can set a quota for a user on the Users tab and you can set a quota for a group
on the Groups tab.
The quota limits how much space is allocated to each user. If a user is in a group,
the group quota will override the user quota.
Reset a password
To change a password, select Reset Password. You will need to enter a new
password and then re-type the password to confirm.
You cannot change the root user's password from this dialog. Use the Change
Password option below.
Change the root user's password
This option lets you change the root user's CDP/NSS password if you are currently
connected to a server.
1. Right-click on the server and select Change Password.
2. Enter your old password, the new one, and then re-enter it to confirm.
Check connectivity between the server and console
You can check if the console can successfully connect to the storage server by right-clicking on a server and selecting Connectivity Test.
By running this test, you can determine if your network connectivity is good. If it is
not, the test may fail at some point. You should then check with your network
administrator to determine the problem.
Add an iSCSI User or Mutual CHAP User
As a root user, you can add, delete or reset the CHAP secret of an iSCSI User or a
mutual CHAP user. Other users (i.e. IPStor administrator or IPStor user) can also
change the CHAP secret of an iSCSI user if they know the original CHAP secret.
To add an iSCSI user or Mutual CHAP User from an iSCSI server:
1. Right-click on the server and select iSCSI Users from the menu.
2. Select Users.
The iSCSI User Management screen displays.
From this screen, you can select an existing user from the list to delete the user
or reset the CHAP secret.
3. Click the Add button to add a new iSCSI user.
The iSCSI User add dialog screen displays.
4. Enter a unique user name for the new iSCSI user.
5. Enter and confirm the password and click OK.
The Mutual CHAP level of security allows the target and the initiator to authenticate
each other. A separate secret is set for each target and for each initiator in the
storage area network (SAN). You can select Mutual CHAP Users (Right-click on the
iSCSI server --> iSCSI Users --> Mutual CHAP User) to manage iSCSI Mutual
CHAP Users.
The iSCSI Mutual CHAP User Management screen displays, allowing you to delete
users or reset the Mutual CHAP secret.
Apply software patch updates
You can apply patches to your storage server through the console.
Add patch
To apply a patch:
1. Download the patch onto the computer where the console is installed.
2. Highlight a storage server in the tree.
3. Select Tools menu --> Add Patch.
The patch will be copied to the server and installed.
Rollback patch
To remove (uninstall) a patch and restore the original files:
1. Highlight a storage server in the tree.
2. Select Tools menu --> Rollback Patch.
System maintenance
The FalconStor Management Console gives you a convenient way to perform
system maintenance for your storage server.
Note: The system maintenance options are hardware-dependent. Refer to your
hardware documentation for specific information.
Network configuration
If you need to change storage server IP addresses, you must make these changes
using Network Configuration. Using YaST or other third-party utilities will not update
the information correctly.
1. Right-click on a server and select System Maintenance --> Network
Configuration.
Domain name - Internal domain name.
Append suffix to DNS lookup - If a domain name is entered, it will be appended
to the machine name for name resolution.
DNS - IP address of your DNS server.
Default gateway - IP address of your default gateway.
NIC - List of Ethernet cards in the server. Select the NIC you wish to modify from
the drop-down list.
Enable Telnet - Enable/disable the ability to Telnet into the server.
Enable FTP - Enable/disable the ability to FTP into the server. The storage
server must have the "pure-ftpd" package installed in order to use FTP.
Allow root to log in to telnet session - Log in to your telnet session using root.
Network Time Protocol - Allows you to keep the date and time of your storage
server in sync with Internet NTP servers. Click Config NTP to enter the IP
addresses of up to five Internet NTP servers.
2. Click Config to configure each Ethernet card.
If you select Static, you must add addresses and net masks.
MTU - Set the maximum transfer unit of each IP packet. If your card supports it,
set this value to 9000 for jumbo frames.
Note: If the MTU is changed from 9000 to 1500, a performance drop will occur. If
you then change the MTU back to 9000, the performance will not increase until
the server is restarted.
Set hostname
Right-click on a server and select System Maintenance --> Set Hostname to change
your hostname. You must restart the server if you change the hostname.
Note: Do not change the hostname if you are using block devices. If you do, all
block devices claimed by CDP/NSS will be marked offline and seen as foreign
devices.
Restart IPStor
Right-click on a server and select System Maintenance --> Restart IPStor to restart
the server processes.
Restart network
Right-click on a server and select System Maintenance --> Restart Network to
restart your local network configuration.
Reboot
Right-click on a server and select System Maintenance --> Reboot to reboot your
server.
Halt
Right-click on a server and select System Maintenance --> Halt to turn off the server
without restarting it.
IPMI
Intelligent Platform Management Interface (IPMI) is a hardware level interface that
monitors various hardware functions on a server.
If CDP/NSS detects IPMI when the server boots up, you will see several IPMI
options on the System Maintenance --> IPMI sub-menu: Monitor and Filter.
Monitor - Displays the hardware information that is presented to CDP/NSS.
Information is updated every five minutes but you can click the Refresh button to
update more frequently.
You will see a red warning icon in the first column if there is a problem with a
component. In addition, you will see a red exclamation mark on the server. An Alert
tab will appear with details about the error.
Filter - You can filter out components you do not want to monitor. This may be useful
for hardware you do not care about, or for spurious errors, such as when you do not
have the hardware that is being monitored. You must enter the Name of the
component being monitored exactly as it appears on the hardware monitor above.
Physical Resources
Physical resources are the actual devices attached to this storage server. SCSI
adapters supported include SAS, FC, FCoE, and iSCSI. The SCSI adapters tab
displays the adapters attached to this server and the SCSI Devices tab displays the
SCSI devices attached to this server. These devices can include hard disks, tape
libraries, and RAID cabinets. For each device, the tab displays the SCSI address
(comprised of adapter number, channel number, SCSI ID, LUN) of the device, along
with the disk size (used and available). If you are using FalconStor's Multipathing,
you will see entries for the alternate paths as well.
The Storage Pools tab displays a list of storage pools that have been defined,
including the total size and number of devices in each storage pool.
The Persistent Binding tab displays the binding of each storage port to its unique
SCSI ID.
When you highlight a physical device, the Category field in the right-hand pane
describes how the device is being used. Possible values are:
Reserved for virtual device - A hard disk that has not yet been assigned to a
SAN resource or Snapshot area.
Used by virtual device(s) - A hard disk that is being used by one or more
SAN resources or Snapshot areas.
Reserved for Service-Enabled Device - A hard disk with existing data that
has not yet been assigned to a SAN resource.
Used by Service-Enabled Device - A hard disk with existing data that has
been assigned to a SAN resource.
Unassigned - A physical resource that has not been reserved yet.
Not available for IPStor - A miscellaneous SCSI device that is not used by
the storage server (such as a scanner or CD-ROM).
System - A hard disk where system partitions exist and are mounted (i.e.
swap file, file system installed, etc.).
Physical resource icons
The following table describes the icons that are used to describe physical resources:
The D icon indicates that the port is both an initiator and a target.
The T icon indicates that this is a target port.
The I icon indicates that this is an initiator port.
The V icon indicates that this disk has been virtualized or is reserved for a
virtual disk.
The S icon indicates that this is a Service-Enabled Device or is reserved for a
Service-Enabled Device.
The a icon indicates that this device is used in the logical resource that is
currently highlighted in the tree.
The D icon also indicates an adapter that is using NPIV when it is enabled in
dual-mode.
Failover and Cross-mirror icons:
The physical disk appearing in color indicates that it is local to this server. The V
indicates that the disk is virtualized for this server. A Q on the icon indicates that
this disk is the quorum disk that contains the configuration repository.
The physical disk appearing in black and white indicates that it is a remote
physical disk. The F indicates that the disk is a foreign disk.
Prepare devices to become logical resources
You can use one of the FalconStor disk preparation options to change the category
of a physical device. This is important to do if you want to create a logical resource
using a device that is currently unassigned.
The storage server detects new devices when you connect to it. When they
are detected you will see a dialog box notifying you of the new devices. At
this point you can highlight a device and press the Prepare Disk button to
prepare it.
The Physical Devices Preparation Wizard will help you to virtualize, service-enable, unassign, or import physical devices.
At any time, you can prepare a single unassigned device by doing the
following: Highlight the device, right-click, select Properties and select the
device category. (You can find all unassigned devices under the Physical
Resources/Adapters node of the tree view.)
For multiple unassigned devices, highlight Physical Resources, right-click
and select Prepare Disks. This launches a wizard that allows you to
virtualize, unassign, or import multiple devices at the same time.
Rename a physical device
When a device is renamed on a server in a failover pair, the device gets renamed on
the partner server also. However, it is not possible to rename a device when the
server has failed over to its partner.
1. To rename a device, right-click on the device and select Rename.
2. Type the new name and press Enter.
Use IDE drives with CDP/NSS
If you have an IDE drive that you want to virtualize and use as storage, you must
create a block device from it. To do this:
1. Right-click on Block Devices (under Physical Devices) and select Create Disk.
2. Select the device and specify a SCSI ID and LUN for it.
The defaults are the next available SCSI ID and LUN.
3. Click OK when done.
This virtualizes the device. When it is finished, you will see the device listed
under Block Devices. You can now create logical resources from this device.
Unlike a regular SCSI virtual device, block devices can be deleted.
Note: Do not change the hostname if you are using block devices. If you do, all
block devices claimed by CDP/NSS will be marked offline and seen as foreign
devices.
Rescan adapters
1. To rescan all adapters, right-click on Physical Resources and select Rescan.
If you only want to scan a specific adapter, right-click on that adapter and select
Rescan.
If you want to discover new devices without scanning existing devices, click the
Discover New Devices radio button and then check the Discover new devices
only without scanning existing devices check box. You can then specify
additional scan details.
2. Determine what you want to rescan.
If you are discovering new devices, set the range of adapters, SCSI IDs, and
LUNs that you want to scan.
Use Report LUNs - The system sends a SCSI request to LUN 0 and asks for a
list of LUNs. Note that this SCSI command is not supported by all devices. (If
VSA is enabled and the actual LUN is beyond 256, you will need to use this
option to discover them.)
LUN Range - It is only necessary to use the LUN range if the Use Report LUNs
option does not work for your adapter.
Stop scan when a LUN without a device is encountered - This option (used with
LUN Range) will scan LUNs sequentially and then stop after the last LUN is
found. Use this option only if all of your LUNs are sequential.
Auto detect FC HBA SCSI ID - Select this option to enable auto detection of
SCSI IDs with persistent binding. This will scan QLogic HBAs to discover
devices beyond the scan range specified above.
Read partition from inactive path when all the paths are inactive - Select this
option to force a status check of the partition from a path that is not in use.
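The two discovery modes can be contrasted with a short sketch. Here report_luns() and probe() are hypothetical stand-ins for the underlying SCSI commands; the point is only the control flow of Use Report LUNs versus a sequential LUN Range scan with the stop-on-gap option:

def scan_with_report_luns(report_luns, probe):
    """Ask LUN 0 for the full LUN list, then probe each one (if the device supports it)."""
    return [lun for lun in report_luns() if probe(lun)]

def scan_lun_range(probe, first=0, last=255, stop_on_gap=True):
    """Probe LUNs sequentially; optionally stop at the first LUN with no device."""
    found = []
    for lun in range(first, last + 1):
        if probe(lun):
            found.append(lun)
        elif stop_on_gap:
            break   # only safe if all of your LUNs are sequential
    return found

# Example with hypothetical devices at LUNs 0, 1, 2 and 5.
devices = {0, 1, 2, 5}
probe = lambda lun: lun in devices
print(scan_lun_range(probe))                    # [0, 1, 2] -- LUN 5 is missed
print(scan_lun_range(probe, stop_on_gap=False)) # [0, 1, 2, 5]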
Import a disk
You can import a foreign disk into a CDP or NSS appliance. A foreign disk is a
virtualized physical device containing FalconStor logical resources previously set up
on a different storage server. You might need to do this if a storage server is
damaged and you want to import the server's disks to another storage server.
When you right-click on a disk that CDP/NSS recognizes as foreign and select the
Import option, the disk's partition table is scanned and an attempt is made to
reconstruct the virtual drive out of all of the segments.
If the virtual drive was constructed from multiple disks, you can highlight Physical
Resources, right-click and select Prepare Disks. This launches a wizard that allows
you to import multiple disks at the same time.
As each drive is imported, the drive is marked offline because not all of its
segments have been found yet. Once all of the disks that were part of the virtual
drive have been imported, the virtual drive is re-constructed and is marked online.
Importing a disk preserves the data that was on the disk but does not preserve the
client assignments. Therefore, after importing, you must reassign clients to
the resource.
Notes:
The GUID (Global Unique Identifier) is the permanent identifier for each
virtual device. When you import a disk, the virtual ID, such as SANDisk-00002,
may be different from the original server. Therefore, you should
use the GUID to identify the disk.
If you are importing a disk that can be seen by other storage servers, you
should perform a rescan before importing. Otherwise, you may have to
rescan after performing the import.
Test physical device throughput
You can test the following for your physical devices:
Sequential throughput
Random throughput
Sequential I/O rate
Random I/O rate
Latency
To check the throughput for a device:
1. Right-click on the device (under Physical Resources).
2. Select Test from the menu.
The system will test the device and then display the throughput results on a new
Throughput tab.
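The kind of measurement the test performs can be approximated by timing sequential and random reads against a test file or unused device. This is only an illustration with assumed block sizes and paths; it does not reproduce the console's actual test:

import random, time

def read_throughput(path, block_size=1024 * 1024, blocks=256, sequential=True):
    """Return an approximate MB/s figure for sequential or random reads (sketch)."""
    with open(path, "rb") as f:
        f.seek(0, 2)              # seek to the end to find the device/file size
        size = f.tell()
        span = max(size - block_size, block_size)
        offsets = ([i * block_size for i in range(blocks)] if sequential
                   else [random.randrange(0, span) for _ in range(blocks)])
        start = time.time()
        for off in offsets:
            f.seek(off)
            f.read(block_size)
    return (blocks * block_size) / (1024 * 1024) / (time.time() - start)

# Hypothetical usage against a large test file or an unused device:
# print(read_throughput("/path/to/testfile"))
# print(read_throughput("/path/to/testfile", sequential=False))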
SCSI aliasing
SCSI aliasing works with the FalconStor Multipathing option to eliminate a potential
point of failure in your storage network by providing multiple paths to your storage
devices using multiple Fibre Channel switches and/or multiple adapters and/or
storage devices with multiple controllers. In a multiple path configuration, CDP/NSS
automatically detects all paths to the storage devices. If one path fails, CDP/NSS
automatically switches to another.
Refer to the Multipathing chapter for more information.
Repair paths to a device
Repair is the process of removing one or more physical device paths from the
system and then adding them back. Repair may be necessary when a device is not
responsive, which can occur if a storage controller has been reconfigured or if a
standby alias path is offline/disconnected.
If a path is faulty, adding it back may not be possible. To repair paths to a device:
1. Right-click on the device and select Repair.
If all paths are online, the following message will be displayed instead: There
are no physical device paths that can be repaired.
2. Select the path to the device that needs to be repaired.
If the path is still missing after the repair or the entire physical device is gone
from the console, the path could not be repaired. You should investigate the
cause, correct the problem, and then rescan adapters with the Discover New
Devices option.
Logical Resources
Logical resources are all of the resources defined on the storage server, including
SAN resources and groups.
SAN Resources
SAN logical resources consist of sets of storage blocks from one or more physical
hard disk drives. This allows the creation of logical resources that contain a portion
of a larger physical disk device or an aggregation of multiple physical disk devices.
Clients do not gain access to physical resources; they only have access to logical
resources. This means that an administrator must configure each physical resource
to one or more logical resources so that they can be assigned to the clients.
When you highlight a SAN resource, you will see a small icon next to each device
that is being used by the resource. In addition, when you highlight a SAN resource,
you will see a GUID field in the right-hand pane.
The GUID (Global Unique Identifier) is the permanent identifier for this virtual device.
The virtual ID, SANDisk-00002, is not. You should make note of the GUID, because,
in the event of a disaster, this identifier will be important if you need to rebuild your
system and import this disk.
Groups
Groups are multiple drives (virtual drives and Service-Enabled drives) that will be
assembled together for SafeCache or snapshot synchronization purposes. For
example, when one drive in the group is to be replicated or backed up, the entire
group will be snapped together to maintain a consistent image.
Logical resource icons
The following table describes the icons that are used to show the status of logical
resources:
Virtual device offline (or has incomplete segments)
Mirror is out of sync
Mirror is suspended
TimeMark rollback failed
Replication failed
One or more supporting resources is not accessible (SafeCache, CDP,
Snapshot resource, HotZone, etc.)
Replica in disaster recovery state (after forcing a replication reversal)
Cross-mirror needs to be repaired on the virtual appliance
Primary replica is no longer valid as a replica
Invalid replica
Write caching
You can leverage a third party disk subsystem's built-in caching mechanism to
improve I/O performance. Write caching allows the third party disk subsystem to
utilize its internal cache to accelerate I/O.
To write cache a resource, right-click on it and select Write Cache --> Enable.
Replication
The Incoming and Outgoing objects under the Replication object display information
about each server that replicates to this server or receives replicated data from this
server. If the servers icon is white, the partner server is "connected" or "logged in". If
the icon is yellow, the partner server is "not connected" or "not logged in".
When you highlight the Replication object, the right-hand pane displays a summary
of replication to/from each server.
For each replica disk, you can promote the replica or reverse the replication. Refer
to the Replication chapter for more information about using replication.
SAN Clients
Storage Area Network (SAN) Clients are the file and application servers that utilize
the storage resources via the storage server. Since SAN resources appear as locally
attached SCSI devices, the applications, such as file services, databases, web and
email servers, do not need to be modified to utilize the storage.
On the other hand, since the storage is not locally attached, there may be some
configuration needed to locate and mount the required storage. The SAN Clients
access their storage resources via iSCSI initiators (for iSCSI) or HBAs (for Fibre
Channel or iSCSI). The storage resources appear as locally attached devices to the
SAN Clients' operating systems (Windows, Linux, Solaris, etc.) even though the
devices are actually located at the storage server site.
When you highlight a specific SAN client, the right-hand pane displays the Client ID,
type, and authentication status, as well as information about the client machine.
The Resources tab displays a list of SAN resources that are allocated to this client.
The adapter, SCSI ID, and LUN are relative to this CDP/NSS SAN client only; other
clients that may have access to the SAN resource may have different adapter,
SCSI ID, and LUN information.
Add a client from the FalconStor Management Console
1. In the console, right-click on SAN Clients and select Add.
2. Enter a name for the SAN Client, select the operating system, and indicate
whether or not the client machine is part of a cluster.
If the client's machine name is not resolvable, you can enter an IP address and
then click Find to discover the machine.
3. Determine if you want to limit the amount of space that can be automatically
assigned to this client.
The quota represents the total allowable space that can be allocated for all of the
resources associated with this client. It is only used to restrict certain types of
resources (such as Snapshot Resource and CDP Resource) that expand
automatically. This prevents them from allocating storage space indefinitely.
Instead, they can only expand if the total size of all the resources associated with
the client does not exceed the pre-defined quota for that client.
4. Indicate if you want to enable persistent reservation.
This option allows clustered SAN Clients to take advantage of Persistent
Reserve/Release to control disk access between various cluster nodes.
Note: If you are using AIX SAN Client cluster nodes, this option should be
cleared.
5. Select the client's protocol(s).
If you select iSCSI, you must indicate if this is a mobile client. You will then be
asked to select the initiator that this client uses and add/select users who can
authenticate for this client. Refer to Add iSCSI clients for more information.
If you select Fibre Channel, you will have to select WWPN initiators. You will
then be asked to select Volume Set Addressing. Refer to Add Fibre Channel
clients for more information.
6. Confirm all information and click Finish to add this client.
Add a client for FalconStor host applications
If you are using FalconStor client/agent software, such as snapshot agents, or
HyperTrac, refer to the FalconStor Intelligent Management Agent (IMA) User Guide
or the appropriate agent user guide for details regarding adding clients via
FalconStor Intelligent Management Agent (IMA).
FalconStor client/agent software allows you to add a storage server directly in IMA/
SDM or the SAN Client.
For example, if you are using HyperTrac, the first time you start HyperTrac, the
system scans and imports all storage servers identified by IMA/SDM or the SAN
Client. These storage servers are then listed in the HyperTrac console.
Alternatively, you can add a storage server directly in IMA/SDM or the SAN Client.
Change the ACSL
You can change the ACSL (adapter, channel, SCSI, LUN) for a SAN resource
assigned to a SAN client if the device is not currently attached to the client. To
change, right-click on the SAN resource under the SAN Client object (you cannot do
this from the SAN resources object) and select Properties. You can enter a new
adapter, SCSI ID, or LUN.
Note: For Windows clients: One SAN resource for each client must have a LUN
of 0. Otherwise, the operating system will not see the devices assigned to the
SAN client. In addition, for the Linux OS, the rest of the LUNs must be sequential.
Grant access to a SAN Client
By default, only the root user and IPStor admins can manage SAN resources,
groups, or clients. While IPStor users can create new SAN Clients, if you want an
IPStor user to manage an existing SAN Client, you must grant that user access. To
do this:
1. Right-click on a SAN Client and select Access Control.
2. Select which user can manage this SAN Client.
Each SAN Client can only be assigned to one IPStor user. This user will have
rights to perform any function on this SAN Client, including assigning, adding
protocols, and deletion.
Console options
To set options for the console, select Tools --> Console Options. Then make any
necessary changes.
Remember password for session - If the console is already connected to a
server, when you attempt to open a second, third, or subsequent server, the
console will use the credentials that were used for the last successful
connection. If this option is unchecked, you will be prompted to enter a
password for every server you try to open.
Automatically time out servers after nn minute(s) - The console will collapse
a server that has been idle for the number of minutes you specify. If you
need to access the server again, you will have to reconnect to it. The default
is 10 minutes. Enter 00 minutes to disable the timeout.
Update statistics every nn second(s) - The console will update statistics by
the frequency you specify.
Automatically refresh the event log every nn second(s) - The console will
update the event log by the frequency you specify, only when you are
viewing it.
Console Log Options - The console log (ipstorconsole.log) is kept on the
local machine and stores information about the local version of the console.
The console log is displayed at the very bottom of the console screen. The
options affect how information for each console session will be maintained:
Overwrite log file - Overwrite the information from the last console session
when you start a new session.
Append to log file - Keep all session information.
Do not write to log file - Do not maintain a console log.
Create a custom menu
You can create a menu in the FalconStor Management Console from which you can
launch external applications. This can add to the convenience of FalconStor's
centralized management paradigm by allowing administrators to start all of their
applications from a single place. The Custom menu will appear in your console
along with the normal menu (between Tools and Help).
To create a custom menu, select Tools --> Set up Custom Menu. Then click Add and
enter the information needed to launch this application.
Menu Label - The application title that will be displayed in the Custom menu.
Command - The file (usually an .exe) that launches this application.
Command Argument - An argument that will be passed to the application. If
you are launching an Internet browser, this could be a URL.
Menu Icon - The graphics file that contains the icon for this application. This
will be displayed in the Custom menu.
Storage Pools
A storage pool is a group of one or more physical devices. Creating a storage pool
enables you to provide all of the space needed by your clients in a very efficient
manner. You can create and manage storage pools in a variety of ways, including:
Tiers - Performance levels, cost, or redundancy
Device categories - Virtual, Service-Enabled
Types - Primary storage, Journal, CDR, Cache, HotZone, virtual
headers, Snapshot, TimeView, and configuration.
Specific application use - FalconStor DiskSafe, etc.
For example, you can classify your storage by tier (low-cost, high-performance,
high-redundancy, etc.) and assign it based on these classifications. Using this
example, you may want to have your business critical applications use storage from
the high-redundancy or high-performance pools while having your less critical
applications use storage from other pools.
Storage pools work with all automatic allocation mechanisms in CDP/NSS. This
capacity-on-demand functionality automatically allocates storage space from a
specific pool when storage is needed for a specific use.
As your storage needs grow, you can easily extend your storage capacity by adding
more devices to a pool and then creating more logical resources or allocating more
space to your existing resources. The additional space is immediately and
seamlessly available.
Manage storage pools and the devices within storage pools
Only root users and IPStor administrators can manage storage pools. The root user
and the IPStor Administrator have full privileges for storage pools. The root user or
the IPStor Administrator must create the pools first and then the IPStor Users can
manage them.
IPStor Users can create virtual devices and allocate space from the storage pools
assigned to them but they cannot create, delete, or modify storage pools. The
storage pool management rights of each type of user are summarized in the table
below:
Type of User           Can create/delete pools?    Can add/remove storage from pools?
Root                   Yes                         Yes
IPStor Administrator   Yes                         Yes
IPStor User            No                          No
Refer to the Account management section for additional information regarding user
access rights.
Create storage pools
Physical devices must be prepared (virtualized, service-enabled) before they can be
added into a storage pool.
Each storage pool can only contain the same type of physical devices. Therefore, a
storage pool can contain only virtualized drives or only service-enabled drives. A
storage pool cannot contain mixed types.
Physical devices that have been allocated for a logical resource can still be added to
a storage pool.
To create a storage pool:
1. Right-click on Storage Pools and select New.
2. Enter a name for the storage pool.
3. Indicate which type of physical devices will be in this storage pool.
Each storage pool can only contain the same type of physical devices.
4. Select the devices that will be assigned to this storage pool or you can leave the
storage pool empty for later use.
Physical devices that have been allocated for any logical resource can still be
added to a storage pool.
5. Click OK to create the storage pool.
Set properties for a storage pool
To set properties:
1. Right-click on a storage pool and select Properties.
On the General tab you can change the name of the storage pool and add/delete
devices assigned to this storage pool.
2. Select the Type tab to designate how each storage pool should be allocated.
The type affects how each storage pool should be allocated. When you are in a
CDP/NSS creation wizard, the applicable storage pool(s) will be presented for
selection. However, you can still select from another storage pool type if needed.
All Types can be used for any type of resource.
Storage is the preferred storage pool to create SAN resources and their
corresponding replicas.
Snapshot is the preferred storage pool for snapshot resources.
Cache is the preferred storage pool for SafeCache resources.
HotZone is the preferred storage pool for HotZone resources.
Journal is the preferred storage pool for CDP resources and CDP resource
mirrors.
CDR is the preferred storage pool for continuous data replicas.
VirtualHeader is the preferred storage pool for the virtual header that is
created for a Service-Enabled Device SAN Resource.
Configuration is the preferred storage pool to create the configuration
repository for failover.
TimeView is the preferred storage pool for TimeView images.
ThinProvisioning is the preferred storage pool for thin disks.
Allocation Block Size allows you to specify the minimum size that will be
allocated when a virtual resource is created or expanded.
Using this feature is highly recommended for thin disks (ThinProvisioning
selected as the type for this storage pool) for several reasons.
The maximum number of segments that is supported per virtual device is 1024.
When Allocation Block Size is not enabled, thin disks are expanded in
increments of 10 GB. With frequent expansion, it is easy to reach the maximum
number of segments. Using Allocation Block Size with the largest block size
feasible for your storage can prevent devices from reaching the maximum.
In addition, larger block sizes mean more consecutive space within each block,
limiting disk fragmentation and improving performance for thin disks.
The default for the Allocation Block Size is 16 GB and the possible choices are
1, 2, 4, 8, 16, 32, 64, 128, and 256 GB.
If you enable Allocation Block Size for resources other than thin disks, Service-Enabled Devices, or any copy of a resource (replica, mirror, snapshot copy,
etc.), you should be aware that the allocation will round up to the next multiple
when you create a resource. For example, if you have the Allocation Block Size
set to 16 GB and you attempt to create a 20 GB virtual device, the system will
create a 32 GB device.
If you do not enable Allocation Block Size, you can specify any size when
creating/expanding devices. You may want to do this for disks that are not thin
disks since they do not expand as often and will rarely reach the maximum
number of segments.
When specifying an Allocation Block Size, your physical disk should be evenly
divisible by the number you specify so that all space can be used. For example,
if you have a 500 GB disk and you select 128 GB as the block size, the system
will only be able to allocate three blocks of 128 GB each (128*3=384) from that
disk because the remaining 116 GB is not enough to allocate. When you look at
the Available Disk Space statistics in the console, this remaining 116 GB will be
excluded.
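The round-up and usable-block calculations can be verified with a few lines of arithmetic; the sketch below simply reproduces the two examples from this section:

import math

def allocated_size_gb(requested_gb, block_size_gb):
    """Round a requested size up to the next multiple of the allocation block size."""
    return math.ceil(requested_gb / block_size_gb) * block_size_gb

def usable_blocks(disk_gb, block_size_gb):
    """Whole allocation blocks that fit on a physical disk; the remainder is unusable."""
    blocks = disk_gb // block_size_gb
    return blocks, disk_gb - blocks * block_size_gb

print(allocated_size_gb(20, 16))   # 32 -- a 20 GB request becomes a 32 GB device
print(usable_blocks(500, 128))     # (3, 116) -- three 128 GB blocks; 116 GB excluded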
3. Select the Tag tab to set a tag string to limit client side applications to specific
storage pools.
When an application requests storage with a specific tag string, only the storage
pools with the same tag can be used. You can have your own internal application
that has been programmed to use a tag.
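The tag mechanism amounts to a string match between the application's request and each pool's tag. A trivial sketch of that selection, using hypothetical pool names and tags:

pools = [
    {"name": "HighPerf", "tag": "oracle-prod"},
    {"name": "Archive",  "tag": "backup"},
    {"name": "General",  "tag": None},
]

def pools_for_tag(pools, requested_tag):
    """Only storage pools whose tag matches the requested tag can be used."""
    return [p["name"] for p in pools if p["tag"] == requested_tag]

print(pools_for_tag(pools, "oracle-prod"))   # ['HighPerf']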
4. Select the Security tab to designate which users and administrators can manage
this storage pool.
Each storage pool can be assigned to one or more User or Group. The assigned
users can create virtual devices and allocate space from the storage pools
assigned to them but they cannot create, delete, or modify storage pools.
Logical Resources
Once you have physically attached your physical SCSI or Fibre Channel devices to
your storage server you are ready to create Logical Resources to be used by your
CDP/NSS clients. This configuration can be done entirely from the FalconStor
Management Console.
Logical Resources are logically mapped devices on the storage server. They are
comprised of physical storage devices, known as Physical Resources. Physical
resources are the actual SCSI and/or Fibre Channel devices (such as hard disks,
tape drives, and RAID cabinets) attached to the server.
Clients do not have access to physical resources; they have access only to Logical
Resources. This means that physical resources must be defined as Logical
Resources first, and then assigned to the clients so they can access them.
SAN resources provide storage for file and application servers (called SAN Clients).
When a SAN resource is assigned to a SAN client, a virtual adapter is defined for
that client. The SAN resource is assigned a virtual SCSI ID on the virtual adapter.
This mimics the configuration of actual SCSI storage devices and adapters, allowing
the operating system and applications to treat them like any other SCSI device.
Understanding how to create and manage Logical Resources is critical to a
successful CDP/NSS storage network. Please read this section carefully before
creating and assigning Logical Resources.
Types of SAN resources
SAN resources can be of the following types: virtual devices and Service-Enabled
Devices.
Virtual devices
IPStor technology gives CDP and NSS the ability to aggregate multiple physical
storage devices (such as JBODs and RAIDs) of various interface protocols (such as
SCSI or Fibre Channel) into logical storage pools. From these storage pools, virtual
devices can be created and provisioned to application servers and end users. This
is called storage virtualization.
Virtual devices are defined as sets of storage blocks from one or more physical hard
disk drives. This allows the creation of virtual devices that can be a portion of a
larger physical disk drive, or an aggregation of multiple physical disk drives.
Virtual devices offer the added capability of disk expansion. Additional storage
blocks can be appended to the end of existing virtual devices without erasing the
data on the disk.
Virtual devices can only be assembled from hard disk storage. This does not work
for CD-ROM, tape, libraries, or removable media.
When a virtual device is allocated to an application server, the server thinks that an
actual SCSI storage device has been physically plugged into it.
Virtual devices are assigned to virtual adapter 0 (zero) when mapped to a client. If
there are more than 15 virtual devices, a new adapter will be defined.
Virtualization examples
The following diagrams show how physical disks can be mapped into virtual
devices.
[Diagram: A virtual device SAN resource (adapter 0, SCSI ID 1, sectors 0-19999) is
mapped from two physical devices (adapter 1, SCSI ID 3, sectors 0-9999 and
adapter 1, SCSI ID 4, sectors 0-9999). The SCSI ID can be any value, the adapter
numbers do not need to match, and sectors from multiple physical disks are
combined.]
The diagram above shows a virtual device being created out of two physical disks.
This allows you to create very large virtual devices for application servers with large
storage requirements. If the storage device needs to grow, additional physical disks
may be added to increase the size of a virtual device. Note that this will require that
the client application server resize the partition and file system on the virtual device.
[Diagram: A single physical device (adapter 2, SCSI ID 3) is split into two virtual
disk SAN resources (adapter 1, SCSI ID 5, sectors 0-4999 and adapter 1, SCSI ID 6,
sectors 0-4999), which map to physical sectors 0-4999 and 5000-9999 respectively.
The SCSI ID can be any value and the adapter numbers do not need to match.]
The example above shows a single physical disk split into two virtual devices. This is
useful when a single large device exists, such as a RAID, which could be shared
among multiple client application servers.
Virtual devices can be created using various combining and splitting methods,
although you will probably not create them in this manner in the beginning. You may
end up with devices like this after growing virtual devices over time.
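Both examples can be thought of as a segment table that translates a virtual sector to a physical device and sector. The following conceptual Python sketch mirrors the layouts in the diagrams above; the device labels are illustrative only:

# Each segment: (virtual_start, length, physical_device, physical_start)
combined_device = [
    (0,     10000, "adapter1-id3", 0),   # first physical disk supplies sectors 0-9999
    (10000, 10000, "adapter1-id4", 0),   # second physical disk supplies sectors 10000-19999
]

def resolve(segments, virtual_sector):
    """Translate a virtual sector into (physical device, physical sector)."""
    for vstart, length, device, pstart in segments:
        if vstart <= virtual_sector < vstart + length:
            return device, pstart + (virtual_sector - vstart)
    raise ValueError("sector outside the virtual device")

print(resolve(combined_device, 12345))   # ('adapter1-id4', 2345)

# Splitting works the same way in reverse: two virtual devices can each map
# to a different sector range of the same physical disk.
split_a = [(0, 5000, "adapter2-id3", 0)]
split_b = [(0, 5000, "adapter2-id3", 5000)]
print(resolve(split_b, 10))              # ('adapter2-id3', 5010)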
Thin Provisioning
Thin Provisioning allows storage space to be assigned to clients dynamically, on a
just-enough and just-in-time basis, based on need. This avoids under-utilization of
storage by applications while allowing for expansion in the long-term. The maximum
size of a disk (virtual SAN resource) with Thin Provisioning enabled is limited to
67,108,596 MB. You can expand a thin disk up to the maximum size of 67,108,596
MB. When expanded, the mirror on it automatically expands also. A replica on a thin
disk will be able to use space on other virtualized devices as long as there is
available space. If space is not available for expansion, the Thin Provisioned disk on
primary will be prevented from expanding and a message will display on the console
indicating why expansion is not possible. The minimum permissible size of a thin
disk is 10 GB. Once the threshold is met, the thin disk expands in 10 GB increments.
With Thin Provisioning, a single pool of storage can be provisioned to multiple client
hosts. Each client sees the full size of its provisioned disk while the actual amount of
storage used is much smaller. Because so little space is actually being used, Thin
Provisioning allows resources to be over-allocated, meaning that more storage can
be provisioned to hosts than actually exists.
Because each client sees the full size of its provisioned disk, Thin Provisioning is the
ideal solution for users of legacy databases and operating systems that cannot
handle dynamic disk expansion.
The mirror of a disk with Thin Provisioning enabled is another disk with Thin
Provisioning enabled. When a thin disk is expanded, the mirror also automatically
expands. If the mirrored disk is offline, storage cannot be added to the thin disk
manually.
If the mirror is offline when the threshold is reached and automatic storage addition
is about to occur, the offline mirror is removed. Storage is automatically added to the
Thin Provisioned disk, but the mirror must be recreated manually.
A replica on a thin disk can use space on other virtualized devices as long as space
is available. If there is no space available for expansion, the thin disk on the primary
will be prevented from expanding and a message will display on the console.
Note: When using Thin Provisioning, it is recommended that you create a disk
with an initial size that is at least 15% the maximum size of the disk. Some write
operations, such as creating a file system in Linux, may scatter their writes across
the span of a disk.
Check the status of a thin disk
You can check the status of the thin disk from the FalconStor Management Console
by highlighting the thin disk and clicking the General tab.
The usage percentage is displayed in green as long as the available sectors are
greater than 120% of the threshold (in sectors). It is displayed in blue when the
available sectors are less than 120% of the threshold but still greater than the
threshold. The usage percentage is displayed in red when the available sectors
are less than the threshold (in sectors).
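The color rule reduces to comparing available sectors against the threshold. A small sketch of the logic as described above:

def usage_color(available_sectors, threshold_sectors):
    """Green / blue / red status for a thin disk, per the rule described above."""
    if available_sectors > 1.2 * threshold_sectors:
        return "green"
    if available_sectors > threshold_sectors:
        return "blue"
    return "red"

print(usage_color(available_sectors=130000, threshold_sectors=100000))  # green
print(usage_color(available_sectors=110000, threshold_sectors=100000))  # blue
print(usage_color(available_sectors=90000,  threshold_sectors=100000))  # red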
Note: Do not perform disk defragmentation on a Thin Provisioned disk. Doing so
may cause data from the used sectors of the disk to be moved into non-used sectors and result in unexpected thin-provisioned disk space increase. In fact, any
disk or filesystem utility that might scan or access any unused sector could also
cause a similar unexpected space usage increase.
Service-Enabled Devices
Service-Enabled Devices are hard drives with existing data that can be accessed by
CDP/NSS to make use of all key CDP/NSS storage services (mirroring, snapshot,
etc.), without any migration/copying, without any modification of data, and with
minimal downtime. Service-Enabled Devices are used to migrate existing drives into
the SAN.
Because Service-Enabled Devices are preserved intact, and existing data is not
moved, the devices are not virtualized and cannot be expanded. Service-Enabled
Devices are all maintained in a one-to-one mapping relationship (one physical disk
equals one logical device). Unlike virtual devices, they cannot be combined or split
into multiple logical devices.
Create SAN resources - Procedures
SAN resources are created in the FalconStor Management Console.
Note: After you make any configuration changes, you may need to rescan or
restart the client in order for the changes to take effect. After you create a new virtual device, assign it to a client, and restart the client (or rescan), you will need to
write a signature, create a partition, and format the drive so that the client can use
it.
Prepare devices to become SAN resources
You can use one of FalconStor's disk preparation options to change the category of
a device. This is important if you want to create a logical resource using a device
that is currently unassigned.
CDP and NSS appliances detect new devices as you connect to them (or
when you execute the Rescan command). When new devices are detected,
a dialog box displays notifying you of the discovered devices. At this point
you can highlight a device and press the Prepare Disk button to prepare it.
At any time, you can prepare a single unassigned device by following the
steps below:
Highlight the device and right-click
Select Properties
Select the device category. (You can find all unassigned devices under
the Physical Resources/Adapters node of the tree view.)
For multiple unassigned devices, highlight Physical Resources, right-click
and select Prepare Disks. This launches a wizard that allows you to
virtualize, unassign, or import multiple devices at the same time.
Create a virtual device SAN resource
You can create a virtual device SAN resource by following the steps below. Each
storage server supports a maximum of 1024 SAN resources.
1. Right-click on SAN Resources and select New.
2. Select Virtual Device.
3. Select the storage pool or physical device(s) from which to create this SAN
resource.
You can create a SAN resource from any single storage pool. Once the resource
is created from a storage pool, additional space (automatic or manual
expansion) can only be allocated from the same storage pool.
You can select List All to see all storage pools, if needed.
Depending upon the resource type, you may have the option to select Use
Thin Provisioning for more efficient space allocation.
4. Select the Use Thin Provisioning checkbox to allocate a minimum amount of
space for a virtual resource. When usage thresholds are met, additional storage
is allocated as necessary.
5. Specify the fully allocated size of the resource to be created.
For NSS, the default initial size is 1 GB and the default allocation is 10 GB.
For CDP, the default initial size is 16 GB and the default allocation is 16 GB.
A disk with Thin Provisioning enabled can be configured to replicate to a SAN
resource or to another disk with Thin Provisioning enabled.
From the client side, it appears that the full disk size is available.
Thin provisioning is supported for the following resource types:
SAN Virtual
SAN Virtual Replica
SAN resources can replicate to a disk with Thin Provisioning as long as the size
of the SAN resource is 10 GB or greater.
6. Select how you want to create the virtual device.
Custom lets you select which physical device(s) to use and lets you
designate how much space to allocate from each.
Express lets you designate how much space to allocate and then
automatically creates a virtual device using an available device.
Batch lets you create multiple SAN resources at one time. These SAN
resources will all be the same size.
If you select Custom, you will see the following windows:
Select either an entirely unallocated or a partially unallocated device. Only one
device can be selected at a time from this dialog. To create a virtual device SAN
resource from multiple physical devices, you will need to add the devices one at a
time. After selecting the parameters for the first device, you will have the option to
add more devices.
Indicate how much space to allocate from the selected device. Click Add More if
you want to add another physical device to this SAN resource. If you select to add
more devices, you will return to the physical device selection screen, where you
can select another device.
If you select Batch, you will see a window similar to the following:
Indicate how to name each resource. The SAN Resource Prefix is
combined with the starting number to form the name of each SAN
resource. You can deselect the Use default ID for Starting Number option
to restart numbering from one.
In the Resource Size field, indicate how much space to allocate for each
resource.
Indicate how many SAN resources to create in the Number of Resources
field.
7. (Express and Custom only) Enter a name for the new SAN resource.
The Express screen is shown above and the Custom screen is shown below:
Note:
The name is not case sensitive.
The Set this as the resource name (not prefix) option does not append
the name with the virtual ID number.
8. Confirm that all information is correct and then click Finish to create the virtual
device SAN resource.
9. (Express and Custom only) Indicate if you would like to assign the new SAN
resource to a client.
If you select Yes, the Assign a SAN Resource Wizard will be launched.
Note: After you assign the SAN resource to a client, you may need to restart the
client. You will also need to write a signature, create a partition, and format the
drive so that the client can use it.
Create a Service-Enabled Device SAN resource
1. Right-click on SAN Resources and select New.
2. Select Service Enabled Device.
3. Select how you want to create this device.
Custom lets you select one physical device to use.
Batch lets you create multiple SAN resources at one time.
4. Select the device that you want to make into a Service-Enabled Device.
A list of the storage pools and physical resources that have been reserved for
this purpose are displayed.
5. (Service-Enabled Devices only) Select the physical device(s) for the Service-Enabled Device's virtual header.
Even though Service-Enabled Devices are used as is, a virtual header is created
on another physical device to allow CDP/NSS storage services to be supported.
6. Enter a name for the new SAN resource.
The name is not case sensitive.
7. Confirm that all of the information is correct and then click Finish to create the
SAN resource.
8. Indicate if you would like to assign the new SAN resource to a client.
If you select Yes, the Assign a SAN Resource Wizard is launched.
Assign a SAN resource to one or more clients
You can assign a SAN resource to one or more clients or you can assign a client to
one or more SAN resources. While the wizard is initiated differently, the outcome is
the same.
Note: (For AIX Fibre Channel clients running DynaPath) If you are re-assigning
SAN resources to the same LUN, you must reboot the AIX client after unassigning a SAN resource.
1. Right-click on a SAN Resources object and select Assign.
The wizard can also be launched from the Create SAN Resource wizard.
Alternatively, you can right-click on a SAN Client and select Assign.
2. If this server has multiple protocols enabled, select the type of client to which
you will be assigning this SAN resource.
3. Select the client to be assigned and determine client access rights.
If you initiated the wizard by right-clicking on a SAN Client instead of a SAN
resource, you will need to select the SAN resource(s) instead.
Read/Write - Only one client can access this SAN resource at a time. All others
(including Read Only) will be denied access. This is the default.
Read/Write Non-Exclusive - Two clients can connect at the same time with both
read and write access. You should be careful with this option because if you
have multiple clients writing to a device at the same time, you have the potential
to corrupt data. This option should only be used by clustered servers, because
the cluster itself prevents multiple clients from writing at the same time.
Read Only - This client will have read only access to the SAN resource. This
option is useful for a read-only disk.
Notes:
In a Fibre Channel environment, we recommend that only one CDP/
NSS Client be assigned to a SAN resource (with Read/Write access).
If two or more Fibre Channel clients attempt to connect to the same
SAN resource, error messages will be generated each time the
second client attempts to connect to the resource.
If multiple Windows 2000 clients are assigned read-only access to
the same virtual device, the only partition they can read from is FAT.
For Fibre Channel clients, you will see the following screen:
For iSCSI clients, you will see the following screen:
You must have already created a target for this client. Refer to 'Create storage
targets for the iSCSI client' for more information.
You can add any application server, even if it is currently offline.
Note: You must enter the client's name, not an IP address.
4. If this is a Fibre Channel client and you are using Multipath software (such as
FalconStor DynaPath), enter the World Wide Port Name (WWPN) mapping.
This WWPN mapping is similar to Fibre Channel zoning and allows you to
provide multiple paths to the storage server to limit a potential point of network
failure. You can select how the client will see the virtual device in the following
ways:
One to One - Limits visibility to a single pair of WWPNs. You will need to select
the client's Fibre Channel initiator WWPN and the server's Fibre Channel target
WWPN.
One to All - You will need to select the client's Fibre Channel initiator WWPN.
All to One - You will need to select the server's Fibre Channel target WWPN.
All to All - Creates multiple data paths. If ports are ever added to the client or
server, they will automatically be included in the WWPN mapping.
5. If this is a Fibre Channel client and you selected a One to n option, select which
port to use as an initiator for this client.
6. If this is a Fibre Channel client and you selected an n to One option, select which
port to use as a target for this client.
7. Confirm all of the information and then click Finish to assign the SAN resource to
the client(s).
The SAN resource will now appear under the SAN Client in the configuration
tree view:
After client assignment
Depending upon the operating system of the client, you may be required to reboot
the client machine in order to be able to use the new SAN resource.
Windows clients
If an assigned SAN resource is larger than 3GB, it cannot be properly formatted as a
FAT partition.
Solaris clients
x86 vs SPARC
If you create a virtual device and format it for Solaris x86, the device will fail to mount
if you try to use that same virtual device under Solaris SPARC.
Label devices
When you create a new virtual device, it needs to be labeled (the drive metrics need
to be specified) and a file system has to be put on the virtual device in order to
mount it. Refer to the steps below.
Note: If the drive has already been labeled and you restart the client, you do not
need to run format and label it again.
Labeling a virtual disk for Solaris:
1. From the command prompt, execute the following command: format
A list of available disk selections will be displayed on the screen and you will be
asked to specify which disk should be selected. If you are asked to specify a disk
type, select Auto Configure.
2. Once the disk has been selected, you must label the disk.
For Solaris 7 or 8, you will automatically be prompted to label the disk once you
have selected it.
3. If you want to partition the newly formatted disk, type partition at the format
prompt.
You may accept the default partitions created by the format command or repartition the disk according to your needs.
On Solaris x86, if the disk has not been set up with fdisk partitions, the format
command will prompt you to run fdisk first.
For further information about the format utility, refer to the man pages.
4. To exit the format utility, type quit at the format prompt.
Creating a file system on a disk managed by the CDP/NSS software:
Warning: Make sure to choose the correct raw device when creating a file system. If
in doubt, check with an administrator.
1. To create a new file system, execute the following command:
newfs /dev/rdsk/c2t0d0s2
where c2t0d0s2 is the name of the raw device.
2. To create a mount point for the new file system, execute the following command:
mkdir /mnt/ipstor1
where /mnt/ipstor1 is the name of the mount point you are creating.
3. To mount the disk managed by the CDP/NSS software, execute the following
command:
mount /dev/dsk/c2t0d0s2 /mnt/ipstor1
where /dev/dsk/c2t0d0s2 is the name of the block device and /mnt/ipstor1 is the
name of the mount point you created.
For further information, refer to the man pages.
Virtual device from a different server
When assigning a virtual device from a different storage server, the SAN Client
software must be restarted in order to add the virtual device to the client machine.
The reason for this is that when virtual devices are added from other storage
servers, a new virtual SCSI adapter gets created on the client machine. Since
Solaris does not allow new adapters to be added dynamically, the CDP/NSS Client
software needs to be restarted in order for the new adapter and device to be added
to the system.
Expand a virtual device
Since virtual devices do not represent actual physical resources, they can be
expanded as more storage is needed. The virtual device can be increased in size by
adding more blocks of storage from any unallocated space from the same server.
Note that you will still need to repartition the virtual device and adjust/create/resize
any file systems on the partition after the virtual device is expanded. Since partition
and file system formats are specific to the operating system that the client is
running, the administrator must perform these tasks directly from the client. You can
use tools like Partition Magic, Windows Dynamic Disk, or Veritas Volume Manager
to add more drives and expand an existing volume on the fly (without application
downtime).
Notes:
We do not recommend expanding a virtual device (SAN) while clients are
accessing the drives.
At the end of this section is important information about Windows
dynamic disks, Solaris clients, AIX clients, and Fibre Channel clients.
1. Right-click on a virtual device (SAN) and select Expand.
2. Select how you want to expand the virtual device.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically
creates a virtual device using an available device.
The Size to Allocate is the maximum space available on all available devices. If
this drive is mirrored, this number will be half the full amount because the
mirrored drive will need an equal amount of space.
If you select Custom, you will see the following windows:
Select either an entirely unallocated or partially unallocated device. Only one device
can be selected at a time from this dialog; to expand a virtual device from multiple
physical devices, you will need to add the devices one at a time. After selecting the
parameters for the first device, you will have the option to add more devices.
Indicate how much space to allocate from the selected device. Note: If this drive is
mirrored, you can only select up to half of the available total space (from all available
devices) because the mirrored drive will need an equal amount of space.
Click Add More if you want to select space from another physical device.
3. Confirm that all information is correct and then click Finish to expand the virtual
device.
Windows Dynamic disks
Expansion of dynamic disks using the Expand SAN Resource Wizard is not
supported for clients using Fibre Channel. Due to the nature of dynamic disks, it is
not safe to alter the size of the virtual device. However, dynamic disks do provide an
alternative method to extend the dynamic volume.
To extend a dynamic volume using SAN resources, use the following steps:
1. Create a new SAN resource and assign it to the CDP/NSS Client. This will
become an additional disk which will be used to extend the dynamic volume.
2. Use Disk Manager to write the disk signature and upgrade the disk to "Dynamic".
3. Use Disk Manager to extend the dynamic volume.
The new SAN resource should be available in the list box of the Dynamic Disk
expansion dialog.
Solaris clients
The following procedure is valid for clients using Fibre Channel:
1. Use expand.sh to get the new capacity of the disk.
This will automatically label the disk.
2. Use the format utility to add a new partition or, if your file system supports
expansion, use your file system's utility to expand the file system.
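If the expanded device holds a UFS file system, it can typically be grown in place with the standard Solaris growfs utility. For example (a sketch that reuses the sample device and mount point from the labeling procedure earlier in this chapter; substitute your own values):
growfs -M /mnt/ipstor1 /dev/rdsk/c2t0d0s2
where /mnt/ipstor1 is the mount point and /dev/rdsk/c2t0d0s2 is the raw device of the expanded virtual disk.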
Windows clients (Fibre Channel)
For Windows 2000 and 2003 clients, after expanding a virtual device you should
rescan the physical devices from Computer Management to see the expanded area.
Linux clients (Fibre Channel)
1. Use rmmod qla2x00 to remove the module.
2. Use insmod qla2x00 to install the module back again.
3. Use fdisk /dev/sda to create a second partition.
The a in sda refers to the first disk. Use b, c, etc. for subsequent disks.
AIX clients
Expanding a CDP/NSS virtual disk will not change the size of the existing AIX
volume group. To expand the volume group, a new disk has to be assigned and the
extendvg command should be used to enlarge the volume group.
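For example, assuming the newly assigned disk is discovered as hdisk3 and the volume group is named datavg (both names are placeholders for illustration only), the sequence would look similar to the following:
cfgmgr
extendvg datavg hdisk3
chfs -a size=+10G /data
where cfgmgr discovers the new disk, extendvg adds it to the volume group, and chfs optionally grows a JFS2 file system (/data in this sketch) by 10 GB.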
Service-Enabled Device (SED) expansion
SED expansion must be done from the storage side first. Therefore, it is
recommended that you check with the storage vendor regarding how to expand the
underlying physical LUNs. It is also recommended that you schedule downtime
(under most scenarios) to avoid unexpected outages.
Grant access to a SAN resource
By default, only the root user and IPStor administrators can manage SAN resources,
groups, or clients. While IPStor users can create new SAN resources, if you want an
IPStor user to manage an existing SAN resource, you must grant that user access.
To do this:
1. Right-click on a SAN resource and select Access Control.
2. Select which user can manage this SAN resource.
Each SAN resource can only be assigned to one IPStor user. This user will have
rights to perform any function on this SAN resource, including assigning,
configuring for storage services, and deletion.
Note that if a SAN Resource is already assigned to a client, you cannot grant
access to the SAN resource if the user is not already assigned to the client. You
will have to unassign the SAN resource first, change the access for both the
client and the SAN resource, and then reassign the SAN resource to the client.
Unassign a SAN resource from a client
1. Right-click on the client or client protocol and select Unassign.
2. Select the resource(s) and click Unassign.
Note that when you unassign a SAN resource from a connected Linux client, the
client may be temporarily disconnected from the server. If the client has multiple
devices offered from the same server, the temporary disconnect may affect these
devices. However, once I/O activities from those devices are detected, the
connection will be restored automatically and transparently.
Delete a SAN resource
1. (AIX and HP-UX clients) Prior to removing a CDP/NSS device, make sure any
logical volumes that were built on top have been removed.
If the CDP/NSS device is removed while logical volumes exist, you will not be
able to remove the logical volumes and the system will display error messages.
2. (Windows 2000/2003, Linux, Unix clients) You should disconnect/umount the
client from the SAN resource(s) prior to deleting the SAN resource.
3. Detach the SAN resource from any client that is using it.
For non-Windows clients, type ./ipstorclient stop from /usr/local/ipstorclient/bin.
4. In the Console, highlight the SAN resource, right-click and select Delete.
CDP/NSS Appliances
CDP and NSS appliances are also called IPStor servers. Both are storage servers
designed to require little or no maintenance.
All day-to-day CDP/NSS administrative functions can be performed through the
FalconStor Management Console. However, there may be situations when direct
access to the Server is required, particularly during initial setup and configuration of
physical storage devices attached to the Server or for troubleshooting purposes.
If access to the server's operating system is required, it can be done either directly
or remotely from computers on the SAN.
Start the CDP/NSS appliance
Execute the following commands to start the processes:
cd /usr/local/ipstor/bin
./ipstor start
If the server is already started, you can use ./ipstor restart to stop and then start the
processes. When you start the server, you will see the processes start.
Starting IPStor SNMPD Module [OK]
Starting IPStor Configuration Module [OK]
Starting IPStor Base Module [OK]
Starting IPStor HBA Module [OK]
Starting IPStor Authentication Module [OK]
Starting IPStor Block Device Module [OK]
Starting IPStor Server (FSNBase) Module [OK]
Starting IPStor Server (Application) Module [OK]
Starting IPStor Server (Upcall) Module [OK]
Starting IPStor Target Module [OK]
Starting IPStor iSCSI Target Module [OK]
Starting IPStor iSCSI (Daemon) [OK]
Starting IPStor Communication Module [OK]
Starting IPStor CLI Proxy Module [OK]
Starting IPStor Logger Module [OK]
Starting IPStor Central Client Manager Module [OK]
Starting IPStor Self Monitor Module [OK]
Starting IPStor Failover Module [OK]
Note: You will only see the iSCSI Target modules if iSCSI Target Mode is enabled. You will only see the Failover module if failover is enabled.
Stop the CDP/NSS appliance
Warning: Stopping the storage server processes will shut down all access to the
storage resources managed by the Server. This can halt processing on your
application servers, or even cause them to crash, depending upon how they behave
if a disk is unexpectedly shut off or removed. It is recommended that you make sure
your application servers are not accessing the storage resources when you stop the
storage server processes.
To shut down the processes, execute the following commands:
cd /usr/local/ipstor/bin
./ipstor stop
You should see the processes stop.
Log into the CDP/NSS appliance
You can log in from a keyboard/display connected directly to the Server. There is no
graphical user interface (GUI) shell required. By default, only the root user has login
privileges to the operating system. Other IPStor administrators do not. To log in,
enter the username and the password for the root user.
Warning: Do not permit storage server login access by anyone except your most
trusted system or storage administrators. Administrators with login access to the
server have the ability to modify, damage or destroy data managed by the server.
Telnet access
By default, IPStor administrators do not have telnet access to the server. The server
is configured to deny all TCP/IP access, including telnet. To enable telnet:
1. Install the following rpm files on the machine:
# rpm -ivh xinetd-<version>.rpm
# rpm -ivh telnet-<version>.rpm
2. Enter the following command:
#vi /etc/xinetd.d/telnet
3. Change disable=yes to disable=no.
4. Restart the xinetd service:
# service xinetd restart
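The same edit can also be scripted. The following is only a sketch and assumes the stock /etc/xinetd.d/telnet file:
sed -i 's/disable.*= yes/disable = no/' /etc/xinetd.d/telnet
service xinetd restart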
Linux Server only
To grant telnet access to another computer on the network:
1. Log into the Server directly (on the local console keyboard and display).
2. Change the /etc/passwd file.
For the appropriate administrator, change the line that looks like:
/dev/null:/dev/null
To:
Username:/homedirectory:/bin/bash
Where Username is an actual administrator name and homedirectory is the
actual home directory.
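Alternatively, the same change can usually be made with the usermod command. In this sketch, admin1 and /home/admin1 are placeholder values:
usermod -d /home/admin1 -s /bin/bash admin1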
Note: For a more secure session, you may want to use the program ssh, which is
supplied by some versions of the Linux operating system. Please refer to the
Linux manual that came with your operating system for more details about configuration.
Check the IPStor Server processes
You can type the following command from the shell prompt to check the IPStor
Server processes:
cd /usr/local/ipstor/bin
./ipstor status
You should see something similar to the following:
Status of IPStor SNMPD Module [RUNNING]
Status of IPStor Base Module [RUNNING]
Status of IPStor HBA Module [RUNNING]
Status of IPStor Initiator Module [RUNNING]
Status of IPStor Control Module [RUNNING]
Status of IPStor Authentication Module [RUNNING]
Status of IPStor Block Device Module [RUNNING]
Status of IPStor Server (Compression) Module [RUNNING]
Status of IPStor Server (FSNBase) Module [RUNNING]
Status of IPStor Server (Upcall) Module [RUNNING]
Status of IPStor Server (Transport) [RUNNING]
Status of IPStor Server (Event) Module [RUNNING]
Status of IPStor Server (Path Manager) Module [RUNNING]
Status of IPStor Server (Application) [RUNNING]
Status of IPStor Advanced Backup Module [RUNNING]
Status of IPStor Target Module [RUNNING]
Status of IPStor iSCSI Target Module [RUNNING]
Status of IPStor iSCSI (Daemon) [RUNNING]
Status of IPStor Communication Module [RUNNING]
Status of IPStor CLI Proxy Module [RUNNING]
Status of IPStor Logger Module [RUNNING]
Status of IPStor Local Client (VBDI) [RUNNING]
Status of IPStor SANBridge Daemon [RUNNING]
Status of IPStor Anti Virus Daemon [RUNNING]
Status of IPStor Self Monitor Module [RUNNING]
Status of IPStor Failover Module [RUNNING]
Note: You will only see the HBA Module for QLogic HBAs. You will only see the iSCSI Target processes if iSCSI Target Mode is enabled. You will only see the Failover Module if failover is enabled.
Check physical resources
When adding physical resources or testing to see if the physical resources are
present, the following command can be executed from the shell prompt in Linux:
cat /proc/scsi/scsi
This command displays the SCSI devices attached to the IPStor Server. For
example, you will see something similar to the following:
[0:0:0:0] disk 3ware Logical Disk 0 1.2 /dev/sda
[0:0:1:0] disk 3ware Logical Disk 1 1.2 /dev/sdb
[2:0:1:0] disk IBM-PSG ST318203FC !# B324 -
[2:0:2:0] disk IBM-PSG ST318203FC !# B324 -
[2:0:3:0] disk IBM-PSG ST318304FC !# B335 -
Check activity statistics
There is a utility that is installed with CDP/NSS that allows you to view activity
statistics for virtual and physical devices as well as for Fibre Channel target ports.
This utility can also report pending commands for physical and virtual devices.
To run this utility, type the ismon command on the storage server:
This command displays all virtual resources (SAN, Snapshot, etc.) for this storage
server. For each resource, the screen shows its size, amount of reads/writes in KB/
second, and number of read/write commands per second. Information on the screen
is automatically refreshed every five seconds.
You can change the information that is displayed or the way it is sorted. The
following options are available by pressing the appropriate key on your server:
Toggle incremental/cumulative mode
Display information for virtual devices
Display information for physical devices (you can also launch ismon -p at the command prompt to view this information directly)
Display information for each FC target port (you can also launch ismon -t at the command prompt to view this information directly)
Page up
Page down
Sort by virtual device ID
Sort by KB read
Sort by read SCSI command
Sort by other SCSI command
Sort by ACSL
Sort by virtual device size
Sort by KB written
Sort by write SCSI command
Sort by SCSI command error
Sort by virtual device name
Sort by WWPN
Display Max value fields (incremental mode only)
Start logging
Reload virtual device name alias
Edit virtual device name alias
View help page
Quit
Remove a physical storage device from a storage server
1. Unassign and delete all SAN resources used by the physical storage device you
are removing.
2. Remove all Fibre Channel zones between the storage and the storage server.
3. From the console, perform a rescan on the physical adapters.
4. After the rescan has finished and the devices are offline, right-click and select
Delete.
Configure iSCSI storage
This section provides details regarding the requirements and procedures needed to
prepare your CDP/NSS appliance to use dedicated iSCSI downstream storage,
using either a software HBA (iscsi-initiator) or a hardware iSCSI HBA.
Configuring iSCSI software initiator
The iSCSI software initiator is provided with every CDP and NSS appliance and can
be configured to use dedicated iSCSI downstream storage using the iscsiadm
command line interface.
The CDP/NSS iSCSI software initiator supports up to 32 initiator-target host
connections. If you have n Ethernet port devices on the appliance, you are allowed
32 / n storage targets. An iSCSI hardware initiator does not have this limitation.
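For example, an appliance that uses four Ethernet ports for iSCSI can connect to up to 32 / 4 = 8 downstream storage targets.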
In order for the iSCSI software initiator to be properly configured, it must be made
aware of the individual interfaces it will use for connectivity to the downstream
storage.
1. Create a blank default configuration for each Ethernet device on the CDP/NSS
appliance using the iscsiadm command line interface.
iscsiadm -m iface -I iface-eth<device Number> -o new
For example, if you are using 4 Ethernet devices for an iSCSI connection, run
the following commands:
iscsiadm -m iface -I iface-eth0 -o new
iscsiadm -m iface -I iface-eth1 -o new
iscsiadm -m iface -I iface-eth2 -o new
iscsiadm -m iface -I iface-eth3 -o new
2. Persistently bind each Ethernet device to its MAC address to ensure that the
same device is always used for the iSCSI connection. To do this, use the following
command:
iscsiadm -m iface -I iface-eth0 -o update -n
iface.hwaddress -v <MAC address>
3. Connect each Ethernet device to the iSCSI targets.
4. Discover targets that are accessible from your initiators using the following
command:
iscsiadm -m discovery -t st -p 192.168.0.254
5. Log the iSCSI initiator to the target using the following command:
iscsiadm -m node -L
6. Confirm configured Ethernet devices are associated with targets by running the
following command:
iscsiadm -m session
Command output example:
tcp: [1] 192.168.0.254:3260,0 <target iqn name>
7. Perform a rescan using the FalconStor Management Console to see all of the
iSCSI devices.
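Steps 1 and 2 can also be scripted. The sketch below assumes the four interfaces (eth0 through eth3) and the portal address (192.168.0.254) used in the examples above; adjust both for your environment:
for dev in eth0 eth1 eth2 eth3; do
  mac=$(cat /sys/class/net/$dev/address)
  iscsiadm -m iface -I iface-$dev -o new
  iscsiadm -m iface -I iface-$dev -o update -n iface.hwaddress -v $mac
done
iscsiadm -m discovery -t st -p 192.168.0.254
iscsiadm -m node -L all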
Configuring iSCSI hardware HBA
Only QLogic iSCSI HBAs are supported on a CDP or NSS appliance. The QLogic
SANSurfer command line interface "iscli" allows configuration of an iSCSI HBA.
The iSCSI HBAs should be configured such that they are in the same subnet as the
iSCSI storage. The iSCSI hardware initiator does not require any special
configuration for multipath support; you can just connect multiple HBA ports to a
downstream iSCSI target. The QLogic iSCSI HBA driver handles multipath traffic.
1. Run QLogic SANSurfer CLI to display the HBA configuration menu:
/opt/QLogic_Corporation/SANsurferiCLI/iscli
Note the information displayed in the menu header for the current HBA port. By
default, the configuration for HBA 0, port 0 displays.
The Port Level Info & Operations menu displays.
2. To configure the selected HBA port, select option 4 - Port Level Info &
Operations.
Make sure to save your changes to the previous port before selecting another port;
otherwise, your changes will be lost.
3. To change the IP address of the selected port, select option 2 - Port Network
Setting Menu.
The Port Network Setting Menu interface allows you to configure the IP address
for the selected port.
4. To change target parameters for the selected HBA port, select option 7 - Target
Level Info & Operations.
The HBA Target Menu displays.
5. Discover iSCSI targets by selecting option 10 - Target Discovery Menu.
The HBA Target Discovery Menu displays.
6. To add a new target, select option 3 - Add a Send Target.
Answer Yes when asked if you want the new send target to be auto-login
and persistent; otherwise, the target will not persist through a reboot and will
require manual intervention.
Enter the IP address of the target.
Indicate whether or not the send target requires CHAP authentication.
Confirm the send target has been added by listing the send targets (option 1).
7. Save the configuration changes for the selected HBA port by selecting option 12
- Save changes and reset HBA.
8. Once all of the ports are configured, return to the HBA Target Menu and select
option 11 - List LUN Information.
All discovered and connected targets are listed. Select a target to view all LUNs
associated with that target.
Uninstall a storage server
To uninstall a storage server:
1. Execute the following command:
rpm -e ipstor
This command removes the installation of the storage server but leaves the
/ipstor directory and its subdirectories.
2. To remove the /ipstor directory and its subdirectories, execute the rm -rf
ipstor command from the /usr/local directory:
Note: We do not recommend deleting the storage server files without using rpm
-e. However, to re-install the CDP/NSS software if the storage server was
removed without using the rpm utility, or to install over an existing storage server
installation, the following command should be executed:
rpm -i --force <package name>
To determine the package name, check the Server directory on the CDP/NSS
installation media. This will force a re-installation of the software. Refer to the rpm
man pages for more information.
iSCSI Clients
iSCSI clients are the file and application servers that access CDP/NSS SAN
resources using the iSCSI protocol. Just as the CDP/NSS appliance supports
different types of storage devices (such as SCSI, Fibre Channel, and iSCSI), the
CDP/NSS appliance is protocol-independent and supports multiple outbound target
protocols, including iSCSI Target Mode. This chapter provides an overview for
configuring iSCSI clients with CDP or NSS.
iSCSI builds on top of the regular SCSI standard by using the IP network as the
connection link between various entities involved in a configuration. iSCSI inherits
many of the basic concepts of SCSI. For example, just like SCSI, the entity that
makes requests is called an initiator, while the entity that responds to requests is
called a target. Only an initiator can make requests to a target; not the other way
around. Each entity involved, initiator or target, is uniquely identified.
By default, when a client machine is added as an iSCSI client of a CDP or NSS
appliance, it becomes an iSCSI initiator. The initiator name is important because it is
the main identity of an iSCSI initiator.
Supported platforms
iSCSI target mode is supported for iSCSI initiators on the following platforms:
Windows
VMware
NetWare
Linux
Solaris
HP-UX
AIX
Requirements for iSCSI clients
The following requirements are valid for all iSCSI clients regardless of platform:
You must install an iSCSI initiator on each of your client machines. An iSCSI
software or hardware initiator is available from many sources and needs to be
installed and configured on all clients that will access shared storage. Refer
to the FalconStor certification matrix for a list of supported iSCSI initiators.
You should not install any storage server client software on the client unless
you are using a FalconStor snapshot agent.
Configuring iSCSI clients
Refer to the following sections for an overview for configuring iSCSI clients with
CDP/NSS.
Enabling iSCSI
Configure your iSCSI initiator
Create storage targets for the iSCSI client
Add your iSCSI client in the FalconStor Management Console
Enabling iSCSI
In order to add a client using the iSCSI protocol, you must enable iSCSI for your
storage server. To do this, in the FalconStor Management Console, right-click on
your storage server and select Options --> Enable iSCSI.
As soon as iSCSI is enabled, a new SAN client called Everyone_iSCSI is
automatically created on your storage server. This is a special SAN client that does
not correspond to any specific client machine. Using this client, you can create
iSCSI targets that are accessible by any iSCSI client that connects to the storage
server.
Before an iSCSI client can be served by a CDP or NSS appliance, the two entities
need to mutually recognize each other. The following sections take you through this
process.
Configure your iSCSI initiator
You need to register your iSCSI client as an initiator to your storage server. This
enables the storage server to see the initiator.
To do this, you must launch the iSCSI initiator on the client machine and identify
your storage server as the target server. You will have to enter the IP address or
name (if resolvable) of your storage server.
Refer to the documentation provided by your iSCSI initiator for detailed instructions
about how to do this.
Afterwards, you may need to start or restart the initiator if it is a Unix client.
Add your iSCSI client in the FalconStor Management Console
1. Right-click on SAN Clients and select Add.
2. Select the protocol for the client you want to add.
Note: If you have more than one IP address, a screen will display prompting
you to select the IP address that the iSCSI target will be accessible over.
3. Select the initiator that this client uses.
If the initiator does not appear, you may need to rescan. You can also manually
add it, if necessary.
4. Select the initiator or select the client to have mobile access.
Stationary iSCSI clients correspond to specific iSCSI client initiators and,
consequently, to the client machine that owns the specific initiator names. Only a
client machine with a correct initiator name can connect to the storage server to
access the resources assigned to this stationary client.
5. Add/select users who can authenticate for this client. The user name defaults to
the initiator name. You will also need to enter the CHAP secret.
Click Advanced to add existing users to this target.
For unauthenticated access, select Allow Unauthenticated Access. With
unauthenticated access, the storage server will recognize the client as long as it
has an authorized initiator name. With authenticated access, an additional check
is added that requires the user to type in a username and password. More than
one username/password pair can be assigned to the client, but they will only be
useful when coming from the machine with an authorized initiator name.
Select the Enable Mutual CHAP secret if you want the target and the initiator to
authenticate to each other. A separate secret will be set for each target and each
initiator.
6. Enter the name of the client, select the operating system, and indicate whether
or not the client machine is part of a cluster.
Note: It is very important that you enter the correct client name.
7. Click Find to locate the client machine.
The IP address of the machine with the specified host name will be automatically
filled in if the name is resolvable.
8. Indicate if you want to enable persistent reservation.
This option allows clustered SAN Clients to take advantage of Persistent
Reserve/Release to control disk access between various cluster nodes.
9. Confirm all information and click Finish.
Create storage targets for the iSCSI client
1. In the FalconStor Management Console, right-click on the iSCSI protocol object
for an iSCSI client and select Create Target.
2. Enter a new target name for the client or accept the default.
Note: The Microsoft iSCSI initiator can only connect to an iSCSI target if the
target name is no longer than 221 characters. It will fail to connect if the target
name is longer than this.
3. Select the IP address(es) of the storage server to which this client can connect.
You can select multiple IPs if your iSCSI initiator has multipathing support (such
as the Microsoft initiator version 2.0).
If you specified a default portal (in Server Properties), that IP address will be
selected for you.
4. Select an access mode.
Read/Write - Only one client can access this SAN resource at a time. All others
(including Read Only) will be denied access.
Read/Write Non-Exclusive - Two or more clients can connect at the same time
with both read and write access. You should be careful with this option because
if you have multiple clients writing to a device at the same time, you have the
potential to corrupt data. This option should only be used by clustered servers,
because the cluster itself prevents multiple clients from writing at the same time.
Read Only - This client will have read only access to the SAN resource. This
option is useful for a read-only disk.
5. Select the SAN resource(s) to be assigned to the client.
If you have not created any SAN resources yet, you can assign them at a later
time. You may need to restart the iSCSI initiator afterwards.
6. Use the default starting LUN.
Once the iSCSI target is created for a client, LUNs can be assigned under the
target using available SAN resources.
7. Confirm all information and click Finish.
Restart the iSCSI initiator
In order for the client to be able to access its storage, you must restart the iSCSI
initiator on Unix clients or log the client onto the target (Windows and NetWare).
It may be desirable to have a persistent target. Refer to the documentation provided
by your iSCSI initiator for detailed instructions about how to do this.
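On a Linux client that uses the open-iscsi initiator, for example, the sessions can be re-established from the shell (a sketch; exact commands vary by initiator and distribution):
iscsiadm -m node -U all
iscsiadm -m node -L all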
Windows iSCSI clients and failover
The Microsoft iSCSI initiator has a default retry period of 60 seconds. You must
change it to 300 seconds in order to sustain the disk for five minutes during failover
so that applications will not be disrupted by temporary network problems. This
setting is changed through the registry.
1. Go to Start --> Run and type regedit.
2. Find the following registry key:
HKEY_LOCAL_MACHINE\system\CurrentControlSet\control\class\
4D36E97B-xxxxxxxxx\<iscsi adapter interface>\parameters\
where iscsi adapter interface corresponds to the adapter instance, such as
0000, 0001, .....
3. Right-click Parameters and select Export to create a backup of the parameter
values.
4. Double-click MaxRequestHoldTime.
5. Pick Decimal and change the Value data to 300.
6. Click OK.
7. Double-click EnableNOPOut
8. Set the Value data to 1.
9. Click OK.
10. Reboot Windows for the change to take effect.
Disable iSCSI
To disable iSCSI for a CDP or NSS appliance, right-click on the server node in the
FalconStor Management Console, and select Options --> Disable iSCSI.
Note that before disabling iSCSI, all iSCSI initiators and targets for this CDP or NSS
appliance must be removed.
Logs and Reports
The CDP/NSS appliance retains information about the health and behavior of the
physical and virtual storage resources on the server. It maintains an Event log to
record system events and errors. The appliance also maintains performance data on
the individual physical storage devices and SAN resources, which can be filtered to
produce various reports through the FalconStor Management Console.
Event Log
The Event Log details significant occurrences during the operation of the storage
server. The Event Log can be viewed in the FalconStor Management Console when
you highlight a Server in the tree and select the Event Log tab in the right pane.
The following is a sample log display:
The columns displayed are:
Type
I: This is an informational message. No action is required.
W: This is a warning message that states that something occurred
that may require maintenance or corrective action. However, the
storage server system is still operational.
E: This is an error that indicates a failure has occurred such that a
resource is not available, an operation has failed, or a licensing
violation. Corrective action should be taken to resolve the cause of
the error.
C: This is a critical error that stops the system from operating
properly. You will be alerted to all critical errors when you log into
the server from the console.
Date
The date on which the event occurred.
Time
The time at which the event occurred.
ID
This is the message number.
Event Message
This is a text description of the event describing what has occurred.
Sort information in the Event Log
When you initially view the Event Log, all information is displayed in chronological
order (most recent at the top). If you want to reverse the order (oldest at top) or
change the way the information is displayed, you can click on a column heading to
re-sort the information. For example, if you click on the ID heading, you can sort the
events numerically. This can help you identify how often a particular event occurs.
Filter information stored in the Event Log
By default, all informational system messages, warnings, and errors are displayed.
To filter the information that is displayed, right-click on a Server and select Event Log
--> Filter.
In the filter dialog, you can select which message types you want to include, select a
category of messages to display, search for records that contain or do not contain
specific text, specify the maximum number of lines to display, and select a time or
date range for messages.
Refresh the Event Log
You can refresh the current Event Log display by right-clicking on the Server and
selecting Event Log --> Refresh.
Print/Export Event Log
You can print the Event Log to a printer or save it as a text file. These options are
available (once you have displayed the Event Log) when you right-click on the
Server and select the Event Log options.
Reports
FalconStor provides reports that offer a wide variety of information:
Performance and throughput - By SAN Client, SAN resource, SCSI channel,
and SCSI device.
Usage/allocation - By SAN Client, SAN resource, Physical resource, and
SCSI adapter.
System configuration - Physical Resources.
Replication reports - You can run an individual report for a single server or
you can run a global report for multiple servers.
Individual reports are viewed from the Reports object in the console. Global
replication reports are created from the Servers object.
Before you begin
Prior to setting up reports, review the properties you have set in the Activity
Database Maintenance tab (right-click on the server and select Properties -->
Activity Database Maintenance). The report feature polls log files to generate
reports.
The default maximum size is 50MB. If the size of the log database is over
50MB, older logs will be deleted to maintain the maximum 50MB limit.
The default maximum days of log history to keep is 30. Log data older than
30 days will be deleted. If you are planning to create reports viewing data
older than 30 days, you must increase this value. For example: if you
generate a report viewing data for the past year but maximum log history is
set only to 30 days, you will only get 30 days of data in the report.
Create an individual report
1. To create a report, right-click on the Reports object and select New.
2. Select a report.
Depending upon which report you select, additional windows appear to allow
you to filter the information for the report. Descriptions of each report appear on
the following pages.
3. Select the reporting schedule.
Depending upon which report you select, you can select to run the report for one
time only, or select a daily, weekly, or monthly date range.
To create a one-time only report, click the For One Time Only radio button
and click Next.
If applicable, specify the date or date range for the report and indicate which
SAN resources and Clients to use in the report.
Selecting Past n days/weeks/months will create reports that generate data
relative to the time of execution.
Include All SAN Resources and Clients - Includes all current and previous
configurations for this server (including SAN resources and clients that you may
have changed or deleted).
Include Current Active SAN Resources and Clients Only - Includes only those
SAN resources and clients that are currently configured for this server.
The Delta Replication Status report has a different dialog that lets you specify a
range by selecting starting and ending dates.
To create a daily report, click the Daily radio button, give the schedule a
name if desired and click Next.
Set the schedule frequency, duration, start time and click Next.
To create a weekly report, click the Weekly radio button.
To create a monthly report, click the Monthly radio button.
4. If applicable, select the objects necessary to filter the information in the report.
Depending upon which report you selected, you may be asked to select from a
list of storage servers, SCSI adapters, SCSI devices, SAN clients, SAN
resources, or replica resources.
5. If applicable, select which columns you want to display in the report and in which
sort order.
Depending upon which report you selected, you may be able to select which
column fields to display on the report. All available fields are selected by default.
You can also select whether you want the data sorted in ascending or
descending order.
6. Enter a name for the report.
7. Confirm all information and click Finish to create the report.
View a report
When you create a report, it is displayed in the right-hand pane and is added
beneath the Reports object in the configuration tree.
Expand the Reports object to see the existing reports available for this Server.
When you select an existing report, it is displayed in the right-hand pane.
Export data from a report
You can save the data from the server and device throughput and usage reports.
The data can be saved in a comma delimited (.csv) or tab delimited (.txt) text file. To
export information, right-click on a report that is generated and select Export.
Schedule a report
Reports can be generated on a regular basis or as needed. Some tips to remember
on scheduling are as follows:
The start and end dates in the report scheduler are inclusive.
When scheduling a monthly report, be sure to select a date that exists in every
month. For example, if you select to run a report on the 31st day, the report will not
be generated on months that do not have 31 days.
When scheduling a report to run every n days in selected months, the first report is
always generated on the first of the month and then every n number of days after.
Therefore if you chose 30 days (n = 30) and there are not 30 days left in the month,
the schedule will jump to the first day of the next month.
Some reports allow you to select a range of dates from the day you are generating
the report back through the past n days. If you select the past one day, the
report will be generated for one day.
When scheduling a daily report, it is best practice to schedule the report to run at the
end of the day to capture the most amount of data. Daily report data accumulation
begins at 12:00 am and ends at the scheduled run time.
E-mail a scheduled report
Scheduled reports can be sent to one or more e-mail addresses by selecting the E-mail option in the Report Wizard.
Enter the e-mail addresses, separated by semicolons. You can also have
the report sent to distribution groups, as long as the e-mail server being used
supports this feature.
Report types
The FalconStor reporting feature includes many useful reports including allocation,
usage, configuration, and throughput reports. A description of each report follows.
Client Throughput Report
The SAN Resource tab of the Client Throughput Report displays the amount of data
read/written between this client and SAN resource. To see information for a different
SAN resource, select a different Resource Name from the drop-down box in the
lower right hand corner.
The Data tab shows the tabular data that was used to create the graphs.
The following is a sample page from a Client Throughput Report:
Delta Replication Status Report
This report displays information about replication activity, including compression,
encryption, MicroScan and protocol. It provides a centralized view for displaying
real-time replication status for all disks enabled for replication. It can be generated
for an individual disk, multiple disks, source server or target server, for any range of
dates. This report is useful for administrators managing multiple servers that either
replicate data or are the recipients of replicated data.
The report can display information about existing replication configurations only or it
can include information about replication configurations that have been deleted or
promoted (you must select to view all replication activities in the database).
The following is a sample Delta Replication Status Report:
The Replication Status Summary tab displays a consolidated summary for multiple
servers.
Disk Space Usage Report
This report shows the amount of disk space being used by each SCSI adapter.
The Disk Space Usage tab displays a pie chart showing the following space usage
amounts:
Storage Allocated Space
Snapshot Allocated Space
Cache Allocated Space
HotZone Allocated Space
Journal Allocated Space
CDR Allocated Space
Configuration Allocated Space
Total Free Space
A sample is displayed below:
The Data tab breaks down the disk space information for each physical device. The
Utilization tab breaks down the disk space information for each logical device.
Disk Usage History Report
This report allows you to create a custom report from the statistical history
information collected. You must have the statistics log enabled to generate this report.
The data is logged once a day at a specified time; the data collected is a
representative sample of the day.
In addition, if servers are set up as a failover pair, the Disk usage history log
must be enabled on both servers in order for data to be logged during failover. In
a failover state, the data logging time set on the secondary server is followed.
Select the reporting period range, whether to include the disk usage information
from the storage pools, and the sorting criteria.
A sample is displayed below:
Fibre Channel Configuration Report
This report displays information about each Fibre Channel adapter, including type,
WWPN, mode (initiator vs. target), and a list of all WWPNs with client information.
The following is a sample Fibre Channel Configuration Report:
Physical Resources Configuration Report
This report lists all of the physical resources on this Server, including each physical
adapter and physical device. To make this report more meaningful, you can rename
the physical adapter (right-click on the adapter and select Rename). For example,
instead of using the default name, you can use a name such as Target Port A.
The following is a sample Physical Resources Configuration Report:
Physical Resources Allocation Report
This report shows the disk space usage and layout for each physical device. The
following is a sample Physical Resources Allocation Report:
Physical Resource Allocation Report
This report shows the disk space usage and layout for a specific physical device.
The following is a sample Physical Resource Allocation Report:
Resource IO Activity Report
The Resource IO Activity Report shows the input and output activity of selected
resources. The report options and filters allow you to select the SAN resource and
client to report on within a particular date/time range.
You can view a graph of the IO activity for each SAN resource including errors,
delayed IO, data, and configuration information. The Data tab shows the tabular
data that was used to create the graph and the Configuration Information tab shows
which SAN resources and Clients were included in the report.
The following is a sample of the Resource IO Activity Report.
The Data tab results of the Resource IO Activity Report are displayed below:
SCSI Channel Throughput Report
The SCSI Channel Throughput Report shows the data going through each SCSI
channel on the Server. This report can be used to determine which SCSI bus is
heavily utilized and/or which bus is under utilized. If a particular bus is too heavily
utilized, it may be possible to move one or more devices to a different or new SCSI
adapter.
Some SCSI adapters have multiple channels. Each channel is measured
independently.
During the creation of the report, you select which SCSI channel to include in the
report.
When this report is created, there are three tabs of information.
The SAN Resource tab displays a graph showing the throughput of the channel. The
horizontal axis displays the time segments. The vertical axis measures the total data
transferred through the selected SCSI channel, in each time segment for both reads
and writes.
The System tab displays the CPU and memory utilization for the same time period
as the main graph.
The Data tab shows the tabular data that was used to create the graphs.
The following is a sample SCSI Channel Throughput Report:
SCSI Device Throughput Report
The SCSI Device Throughput Report shows the utilization of the physical SCSI
storage device on the Server. This report can show if a particular device is heavily
utilized or under utilized.
During the creation of the report, you select which SCSI device to include.
The SAN Resource tab displays a graph showing the throughput of the SCSI device.
The horizontal axis displays the time segments. The vertical axis measures the total
data transferred through the selected SCSI device, in each time segment for both
reads and writes.
The System tab displays the CPU and memory utilization for the same time period
as the main graph.
The Data tab shows the tabular data that was used to create the graphs.
The following is a sample SCSI Device Throughput Report:
SAN Client Usage Distribution Report
The Read Usage tab of the SAN Client Usage Distribution Report displays a bar
chart that shows the amount of data read by Clients of the current Server. The chart
shows three bars, one for each Client.
The Read Usage % tab displays a pie chart showing the percentage for each Client.
The Write Usage tab displays a bar chart that shows the amount of data written to
the Clients. The chart shows three bars, one for each active Client.
The Write Usage % tab displays a pie chart showing the percentage for each Client.
The following is a sample page from a SAN Client Usage Distribution Report:
SAN Client/Resources Allocation Report
For each Client selected, this report displays information about the resources
assigned to the Client, including disk space assigned, type of access, and
breakdown of physical resources.
The following is a sample SAN Client / Resources Allocation Report:
SAN Resources Allocation Report
This report displays information about the resources assigned to each Client,
including disk space assigned, type of access, and breakdown of physical
resources.
The following is a sample SAN Resources Allocation Report:
SAN Resource Usage Distribution Report
The Read Usage tab of the SAN Resource Usage Distribution Report displays a bar
chart that shows the amount of data read from each SAN Resource associated with
the current Server. The chart shows six bars, one for each SAN Resource (in order
of bytes read).
The Read Usage % tab displays a pie chart showing the percentage for each SAN
resource.
The Write Usage tab displays a bar chart that shows the amount of data written to
the SAN resources.
The Write Usage % tab displays a pie chart showing the percentage for each SAN
resource.
The following is a sample page from a SAN Resource Usage Distribution Report:
Server Throughput and Filtered Server Throughput Report
The Server Throughput Report displays the overall throughput of the Server.
The Filtered Server Throughput Report takes a subset of clients and/or SAN
resources and displays the throughput of that subset.
When creating the Filtered Server Throughput Report, you can specify which SAN
resources and which clients to include.
When these reports are created, there are several tabs of information.
The SAN Resource tab displays a graph showing the throughput of the Server. The
horizontal axis displays the time segments. The vertical axis measures the total data
transferred in each time segment for both reads and writes. For example:
The System tab displays the CPU and memory utilization for the same time period
as the main graph:
This helps the administrator identify time periods where the load on the Server is
greatest. Combined with the other reports, the specific device, client, or SAN
resource that contributes to the heavy usage can be identified.
The Data tab shows the tabular data that was used to create the graphs:
The Configuration Information tab shows which SAN Resources and Clients were
included in the report.
Storage Pool Configuration Report
This report shows detailed Storage Pool information. You can select the information
to display in each column as well as the order. This includes:
Device Name
SCSI Address
Sectors
Total (MB)
Used (MB)
Available (MB)
The following is a sample Storage Pool Configuration Report
User Quota Usage Report
This report shows a detailed description of the amount of space used by each of the
resources from the selected users on the current server. You can select the
information to display in each column, the sort order and the user on which to report
information. Report columns include:
ID
Resource Name
Type
Category
Size (MB)
The following is a sample User Quota Usage Report.
Report types - Global replication
While you can run a replication report for a single server from the Reports object,
you can also run a global report for multiple servers from the Servers object.
From the Servers object, you can also create a report for a single server, consolidate
existing reports from multiple servers, and create a template for future reports.
Create a global replication report
1. To run a global replication report, highlight the Servers object and select
Replication Status Reports --> New.
2. When prompted, enter a date range for the report and indicate whether you want
to use a saved template to create this report or if you are going to define this
report as you go through the wizard.
3. Select which servers to include in the report.
4. Select which resources to include from each server.
Be sure to select each primary server from the drop-down box to select
resources.
5. Select what type of information you want to appear in the report and the order.
Use the up/down arrows to order the information.
6. Set the sorting criteria for the columns.
Click in the Sorting field to alternate between Ascending, Descending, or Not
Sorted. You can also use the up/down arrows to change the sorting order of the
columns.
7. Give the report a name and indicate where to save it.
You can also save the current report template for future use.
8. Review all information and click Finish to create the report.
View global report
The group replication report will open in its own window. Here you can change what
is displayed, change the sort order, export data, or print.
Because you can select more columns than can fit on a page, it is recommended that you preview a report that has many columns selected before printing it, to make sure the columns do not overlap.
Fibre Channel Target Mode
Just as CDP and NSS support different types of storage devices (such as SCSI, Fibre Channel, and iSCSI), CDP and NSS appliances are protocol-independent and support multiple outbound target protocols, including Fibre Channel Target Mode.
CDP/NSS support for the Fibre Channel protocol allows any Fibre Channel-enabled system to take advantage of FalconStor's extensive storage capabilities, such as virtualization, mirroring, replication, NPIV, and security. Support is offered for all Fibre Channel topologies, including Point-to-Point and Fabric.
This chapter provides configuration information for Fibre Channel Target Mode as well as the associated Fibre Channel SAN equipment (e.g., switch, T3, etc.). An application server can be an iSCSI Client, a Fibre Channel Client, or both. Using separate cards and switches, you can have all types of FalconStor Clients (FC or iSCSI) on your storage network.
Supported platforms
Fibre Channel target mode is supported on the following platforms:
Windows
VMware
NetWare
Linux
Solaris
HP-UX
AIX
Fibre Channel Target Mode - Configuration overview
The installation and configuration of Fibre Channel Target Mode involves several
steps. Detailed information for each step appears in subsequent sections.
1. Prepare your Fibre Channel hardware configuration.
2. Enable Fibre Channel target mode.
3. (If applicable) Set QLogic ports to target mode.
4. (Optionally) Set up your failover configuration.
5. Install and run client software and/or manually add Fibre Channel clients.
6. (Optionally) Associate World Wide Port Names (WWPN) with clients.
7. Assign virtualized resources to Fibre Channel Clients.
8. View new devices.
9. (Optionally) Install and configure DynaPath on your Client machines.
Configure Fibre Channel hardware on server
CDP and NSS support the use of QLogic HBAs for the storage server. For a list of HBAs that are currently certified, refer to the certification matrix on the FalconStor website.
Ports
Your CDP/NSS appliance is equipped with several Fibre Channel ports. The ports
that connect to storage arrays are commonly known as Initiator Ports. The ports that
will interface with the backup servers' FC initiator ports will run in a different mode
known as Target Mode.
Downstream Persistent binding
Persistent binding is automatically configured for all QLogic HBAs connected to
storage device targets upon the discovery of the device (via a Console physical
device rescan with the Discover New Devices option enabled). However, persistent
binding will not be SET until the HBA is reloaded. You can reload HBAs by restarting
CDP/NSS with the ipstor restart all command.
After the HBA has been reloaded and the persistent binding has been set, you can
change the target port ID through the console. To do this, right-click on Physical
Resources or a specific adapter and select Target Port Binding.
Important: Do not change the target-port ID from the console prior to the persistent
binding being set.
VSA
The Volume Set Addressing (VSA) option must be disabled when using a
FalconStor Management Console version later than 6.0 to set up a near-line mirror
on a version 6.0 server. This also applies if you are setting up a near-line mirror from
a version 6.0 server to a later server.
Some storage devices (such as EMC Symmetrix storage controllers and older HP storage) use VSA (Volume Set Addressing) mode. This addressing method is used primarily for addressing virtual buses, targets, and LUNs.
CDP/NSS supports up to 4096 LUN assignments per VSA client when VSA is enabled.
For upstream, you can set VSA for the client at the time of creation or you can modify the setting after creation by right-clicking on the client.
When VSA is enabled and the actual LUN numbers go beyond 256, use the Report LUN option to discover them. Use the LUN range option only if Report LUN does not work for the adapter.
If new devices are assigned (from the storage server) to a VSA-enabled storage server before the CDP/NSS storage server is loaded, the newly assigned devices will not be discovered during startup. A manual rescan is required.
Zoning
Two types of zoning can be configured on each switch: hard zoning (based on port
#) and soft zoning (based on WWPNs).
Soft zoning is implemented in software and uses the WWPN in the configuration. By using filtering implemented in Fibre Channel switches, ports cannot be seen from outside of their assigned zones. The WWPN remains the same in the zoning configuration regardless of the port location. If a port fails, you can simply move the cable from the failed port to another valid port without having to reconfigure the zoning.
CDP/NSS requires isolated zoning where one initiator is zoned to one target in order
to minimize I/O interruptions by non-related FC activities, such as port login/out,
reset, etc. With isolated zoning, each zone can contain no more than two ports or
two WWPNs. This applies to both initiator zones (storage) and target zones (clients).
For example, for the case of upstream (to client) zoning, if there are two client
initiators and two CDP/NSS targets on the same FC fabric and if it is desirable for all
four path combinations to be established, you should use four specific zones, one
for each path (Client_Init1/IPStor_Tgt1, Client_Init1/IPStor_Tgt2, Client_Init2/
IPStor_Tgt1, and Client_Init2/IPStor_Tgt2). You cannot create a single zone that
includes all four ports. The four-zone method is cleaner because it does not allow
the two client initiators nor the two CDP/NSS target ports to see each other. This
eliminates all of the potential issues such as initiators trying to log in to each other
under certain conditions.
The same should be done for downstream (to storage) zoning. If there are two CDP/
NSS initiators and two storage targets on the same fabric, there should be four
zones (IPStor_Init1/Storage_Tgt1, IPStor_Init1/Storage_Tgt2, IPStor_Init2/
Storage_Tgt1, and IPStor_Init2/Storage_Tgt2).
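The following is a minimal sketch of how the four isolated upstream zones from the example above might be defined on a Brocade Fabric OS switch. The WWPNs, zone set name, and the choice of switch vendor are illustrative assumptions only; substitute the WWPNs of your own client initiators and CDP/NSS target ports, and use the equivalent commands for your switch.
zonecreate "Client_Init1_IPStor_Tgt1", "21:00:00:e0:8b:11:11:01; 21:01:00:e0:8b:aa:aa:01"
zonecreate "Client_Init1_IPStor_Tgt2", "21:00:00:e0:8b:11:11:01; 21:01:00:e0:8b:aa:aa:02"
zonecreate "Client_Init2_IPStor_Tgt1", "21:00:00:e0:8b:22:22:01; 21:01:00:e0:8b:aa:aa:01"
zonecreate "Client_Init2_IPStor_Tgt2", "21:00:00:e0:8b:22:22:01; 21:01:00:e0:8b:aa:aa:02"
cfgcreate "IPStor_cfg", "Client_Init1_IPStor_Tgt1; Client_Init1_IPStor_Tgt2; Client_Init2_IPStor_Tgt1; Client_Init2_IPStor_Tgt2"
cfgenable "IPStor_cfg"
Note that each zone contains exactly two WWPNs (one client initiator and one CDP/NSS target port), which satisfies the isolated zoning requirement described above. The same pattern applies to the downstream zones between the CDP/NSS initiators and the storage targets.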
Make sure that storage devices are not zoned directly to the clients. Instead, since CDP/NSS will be provisioning the storage to the clients, the target ports of the storage devices should be zoned to the CDP/NSS initiator ports while the clients are zoned to the CDP/NSS target ports. Make sure that, from the storage unit's management GUI (such as SANtricity or NaviSphere), the LUNs are reassigned to the storage server as the host. CDP/NSS will either virtualize these LUNs (if they are newly created without existing data) or service-enable them (which preserves existing data). CDP/NSS can then define SAN resources out of these LUNs and further provision them to the clients as Service-Enabled Devices.
Switches
For the best performance, if you are using 4 or 8 Gb switches, all of your cards should be 4 or 8 Gb cards; for example, the QLogic 2432 or 2462 4 Gb cards. Check the certification matrix on the FalconStor website for a complete list of certified cards.
NPIV (point-to-point) mode is enabled by default. Therefore, all Fibre Channel
switches must support NPIV.
QLogic HBAs
Target mode settings
The table below lists the recommended settings for QLogic HBA target mode. These values are set in the fshba.conf file and will override those set through the BIOS settings of the HBA.
For initiators, please consult the best practice guideline as published by the storage
subsystem vendor. If an initiator is to be used by multiple storage brands, the best
practice is to select a setting that best satisfies both brands. If this is not possible,
consult FalconStor technical support for advice, or separate the conflicting storage
units to their own initiator connections.
Name                           Default              Recommendation
frame_size                     2 (2048 bytes)       2 (2048 bytes)
loop_reset_delay
adapter_hard_loop_id                                0, but set to 1 if using arbitrated loop topology
connection_option              1 (point to point)   1 (point to point), but set to 0 if using arbitrated loop topology
hard_loop_id                   0-124                Make sure that both the primary target adapter and the secondary standby adapter (the failover pair) are set to the SAME value.
fibre_channel_tape_support     0 (disable)          0 (disable)
data_rate                      2 (auto)             Based on the switch capability; should be modified to either 0 (1 Gb), 1 (2 Gb), 2 (auto), or 3 (4 Gb)
execution_throttle             255                  255
LUNs_per_target                256                  256
enable_lip_reset               1 (enable)           1 (enable)
enable_lip_full_login          1 (enable)           1 (enable)
enable_target_reset            1 (enable)           1 (enable)
login_retry_count
port_down_retry_count
link_down_timeout              45                   45
extended_error_logging_flag    0 (no logging)       0 (no logging)
interrupt_delay_timer
iocb_allocation                512                  512
enable_64bit_addressing        0 (disable)          0 (disable)
fibrechannelconfirm            0 (disable)          0 (disable)
class2service                  0 (disable)          0 (disable)
acko                           0 (disable)          0 (disable)
responsetimer                  0 (disable)          0 (disable)
fastpost                       0 (disable)          0 (disable)
driverloadrisccode             1 (enable)           1 (enable)
ql2xmaxqdepth                  255                  255 (configurable via the console)
max_srbs                       4096                 4096
ql2xfailover
ql2xlogintimeout               20 seconds           20 seconds
ql2xretrycount                 20                   20
ql2xsuspendcount               10                   10
ql2xdevflag
ql2xplogiabsentdevice          0 (no PLOGI)         0 (no PLOGI)
busbusytimeout                 60 seconds           60 seconds
displayconfig
retry_gnnft                    10                   10
recoverytime                   10 seconds           10 seconds
failbacktime                   5 seconds            5 seconds
bind                           0 (by Port Name)     0 (by Port Name)
qfull_retry_count              16                   16
qfull_retry_delay
ql2xloopupwait                 10                   10
Configure Fibre Channel hardware on clients
Persistent binding
You should use persistent binding for all clients to all QLogic targets.
Fabric topology
(For all clients except Solaris SPARC clients) When setting up clients on a Fibre Channel network using a Fabric topology, we recommend that you set the topology that each HBA will use to log into your switch to Point-to-Point Only.
If you are using a QLogic HBA, the topology is set through the QLogic BIOS:
Configure Settings --> Extended Firmware Settings --> Connection Option: Point-to-Point Only
Note: For QLogic HBAs, it is recommended that you hard-code the link speed of the HBA to be in line with the switch speed.
NetWare clients
Built into the latest QLogic driver is the ability to handle failover. HBA settings are
configured through nwconfig. Do the following after installing the card:
1. Type nwconfig.
2. Go to Driver Options and select Config disk and Storage device drivers.
3. Select an Additional Driver and type the path for the updated driver (e.g., sys:\qlogic).
4. Set the following parameters:
Scan All Luns = yes
FailBack Enabled = yes
Read configuration = yes
Requires configuration = no
Report all paths = yes
Use Portnames = no
Qualified Inquiry = no
Report Lun Zero = yes
GNFT SNS Query = no
Console Alerts = no
Solaris clients
For persistent binding on Solaris clients using a QLogic QLA HBA driver, follow the steps below. (If you are using the Sun QLC driver, no configuration steps are necessary for persistent binding.)
1. Statically assign the target's WWPN by editing the QLogic driver configuration file located in /kernel/drv/qla2200.conf or /kernel/drv/qla2300.conf, depending on the version of the card.
For example, if the target WWPN is 210000e08b04f136, you need to add the following to /kernel/drv/qla2200.conf:
hba0-SCSI-target-id-0-fibre-channel-name="200000e08b04f136"
2. Edit the SCSI disk driver configuration file located in /kernel/drv/sd.conf to add the LUN number of the device that you are assigning, in order for the device to be seen.
For example, if you have a device with LUN 1, you would need to add the following to /kernel/drv/sd.conf:
name="sd" class="scsi" target=0 lun=1;
If you add the above lines for LUN 1 to 8, devices from LUN 1 to LUN 8 will be
scanned when the card loads and any assigned devices will be found. Each time
you add a new device from LUN 0 to 8, run devfsadm -i sd instead of editing
the file /kernel/drv/sd.conf and the devices will be found.
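As a sketch of what this looks like, the entries for LUNs 1 through 8 simply repeat the pattern shown above (the target number 0 is carried over from the earlier example and may differ in your configuration):
name="sd" class="scsi" target=0 lun=1;
name="sd" class="scsi" target=0 lun=2;
name="sd" class="scsi" target=0 lun=3;
name="sd" class="scsi" target=0 lun=4;
name="sd" class="scsi" target=0 lun=5;
name="sd" class="scsi" target=0 lun=6;
name="sd" class="scsi" target=0 lun=7;
name="sd" class="scsi" target=0 lun=8;
After the devices have been assigned, the rescan can then be triggered without editing sd.conf again:
# devfsadm -i sd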
3. Reboot the client machine.
Solaris Internal Fibre Channel drives
Some newer Sun machines, such as the SunBlade 1000, come with internal Fibre
Channel drives instead of SCSI drives. These drives have a qlc driver that is not
compatible with CDP/NSS. The following instructions explain how to migrate a
system boot disk from the qlc driver to the qla2x00fs driver.
Note: Before attempting to migrate from the qlc driver to the qla2200 driver, make
sure the system disk is archived.
Determine the boot device
1. At an Openboot prompt, determine your boot device by typing:
ok devalias
You will see information similar to the following appear on your screen:
.
.
disk     /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@1,0
disk1    /pci@8,600000/SUNW,qlc@4/fp@0,0/disk@2,0
.
The lines you are looking for should all say SUNW,qlc, /fp, and /disk.
2. Select the alias that represents the system boot device (disk by default) and
write down the device path.
3. Boot off of the disk.
4. Determine the essential devices by typing:
# df
You will see information similar to the following appear on your screen:
/          (/dev/dsk/c3t1d0s0 ): 11165506 blocks   738360 files
.
.
.
5. Note the root device path, which in this example is: /dev/dsk/c3t1d0s0.
Prepare the primary system to boot using the qla2200 driver
1. Backup the following files:
/etc/driver_aliases
/etc/driver_classes
/etc/name_to_major
/etc/path_to_inst
/etc/system
/etc/vfstab
and write down the full symbolic links for the root device, such as:
/dev/dsk/c3t1d0s0 -> /devices/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100002037e2dc62,0:a
/dev/rdsk/c3t1d0s0 -> /devices/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100002037e2dc62,0:a,raw
Note: In case of a failure you will need this information to restore the system.
2. Install CDP/NSS with the Fibre Channel option to the system if it is not installed.
3. Enter single user mode via init 1.
4. Copy:
/usr/local/ipstor/drv/qla2200 to /kernel/drv/qla2200
and
/usr/local/ipstor/drv/sparcv9/qla2200 to /kernel/drv/sparcv9/qla2200
5. Edit /etc/driver_aliases and replace the following line:
qlc pci1077,2200
with:
qla2200 pci1077,2200
6. Edit /etc/path_to_inst and replace every qlc with qla2200
For example, replace:
/pci@8,600000/SUNW,qlc@4 2 qlc
with
/pci@8,600000/SUNW,qlc@4 2 qla2200
7. In /etc/path_to_inst you need to add an sd instance to the root device node.
For example, if your boot device is:
/pci@8,600000/SUNW,qlc@4/fp@0,0 /disk@1,0
You would create an instance:
/pci@8,600000/SUNW,qlc@4/sd@1,0 61 sd
where 61 is the instance number.
Note: Make sure 61 is not in use by any other sd instance. If it is, choose another number that is not in use by sd. Write down this instance number.
8. In /etc/driver_classes comment out the qlc line, such as:
#qlc fibre_channel
9. In /etc/system add rootdev:<new root device>. For example:
rootdev:/pci@8,600000/SUNW,qlc@4/sd@1,0:a
CDP/NSS Administration Guide
158
Fibre Channel Target Mode
The rootdev is the instance you added to the /etc/path_to_inst file in step B7.
10. Obtain the major number for the sd device by viewing /etc/name_to_major.
The sd device is usually major number 32.
11. Figure out the minor number for the root device node by using the formula:
([instance# * 8] + disk slice#)
Where the instance# is the number you picked in step B7 and the slice# is the
s number you got from step A4.
For example, if the instance# is 61 and the slice# is 0 (/dev/dsk/c3t1d0s0), the minor number will be 488 ([61 * 8] + 0).
12. With the major and minor number, you can now make a device node for the
system. For example:
# cd /devices/pci@8,600000/SUNW,qlc@4/
# mknod sd@1,0:a b 32 488
# mknod sd@1,0:a,raw c 32 488
13. Make links between the device name and the device node.
For example, if your boot device is denoted as /dev/dsk/c3t1d0s0 on the primary
disk, delete the link and re-link it to the newly created device node. Write down
the existing link before you remove it, because you will need to use it to revert
back when failure occurs.
# rm /dev/dsk/c3t1d0s0
# ln -s /devices/pci@8,600000/SUNW,qlc@4/sd@1,0:a /dev/dsk/c3t1d0s0
Do not forget the raw device nodes:
# rm /dev/rdsk/c3t1d0s0
# ln -s /devices/pci@8,600000/SUNW,qlc@4/sd@1,0:a,raw /dev/rdsk/c3t1d0s0
Note: Make sure you cover all of the devices in /etc/vfstab or you will be
forced into maintenance mode.
14. Reboot the system and reconfigure devices. For example:
# reboot -- -r
HBA failover settings for FC client connectivity
This section provides recommended settings for sustaining failover for various
clients that are connected to CDP/NSS.
For QLogic HBAs, you can modify the BIOS settings using the SANsurfer tool. For
Emulex HBAs, FalconStor supports using the miniport drivers. We do not support
FC port drivers.
For all HBAs that support persistent binding, persistent binding should be
configured. Check with the HBA vendor for persistent binding procedures.
We recommend that you reload the driver (reboot) in order for changes to be made
effective for most operating systems, such as Windows, Linux, and Solaris. It is not
necessary to reboot AIX clients since there are no BIOS settings that need to be
configured. For HP-UX, you will not be required to reboot unless you are using an
Emulex HBA since you will need to recompile the kernel.
Below are charts for different types of HBAs for different types of clients. These
settings apply for cluster and non-cluster environments unless specified. Refer to
the certification matrix on the FalconStor.com website for an up-to-date list of
supported platforms.
Windows
HBA Card Type   With DynaPath                   Without DynaPath
QLogic          Login Retry Count = 8           Login Retry Count = 255
                Port Down Retry Count = 8       Port Down Retry Count = 255
                Link Down Count = 30            Link Down Count = 30
                Enable Target Reset = True      Enable Target Reset = True
                FrameSize = 2048                FrameSize = 2048
                Disk Timeout Value = 60         Disk Timeout Value = 255
                Execution Throttle = 256        Execution Throttle = 256
                LUNS per target = 256           LUNS per target = 256
                Tape mode = Disable             Tape mode = Disable
Emulex          Node Timeout = 30               Node Timeout = 250
                Link Timeout = 30               Link Timeout = 250
                Disk Timeout Value = N/A        Disk Timeout Value = 300
                Reset FF = 1 (true)             Reset FF = 1 (true)
The Disk Timeout Value is a value that needs to be modified at the operating system
level. To enter the disk timeout value:
1. Go to Start --> Run.
2. Enter regedit and Enter to open the Windows Registry Editor.
3. From the directory tree, go to HKEY_LOCAL_MACHINE --> SYSTEM -->
CURRENTCONTROLSET --> SERVICES --> DISK.
4. In the Disk folder, right-click and select New --> DWORD Value.
5. Rename from New Value #1 to TimeOutValue.
6. Right-click TimeOutValue and select Modify.
7. Enter the correct value setting based on the above table, select Decimal and
click OK.
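If you prefer to make this change from a command prompt rather than through the Registry Editor, a command such as the following accomplishes the same thing; the value 60 shown here is the with-DynaPath setting from the table above, so substitute the value that applies to your configuration:
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f
As with the Registry Editor method, reboot the client so that the new timeout takes effect.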
LUNS per target
The LUNs per target value should now be set to 64. This value can be set to 256 because Report LUN is used upstream; however, the appropriate setting depends on your requirements and the number of LUNs.
Clustering
For all Windows clustering and non-clustering configurations, DynaPath should be
installed in order to sustain failover.
In the case where a Windows clustering/Emulex HBA combination is used, the
Reset ff value should be set to true or 1.
HP-UX 10, 11, and 11i
HBA Card Type   With DynaPath       Without DynaPath
Tachyon         scsi timeout = 30   scsi timeout = 30
In HP-UX, PVLink is used for multipathing. The scsi_timeout (also known as pv_timeout) value can be modified using the following command:
pvchange -t 180 /dev/dsk/c0t0d0
Note that this command must be executed for each device.
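As a minimal sketch of applying the timeout to every path in a volume group from an HP-UX shell (the volume group name /dev/vg01 is an assumption; substitute your own), something like the following can be used:
# set the I/O timeout on each physical volume path in the volume group
for pv in $(vgdisplay -v /dev/vg01 | awk '/PV Name/ {print $3}')
do
    pvchange -t 180 "$pv"
done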
The following procedures should be used to replace PVLink with DynaPath:
1. Unmount the file system that the volume group is mounted to:
umount /dev/<VGname>/<logical vol name>
2. Vary off the volume group that currently has PVLinks set up:
vgchange -a n <VGname>
3. Configure and consolidate all paths with DynaPath.
dynapath start - to start DynaPath driver and daemon
dpcli setup - create DynaPath configuration file
dpcli start - start DynaPath
dpcli status - view device and paths status
Refer to your DynaPath User Guide for more details.
4. Automatically detect which volume group is on the physical disk:
vgscan -v
5. Vary on volume group name:
vgchange -a y <VGname>
6. Mount the filesystem on the volume group.
For Tachyon HBAs, you must use port swapping scripts for special switches, such
as the Brocade 3900 / 12000 with firmware 4.1.2b. Cisco switches can detect the
port change automatically so there is no need to use port swapping scripts with
Cisco switches.
AIX 4.3 and higher
HBA Card Type   With DynaPath         Without DynaPath
IBM             Retry Timeout = 300   Retry Timeout = 30
Emulex          Retry Timeout = 300   Retry Timeout = 30
Cambex          Retry Timeout = 300   Retry Timeout = 30
There are no BIOS or OS level changes that can be made for AIX. As indicated, AIX
will hold onto the I/O for 30 seconds without DynaPath. With DynaPath, the I/O will
hold for 300 seconds (5 minutes).
In AIX DynaPath, there are certain configurations that do not need to support the special failover rescan logic. This includes using switches that support port swapping (certain Cisco and Brocade switches), using HBA drivers that support dynamic port tracking (e.g., the Emulex LPFC driver or the Cambex QLogic HBA driver), and using versions of AIX (e.g., 5.2 or 5.3) that support the dynamic port tracking option.
Linux all versions
HBA Card Type   With DynaPath                   Without DynaPath
QLogic          Login Retry Count = 8           Login Retry Count = 255
                Port Down Retry Count = 8       Port Down Retry Count = 255
                Link Down Count = 30            Link Down Count = 30
                Enable Target Reset = True      Enable Target Reset = True
                FrameSize = 2048                FrameSize = 2048
                Execution Throttle = 256        Execution Throttle = 256
                LUNS per target = 256           LUNS per target = 256
                Tape mode = Disable             Tape mode = Disable
Emulex          Node Timeout = 30               Node Timeout = 250
                Link Timeout = 30               Link Timeout = 250
                Disk timeout value = N/A        Disk timeout value = 300
There are no OS level modifications to be made for a Linux client.
Note: Multipath for Linux - Native Linux DM-Multipath is recommended for Linux
systems. If no version of FalconStor DynaPath exists for your Linux kernel, you
must use Linux DM-Multipath. Refer to the Linux DM-Multipath Configuration with
CDP/NSS Best Practice Guide, available on the FalconStor TSFTP site.
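As a minimal sketch only (assuming a Red Hat-style distribution of that era; the FalconStor-specific multipath.conf device settings are documented in the Best Practice Guide referenced above), enabling native DM-Multipath on a Linux client looks like this:
# install and enable the native multipath daemon
yum install device-mapper-multipath
service multipathd start
chkconfig multipathd on
# verify that the CDP/NSS virtual devices have been grouped into multipath maps
multipath -ll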
Solaris 9
HBA Card Type   With DynaPath                   Without DynaPath
QLogic          Login Retry Count = 8           Login Retry Count = 255
                Port Down Retry Count = 8       Port Down Retry Count = 255
                Link Down Count = 30            Link Down Count = 30
                Enable Target Reset = True      Enable Target Reset = True
                FrameSize = 2048                FrameSize = 2048
                Execution Throttle = 256        Execution Throttle = 256
                LUNS per target = 256           LUNS per target = 256
                Tape mode = Disable             Tape mode = Disable
Emulex          Node Timeout = 30               Node Timeout = 250
                Link Timeout = 30               Link Timeout = 250
                Disk timeout value = N/A        Disk timeout value = 300
The changes indicated above should be made in the *.conf files for the respective HBAs.
Note: For Sun (qlc) drivers, the clients will not be able to sustain failover at all if
DynaPath is not installed.
NetWare all versions
HBA Card Type   With DynaPath   Without DynaPath
QLogic          N/A             Port Down Retry Count = 30
                                Link Down Retry = 30
                                /XRetry = 60
                                /XTimeout = 120
                                /PortDown = 120
                                Set Multi-Path Support = ON
DynaPath is not required with NetWare since NetWare has its own version of multipathing.
The settings indicated above should be modified at the ql23xx driver line in the startup.ncf file. The /ALLPATHS and /PORTNAMES options are required if an upper layer module is going to handle failover (it expects to see all paths).
The Port Down Retry Count and Link Down Retry are configurable in the BIOS, whereas the /XRetry, /XTimeout, and /PortDown values are configured by the driver. The Port Down Retry Count and /PortDown values combined approximate the total disk timeout.
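As an illustrative sketch of the resulting driver line in startup.ncf (the driver file name and slot number are assumptions that depend on your hardware; only the switch values come from this guide):
LOAD QL2300.HAM SLOT=3 /ALLPATHS /PORTNAMES /XRETRY=60 /XTIMEOUT=120 /PORTDOWN=120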
Enable Fibre Channel target mode
To enable Fibre Channel Target Mode:
1. In the Console, highlight the storage server that has the FC HBAs.
2. Right-click on the Server and select Options --> Enable FC Target Mode.
An Everyone_FC client will be created under SAN Clients. This is a generic
client that you can assign to all (or some) of your SAN resources. It allows any
WWPN not already associated with a Fibre Channel client to have read/write
non-exclusive access to any SAN resources assigned to Everyone.
Disable Fibre Channel target mode
To disable Fibre Channel Target Mode:
1. Unassign all resources from the Fibre Channel client.
2. Remove the Fibre Channel client.
3. Switch all targets to initiator mode.
4. Disable FC mode by right-clicking on the Server and selecting Options -->
Disable FC Target Mode.
5. Run the ipstor stop all command to stop the server processes.
6. Power off the server.
Optional: Remove the FC cards.
7. Run the ipstor configtgt command and select q for no Fibre Channel support.
Verify the Fibre Channel WWPN
The World Wide Port Name (WWPN) must be unique for the Fibre Channel initiator,
target, and the client initiator. To verify:
Right-click on the server and select Verify FC WWPN
If duplicate WWPNs are found, a message will display advising you to check your
Fibre Channel configuration to avoid data corruption.
Set QLogic ports to target mode
By default, all QLogic point-to-point ports are set to initiator mode, which means they
will initiate requests rather than receive them. Determine which ports you want to
use in target mode and set them to become target ports so that they can receive
requests from your Fibre Channel Clients.
It is recommended that you have at least four Fibre Channel ports per server in
initiator mode, one of which is attached to your storage device.
You need to switch one of those initiators into target mode so your clients will be
able to see the storage server. You will then need to select the equivalent adapter on
the Secondary server and switch it to target mode.
Note: If a port is in initiator mode and has devices attached to it, that port cannot
be set for target mode.
To set a port:
1. In the FalconStor Management Console, expand Physical Resources.
2. Right-click on an HBA and select Options --> Enable Target Mode.
You will get a Loop Up message on your storage server if the port has
successfully been placed in target mode.
3. When done, make a note of all of your WWPNs.
It may be convenient for you to highlight your server and take a screenshot of
the Console.
Fibre Channel over Ethernet (FCoE)
NSS supports FCoE using QLogic QLE8152 and QLE8142 Converged Network Adapters (CNAs) along with the Cisco MDS 5010 FCoE switch. The storage
server detects the installed CNAs. The CNA is seen as a regular Fibre Channel
adapter with WWPN association.
QLogic NPIV HBAs
With a N_Port ID Virtualization (NPIV) HBA, each port can be both a target and an
initiator (dual mode). When using a NPIV HBA, there are two WWPNs, the base port
and the alias.
Notes:
You should not use the NPIV driver if you intend to directly connect a
target port to a client host.
With dual mode, clients will need to be zoned to the alias port (called
Target WWPN). If they are zoned to the base port, clients will not see any
devices.
You will only see the alias port when that port is in target mode.
NPIV allows multiple N_Port IDs to share a single physical N_Port. This
allows us to have an initiator, target and standby occupying the same
physical port. This type of configuration is not supported when not using
NPIV.
As a failover setup best practice, it is recommended that you do not put
more than one standby WWPN on a single physical port.
Set NPIV ports to target mode
Each NPIV port can be both a target and an initiator. To use target mode, you must
enable target mode on a port.
In order to use target mode, the port needs to be in NPIV mode. This was set automatically for you when you loaded the driver (./ipstor configtgt, then select qlogicnpiv).
To set target mode:
1. In the Console, expand Physical Resources.
2. Right-click on an NPIV HBA and select Enable Target Mode.
3. Click OK to enable.
You will see two WWPNs listed for the port.
Set up your failover configuration
If you will be using the FalconStor Failover option, and you have followed all of the
steps in this Fibre Channel Target Mode section, you are now ready to launch the
Failover Setup Wizard and begin configuration. Refer to The Failover Option for
more details.
HBAs and failover
Asymmetric failover modes are supported with QLogic HBAs.
Failover with multiple switches
When setting up Fibre Channel failover using multiple Fibre Channel switches, we recommend the following:
If the multiple switches are connected via a Fibre Channel port that acts as a management port for both switches, the primary storage server's Target Port and the secondary storage server's Standby Port can be on different switches.
If the switches are not connected, or if they are not "smart" switches that can be managed, the primary storage server's Target Port and the secondary storage server's Standby Port must be on the same switch.
Failover limitations
When using failover in Fibre Channel environments, it is recommended that you use the same type of Fibre Channel HBAs for all CDP/NSS client hosts.
Install and run client software and/or manually add Fibre Channel
clients
Client software is only required for Fibre Channel clients running a FalconStor
Snapshot Agent or for clients using multiple protocols.
If you do not install the Client software, you must manually add the Client in the
Console. To do this:
1. In the Console, right-click on SAN Clients and select Add.
2. Select Fibre Channel as the Client protocol.
3. Select WWPN initiators. See Associate World Wide Port Names (WWPN) with
clients.
4. Select Volume Set Addressing.
Volume Set Addressing is used primarily for addressing virtual buses, targets,
and LUNs. If your storage device uses VSA, you must enable it. Note that
Volume Set Addressing is selected by default for HP-UX clients.
5. Enter a name for the SAN Client, select the operating system, and indicate whether or not the client machine is part of a cluster.
If the client's machine name is not resolvable, you can enter an IP address and then click Find to discover the machine.
6. Indicate if you want to enable persistent reservation.
This option allows clustered SAN Clients to take advantage of Persistent
Reserve/Release to control disk access between various cluster nodes.
Note: If you are using AIX SAN Client cluster nodes, this option should be
cleared.
7. Confirm all information and click Finish to add this client.
Associate World Wide Port Names (WWPN) with clients
Similar to an IP address, the WWPN uniquely identifies a port in a Fibre Channel
environment. Unlike an IP address, the WWPN is vendor-assigned and is hard-coded and embedded.
Depending upon whether or not you are using a switched Fibre Channel
environment, determining the WWPN for each port may be difficult.
If you are using a switched Fibre Channel environment, CDP/NSS will query
the switch for its Simple Name Server (SNS) database and will display a list
of all available WWPNs. You will still have to identify which WWPN is
associated with each machine.
If you are not using a switched Fibre Channel environment, you can
manually determine the WWPN for each of your ports. There are different
ways to determine it, depending upon the hardware vendor. You may be
able to get the WWPN from the BIOS during bootup or you may have to
read it from the physical card. Check with your hardware vendor for their
preferred method.
To simplify this process, when you enabled Fibre Channel, an Everyone client was
created under SAN Clients. This is a generic client that you can assign to all (or
some) of your SAN resources. It allows any WWPN not already associated with a
Fibre Channel client to have read/write non-exclusive access to any SAN resources
assigned to Everyone.
For security purposes, you may want to assign specific WWPNs to specific clients.
For the rest, you can use the Everyone client.
Do the following for each client for which you want to assign specific virtual devices:
1. Highlight the Fibre Channel Client in the FalconStor Management Console.
2. Right-click on the Client and select Properties.
3. Select the Initiator WWPN(s) belonging to your client.
Here are some methods to determine the WWPN of your clients:
- Most Fibre Channel switches allow administration of the switch through an
Ethernet port. These administration applications have utilities to reveal or allow
you to change the following: Configuration of each port on the switch, zoning
configurations, the WWPNs of connected Fibre Channel cards, and the current
status of each connection. You can use this utility to view the WWPN of each
Client connected to the switch.
- When starting up your Client, there is usually a point at which you can access
the BIOS of your Fibre Channel card. The WWPN can be found there.
- The first time a new Client connects to the storage server, the following
message appears on the server screen:
FSQLtgt: New Client WWPN Found: 21 00 00 e0 8b 43 23 52
4. If necessary, click Add to add WWPNs for the client.
You will see a dialog if there are no WWPNs in the server's list. This could occur because the client machines were not turned on or because all WWPNs were previously associated with clients.
Assign virtualized resources to Fibre Channel Clients
For security purposes, you can assign specific SAN resources to specific clients. For
the rest, you can use the Everyone client. This is a generic client that you can assign
to all (or some) of your SAN resources. It allows any WWPN not already associated
with a Fibre Channel client to have read/write non-exclusive access to any SAN
resources assigned to Everyone.
To assign resources, right-click on a specific client or on the Everyone client and
select Assign.
If a client has multiple ports and you are using Multipath software (such as
DynaPath), after you select the virtual device, you will be asked to enter the WWPN
mapping. This WWPN mapping is similar to Fibre Channel zoning and allows you to
provide multiple paths to the storage server to limit a potential point of network
failure.
You can select how the client will see the virtual device in the following ways:
One to One - Limits visibility to a single pair of WWPNs. You will need to select the client's Fibre Channel initiator WWPN and the server's Fibre Channel target WWPN.
One to All - You will need to select the client's Fibre Channel initiator WWPN.
All to One - You will need to select the server's Fibre Channel target WWPN.
All to All - Creates multiple data paths. If ports are ever added to the client or server, they will automatically be included in the WWPN mapping.
NetWare clients
After you assign a WWPN to the client and assign storage, you will be able to configure it in several ways depending upon your version of NetWare.
NetWare version 5.x
1. Type nwconfig.
This takes you to the configuration screen.
2. Select Disk Options.
3. Scan for additional devices.
4. Modify disk partitions and Hot Fix.
5. Choose the Falcon IPStor Disk.
6. Initialize the partition table.
7. Create a NetWare disk partition.
NetWare version 6.x - web portal
1. Select Disk Partitions.
2. Select Initialize Partition Table (next to the Falcon IPStor Disk).
3. Click Yes when warned about erasing the disk.
4. Select Create.
You will see either a Traditional File System or an NSS volume to create.
5. Enter information for either a Traditional Volume or for an NSS volume.
Once you click Create, the volume will be created and mounted automatically.
NetWare version 5.x or 6.x - ConsoleOne
1. Go to \\public\mgmt\console1\1.2\bin\ConsoleOne.exe.
2. Right-click on the NDS context and select Disk Management.
3. Select to initialize the new storage.
If you don't see the new storage, you may have to type scan all at the command line.
4. Choose either NSS Logical Volumes or Traditional Volumes.
5. Select New and follow the screens to create a NSS volume and pool or a
Traditional volume.
View new devices
In order to see the new devices, after you have finished configuring your Fibre
Channel Clients, you will need to trigger a device rescan or reboot the Client
machine, depending upon the requirements of the operating system.
Install and configure DynaPath on your Client machines
During failover, the storage server is temporarily unavailable. Since the failover
process can take a minute or so, the Clients need to keep attempting to connect so
that when the Server becomes available they can continue normal operations. One
way of ensuring that the Clients will retry the connection is to use FalconStor's
DynaPath Agent. DynaPath is a load-balancing/path-redundancy application that
manages multiple pathways from your Client to the switch that is connected to your
storage servers. Should one path fail, DynaPath will tap the other path for all I/O
operations.
If you are not using the DynaPath Agent, you may be able to use other third-party
multi-pathing software or you may be able to configure your HBA driver to perform
the retries. We recommend that the Clients retry the connection for a minimum of
two minutes.
If you are using DynaPath, it should be installed on each Fibre Channel Client that
will be part of your failover configuration. Refer to your DynaPath User Guide for
more details.
Spoofing an HBA WWPN
Your FalconStor software contains a unique feature that can spoof initiator port
WWPNs. This feature can be used to pre-configure HBAs, making the process of
rebuilding a server simpler and less time consuming. This feature can also be useful
when migrating from an existing server to a new one.
This feature can also create a potential problem if not used carefully. If the old HBA is somehow connected back to the same FC fabric, the result will be two HBAs with the same WWPN, which can cause a fabric outage. It is strongly recommended that you take the following measures to minimize the chance of a WWPN conflict:
1. Physically destroy the old HBA if it was replaced because it was defective.
2. Use your HBA vendor's tool to reprogram and swap the WWPN of the two HBAs.
3. Avoid spoofing. This can be done if you plan extra time for the zoning change.
Notes:
Each HBA port must be spoofed to a unique WWPN.
Spoofing and un-spoofing are disabled after failover is configured. You
must spoof HBAs and enable target mode before setting up Fibre
Channel failover.
Spoofing can only be performed when QLogic HBAs are in initiator mode.
After a QLogic HBA has been spoofed, and the HBA driver is restarted,
the HBA can then be changed to target mode and have resources
assigned through it.
Since most switch software applications use an "Alias" to represent a WWPN, you only need to change the WWPN of the Alias and all the zones will be preserved.
To configure HBAs for spoofing:
1. In the FalconStor Management Console, right-click on a specific adapter and
select Spoof WWPN.
2. Enter the desired WWPN for the HBA and click OK.
3. Repeat steps 1-2 for each HBA that needs to be spoofed and exit the Console.
4. Reload the HBA driver by typing:
ipstor restart all
5. Log back into your storage server from the console.
You will notice the WWPN of the initiator port now has the spoofed WWPN.
6. If desired, switch the spoofed HBA to target mode.
SAN Clients
Storage Area Network (SAN) Clients are the file and application servers that access
SAN resources. Since SAN resources appear as locally attached SCSI devices, the
applications, such as file services, databases, web and email servers, do not need
to be modified to utilize the storage.
On the other hand, since the storage is not locally attached, there is some
configuration needed to locate and mount the required storage.
Add a client from the FalconStor Management Console
1. In the console, right-click on SAN Clients and select Add.
2. Enter a name for the SAN Client, select the operating system, and indicate
whether or not the client machine is part of a cluster.
If the client's machine name is not resolvable, you can enter an IP address and then click Find to discover the machine.
3. Determine if you want to limit the amount of space that can be automatically
assigned to this client.
The quota represents the total allowable space that can be allocated for all of the
resources associated with this client. It is only used to restrict certain types of
resources (such as Snapshot Resource and CDP Resource) that expand
automatically. This prevents them from allocating storage space indefinitely.
Instead, they can only expand if the total size of all the resources associated with
the client does not exceed the pre-defined quota for that client.
4. Indicate if you want to enable persistent reservation.
This option allows clustered SAN Clients to take advantage of Persistent
Reserve/Release to control disk access between various cluster nodes.
Note: If you are using AIX SAN Client cluster nodes, this option should be
cleared.
5. Select the client's protocol(s).
If you select iSCSI, you must indicate if this is a mobile client. You will then be
asked to select the initiator that this client uses and add/select users who can
authenticate for this client. Refer to Add iSCSI clients for more information.
If you select Fibre Channel, you will have to select WWPN initiators. You will
then be asked to select Volume Set Addressing. Refer to Add Fibre Channel
clients for more information.
6. Confirm all information and click Finish to add this client.
Add a client for FalconStor host applications
If you are using FalconStor client/agent software, such as snapshot agents, or
HyperTrac, refer to the FalconStor Intelligent Management Agent (IMA) User Guide
or the appropriate agent user guide for details regarding adding clients via
FalconStor Intelligent Management Agent (IMA).
FalconStor client/agent software allows you to add a storage server directly in IMA/
SDM or the SAN Client.
For example, if you are using HyperTrac, the first time you start HyperTrac, the system scans and imports all storage servers identified by IMA/SDM or the SAN Client. These storage servers are then listed in the HyperTrac console.
Alternatively, you can add a storage server directly in IMA/SDM or the SAN Client.
Security
CDP/NSS utilizes strict authorization policies to ensure proper access to storage
resources on the FalconStor storage network. Since applications and storage
resources are now separated, and it is possible to transmit storage traffic over a
non-dedicated network, extra measures have been taken to ensure that data is only
accessible to those authorized to use it.
To accomplish this, CDP/NSS safeguards the areas of potential vulnerability:
System management - allowing only authorized administrators to modify the configuration of the CDP/NSS storage system.
Data access - authenticating and authorizing the Clients who access the storage resources.
System management
CDP/NSS protects your system by ensuring that only the proper administrators have access to the system's configuration. This means that the administrator's user name and password are always verified against those defined on the storage server before access to the configuration is granted.
While the server verifies the administrator's login, the root user is the only one who can add or delete IPStor administrators. The root user can also change other administrators' passwords and has privileges to the operating system. Therefore, the server's root user is the key to protecting your server and the root user password should be closely guarded. It should never be revealed to other administrators.
As a best practice, IPStor administrator accounts should be limited to trusted administrators who can safely modify the server configuration. Improper modifications of the server configuration can result in lost data if SAN resources are deleted or modified.
Data access
Just as CDP/NSS protects your system configuration by verifying each administrator at login, CDP/NSS protects storage resources by ensuring that only the proper computer systems have access to the system's resources.
For access by application servers, two things must happen: authentication and authorization.
Authentication is the process of establishing the credentials of a Client and creating
a trusted relationship (shared-secret) between the client and server. This prevents
other computers from masquerading as the Client and accessing the storage.
Authentication occurs once per Client-to-Server relationship and occurs the first time
a server is successfully added to a client. Subsequent access to a server from a
client uses the authenticated shared secret to verify the client. Credentials do not
need to be re-established unless the software is re-installed. The authentication
process uses the authenticated Diffie-Hellman protocol. The password is never
transmitted through the network, not even in encrypted form, in order to eliminate security vulnerabilities.
Authorization is the process of granting storage resources to a Client. This is done
through the console by an IPStor administrator or the server's root user. The client
will only be able to access those storage resources that have been assigned to it.
Account management
Only the root user can manage users and groups or reset passwords. You will need
to add an account for each person who will have administrative rights in CDP/NSS.
You will also need to add a user account for clients that will be accessing storage
resources from a host-based application (such as FalconStor DiskSafe or FileSafe).
To make account management easier, users can be grouped together and handled
simultaneously. To manage users and groups, right-click on the server and select
Accounts. All existing users and administrators are listed on the Users tab and all existing groups are listed on the Groups tab.
The rights of each are summarized across the following categories: Create/Delete Pools, Add/Remove Storage from Pools, Create/Modify/Delete Logical Resources, Assign Rights to IPStor Users, and Assign Storage to Clients. The administrator types are Root, IPStor Administrator, and IPStor User; IPStor Users can only modify or delete logical resources that they created.
For additional information regarding user access rights, refer to the Manage
accounts section and Manage storage pools and the devices within storage pools.
Security recommendations
In order to maintain a high level of security, a CDP/NSS installation should be
configured and used in the following manner:
Storage network topology
For optimal performance, CDP/NSS does not encrypt the actual storage data that is
transmitted between the server and clients. Encrypting and decrypting each block of
data transferred involves heavy CPU overhead for both the server and clients. Since
CDP/NSS transmits data over potentially shared network channels instead of a
computer's local bus, the storage data traffic can be exposed to monitoring by other
devices on the same network. Therefore, a separate segment should be used for
the storage network if a completely secure storage system is required. Only the
CDP/NSS clients and storage servers should be on this storage network segment.
If the configuration of your storage network does not maintain a totally separate
segment for the storage traffic, it is still possible to maintain some level of security by
using encryption or secure file systems on the host computers running the CDP/
NSS Client. In this case, data written to storage devices is encrypted, and cannot be
read unless you have the proper decryption tool. This is entirely transparent to the
CDP/NSS storage system; these tools can only be used at the CDP/NSS client as
the storage server treats the data as block storage data.
Physical security of machines
Due to the nature of computer security in general, if someone has physical access to
a server or client, the security of that machine is compromised. By compromised, we
mean that a person could copy a password, decipher CDP/NSS or system
credentials, or copy data from that computer. Therefore, we recommend that your
servers and clients be maintained in a secure computer room with limited access.
This is not necessary for the console, because the console does not leave any
shared-secret behind. Therefore, the console can be run from any machine, but that
machine should be a "safe", non-compromised machine, specifically one that you
are sure does not have a Trojan horse-like program hidden that may be monitoring
or recording key strokes. Such a program can collect your password as you type,
thereby compromising your system's security. Of course, this is a general computer
security concern which is not unique to CDP/NSS. In addition, you should be aware
that there is no easy way to detect the presence of such malicious programs, even
by using anti-virus software. Unfortunately, many people with programming
knowledge are capable of creating these types of malicious programs, which will not
have a signature that anti-virus software can identify. Therefore, you should never
type in your password, or any password, in an environment you cannot trust 100%.
Disable ports
Disable all unnecessary ports. The only ports required by CDP/NSS are shown in the Port Usage section.
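A minimal sketch of such a policy on a Linux-based storage server is shown below; the port numbers other than the standard iSCSI port are placeholders, so substitute the actual ports from the Port Usage list and validate any firewall change on a test system before applying it to production:
# illustrative host firewall: allow only the required ports, then default-deny inbound traffic
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT      # SSH for administration (placeholder)
iptables -A INPUT -p tcp --dport 3260 -j ACCEPT    # standard iSCSI target port
# ...add one ACCEPT rule for each port listed in Port Usage...
iptables -P INPUT DROP
service iptables save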
Failover
Overview
To support mission-critical computing, CDP/NSS-enabled technology provides high
availability for the entire storage network, protecting you from a wide variety of
problems, including:
Connectivity failure
Storage device path failure
Storage device failure
Storage server failure (including storage device failure)
The following illustrates a basic CDP/NSS configuration with potential points of failure and a high availability configuration, where FalconStor's high availability options work with redundant hardware to eliminate the points of failure:
The Failover Option
The FalconStor failover option provides high availability for CDP and NSS operations by eliminating the down time that can occur should a storage server (software or hardware) or a storage device fail. There are two modes of failover:
Shared storage failover - Uses a two-node failover pair to provide node level redundancy. This model requires a shared storage infrastructure and is typically Fibre Channel based.
Non-shared storage failover (Cross-mirror failover) - Provides high availability without the need for shared storage. Used with appliances containing internal storage. Mirroring is facilitated over a dedicated, direct IP connection. (Available in a Virtual Appliance environment.)
Best Practice
As a failover setup best practice, it is recommended that you do not put more than one standby WWPN on a single physical port. Both NSS/CDP nodes in a cluster configuration require the same number of physical Fibre Channel target ports to achieve best practice failover configurations.
Primary/Secondary Storage Servers
FalconStor's Primary and Secondary servers are separate, independent storage servers that each have their own assigned clients. The primary storage server is the server that is being monitored by the secondary storage server. In the event the primary fails, the secondary takes over. This is referred to as Active-Passive Failover.
The terms Primary and Secondary are purely from the clients' perspective since these servers may be configured to monitor each other. This is referred to as Mutual Failover or Failover. In that case, each server is primary to its own clients and secondary to the other's clients. Each server normally services its own clients. In the event one server fails, the other will take over and serve the failed server's clients.
Failover/Takeover
Failover/takeover is the process that occurs when the secondary server takes over the identity of the primary. In the case of cross-mirroring on a virtual appliance, failover occurs when all disks are swapped to the secondary server. Failover will occur under the following conditions:
One or more of the storage server processes goes down.
There is a network connectivity problem, such as a defective NIC or a loose network cable with which this NIC client is associated.
(Shared storage failover) There is a storage path failure.
The heartbeat cannot be retrieved.
There is a power failure.
One or more Fibre Channel targets are down.
Recovery/Failback
Recovery/Failback is the process that occurs when the secondary server releases the identity of the primary to allow the primary to restore its operation. Once control has returned to the primary server, the secondary server returns to its normal monitoring mode.
After recovering a virtual appliance cross-mirror failure, the secondary server swaps disks back to the primary server after the disks are re-synchronized.
Storage Cluster Interlink
A physical connection between two servers to mirror snapshot and SafeCache
metadata between high-availability (HA) pairs. This enables rapid failover and
reduces the time required to load snapshot and SafeCache data from the disk. Two
Ethernet ports (sci0 and sci1) are reserved for this purpose.
Sync Standby Devices
This menu option is available from the console (Failover --> Sync Standby Devices)
and is useful when the Storage Cluster Interlink connection in a failover pair is
broken. Select this option to manually synchronize the standby device information
on both servers once the Storage Cluster Interlink is reconnected.
Asymmetric mode
(Fibre Channel only) Asymmetric failover requires standby ports on the secondary server in case a target port on your primary server fails.
Swap
For virtual appliances: Swap is the process that occurs with cross-mirroring when data functions are moved from a failed virtual disk on the primary server to the mirrored virtual disk on the secondary server. The disks are swapped back once the problem is resolved.
Shared storage failover sample configuration
This diagram illustrates a shared storage failover configuration. In this example,
both servers are monitoring each other. Because both servers are actively serving
their own clients, this configuration is referred to as an active-active or mutual
failover configuration. When server A fails, server B takes over and serves the
clients of server A in addition to its own clients.
Failover requirements
The following are the requirements for setting up a failover configuration:
General failover
requirements
You must have two storage servers. The failover pair should be installed
with identical Linux operating system versions.
Version 7.0 and later requires a Storage Cluster Interlink Port for failover
setup. This is a physical connection (also used as a hidden heartbeat IP)
between two servers. If you wish to disable the Storage Cluster Interlink
heartbeat functionality, contact Technical Support.
Note: When USEQUORUMHEALTH is disabled and there are no client-associated
network interfaces, all network interfaces - including the Storage Cluster
Interlink Port - must go down before failover can occur. When the Storage
Cluster Interlink heartbeat functionality is disabled, it is no longer
treated as a heartbeat IP connection for failover.
Both servers must reside on the same network segment, because in the
event of a failover, the secondary server must be reachable by the clients of
the primary server. This network segment must have at least one other
device that generates a network ping (such as a router, switch, or server).
This allows the secondary server to detect the network in the event of a
failure.
You need to reserve an IP address for each network adapter in your primary
failover server. The IP address must be on the same subnet as the
secondary server and is used by the secondary server to monitor the
primary server's health. In a mutual failover configuration, these IP
addresses are used by the servers to monitor each other's health. The
health monitoring IP address remains with the server in the event of failure
so that the servers health can be continually monitored. Note: The storage
server clients and the console cannot use the health monitoring IP address
to connect to a server.
You must use static IP addresses for your failover configuration. It is also
recommended that the IP addresses of your servers be defined in a DNS
server so they can be resolved.
If you will be using Fibre Channel target mode or iSCSI target mode, you
must enable it on both the primary and secondary servers before creating
your failover configuration.
The first time you set up a failover configuration, the secondary server must
not have any replica resources.
You must have at least one device reserved for a virtual device on each
primary server with enough space to hold the configuration repository that
will be created. The main repository should be established on a RAID5 or
RAID1 file system for ultimate reliability.
It is strongly recommended that you use some type of power control
option for failover servers.
If you are using an external hardware power controller for your failover pair,
you should set it up before creating your failover configuration. Refer to
Power Control options for more information.
General failover
requirements
for iSCSI clients
(Windows iSCSI clients) The Microsoft iSCSI initiator has a default retry period of 60
seconds. You must change it to 300 seconds in order to sustain the disk for five
minutes during failover so that applications will not be disrupted by temporary
network problems. This setting is changed through the registry.
1. Go to Start --> Run and type regedit.
2. Find the following registry key:
HKEY_LOCAL_MACHINE\system\CurrentControlSet\control\class\4D6E97B-xxxxxxxxx\<iscsi adapter interface>\parameters\
where iscsi adapter interface corresponds to the adapter instance, such as
0000, 0001, and so on.
3. Right-click Parameters and select Export to create a backup of the parameter
values.
4. Double-click MaxRequestHoldTime.
5. Pick Decimal and change the Value data to 300.
6. Click OK.
7. Reboot Windows for the change to take effect.
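If you need to make this change on many Windows clients, a command-line equivalent can be
scripted. The following is an illustrative sketch only, run from an elevated Command Prompt
on the client; <class-GUID> and <instance> are placeholders for the key path identified in
step 2 above, and a reboot is still required afterwards:
rem Illustrative sketch: substitute the actual class GUID and adapter instance from step 2
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\<class-GUID>\<instance>\Parameters" /v MaxRequestHoldTime /t REG_DWORD /d 300 /f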
Shared storage
failover
requirements
Both servers must have at least one Network Interface Card (NIC) each (on
the same subnet). Unlike other clustering software, the heartbeat co-exists
on the same NIC as the storage network. The heartbeat does not require
and should NOT be on a dedicated heartbeat interface and subnet.
The failover pair must have connections to the same common storage; if
storage cannot be seen by both servers, it cannot be accessed from both
servers. However, the storage does not have to be represented the same
way to both servers. Each server needs at least one path to each
commonly-shared physical storage device, but there is no maximum and
they do not need to be equal (i.e., server A has two paths while server B has
four paths). Make sure to properly configure LUN masking on storage arrays
so both storage server nodes can access the same LUNs.
Storage devices must be attached in a multi-host SCSI configuration or
attached on a Fibre loop or switched fabric. In this configuration, both
servers can access the same devices at the same time (both read and
write).
(SCSI only) Termination should be enabled on each adapter, but not on the
device, in a shared bus arrangement.
If you will be using the FalconStor NIC Port Bonding option, you must set it
up before creating a failover configuration. You cannot change or remove
NIC Port Bonding once failover is set up. If you need to change NIC Port
Bonding, you will have to remove failover first.
Cross-mirror
failover
requirements
Available only for virtual appliances.
Each server must have identical internal storage.
Each server must have at least two network ports (one for the required
network cable). The network ports must be on the same subnet.
Only one dedicated cross-mirror IP address is allowed for the mirror. The IP
address must be 192.168.n.n.
Only virtual devices can be mirrored. Service Enabled Devices and system
disks cannot be mirrored.
The number of physical disks on each machine must match and the disks
must have matching ACSLs (adapter, channel, SCSI ID, LUN).
When failover occurs, both servers may have partial storage. To prevent a
possible dual mount situation, we strongly recommend that you use a
hardware power controller, such as IPMI. Refer to Power Control options
for more information.
Prior to configuration, virtual resources can exist on the primary server as
long as the identical ACSL is unassigned or unowned by the secondary
server. After configuration, pre-existing virtual resources will not have a
mirror. You will need to use the Verify & Repair option to create the mirror.
During failover, the storage server is temporarily unavailable. Since the
failover process can take a minute or so, clients need to keep attempting to
connect so that when the server becomes available they can continue
normal operations. One way of ensuring that clients will retry the connection
is to use a FalconStor multi-pathing agent, such as DynaPath. If you are not
using DynaPath because there is no corresponding Linux kernel, you may
be able to use Linux DM-Multipath or you may be able to configure your
HBA driver to perform the retries. It is recommended that clients retry the
connection for a minimum of two minutes.
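If you use Linux DM-Multipath on the client instead of DynaPath, a quick way to confirm that
multiple paths are present and healthy is shown below. This is a minimal sketch and assumes
the device-mapper-multipath package is installed and the multipathd service is running on
the client:
multipath -ll    # list the multipath maps and the state of each path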
FC-based
Asymmetric
failover
requirements
Fibre Channel target ports are not required on either server for Asymmetric
mode. However, if Fibre Channel is enabled on both servers, the primary
server MUST have at least one target port and the secondary server MUST
have a standby port. If Fibre Channel is disabled on both servers, neither
server needs to have target/standby ports.
If target ports are configured on a server, you must have at least the same
number of initiators (or aliases depending on the adapter) on the other
server.
Asymmetric failover supports the use of QLogic HBAs.
Pre-flight checklist for failover
Prior to configuring failover, follow the steps below for both the primary and
secondary NSS device:
1. Make sure all expected physical LUNs and their paths are detected properly
under the Physical Resources node in the FalconStor Management Console. If
any physical LUNs or paths are missing, rescan the appropriate adapter to
discover all expected devices and paths.
2. Make sure the configuration repository exists. If it does not exist, you can create it
using the Enable Configuration Repository option, or the Failover Setup Wizard
will prompt you to create it during configuration. Refer to Protect your storage
servers configuration for details.
3. Ensure that Service Enabled Devices have been configured for all physical
LUNs that are reserved for SED.
4. If any physical LUNs are reserved for SED, but SED devices are not yet
configured, change the property of these physical LUNs from "Reserved for
Service Enabled Device" to "Unassigned". Without this step, you will have a
device configuration mismatch in the Failover configuration wizard and will not
be able to proceed.
5. Rescan all existing devices.
6. Make sure unique Storage Cluster Interlink (SCI) IP addresses have been set for
sci0 and sci1 on each server. You can verify/modify the IP addresses from the
console by right-clicking the server and selecting System Maintenance ->
Configure Network. Refer to Network configuration for details.
7. Make sure there is a physical connection between the SCI ports on the HA
servers. One network cable should connect the two sci0 ports and another
network cable should connect the two sci1 ports.
Connectivity failure
A connectivity failure can occur due to a NIC, Fibre Channel HBA, cable, or switch/
router failure. You can eliminate potential points of failure by providing multiple paths
to the storage server with multiple NICs, HBAs, cables, switches/routers.
The client always tries to connect to the server with its original IP address (the one
that was originally set in the client when the server was added to the client). You can
re-direct traffic to an alternate adapter by specifying alternate IP addresses
for the storage server. This can be done in the console (right-click on the
server and select Properties --> Server IP Addresses tab).
When you set up multiple IP addresses, the clients will attempt to communicate with
the server using an alternate IP address if the original IP address stops responding.
Notes:
In order for failover to occur when there is a failure, the device driver
must promptly report the failure. Make sure you have the latest driver
available from the manufacturer.
In order for the clients to successfully use an alternate IP address, your
subnet must be set properly so that the subnet itself can redirect traffic to
the proper alternate adapter.
The client becomes aware of the multiple IP addresses when it initially
connects to the server. Therefore, if you add additional IP addresses in
the console while the client is running, you must rescan devices
(Windows clients) or restart the client (Unix clients) to make the client
aware of these IP addresses. In addition, if you recover from a network
path failure, you will need to restart the client so that it can use the
original IP address.
Default failover behavior
Default failover behavior is described below:
Fibre Channel
Target failure
Fibre Channel Target failure: If a Fibre Channel target port link goes down, the
partner server will immediately take over. This is true regardless of the
number of target ports on the NSS server. For example, the server can use
multiple targets to provide virtual devices to the client. If a target loses
connectivity, the client will still have alternate paths to access those devices.
However, the default behavior is to fail over. The default behavior can be
modified by Technical Support.
Network
connection
failure
Network connection failure and iSCSI clients: By default CDP/NSS server
failover will occur when a network connection goes down and that
connection is also associated with the iSCSI target of a client. If multiple
subnets are used to connect to the CDP or NSS server, the default behavior
can be modified by Technical Support so that failover will not occur until all
network connections are down.
Storage device path failure
(Shared storage failover) A storage device path failure can occur due to a cable or
switch/router failure.
You can eliminate this potential point of failure by providing a multiple path
configuration, using multiple Fibre Channel switches, and/or multiple adapters, and/or
storage devices with multiple controllers. In a multiple path configuration, all paths
to the storage devices are automatically detected. If one path fails, there is an
automatic switch to another path.
Note: Fibre Channel switches can demonstrate different behavior in a multiple
path configuration. Before using this configuration with CDP or NSS, you must
verify that the configuration can work on your server without the CDP or NSS
software. To verify:
1. Use the hardware vendor's utility or the Linux cat /proc/scsi/scsi command to
see the devices after the driver is loaded (see the sketch after these steps).
2. Use the hardware vendor's utility or the Linux hdparm command to access the
devices.
3. Unplug the cable from one device and use the utilities listed above to verify
that everything is working.
4. Repeat the test by reversing which device is unplugged and verify that
everything is still working.
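For reference, the verification commands mentioned in steps 1 and 2 can be run as follows.
This is a sketch only; /dev/sdb is an illustrative device name, so substitute the device
that corresponds to your storage:
cat /proc/scsi/scsi    # list the SCSI devices detected after the driver loads
hdparm -t /dev/sdb     # perform a timed read to confirm the device is accessible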
Storage device failure
The FalconStor Mirroring and Cross-mirror failover options provide high availability
by minimizing the down time that can occur if a physical disk fails.
With mirroring, each time data is written to a designated disk, the same data is also
written to another disk. This disk maintains an exact copy of the primary disk. In the
event that the primary disk is unable to read/write data when requested to by a SAN
Client, the data functions are seamlessly swapped to the mirrored copy disk.
Storage server failure (including storage device failure)
The FalconStor failover option provides high availability by eliminating the down time
that can occur should a CDP or NSS appliance (software or hardware) fail.
In the failover design, a storage server is configured to monitor another storage
server. In the event that the server being monitored fails to fulfill its responsibilities to
the clients it is serving, the monitoring server will seamlessly take over its identity so
the clients will transparently fail over to the monitoring server.
A unique monitoring system is used to ensure the health of the storage servers. This
system includes a self-monitor and an intelligent heartbeat monitor.
The self-monitor is part of all CDP/NSS appliances, not just the servers configured
for failover, and provides continuous health status of the server. It is part of the
process that provides operational status to any interested and authorized parties,
including the console and supported network management applications through
SNMP. The self-monitor checks all storage server processes and connectivity to the
server's storage devices.
In a failover configuration, FalconStor's intelligent heartbeat monitor continuously
monitors the primary server through the same network path that the server uses to
serve its clients.
When the heartbeat is retrieved, the results are evaluated. There are several
possibilities:
All is well and no failover is necessary.
The self-monitor detects a critical error in the IPStor Server processes that
is determined to be fatal, yet the error did not affect the network interface. In
this case, the secondary will inform the primary to release its CDP/NSS
identity and will take over serving the failed server's clients.
The self-monitor detects a storage device connectivity failure but cannot
determine if the failure is local or applies to the secondary also. In that case
the device error condition will be reported through the heartbeat. The
secondary will check to see if it can successfully access the storage. If it
can, it attempts to access all devices. If it can successfully access all
devices, the secondary initiates a failover. If it cannot successfully access all
devices, no failover occurs. If you are using the FalconStor Cross-mirror
feature, a swap will occur.
Because the heartbeat uses the same network path that the server uses to serve its
clients, if the heartbeat cannot be retrieved and there are iSCSI clients associated
with those networks, the secondary server knows that the clients cannot access the
server. This is considered a Catastrophic failure because the server or the network
connectivity is incapacitated. In this case the secondary will immediately initiate a
failover.
Failover restrictions
The following information is important to be aware of when configuring failover:
JBODs are not recommended for failover. If you use a JBOD as the storage
device for a storage server (configured in Fabric Loop), certain downstream
failover scenarios, such as SCSI Aliasing, might not function properly. If a
Fibre connection on the storage server is broken, the JBOD might hang and
not respond to SCSI commands. SCSI Aliasing will attempt to connect using
the other Fibre connection; however, since the JBOD is in an unknown
state, the storage server cannot reconnect to the JBOD, causing CDP/NSS
clients to disconnect from their resources.
In a pure Fibre Channel environment, a network failure will not trigger failover.
Failover setup
You will need to know the IP address(es) of the primary server (and the secondary
server if you are configuring a mutual failover scheme). You will also need the health
monitoring IP address(es). It is a good idea to gather this information and find
available IP addresses before you begin the setup.
1. In the console, right-click on an expanded server and select Failover --> Failover
Setup Wizard.
You will see a screen similar to the following that shows you a status of options
on your server.
Any options enabled/installed on the primary storage server must also be
enabled/installed on the secondary storage server.
2. If you have recently made device changes, rescan the server's physical
adapters.
Before a failover configuration can be created, the storage system needs to
know the ownership of each physical device for the selected server. Therefore, it
is recommended that you allow the wizard to rescan the server's devices.
If you have recently used the Rescan option to rescan the selected server's
physical adapters, you can skip the server scanning process.
3. Select whether or not you want to use the Cross-mirror feature (available for
virtual appliances only).
4. Select the secondary server and determine if the servers will monitor each other.
Shared storage failover: select if you want both servers to monitor each other.
Cross mirror failover (non-shared storage): click Find or manually enter the IP
address of the secondary server. Both IP addresses must start with 192.168.
5. (Cross-mirror only) Select the disks that will be used for the primary server.
System disks will not be listed.
The disks you select will be used as storage by the primary server. The ones that
are not selected will be used as storage by the secondary server.
6. (Cross-mirror only) Confirm the disks that will be used for the secondary server.
7. (Cross-mirror only) Confirm the physical device allocation.
8. Follow the wizard to create a configuration repository on this server.
The configuration repository maintains a continuously updated version of your
storage system configuration. For additional security, after your failover
configuration is complete, you can enable mirroring on the configuration
repository. It is also recommended that you create a configuration repository
even if you have a standalone server. Be sure to use a different physical drive for
the mirror.
Note: If you need to recreate the configuration repository for any reason, such as
switching to another physical drive, you can use the Reconfigure option. Refer to
Recreate the configuration repository for details.
9. Determine if there are any conflicts with the server you have selected.
If physical disks, pre-existing virtual disks, or service enabled disks cannot be
seen by both primary and secondary storage servers, you will be alerted.
If there are conflicts, a window similar to the following will display:
You will see mismatched devices listed here. For example, if you have a RAID
array and one server sees all eight devices and the other server sees only four
devices, you will see the devices listed here as mismatched.
You must resolve the mismatch before continuing. For example, if the QLogic
driver did not load on one server, you will have to load it before going on.
Note that you can exclude physical devices from failover consideration, if
desired.
10. Determine if you need to rescan this server's physical adapters.
If you fixed any mismatched devices in the last step, you will need to rescan
before the wizard can continue.
If you are re-running the Failover wizard because you made a change to a
physical device on one of the servers, you should rescan before continuing.
If you had no conflicts and have recently used the Rescan option to rescan the
selected server's physical adapters, you can skip the scanning process.
Note: If this is the first time you are setting up a failover configuration, you will
get a warning message if there are any Replica resources on the secondary
server. You will need to remove them and then restart the failover wizard.
11. If this is a mutual failover configuration, follow the wizard to create a
configuration repository on the secondary server.
12. Verify the Storage Cluster Interlink Port IP addresses for failover setup.
The IP address fields are automatically populated with the IP address
associated with sci0. If the IP addresses listed are incorrect, you will need to
click Cancel to exit the failover setup wizard and modify the IP address.
You can verify/modify the IP addresses from the console by right-clicking the
server and selecting System Maintenance -> Configure Network. Refer to
Network configuration for details.
13. Select at least one subnet that you want to configure from the list.
If there are multiple subnets, use the arrows to set the order in which the
heartbeat is to be checked.
By re-ordering the subnet list, you can avoid triggering a failover due to a failure on eth0.
If you are using the Cross-mirror feature, you will not see the 192.168... cross-mirror
link that you entered earlier listed here.
14. Indicate if you want to use this network adapter.
The window differs slightly between a non-mutual and a mutual failover configuration.
Select the IP addresses that clients will use to access the storage servers when
using iSCSI or replication, and for console communication.
Notes:
If you change the Server IP addresses while the console is
connected using those IP addresses, then the Failover wizard will not
be able to successfully create the configuration.
If you uncheck the Include this Network Adapter for failover box, the
wizard will display the next card it finds. You must choose at least
one.
For SAN resources, because failover can occur at any time, you
should use only those IP addresses that are configured as part of the
failover configuration to connect to the server.
15. Enter the health monitoring IP address you reserved for the selected network
adapter.
In a mutual failover configuration, you must enter health monitoring IP addresses
for both servers.
The health monitoring IP address remains with the server in the event of failure
so that the server's health can be continually monitored. Therefore, it is
recommended that you use static IP addresses.
Select health monitoring heartbeat addresses, which will be used exclusively by
the storage servers to monitor each other's health. These addresses must not
be used for any other purpose.
16. If you want to use additional network adapter cards, repeat the steps above.
17. (Asymmetric mode only) For Fibre Channel failover, select the initiator on the
secondary server that will function as a standby in case the target port on your
primary server fails.
For QLogic HBAs, you will need to select a dedicated standby port for each
target port used by clients. You should confirm that the adapter shown is not the
initiator on your secondary server that is connected to the storage array, and
also that it is not the target adapter on your secondary server. You can only pick
a standby port once. The exception to this rule is when you are using NPIV.
If you are configuring a mutual failover, you will need to set up the standby
adapter for the secondary server as well.
18. Select which Power Control option the primary server is using.
Power Control options force the primary server to release its resources after a
failure. Refer to Power Control options for more information.
HP iLO - This option will power down the primary server in addition to forcing the
release of the server's resources and IP address. In order to use HP iLO,
several packages must be installed on the server and you must have configured
the controller's IP address to be accessible from the storage servers. In this
dialog, enter the HP iLO port's IP address. Refer to HP iLO for more
information.
For Red Hat 5, the following packages are automatically installed on each server
(if you are using the EZStart USB key) in order to use HP iLO power control:
perl-IO-Socket-SSL-1.01-1.fc6.noarch.rpm
perl-Net-SSLeay-1.30-4.fc6.x86_64.rpm
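If the packages were not installed automatically (for example, the server was not built from
the EZStart USB key), they can be installed manually on each server. A sketch, assuming the
RPM files are in the current directory:
rpm -ivh perl-Net-SSLeay-1.30-4.fc6.x86_64.rpm perl-IO-Socket-SSL-1.01-1.fc6.noarch.rpm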
RPC100 - This option will power down the primary server in addition to forcing
the release of the server's resources and IP address. RPC100 is an external
power controller available in both serial and parallel versions. Select the correct
port, depending upon which version you are using. Refer to RPC100 for more
information.
IPMI - This option will reset the power of the primary server, forcing the release
of the server's resources and IP address. In order to use IPMI, you must have
created an administrative user via your IPMI configuration tool. The IP address
cannot be the virtual IP address that was set for failover. Refer to IPMI for more
information.
APC PDU - This option will reset the power of the primary server, forcing the
release of the server's resources and IP address. The APC PDU external
hardware power controller must be set up before you can use it. In this dialog,
enter the IP address of the APC PDU, the community name that was given
Write+ access, and the port(s) that the failover partner is physically plugged into
on the PDU. Use a space to separate multiple ports. Refer to APC PDU for
more information.
For Red Hat 5, you will need to install the following packages on each server in
order to use APC PDU:
lm_sensors-2.10.7-9.el5.x86_64.rpm
net-snmp-5.3.2.2-9.el5_5.1.x86_64.rpm
net-snmp-libs-5.3.2.2-9.el5_5.1.i386.rpm
net-snmp-libs-5.3.2.2-9.el5_5.1.x86_64.rpm
net-snmp-utils-5.3.2.2-9.el5_5.1.x86_64.rpm
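A sketch of installing the packages listed above, assuming the RPM files are in the current
directory on each server:
rpm -ivh lm_sensors-2.10.7-9.el5.x86_64.rpm \
    net-snmp-libs-5.3.2.2-9.el5_5.1.i386.rpm \
    net-snmp-libs-5.3.2.2-9.el5_5.1.x86_64.rpm \
    net-snmp-5.3.2.2-9.el5_5.1.x86_64.rpm \
    net-snmp-utils-5.3.2.2-9.el5_5.1.x86_64.rpm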
19. Select which Power Control option the secondary server is using.
20. Confirm all of the information and then click Finish to create the failover
configuration.
Once your configuration is complete, each time you connect to either server in
the console, you will automatically be connected to the other as well. After
configuring cross-mirror failover, you will see all of the virtual machine disks
listed in the tree, similar to the following:
Local physical disks for this server are listed first: a V indicates the disk is
virtualized for this server, an F indicates a foreign disk, and a Q indicates a
quorum disk containing the configuration repository. Remote physical disks for
this server are listed below them.
Notes:
If the setup fails during the setup configuration stage (for example, the
configuration is written to one server but then the second server is
unplugged while the configuration is being written to it), use the Remove
Failover Configuration option to delete the partially saved configuration.
You can then create a new failover configuration.
Do not change the host name of a server that is part of a failover pair.
After a failover occurs, if a client machine is rebooted while either of the failover
servers is powered off, the client must rescan devices once the failover server is
powered back on, but before recovery occurs. If this is not done, the client
machine will need to be rebooted in order to discover the newly restored paths.
Recreate the configuration repository
To recreate the configuration repository for any reason, such as switching to another
physical drive, you can use the Reconfigure option. To do this, follow the steps
below:
1. Make sure the ISCFGREPOSITORYSIZEMB environment variable is set to
10240 MB (see the sketch after these steps).
2. Navigate to Logical Resources --> Configuration Repository.
3. Right-click and select Reconfigure.
4. Follow the instructions on the wizard to select a physical device (10240 MB of
space in one contiguous physical disk segment is required).
5. Click Finish to recreate the configuration repository.
6. Repeat these steps on the second node of the failover pair.
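A minimal sketch of checking the value from a root shell on the storage server; where the
variable is defined depends on your installation, so this only shows the value if it is
exported in the shell you are using:
echo $ISCFGREPOSITORYSIZEMB    # expected to print 10240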
Power Control options
At times, a server may become unresponsive, but, because of network or internal
reasons, it may not release its resources or its IP address, thereby preventing
failover from occurring. To allow for a graceful failover, you can use the Power
Control options to force the primary server to release its resources after a failure.
Power Control options are used to prevent clusters from competing for access to the
same storage. They are triggered when a secondary server fails to communicate
with the primary server over both the network and the quorum drive. When this
occurs, the secondary server triggers a forceful take over of the primary server and
triggers the selected Power Control option.
When a partner server needs to be forcefully taken over but its power control
device (i.e. IPMI, HP iLO) cannot be reached, failover will not occur. However,
you may issue a manual takeover from the console, if necessary. This default
behavior (for version 7.00 and later) also occurs if the failover configuration has
been set up with no power control option. Failure to communicate with the power
control device may be caused by one of the following reasons:
Authentication error (password and/or username is incorrect)
Network connectivity issue
Server power cable is unplugged
Wrong information used for the power control device, such as an incorrect IP address
Power Control is set during failover configuration. To change options, right-click on
either failover server and select Failover --> Power Control.
HP iLO
This option powers down the primary server in addition to forcing the release of the
server's resources and IP address. HP iLO is available on HP servers with the iLO
(Integrated Lights Out) option. In order to use HP iLO, you must have configured the
controller's IP address to be accessible from the storage servers. The console will
prompt you to enter the HP iLO port's IP address of the server.
Note: The HP iLO power control option depends on the storage server being able
to access the HP iLO port through its regular network connection. If the HP iLO
port is inaccessible, this option will not function. Each time the power control
dialog screen is launched, the username/password fields will be blank. The fields
are available for update but the current username and password information is
not revealed for security purposes. You can make changes by re-entering your
username and password.
RPC100
This option will power down the primary server in addition to forcing the release of
the server's resources and IP address. RPC100 is an external power controller
available in both serial and parallel versions. The console will prompt you to select
the serial or parallel port, depending upon which version of the RPC100 you are
using. Note that the RPC100 power controller only controls one power connection. If
the storage server has multiple power supplies, a special power cable is needed to
connect them all.
SCSI Reserve/
Release
(Not available in version 7) This option is not an actual Power Control option, but a
storage solution to prevent two storage servers from accessing the same physical
storage device simultaneously. Note that this option is only available on those
storage devices that support SCSI Reserve & Release. This option will not force a
hung storage server to reboot and will not force the hung server to release its IP
addresses or bring down its FC targets. The secondary server will simply reserve
the primary server's physical resources, thereby preventing the possibility of a
double mount. If the primary server is not actually hung and is only temporarily
unable to communicate with the secondary server through normal means, the
triggering of the SCSI Reserve/Release from the secondary server will trigger a
reservation conflict on the primary server. At this point the primary server will release
both its IP addresses and FC targets so the secondary can successfully take over. If
this occurs the primary server will need to be rebooted before the reservation
conflict can be resolved. The commands ipstor restart and ipstor
restart all will NOT resolve the reservation conflict.
IPMI
This option will reset the power of the primary server, forcing the release of the
server's resources and IP address. Intelligent Platform Management Interface
(IPMI) is a hardware-level interface that monitors various hardware functions on a
server. If IPMI is provided by your hardware vendor, you must follow the vendor's
instructions to configure it and you must create an administrative user via your IPMI
configuration tool. The IP address cannot be the virtual IP address that was set for
failover.
If you are using IPMI, you will see several IPMI options, Monitor and Filter, on the
server's System Maintenance menu. Refer to System maintenance for more
information.
You should check the FalconStor certification matrix for a current list of FalconStor
appliances and server hardware that has been certified for use with IPMI.
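Before selecting IPMI as the power control option, it can be useful to confirm that the
partner server's IPMI controller responds over the network. The following is an illustrative
sketch only; it assumes the ipmitool utility is installed, and the address, user name, and
password are placeholders for the administrative user you created:
ipmitool -I lanplus -H <ipmi-address> -U <admin-user> -P <password> chassis power status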
APC PDU
This option will reset the power of the primary server, forcing the release of the
server's resources and IP address. The APC PDU is an external hardware power
controller that must be set up before you can use it.
To set up the APC PDU power controller:
1. Connect the APC PDU to your network.
2. Via the COM port on the unit, set an IP address that is accessible from the
storage servers.
3. Launch the APC PDU user interface from the COM port or the Web.
4. Enable SNMP on the APC PDU.
This can be found under Network.
5. Add or edit a Community Name and give it Write+ access.
You will use this Community Name as the password for configuration of the
power control option. For example, if you want to use the password apc, you
have to create a Community Name called apc or change the default Community
Name to apc and give it Write+ access.
6. Connect the power plugs of your storage servers to the APC PDU.
Be sure to note which outlets are used for each server.
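Optionally, confirm that each storage server can reach the PDU over SNMP before configuring
the power control option. This is a sketch only; snmpwalk is provided by the net-snmp-utils
package listed earlier, and the address and community name are placeholders for the values
you configured:
snmpwalk -v1 -c <community-name> <pdu-ip-address> system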
Check Failover status
You can see the current status of your failover configuration, including all settings,
by checking the Failover Information tab for the server.
The Failover Information tab shows the failover settings, including which IP
addresses are being monitored for failover, as well as the current status of the
failover configuration.
The server is highlighted in a specific color indicating the following conditions:
Red - The server is currently in failover mode and has been taken over by
the secondary server.
Green - The server has taken over the primary server's resources.
Yellow - The user has suspended failover on this server. The current server
will NOT take over the primary server's resources even if it detects an abnormal
condition on the primary server.
Failover events are also written to the primary server's Event Log, so you can check
there for status and operational information, as well as any errors. You should be
aware that when a failover occurs, the console will show the failover partner's Event
Log for the server that failed.
For troubleshooting issues pertaining to failover, refer to the Failover
Troubleshooting section.
Failover Information report
The Failover Information Report can be viewed by double-clicking the server status
of the failed server in the General tab of the console.
Failover network failure status report
The Network failure status report can be viewed using the sms command on the
failed server when failover has been triggered due to a client-associated NIC link
being down.
After failover
When a failed server is restarted, it communicates with the acting primary server
and must receive the okay from the acting primary server in order to recover its role
as the primary server. If there is a communication problem, such as a network error,
and no notification is received, the failed server remains in a 'ready' state but does
not recover its role as the primary server. After the communication problem has
been resolved, the storage server will then be able to recover normally.
If failover is suspended on the secondary server, or if the failover module is stopped,
the primary will not automatically recover until the ipstorsm.sh recovery
command is entered.
If both failover servers go offline and then only one is brought up, type the
ipstorsm.sh recovery command to bring the storage server back online.
Manual recovery
Manual recovery is the process by which the secondary server releases the identity of
the primary to allow the primary to restore its operation. Manual recovery can be
triggered by selecting the Stop Takeover option from the FalconStor Management
Console.
If the primary server is not ready to recover, and you can still communicate with the
server, a detailed failover screen displays.
If the primary server is not ready to recover, and you cannot communicate with the
server, a warning message displays.
Auto recovery
You can enable auto recovery by changing the Auto Recovery option. With auto
recovery enabled, control is returned to the primary server automatically once the
primary server has recovered from a failover. Once control has returned to the
primary server, the secondary server returns to its normal monitoring mode.
Fix a failed server
If the primary server fails over to the secondary and hardware changes are made to
the failed server, the secondary server will not be aware of these changes. When
failback occurs, the original configuration parameters will be returned to the primary
server.
To ensure that both servers become synchronized with the new hardware
information, you will need to issue a physical device rescan for the machine whose
hardware has changed as soon as the failback occurs.
Recover from a cross-mirror disk failure
For virtual appliances: Whether your cross-mirror disk was brought down for
maintenance or because of a failure, you must follow the procedure listed below
to properly bring up the cross-mirror appliance.
When powering down both servers in an Active-Active cross-mirror configuration for
maintenance, the servers must be properly brought up as follows in order to
successfully recover from failover.
If the cross-mirror environment is in a healthy state, all resources are in sync, and all
storage is local to its own server (none have swapped), the procedure is as follows:
1. Stop CDP/NSS on the secondary server and wait for the primary to take over.
2. Power down the secondary server.
3. After the primary has successfully taken over, stop CDP/NSS on the primary
server and power it down as well.
Note: This would be considered a graceful way of powering down both
servers for maintenance. After maintenance is complete this would be the
proper way to bring up the servers and put the servers in a healthy and up
state.
4. Power up the primary server.
5. Power up the secondary server.
6. CDP/NSS will automatically start.
7. Verify in /proc/scsi/scsi that both servers can see their remote storage
(usually identified by having 50 as the adapter number; for example, the first LUN
would be 50:0:0:0). If this is not the case, restart the iSCSI initiator or re-login to
the servers' respective targets to see the remote storage.
Restarting the iSCSI initiator: /etc/init.d/iscsi restart
Logging into a target: iscsiadm -m node -p <ipaddress>:3261,0 -T <remote-target-name> -l
Example: iscsiadm -m node -p 192.168.200.201:3261,0 -T
iqn.2000-03.com.falconstor:istor.PMCC2401 -l
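One quick way to check for the remote LUNs described in this step (a sketch that assumes the
remote storage appears under adapter number 50, as noted above):
grep scsi50 /proc/scsi/scsi    # matching "Host: scsi50 ..." entries indicate the remote LUNs are visible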
8. Once you have verified that both servers can access the remote storage, restart
CDP/NSS on both servers. Failure to do so will result in server recovery issues.
9. After CDP/NSS has been restarted, verify that both servers are in a ready state
by using the sms -v command.
Both servers should now be recovered and in a healthy state.
Re-synchronize Cross mirror
After recovering from a cross mirror failure, the disks will automatically be
re-synchronized according to the server properties that have been set up. You can
click on the Performance tab to configure the synchronization options.
The disks must be manually re-synchronized if a disk is offline for more than 20
minutes. Right-click on the server and select Cross Mirror --> Synchronize to
manually re-synchronize the disks.
Remove Cross mirror
You can remove cross mirror failover to enable both servers to act as stand-alone
storage servers. To remove the cross mirror failover:
1. Restart both servers from the console.
2. Re-login to the servers and manually remove all mirrors from the virtual devices
left behind after cross-mirror removal.
This can also be done in batch mode by right-clicking SAN resources --> Mirror -->
Remove.
Check resources and swap if possible
Swapping takes place when data functions are moved from a failed disk on the
primary server to the mirrored disk on the secondary server. Afterwards, the system
automatically checks every hour to see if the disks can be swapped back.
If the disk has been replaced/repaired and the cross mirror has been synchronized,
you can force a swap to occur immediately by selecting Cross Mirror --> Check &
Swap. The system verifies that the local mirror disk is usable and that the cross
mirror is synchronized. Once verified, the system swaps the disks. You can verify
the status after the swap operation by selecting the Layout tab for the SAN resource.
Verify and repair a cross mirror configuration
There may be circumstances in which you need to use the Verify & Repair option,
such as the following situations:
A physical disk used by the cross mirror has been replaced
A mirror resource was offline when auto expansion occurred
Create a mirror for virtual resources that existed on the primary server prior
to configuration
View the storage exception information that cannot be repaired and requires
further assistance.
When replacing local or remote storage, if a mirror needs to be swapped first, a
swapping request will be sent to the server to trigger the swap. Storage can only be
replaced when the damaged segments are part of the mirror, either local or remote.
New storage has to be available for this option.
Note: If you have replaced disks, you should perform a rescan on both servers
before using the Verify & Repair option.
To use the Verify & Repair option:
1. Log into both cross mirror servers.
2. Right-click on the primary server and select Cross Mirror --> Verify & Repair.
3. Click the button for any issue that needs to be corrected.
You will only be able to select a button if that is the scenario where the problem
occurred. The other buttons will not be selectable.
Resources
If everything is working correctly, this option will be labeled Resources and will not
be selectable. The option will be labeled Incomplete Resources for the following
scenarios:
The mirror resource was offline when auto expansion (i.e. Snapshot
resource or CDP journal) occurred but the device is now back online.
You need to create a mirror for virtual resources that existed on the primary
server prior to cross mirror configuration.
1. Right-click on the server and select Cross Mirror --> Verify & Repair.
2. Click the Incomplete Resources button.
3. Select the resource to be repaired.
4. When prompted, confirm that you want to repair this resource.
Remote
Storage
If everything is working correctly, this option will be labeled Remote Storage and will
not be selectable. The option will be labeled Damaged or Missing Remote Storage
when a physical disk being used by cross mirroring on the secondary server has
been replaced.
Note: You must suspend failover before replacing the storage.
1. Right-click the primary server and select Cross Mirror --> Verify & Repair.
2. Click the Damaged or Missing Remote Storage button.
3. Select the remote device to be repaired.
Local Storage
If everything is working correctly, this option will be labeled Local Storage and will
not be selectable. The option will be labeled Damaged or Missing Local Storage
when a physical disk being used by cross mirroring is damaged on the primary
server and has been replaced.
Note: You must suspend failover before replacing the storage.
1. Right-click the primary server and select Cross Mirror --> Verify & Repair.
2. Click the Damaged or Missing Local Storage button.
3. Select the local device to be replaced.
4. Confirm that this is the device to replace.
Storage and
Complete
Resources
If everything is working correctly, this option will be labeled Storage and Complete
Resources and will not be selectable. The option will be labeled Resources with
Missing segments on both Local and Remote Storage when a virtual device spans
multiple physical devices and one physical device is offline on both the primary and
secondary server. This situation is very rare and this option is informational only.
1. Right-click on the server and select Cross Mirror --> Verify & Repair.
2. Click the Resources with Missing segments on both Local and Remote Storage
button.
You will see a list of failed devices. Because this option is informational only, no
action can be taken here.
Modify failover configuration
Make changes to the servers in your failover configuration
The first time you set up your failover configuration, the secondary server cannot
have any Replica resources.
In order to make any changes to a mutual failover configuration, you must be
running the console with write access to both servers. CDP/NSS will automatically
log on" to the failover pair when you attempt any configuration on the failover set.
While it is not required that both servers have the same username and password,
the system will try to connect to both servers using the same username and
password. If the servers have different usernames/passwords, it will prompt you to
enter them before you can continue.
Change
physical device
If you make a change to a physical device (such as if you add a network card that
will be used for failover), you will need to re-run the Failover wizard. Be sure to scan
both servers during the wizard.
At that point, the secondary server is permitted to have Replica resources. This
makes it easy for you to upgrade your failover configuration.
Change subnet
If you switch IP segments for an existing failover configuration, the following needs
to be done:
1. Remove failover from both storage servers.
2. Delete the current failover servers from the FalconStor Management Console.
3. Make network modifications to the storage servers (i.e. change IP segments).
4. Add the storage servers back to the FalconStor Management Console.
5. Configure failover using the new IP segment.
Convert a failover configuration into a mutual failover configuration
Right-click on the server and select Failover --> Setup Mutual Failover to convert
your failover configuration into a mutual failover configuration where both servers
monitor each other. A configuration repository should be created even if you have a
standalone server. The status of the configuration repository is always displayed on
the console under the General tab. In the case of a configuration repository failure,
the console displays the time of failure along with the last successful update.
Note: If no configuration repository is found on the secondary server, the wizard
to set up mutual failover includes the creation of a configuration repository on the
secondary server. The configuration repository requires 10 GB of free space.
Exclude physical devices from health checking
You can create a storage exception list that will exclude one or more specific
physical devices from being monitored. Devices on this list will not prompt the
system to fail over, even if the device stops functioning.
This is useful when using less reliable storage (for asynchronous mirroring or local
replication), whose temporary loss will not be critical.
When removing failover, this list is reset and cleaned up.
To exclude devices, right-click on the server and select Failover --> Storage
Exception List.
Change your failover intervals
Right-click on the server and select Failover --> View/Update Failover Options to
change the intervals (heartbeat, self-checking, and auto recovery) for this
configuration.
Note: We recommend keeping the Self-checking Interval and Heartbeat Interval
set to the default values. Changing the values can result in a significantly longer
failover and recovery process.
The Self-checking Interval determines how often the primary server will check itself.
The Heartbeat Interval determines how often the secondary server will check the
heartbeat of the primary server.
If enabled, Auto Recovery determines how long to wait before returning control to
the primary server once the primary server has recovered.
Verify physical devices match
The Check Consistency tool (right-click on the server and select Failover --> Check
Consistency) helps verify that both nodes can still see the same LUNs or the same
number of LUNs. This is useful when physical storage devices need to be added or
removed. After suspending failover and removing/adding storage to both nodes, you
would first perform a rescan of the resources on both sides to pick up the changes in
configuration. After verifying storage consistency between the two nodes, failover
can be resumed without risking a failover trigger.
Start/stop failover or recovery
Force a takeover by a secondary server
On the secondary server, select Failover --> Start Takeover <servername> to initiate
a failover to the secondary server. You may want to do this if you are taking your
primary server offline, such as when you will be performing maintenance on it.
Once failover is complete, a failover message will blink in red at the bottom of the
console and you will be disconnected from the primary server.
Manually start a server
If you cannot connect to a server via the virtual IP, you have the option to bring up
the server by attempting to log into the server from the FalconStor Management
Console. The server must be powered on and have IPStor services running in order
to be forced to an up state. You can verify that a server is in a ready state by
connecting to the server via SSH using the heartbeat address and running the sms
command.
When attempting to force a server up from the console, log into the server you are
attempting to manually start. Do not attempt to log into the server from the console
using the Heartbeat IP address.
The Bring up Primary Server window displays if the server is accessible via the
heartbeat IP address.
Type YES in the dialog box to bring the server to a ready state and then force the
server up via the monitor IP address.
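A sketch of the SSH check described above; the heartbeat address shown is a placeholder for
the health monitoring IP address reserved for that server:
ssh root@<heartbeat-ip-address> sms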
Manually initiate a recovery to your primary server
Select Failover --> Stop Takeover if your failover configuration was not set up to use
the FalconStor Auto Recovery feature and you want to force control to return to your
primary server or if you manually forced a takeover and now want to recover to your
primary server.
Once failback is complete, you will be logged off from the virtual primary server.
Suspend/resume failover
Select Failover --> Suspend Failover to stop a server from monitoring its partner server.
In the case of Active-Active failover, you can suspend from either server. However,
the server that you suspend from will stop monitoring its partner and will not take
over for that partner server in the event of failure. It can still fail over itself. For
example, server A and server B are configured for Active-Active failover. If you go to
server B and suspend failover, server A will no longer fail over to server B. However,
server B can still fail over to server A.
Select Failover --> Resume Failover to restart the monitoring.
Notes: If the cross mirror link goes down, failover will be suspended. Use the
Resume Failover option when the cross mirror link comes back up. The disks will
automatically be re-synced at the scheduled interval or you can manually
synchronize using the cross mirror synchronize option.
If you stop the CDP/NSS processes on the primary server after
suspending failover, you must do the following once you restart your
storage server:
1. At a Linux command prompt, type sms to see the failover status.
2. When the system is in a ready state, type the following:
ipstorsm.sh recovery
Once the connection is repaired, the failover status is not cleared until
failover is resumed on both servers.
Remove a failover configuration
Right-click on one of your failover servers and select Failover --> Remove Failover
Server to remove the selected server from the failover configuration. In a non-mutual
failover configuration, this eliminates the configuration and returns the servers to
independent storage servers.
If this is a mutual failover configuration and you want to eliminate the failover
relationship from both sides, select the Remove Mutual Failover option.
If this is a mutual failover configuration, and you do not select the Remove Mutual
Failover option, this server (the one you right-clicked on) becomes the secondary
server in a non-mutual configuration.
If everything is checked, this eliminates the failover relationship and removes the
health monitoring IP addresses from the servers and restores the Server IP
addresses. If you uncheck the IP address(es) for a server, the health monitoring
address becomes the Server IP address.
Note: If you are using cross mirror failover, after removal the cross mirror relationship will be gone but the configuration of your iSCSI initiator will remain and
the disks will still be presented to both primary and secondary servers.
Mirroring and Failover
(Shared storage failover) If a physical drive contains only a mirrored resource and
the physical drive fails, the server will not fail over. If the physical drive contained a
primary mirrored resource, the mirror will swap roles so that its mirrored copy
becomes the primary disk. If the physical drive contained a mirrored copy, nothing
will happen to the mirror because there is no need for the mirror to swap. If there are
other virtual devices on the same physical drive and the other virtual devices are not
mirrored resources, the server will fail over. Swapping will only occur if all of the
virtual devices on the physical drive contain mirrored resources.
TimeMark/CDP and Failover
Clients may not be able to access TimeViews during failover.
Throttle and Failover
Setting up throttle on a failover pair requires the following additional considerations:
The failover pair must have matching target site names. (This does not
apply to the target server name)
The failover pair can have different throttle settings, even if they are
replicating to the same server.
During failover, the throttle values of the two partners combine and are used on the "up" server to maintain throttle settings. In other words, from the software perspective, each server is still maintaining its own throttle. From a hardware perspective, the "up" server uses the combined throttle level of itself and its partner.
The failover pair's throttle levels may be combined to equal over 100%.
Example: 80%+80%=160%. Note: This percentage is relative to the link
type. This value is the maximum speed allowed, not the instantaneous
speed.
If one of the throttle levels is set to no limit, then in a failover state, both servers' throttle levels become no limit.
It is highly recommended that you avoid the use of different link types. Using
different link types may cause unexpected results in network traffic while in
a failover state.
HotZone and Failover
Using HotZone with failover improves failover performance as disk read operations
are faster and more efficient. Failover with HotZone on local storage further
improves performance since it is mapped locally.
Local Storage prepared disks can only be used for HotZone using Read Cache. They cannot be used to create a virtual device, mirror, snapshot resource, SafeCache, CDP journal, or replica, nor can they join a storage pool.
For failover with HotZone created on local storage, failover must be set up first. The local storage cannot be created on a standalone server. For additional information regarding HotZone, refer to HotZone.
Enable HotZone using local storage with failover
Local Storage must be prepared from an individual physical disk, instead of using the Physical Devices Preparation Wizard, to ensure proper mapping of physical disks to the partner server.
1. Right-click on the physical device and select Properties. The Disk Preparation screen displays.
2. Select Reserved for Local Storage from the drop-down menu.
Local Storage is only available when devices are detected on both servers. The devices do not need to be the same size as long as the preparation is initiated from the smaller device. For example, if server A has a 1 GB disk and server B has a 2 GB disk, Local Storage can only be prepared/initiated from server A.
3. Right-click on SAN Resources and select HotZone --> Enable.
The Enable HotZone Resources for SAN Resources wizard launches.
4. On the Storage Option screen, select the Allocate from Local Storage option to
allocate space from the high performance disks.
Note: If you need to remove failover setup, it is recommended that you unassign
the physical disks so they can be re-used as virtual devices or SED devices after
failover has been removed.
Performance
FalconStor offers several options that can dramatically increase the performance of
your SAN.
SafeCache - Allows the storage server to make use of high-speed storage
devices as a staging area for write operations, thereby improving the overall
performance.
HotZone - Offers two methods to improve performance, Read Cache and
Prefetch.
SafeCache
The FalconStor SafeCache option improves the overall performance of CDP/NSS-managed disks (virtual and/or service-enabled) by making use of high-speed
storage devices, such as RAM disk, NVRAM, or solid-state disk (SSD), as a
persistent (non-volatile) read/write cache.
In a centralized storage environment where a large set of database servers share a
smaller set of storage devices, data tends to be randomly accessed. Even with a
RAID controller that uses cache memory to increase performance and availability,
hard disk storage often cannot keep up with application servers' I/O requests.
SafeCache, working in conjunction with high-speed devices (RAM disk, NVRAM or
SSDs) to front slower physical disks, can significantly improve performance. Because these high-speed devices incur no performance penalty for random access, SafeCache can
write data blocks sequentially to the cache and then move (flush) them to the data
disk (random write) as a separate process once the writes have been
acknowledged, effectively accelerating the performance of the slower disks.
The SafeCache default throttle speed is 10,240 KB/s, which can be adjusted
depending on your client IO pattern.
Regardless of the type of high-speed storage device being used as persistent cache
(RAM disk, NVRAM, or SSD), the persistent cache can be mirrored for added
protection using the FalconStor Mirroring option. In addition, SSDs and NVRAM
have a built-in power supply to minimize potential downtime.
SafeCache is fully compatible with the NSS failover option, which allows one server
to automatically fail over to another without any data loss and without any cache
write coherency problems. It is highly recommended that you use a Solid State disk
as SafeCache.
Configure SafeCache
To set up SafeCache for a SAN Resource you must create a cache resource. You
can create a cache resource for a single SAN resource or you can use the batch
feature to create cache resources for multiple SAN resources.
To enable SafeCache:
1. Navigate to Logical Resources --> SAN Resources and right-click on a SAN
resource.
2. Select SafeCache --> Enable.
The Create Cache Resource wizard displays to guide you through creating the
cache resource and allocating space for the storage.
Note: If Cache is enabled, up to 256 unflushed TimeMarks are supported.
Once the Cache has 256 unflushed TimeMarks, new TimeMarks cannot be
created.
Create a cache resource
1. For a single SAN resource, right-click on a SAN Resource and select SafeCache
--> Enable.
For multiple SAN resources, right-click on the SAN Resources object and select
SafeCache --> Enable.
2. Select how you want to create the cache resource.
Note that the cache resource cannot be expanded. Therefore, you should
allocate enough space for your cache resource, taking into account future
growth. If you outgrow your cache resource, you will need to disable it and then
recreate it.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the cache resource using the criteria you select:
Select different drive - CDP/NSS will look for space on another hard disk.
Select drives from different adapter/channel - CDP/NSS will look for space
on another hard disk only if it is on a separate adapter/channel.
Select any available drive - CDP/NSS will look for space on any disk,
including the original. This option is useful if you have mapped a device
(such as a RAID device) that appears as a single physical device.
If you select Custom, you will see the following windows:
Select either an entirely unallocated or partially unallocated disk. Only one disk can be selected at a time from this dialog; to create a cache resource from multiple physical disks, you will need to add the disks one at a time. After selecting the parameters for the first disk, you will have the option to add more disks.
Indicate how much space to allocate from this disk. Click Add More if you need to add more space to this cache resource. If you select to add more disks, you will go back to the physical device selection screen where you can select another disk.
3. Configure when and how the cache should be flushed.
These parameters can be used to further enhance performance.
Flush cache when data reaches n% of threshold - Specify what
percentage of the cache resource can be used before the cache is
flushed. The default value is 50%.
Flush cache after n milliseconds of inactivity - Specify how many
milliseconds of inactivity should pass before the cache is flushed even if
the threshold from the above is not met. The default value is 0
milliseconds.
Flush cache up to the speed of - Specify the flush speed / number of KB/s to flush at a time. The default value is 256,000 KB/s (see the example after this list).
Skip Duplicate Write Commands - This option prevents the system from
writing more than once to the same block during the cache flush.
Therefore, when the cache flushes data to the underlying virtual device, if
there is more than one write to the same block, it skips all except the most
recent write. Leave this option unchecked if you are using asynchronous
mirroring through a WAN or an unreliable network.
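As a rough illustration of the flush speed setting (an assumed cache size; the actual drain time depends on your storage and workload), a completely full 10 GB cache resource flushing at the default 256,000 KB/s would take on the order of 40 seconds to drain:
echo $((10 * 1024 * 1024 / 256000))    # 10 GB expressed in KB, divided by 256,000 KB/s = ~40 seconds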
4. Confirm that all information is correct and then click Finish to create the cache
resource.
You can now mirror your cache resource by highlighting the SAN resource and
selecting SafeCache --> Mirror --> Add.
Note: If you take a snapshot manually (via the Console or the command line) of a
SafeCache-enabled resource, the snapshot will not be created until the cache
has been flushed. If failover should occur before the cache is empty, the snapshot
will be inserted into the cache. The snapshot will be created after the snapshot
marker has flushed.
Global Cache
Global SafeCache can be viewed from the FalconStor Management Console by
selecting the Global SafeCache node under Logical Resources.
You can choose to create a global or private cache resource. A global cache allows
you to share the cache with up to 128 resources. To create a global cache, select
Use Global Cache Resource in the Create Cache Resource Wizard.
Notes:
Global Cache can be enabled in batch mode by selecting Logical
Resources --> Global SafeCache --> Enable. Otherwise, Global Cache
must be enabled for each device one at a time.
To enable Global Cache for multiple resources, navigate to Logical
Resources --> Global SafeCache and select Enable.
Each server can only have one Global Cache.
If the Global Cache is suspended, resumed, or its properties are changed on a virtual device, the change also affects the rest of the members.
Disabling the Global Cache only removes the Global Cache on that
specific device.
Importing the Global Cache from one server to another server is not
supported.
SafeCache for groups
If you want to preserve the write order across SAN resources, you should create a
group and enable SafeCache for the group. This is useful for large databases that
span over multiple devices. In such situations, the entire group of devices is acting
as one huge device that contains the database. When changes are made to the
database, it may involve different places on different devices, and the write order
needs to be preserved over the group of devices in order to preserve database
integrity. Refer to Groups for more information about creating a group.
Check the status of your SafeCache resource
You can see the current status of your cache resource by checking the SafeCache
tab for a cached resource.
Unlike a snapshot resource that continues to grow, the cache resource is cleared
out after data blocks are moved to the data disk. Therefore, you can see the Usage
Percentage decrease, even return to 0% if there is no write activity.
For troubleshooting issues pertaining to SafeCache operations, refer to the
SafeCache Troubleshooting section.
SafeCache properties
You can update the parameters that control how and when data will get flushed from
the cache resource to the CDP/NSS-managed disk. To update these parameters:
1. Right-click on a SAN resource that has SafeCache enabled and select
SafeCache --> Properties.
2. Type a new value for each parameter you want to change.
Refer to the SafeCache configuration section for more details about these
parameters.
Disable your SafeCache resource
The SafeCache --> Disable option causes the cache to be flushed, and once
completely flushed, removes the cache resource.
Because there is no dynamic free space expansion when the cache resource is full,
you can use this option to disable your current cache resource and then manually
create a larger one.
If you want to temporarily suspend the SafeCache, use the SafeCache --> Suspend
option instead. You will then need to use the SafeCache --> Resume option to begin
using the SafeCache again.
HotZone
The FalconStor HotZone option offers two methods to improve performance, Read
Cache and Prefetch.
Read Cache
Read Cache is an intelligent, policy-driven, disk-based staging mechanism that
automatically remaps "hot" (frequently used) areas of disks to high-speed storage
devices, such as RAM disks, NVRAM, or Solid State Disks (SSDs). This results in
enhanced read performance for the applications accessing the storage. It also
allows you to manage your storage network with a minimal number of high-speed
storage devices by leveraging their performance capabilities.
When you configure the Read Cache method, you must divide your virtual or
Service-Enabled disk into zones of equal size. The HotZone storage is then
automatically created on the specified high-speed disk. This HotZone storage is
divided into zones equal in size to the zones on the virtual or service-enabled disk
(e.g., 32 MB), and is provisioned to the disk.
Reads/writes to each zone are monitored on the virtual or service-enabled disk. Based on the statistics collected, the application determines
the most frequently accessed zones and re-maps the data from these hot disk
segments to the HotZone storage (located on the high-speed disk) resulting in
enhanced read performance for the application accessing the storage. Using the
continually collected statistics, if it is determined that the corresponding hot disk
segment is no longer hot, the data from the high performance disk is moved back
to its original zone on the virtual or service-enabled disk.
Prefetch
Prefetch enables pre-fetching of data for clients. This allows clients to read ahead
consecutively, which can result in improved performance because the data is ready
from the anticipatory read as soon as the next request is received from the client.
This will reduce the latency of the command and improve the sequential read
benchmarks in most cases.
Prefetch may not be helpful if the client is already submitting sequential reads with
multiple outstanding commands. However, the stop-and-wait case (with one read
outstanding) can often be improved dramatically by enabling Prefetch.
Prefetch does not affect writing, or random reading.
Applications that copy large files (i.e. video streaming) and applications that back up
files are examples of applications that read sequentially and might benefit from
Prefetch.
Configure HotZone
1. Right-click on a SAN resource and select HotZone --> Enable.
For multiple SAN resources, right-click on the SAN Resources object and select
HotZone --> Enable.
2. Select the HotZone method to use.
3. (Prefetch only) Set Prefetch properties.
These properties control how the prefetching (read ahead) is done. While you
may need to adjust the default settings to enhance performance, FalconStor has
determined that the defaults shown here are best suited for most disks/
applications.
Maximum prefetch chains - Number of locations from the disk to read
from.
Maximum read ahead - The maximum per chain. This can override the
Read ahead option.
Read ahead - How much should be read ahead at a time. No matter how
this is set, you can never read more than the Maximum read ahead setting
allows.
Chain Timeout - Specify how long the system should wait before freeing
up a chain.
4. (Read Cache only) Select the storage pool or physical device(s) from which to
create this HotZone.
5. (Read Cache only) Select how you want to create the HotZone.
Note that the HotZone cannot be expanded. Therefore, you should allocate
enough space for your SAN resource, taking into account future growth. If you
outgrow your HotZone, you will need to disable it and then recreate it.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the HotZone storage using the criteria you select:
Select different drive - CDP/NSS will look for space on another hard disk.
Select drives from different adapter/channel - CDP/NSS will look for space
on another hard disk only if it is on a separate adapter/channel.
Select any available drive - CDP/NSS will look for space on any disk,
including the original. This option is useful if you have mapped a device
(such as a RAID device) that appears as a single physical device.
6. (Read Cache only) Select the disk to use for the HotZone storage.
If you selected Custom, you can piece together space from one or more disks.
7. (Read Cache only) Enter configuration information about the zones.
Size of each zone - Indicate how large each zone should be. Reads/writes
to each zone on the disk are monitored. Based on the statistics collected, the application determines the most frequently accessed zones and re-maps the data from these hot zones to the HotZone storage. You should
check with your application server to determine how much data is read/
written at one time. The block size used by the application should ideally
match the size of each zone.
Minimum stay time - Indicate the minimum amount of time data should
remain in the HotZone before being moved back to its original zone once it
is determined that the zone is no longer hot.
8. (Read Cache only) Enter configuration information about zone access.
Access type - Indicate whether the zone should be monitored for reads,
writes, or both.
Access intensity - Indicate how to determine if a zone is hot. Number of
IOs performed at the site uses the amount of data transferred (read/write)
as a determining factor for each zone.
9. Confirm that all information is correct and then click Finish to enable HotZone.
Check the status of HotZone
You can see the current status of your HotZone by checking the HotZone tab for a
configured resource.
Note that if you manually suspend HotZone from the Console when the device
configured with the HotZone option is running normally, the Suspended field will
display Yes.
You can also see statistics about the zone by checking the HotZone Statistics tab:
The information displayed is initially for the current interval (hour, day, week, or
month). You can go backward (and then forward) to see any particular interval. You
can also view multiple intervals by moving backward to a previous interval and then
clicking the Play button to see everything from that point to the present interval.
Click the Detail View button to see more detail. There you will see the information
presented more granularly, for smaller amounts of the disk.
If HotZone is being used in conjunction with Fibre Channel or iSCSI failover and a
failover has occurred, the HotZone Statistics will not be displayed while in a failover
state. This is because the server that took over does not contain the failed server's HotZone Statistics information. As a result, the Console will
display empty statistics for the primary server while the secondary has taken over.
Once the failed server is restored, the statistics will display properly. This does not
affect the functionality of the HotZone option while in a failover state.
HotZone Properties
You can configure HotZone properties by right-clicking on the storage server and
selecting HotZone. If HotZone has already been enabled, you can select the
properties option to configure the Zone and Access policies if the HotZone was set
up using the Read Cache method. Alternatively, you will be able to set the Prefetch
Properties if your HotZone has been set up using the Prefetch method.
For additional information on these parameters, see Configure HotZone.
Disable HotZone
The HotZone --> Disable option permanently stops HotZone for the specific SAN
resource.
Because there is no dynamic free space expansion when the HotZone is full, you
can use this option to disable your current HotZone and then manually create a
larger one.
If you want to temporarily suspend HotZone, use the HotZone --> Suspend option
instead. You will then need to use the HotZone --> Resume option to begin using
HotZone again.
Mirroring
Mirroring provides high availability by minimizing the down time that can occur if a
physical disk fails. The mirror can be defined with disks that are not necessarily
identical to each other in terms of vendor, type, or even interface (SCSI, FC, iSCSI).
With mirroring, the primary disk is the disk that is used to read/write data for a SAN
Client and the mirrored copy is a copy of the primary. Both disks are attached to a
single storage server and are considered a mirrored pair. If the primary disk fails, the
disks swap roles so that the mirrored copy becomes the primary disk.
There are two Mirroring options, Synchronous Mirroring and Asynchronous
Mirroring.
Synchronous mirroring
FalconStor's Synchronous Mirroring option offers the ability to define a synchronous
mirror for any CDP/NSS managed disk (virtualized or service-enabled).
In the Synchronous Mirroring design, each time data is written to a designated disk,
the same data is simultaneously written to another disk. This disk maintains an exact
copy of the primary disk. In the event that the primary disk is unable to read/write
data when requested to by a SAN Client, CDP/NSS seamlessly swaps data
functions to the mirrored copy disk.
Asynchronous mirroring
FalconStor's Asynchronous Mirroring option offers the ability to define a near real-time mirror for any CDP/NSS-managed disk (virtual or service-enabled) over long
distances between data centers.
When you configure an asynchronous mirror, you create a dedicated cache
resource and associate it to a CDP/NSS-managed disk. Once the mirror is created,
the primary and secondary disks are synchronized if the Start initial synchronization
when mirror is added option is enabled in global settings. This process does not
involve the application server. After the synchronization is complete, all write requests from the associated application server are sequentially delivered to the
dedicated cache resource. This data is then committed to both the primary and its
mirror as a separate background process. For added protection, the cache resource
can also be mirrored.
In this design, data blocks are written sequentially to the cache resource (the staging area) to provide enhanced write performance. For read operations, the cache resource is checked first in case a newly written block has not yet been moved to the data disk. Blocks are then moved to the primary disk at the primary site and the mirror disk at the remote site (random write) as a secondary operation, after the writes have been acknowledged from the cache resource.
Mirror requirements
The following are the requirements for setting up a mirroring configuration:
The mirrored devices must be composed of one or more hard disks.
The mirrored devices must both be accessible from the same storage
server.
The mirrored devices must be the same size. If you try to expand the
primary disk, CDP/NSS will also expand the mirrored copy to the same size.
A mirror of a Thin Provisioned disk is another Thin Provisioned disk.
Mirror setup
You can enable mirroring for a single SAN resource or you can use the batch feature
to enable mirroring for multiple SAN resources. You can also enable mirroring for an
existing snapshot resource, cache resource, or incoming replica resource.
Note: For asynchronous mirroring, if you want to preserve the write order of data
that is being mirrored asynchronously, you should create a group for your SAN
resources and enable SafeCache for the group. This is useful for large databases
that span over multiple devices. In such situations, the entire group of devices is
acting as one huge device that contains the database. When changes are made
to the database, it may involve different places on different devices, and the write
order needs to be preserved over the group of devices in order to preserve
database integrity. Refer to Groups for more information about creating a group.
1. For a single SAN resource, right-click on the resource and select Mirror --> Add.
For multiple SAN resources, right-click on the SAN Resources object and select
Mirror --> Add.
For an existing snapshot resource or cache resource, right-click on the SAN
resource and select Snapshot Resource or Cache Resource --> Mirror --> Add.
2. (SAN resources only) Select the type of mirrored copy you are creating.
3. Select the storage pool or physical device(s) from which to create the mirror.
4. Select how you want to create this mirror.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the Mirrored Copy using the criteria you select:
Select different drive - Look for space on another hard disk.
Select drives from different adapter/channel - Look for space on another
hard disk only if it is on a separate adapter/channel.
Select any available drive - Look for space on any disk, including the
original. This option is useful if you have mapped a device (such as a
RAID device) that looks like a single physical device.
If you select Custom, you will see the following windows:
Select either an entirely unallocated or partially unallocated disk. Only one disk can be selected at a time from this dialog; to create a mirrored disk from multiple physical disks, you will need to add the disks one at a time. After selecting the parameters for the first disk, you will have the option to add more disks.
Indicate how much space to allocate from this disk. Click Add More if you need to add more space to this mirrored disk. If you select to add more disks, you will go back to the physical device selection screen where you can select another disk.
5. (SAN resources only) Indicate if you want to use synchronous or asynchronous
mirroring.
If a cache resource already exists, mirroring will automatically be set to
asynchronous mode.
If no cache resource exists, you can use either synchronous or asynchronous
mode. However, if you select asynchronous mode, you will need to create a
cache resource. The wizard will guide you through creating it.
If you select synchronous mode for a resource without a cache and later create
a cache, the mirror will switch to asynchronous mode.
Note: If you are enabling asynchronous mirroring for multiple resources that
are being used by the same application (for example, your Oracle database
spans three disks), to ensure write order consistency you must first create a
group. You must enable SafeCache for this group and add all of the related
resources to it before enabling asynchronous mirroring for each resource. By
doing this, all of the resources will share the same read/write cache and will
be flushed at the same time, thereby guaranteeing the consistency of the
data.
6. Determine if you want to monitor the mirroring process.
If you select to monitor the mirroring process, the I/O performance is evaluated
to decide if I/O to the mirror disk is lagging beyond an acceptable limit. If it is,
mirroring will be suspended so it does not impact the primary storage.
Note: Mirror monitoring settings are retained when a mirror is enabled on the
same device.
Monitor mirroring process every n seconds - Specify how frequently the
system should check the lag time (delay between I/O to the primary disk
and the mirror). Checking more or less frequently will not impact system
performance. On systems with very low I/O, a higher number may help get
a more accurate representation.
Maximum lag time for mirror I/O - Specify an acceptable lag time.
Suspend mirroring when the failure threshold reaches n% - Specify what
percentage of I/O must pass the lag time test. For example, you set the
percentage to 10% and the maximum lag time to 15 milliseconds. During
the test period, 100 I/Os occurred and 20 of them took longer than 15
milliseconds to update the mirror disk. With a 20% failure rate, mirroring
would be suspended.
Note: If a mirror becomes out of sync because of a disk failure or an I/O error
(rather than having too much lag time), the mirror will not be suspended.
Because the mirror is still active, re-synchronization will be attempted based
on the global mirroring properties that are set for the server. Refer to Set
global mirroring options for more information.
7. If mirroring is suspended, specify when re-synchronization should be attempted.
Re-synchronization can be started based on time (every n minutes/hours; the default is every five minutes) and/or I/O activity (when I/O is less than n KB/MB).
If you select both, the time will be applied first before the I/O activity level. If you
do not select either, the mirror will stay suspended until you manually
synchronize it.
If you select one or both re-synchronization methods, you must also specify how
many times the system should retry the re-synchronization if it fails to complete.
When the system initiates re-synchronization, it does not check lag time and
mirroring will not be suspended if there is too much lag time.
If you manually resume mirroring, the system will monitor the process during
synchronization and check lag time. Depending upon your monitoring policy,
mirroring will be suspended if the lag time gets above the acceptable limit.
Note: If CDP/NSS is restarted or the server experiences a failover while
attempting to re-synchronize, the mirror will remain suspended.
8. Confirm that all information is correct and then click Finish to create the mirroring
configuration.
Create cache resource
The cache resource wizard will be launched automatically when you configure
Asynchronous Mirroring but you do not have a cache resource. You can also create
a cache resource by right-clicking on a SAN resource and selecting SafeCache -->
Enable. For multiple SAN resources, right-click on the SAN Resources object and
select SafeCache --> Add.
1. Select how you want to create the cache resource.
Note that the cache resource cannot be expanded. Therefore, you should
allocate enough space for your SAN resource, taking into account future growth.
If you outgrow your cache resource, you will need to disable it and then
recreate it.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the cache resource using the criteria you select:
Select different drive - Look for space on another hard disk.
Select drives from different adapter/channel - Look for space on another
hard disk only if it is on a separate adapter/channel.
Select any available drive - Look for space on any disk, including the
original. This option is useful if you have mapped a device (such as a
RAID device) that looks like a single physical device.
2. Confirm that all information is correct and then click Finish to create the cache
resource.
You can now mirror your cache resource by highlighting the SAN resource and
selecting SafeCache --> Mirror --> Add.
Check mirroring status
You can see the current status of your mirroring configuration by checking the
General tab for a mirrored resource.
Synchronized - Both disks are synchronized. This is the normal state.
Not synchronized - A failure in one of the disks has occurred or
synchronization has not yet started. If there is a failure in the Primary Disk,
the Primary Disk is swapped with the Mirrored Copy.
If the synchronization is occurring, you will see a progress bar along with the
percentage that is completed.
Note: In order to update the mirror synchronization status, refresh the Console
screen (View --> Refresh).
Swap the primary disk with the mirrored copy
Right-click on the SAN resource and select Mirror --> Swap to reverse the roles of
the primary disk and the mirrored copy. You will need to do this if you are going to
perform maintenance on the primary disk or if you need to remove the primary disk.
Promote the mirrored copy to become an independent virtual drive
Right-click on the mirrored drive and select Mirror --> Promote to break the mirrored
pair and convert the mirrored copy into an independent virtual drive. The new virtual
drive will have all of the properties of a regular virtual drive.
This feature is useful as a safety net when you perform major system maintenance
or upgrades. Simply promote the mirrored copy and you can perform maintenance
on the primary disk without worrying about anything going wrong. If there is a
problem, you can use the newly promoted virtual drive to serve your clients.
Notes:
Before promoting a mirrored drive, all clients should first detach or
unmount from the drive. Promoting a drive while clients are attached or
mounted may cause the file system to become corrupt on the promoted
drive.
If you are copying files over in Windows to a SAN resource that has a
mirror, you need to wait for the cache to flush out before promoting the
mirrored drive on the SAN resource. If you do not wait for the cache to
flush, you may see errors in the files.
If you are using asynchronous mirroring, you can promote the mirror only
when the SafeCache option is suspended and there is no data in the
cache resource that needs to be flushed.
When you promote the mirror of a replica resource, the replication
configuration is maintained.
Depending upon the replication schedule, when you promote the mirror
of a replica resource, the mirrored copy may not be an identical image of
the replication source. In addition, the mirrored copy may contain corrupt
data or an incomplete image if the last replication was not successful or if
replication is currently occurring. Therefore, it is best to make sure that
the last replication was successful and that replication is not occurring
when you promote the mirrored copy.
Recover from a mirroring hardware failure
Replace a failed disk
If one of the mirrored disks has failed and needs to be replaced:
1. Right-click on the SAN resource and select Mirror --> Remove to remove the
mirroring configuration.
2. Physically replace the failed disk.
The failed disk is always the mirrored copy because if the Primary Disk fails, the
primary disk is swapped with the mirrored copy.
Important: To replace the disk without having to reboot your storage server, refer
to Replace a failed physical disk without rebooting.
3. Run the Create SAN Resource Mirror Wizard to create a new mirroring
configuration.
Fix a minor disk failure
If one of the mirrored disks has a minor failure, such as a power loss:
1. Fix the problem (turn the power back on, plug the drive in, etc.).
2. Right-click on the SAN resource and select Mirror --> Synchronize.
This re-synchronizes the disks and restarts the mirroring.
Replace a disk that is part of an active mirror configuration
If you need to replace a disk that is part of an active mirror configuration:
1. If you need to replace the Primary Disk, right-click on the SAN resource and
select Mirror --> Swap to reverse the roles of the disks and make it a Mirrored
Copy.
2. Select Mirror --> Remove to cancel mirroring.
3. Replace the disk.
Important: To replace the disk without having to reboot your storage server, refer
to Replace a failed physical disk without rebooting.
4. Run the Create SAN Resource Mirror Wizard to create a new mirroring
configuration.
Replace a failed physical disk without rebooting
Do the following if you need to replace a failed physical disk without rebooting your
storage server:
1. If you are not sure which physical disk to remove, execute the following to
access the drive and cause the disk's light to blink:
hdparm -t /dev/sd#
where # represents a,b,c,d, depending on the order of the disks.
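If you are stepping through several disks, a short loop such as the following (a sketch assuming standard /dev/sdX device names) runs the same read test against each drive in turn so you can watch for the activity light:
for d in /dev/sd[a-z]; do echo "Testing $d"; hdparm -t "$d"; done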
2. You MUST remove the SCSI device from the Linux OS.
Type the following for Linux (2.4 kernel):
echo "scsi remove-single-device A C S L" > /proc/scsi/scsi
A C S L stands for: Adapter, Channel, SCSI, and LUN. This can be found in the
Console.
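For example, using illustrative values (adapter 0, channel 0, SCSI ID 2, LUN 0; substitute the numbers shown in the Console for your device):
echo "scsi remove-single-device 0 0 2 0" > /proc/scsi/scsi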
Type the following for Linux (2.6 kernel):
echo "1" > /sys/class/scsi_device/DeviceID/device/delete
Where DeviceID is obtained from ls /sys/class/scsi_device
For example:
echo "1" > /sys/class/scsi_device/1:0:0:0/device/delete
3. Execute the following to re-add the device so that Linux can recognize the drive:
echo "scsi add-single-device x x x x" > /proc/scsi/scsi
where x x x x stands for the A C S L numbers: Adapter, Channel, SCSI, and LUN number.
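For example, using the same illustrative values as above:
echo "scsi add-single-device 0 0 2 0" > /proc/scsi/scsi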
4. Rescan the adapter to which the device has been added.
In the Console, right-click on Adaptec SCSI Adapter.x and select Rescan, where
x is the adapter number the device is on.
Expand the primary disk
The mirrored devices must be the same size. If you want to enlarge the primary disk,
you will need to enlarge the mirrored copy to the same size. When you use the
Expand SAN Resource Wizard, it will automatically launch the Create SAN
Resource Mirror Wizard so that you can enlarge the Mirrored Copy as well.
Notes:
As you expand the primary disk, the wizard only shows half the available
disk space as available because it reserves an equal amount of space for
the mirrored drive.
On a Thin Provisioned disk, if the mirror is offline, it will be removed when
storage is being added automatically. If this occurs, you must recreate
the mirror.
Manually synchronize a mirror
The Synchronize option re-synchronizes a mirror and restarts the mirroring process
once it is synchronized. This is useful if one of the mirrored disks has a minor failure,
such as a power loss.
1. Fix the problem (turn the power back on, plug the drive in, etc.).
2. Right-click on the resource and select Mirror --> Synchronize.
During the synchronization, the system will monitor the process and check lag time.
Depending upon your monitoring policy, mirroring will be suspended if the lag time
gets above the acceptable limit.
Note: If your mirror disk is offline, storage cannot be added to the thin disk
manually.
Set mirror throttle
The default throttle speed is 10,240 KB/s, which can be adjusted depending on your
client IO pattern. To set the mirror throughput speed/throttle for mirror
synchronization, select Mirror --> Throttle.
Select the Enable Mirror Throttle checkbox and enter the throughput speed for
mirror synchronization. This option is disabled by default. If this option is disabled for
an individual device, the global settings will be followed. Refer to Set global
mirroring options.
The synchronization speed can go up to the specified value, but the actual
throughput depends upon the storage environment.
Note: The mirror throttle settings are retained when the mirror is enabled on the
same device.
The throughput speed can also be set for multiple devices (in batch mode) by right-clicking on Logical Resources in the console and selecting Set Mirror Throttle.
Set alternative read mirror
To set the alternative read mirror for mirror synchronization, select Mirror -->
Alternative read mirror.
Enable this option to have I/O alternately read from both the primary resource and the mirror.
The alternative read mirror can also be set in batch mode by right-clicking on Logical
Resources in the console and selecting Set Alternative Read Mirror.
Set mirror resynchronization priority
To set the resynchronization priority for pending mirror synchronization, select Mirror
--> Priority.
The Mirror resynchronization priority screen displays, allowing you to prioritize the
order in which devices/groups will begin mirroring if scheduled to start at the same time.
This option can be set for a single resource or a single group via the Mirror
submenu.
The resynchronization priority can also be set in batch mode by right-clicking on
Logical Resources in the console and selecting Set Mirror Priority.
Rebuild a mirror
The Rebuild option rebuilds a mirror from beginning to end and starts the mirroring
process once it is synchronized. The rebuild feature is useful if the mirror disk you
want to synchronize is from a different storage server.
A rebuild might be necessary if your disaster recovery site has been servicing clients
due to some type of issue, such as a storm or power outage, at your primary data
center. Once the problem is resolved, the mirror is out of sync. Because the mirror
disk is located on a different storage server in a remote location, the local storage
server must rebuild the mirror from beginning to end.
Before you rebuild a mirror, you must stop all client activity. After rebuilding the
mirror, swap the mirror so that the primary data center can service clients again.
To rebuild the mirror, right-click on a resource and select Mirror --> Rebuild.
You can see the current settings by checking the Mirror Synchronization Status field
on the General tab of the resource.
Suspend/resume mirroring
You can suspend mirroring for an individual resource or for multiple resources.
When you manually suspend a mirror, the system will not attempt to re-synchronize,
even if you have a re-synchronization policy. You will have to resume the mirror in
order to synchronize.
When mirroring is resumed, if the mirror is not synchronized, a synchronization will
be triggered immediately. During the synchronization, the system will monitor the
process and check lag time. Depending upon your monitoring policy, mirroring will
be suspended if the lag time gets above the acceptable limit.
To suspend/resume mirroring for an individual resource:
1. Right-click on a resource and select Mirror --> Suspend (or Resume).
You can see the current settings by checking the Mirror Synchronization Status
field on the General tab of the resource.
To suspend/resume mirroring for multiple resources:
1. Right-click on the SAN Resources object and select Mirror --> Suspend (or
Resume).
2. Select the appropriate resources.
3. If the resource is in a group, select the checkbox to include all of the group
members enabled with mirroring.
Change your mirroring configuration options
Set global mirroring options
You can set global mirroring options that affect system performance during
mirroring. While the default settings should be optimal for most configurations, you
can adjust the settings for special situations.
To set global mirroring properties for a server:
1. Right-click on the server and select Properties.
2. Select the Performance tab.
Throttle [n] KB/s (Range 128 - 1048576, 0 to disable) - The throttle
parameter allows you to set the maximum allowable mirror
synchronization speed, thereby minimizing potential impact to
performance for your devices. This option is set at 10 MB per second by
default. If disabled, throughput is unlimited.
Note: Actual throughput depends upon your storage environment.
Select the Start initial synchronization when mirror is added check box to have the mirror sync when added. By default, the mirror will not automatically
synchronize when added. If this option is not selected, the mirror will not
sync until the next synchronization interval or until a manual
synchronization operation is performed. This option is not applicable for
Near-line recovery and thin disk relocation.
Synchronize Out-of-Sync Mirrors - Indicate how often the system should
check and attempt to re-synchronize active out-of-sync mirrors. The
default is every five minutes and up to two mirrors at each interval. These
settings are also used for the initial synchronization during creation or
loading of the mirror. Manual synchronizations can be performed at any
time and are not included in the number of mirrors at each interval set
here.
Enter the retry value to indicate how often synchronization should be
retried if it fails to complete. The default is to retry 20 times. These settings
will only be used for active mirrors. If a mirror is suspended because the
lag time exceeds the acceptable limit, that re-synchronization policy will
apply instead.
Indicate whether or not to include replica mirrors in the re-synchronization
process by selecting the Include replica mirrors in the automatic
synchronization process checkbox. This is unchecked by default.
Change properties for a specific resource
You can change the following mirroring configuration for a resource:
Policy for monitoring the mirroring process
Conditions for re-synchronization
To change the configuration:
1. Right-click on the primary disk and select Mirror --> Properties.
2. Make the appropriate changes and click OK.
Remove a mirror configuration
Right-click on the SAN resource and select Mirror --> Remove to delete the mirrored
copy and cancel mirroring. You will not be able to access the mirrored copy
afterwards.
Mirroring and failover
If mirroring is in progress during failover/recovery, mirroring will restart from where it
left off once the failover/recovery is complete.
If the mirror is synchronized but there is a Fibre disconnection between the server
and storage, the mirror may become unsynchronized. It will re-synchronize
automatically after failover/recovery.
A synchronized mirror will always remain synchronized during a recovery process.
Snapshot Resource
TimeMark snapshots allow you to create point-in-time delta snapshot copies of
data volumes. The concept of performing a snapshot is similar to taking a picture.
When we take a photograph, we are capturing a moment in time and transferring
this moment in time to a photographic medium, even while changes are occurring to
the object we focused our picture on. Similarly, a snapshot of an entire device allows
us to capture data at any given moment in time and move it to either tape or another
storage medium, while allowing data to be written to the device.
The basic function of the snapshot engine is to allow images to be created of data
volumes (virtual drives) using minimal storage space. The snapshot initially uses no
disk space. As new data is written to the source volume, the old data blocks are
moved to a temporary snapshot storage area. By combining the snapshot storage
with the source volume, the data can be recreated exactly as it appeared at the time
the snapshot was taken. For added protection, a Snapshot Resource can also be
mirrored.
A trigger is an event that notifies the application when it is time to perform a
snapshot of a virtual device. FalconStor's Replication, TimeMark/CDP, Snapshot
Copy, and ZeroImpact Backup options all trigger snapshots.
Create a Snapshot Resource
Each SAN resource can have one Snapshot Resource. The Snapshot Resource
supports up to 64 TB and is shared by all of the FalconStor options that use
Snapshot (Replication, TimeMark/CDP, Snapshot Copy, and ZeroImpact backup).
Each snapshot initially uses no disk space. As new data is written to the source
volume, the old data blocks are moved to the Snapshot Resource. Therefore, it is
not necessary to have 100% of the size of the SAN resource reserved as a
Snapshot Resource. The amount of space initially reserved for each Snapshot
Resource is calculated as follows:
Size of SAN Resource                    Reserved for Snapshot Resource
Less than 500 MB                        100%
500 MB or more but less than 2 GB       50%
2 GB or more                            20%
Using the table above, if you create a 10 GB SAN resource, your initial Snapshot
Resource will be 2 GB but you can set the Snapshot Resource to expand
automatically, as needed.
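The reservation rule can be expressed as a quick calculation. The following is a rough sketch (not a product utility; sizes in MB, treating 2048 MB as the 2 GB boundary) that applies the table above:
snapshot_reserve_mb() {
  size_mb=$1
  if [ "$size_mb" -lt 500 ]; then
    echo "$size_mb"            # less than 500 MB: reserve 100%
  elif [ "$size_mb" -lt 2048 ]; then
    echo $(( size_mb / 2 ))    # 500 MB up to 2 GB: reserve 50%
  else
    echo $(( size_mb / 5 ))    # 2 GB or more: reserve 20%
  fi
}
snapshot_reserve_mb 10240      # 10 GB SAN resource: prints 2048 (2 GB reserved)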
If you create a SAN resource that is less than 500 MB, the amount of space
reserved for the Snapshot Resource will be 100% of the virtual drive size. This is
because a smaller-sized volume can overfill quickly, leaving no time for the auto-
expansion to take effect. By reserving a Snapshot Resource equal to 100% of the
SAN resource, the snapshot is able to free up enough space so normal write
operations can continue.
If you do not create a Snapshot Resource for your SAN resource, when you
configure Replication, TimeMark/CDP, Snapshot Copy, or backup, the Snapshot
wizard will launch first, allowing you to create it.
You can create a Snapshot Resource for a single SAN resource or you can use the
batch feature to create snapshot resources for multiple SAN resources:
1. For a single SAN resource, right-click on the resource and select Snapshot
Resource --> Create.
For multiple SAN resources, right-click on the SAN Resources object and select
Snapshot Resource --> Create.
2. Select the storage pool or physical device that should be used to create this
Snapshot Resource.
3. Select how you want to create this Snapshot Resource.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically
creates a Snapshot Resource using an available device.
Select different drive - The storage server will look for space on another
hard disk.
Select drives from different adapter/channel - The storage server will look
for space on another hard disk only if it is on a separate adapter/channel.
Select any available drive - The storage server will look for space on any
disk, including the original. This option is useful if you have mapped a
device (such as a RAID device) that looks like a single physical device to
your storage server.
If you select Custom, you will see the following windows:
Select either an entirely unallocated or partially unallocated device. Indicate how much space to allocate from this device. Click Add More if you need to add another physical disk to this Snapshot Resource; you will go back to the physical device selection screen where you can select another disk.
4. Verify the physical devices you have selected.
5. Determine whether the storage server should expand your Snapshot Resource if
it runs low and how it should be expanded.
Specify a threshold as a percentage of the space used. The threshold is used to
determine if more space is needed for the snapshot resource. The default is
50%.
If you want your storage server to automatically expand the Snapshot Resource
when space is running low, set the threshold level and make sure the option
Automatically allocate more space for the Snapshot Resource is selected. The
default expansion size is 20%. Make sure not to set this expansion increment too low; otherwise the snapshot resource may go offline if the snapshot expansion cannot be completed in time. However, if you have a very large snapshot resource, you can set this value to a small percentage.
Then, determine the amount of space to be allocated for each expansion. You
can set this to be a specific size (in MB) or a percentage of the size of the
Snapshot Resource. There is no limit to the number of times a Snapshot
Resource can be expanded.
Once the low space threshold is triggered, the system will attempt to expand the
resource by allocating additional space. The time required to accomplish this
may be in milliseconds or even seconds, depending on how busy the system is.
If expansion fails, old TimeMarks will be deleted until enough space is reclaimed
so that the Snapshot Resource does not run out of space.
To prevent this from happening, we recommend that you allow enough time for
expansion after the low space threshold is reached. We recommend that your
safety margin be at least five seconds. This means that from the time the low
space threshold is reached, while data is being written to the drive at maximum
throughput, it will take a minimum of five seconds to fill up the rest of the drive.
Therefore, if the maximum throughput is 50 MB/s, the threshold should be set for
when the space is below 250 MB. Of course if the throughput is lower, the
allowance can be lowered accordingly.
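Put another way, the guideline is simply maximum throughput multiplied by the safety margin. For example, with assumed workload numbers:
max_throughput=50   # MB/s (assumed maximum write throughput)
safety_margin=5     # seconds
echo "$(( max_throughput * safety_margin )) MB"    # prints 250 MB, the minimum free-space threshold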
The Maximum size allowed for the Snapshot Resource can be set to limit
automatic expansion. Specify 0 for no limit.
Note: If you do not select automatic expansion, old TimeMarks will be deleted
to prevent the Snapshot Resource from running out of space.
6. Configure what your storage server should do if your Snapshot Resource runs
out of space.
The default is to Always maintain write operations. If you are setting the
Snapshot Resource policy on a near-line mirror or replica, the default is to
Preserve all TimeMarks.
This will only occur if you have reached the maximum allowable size for your
Snapshot Resource or if you have chosen not to expand it. Once the maximum
is reached, the earliest TimeMarks will be deleted.
If a Snapshot Resource is associated with a member of a group enabled with
TimeMark, the earliest TimeMark will be deleted for all of the resources.
If you select Preserve all TimeMarks or Preserve recent TimeMarks, the system
will prevent any new writes from getting to the disk once the Snapshot Resource
runs out of space and it cannot allocate any more. As a result, clients can
experience write errors. If the client is a production machine, this may not be
desirable.
If you select Enable MicroScan, the data block will be analyzed and only the
changed data will be copied.
7. Determine if you want to use Snapshot Notification.
Snapshot Notification works with the Snapshot Agents to initiate a snapshot
request to a SAN client. When used, the system notifies the client to quiet
activity on the disk before a snapshot is taken. Using Snapshot Notification
guarantees that you will get a transactionally consistent image of your data.
8. Confirm that all information is correct and then click Finish.
You will now see a new Snapshot tab for this SAN resource.
Check status of a Snapshot Resource
You can see how much of your Snapshot Resource is currently being used and your
expansion methods by checking the Snapshot tab for a SAN resource.
Because Snapshot Resources record block-level changes, not file-level, you may
not see the Usage Percentage decrease when you delete files. This is because
deleted files really still exist on the disk.
The Usage Percentage bar colors indicate usage percentage in relation to the
threshold level:
The usage percentage is displayed in green as long as the available sectors are
greater than 120% of the threshold (in sectors). It is displayed in blue when available
sectors are less than 120% of threshold (in sectors) but still greater than the
threshold (in sectors). The usage percentage is displayed in red when the available
sectors are less than the threshold (in sectors).
Note that Snapshot resources will be marked off-line if the physical resource from
which they have been created is disconnected from a single server in a failover set
prior to failing over to the secondary server.
Protect your Snapshot Resources
If the physical disk that contains a snapshot resource fails, you will still be able to
access your SAN resource, but the snapshot data already in the Snapshot Resource
will become invalid. This means that you will not be able to roll back to a point-in-time image of your data.
However, you can protect your snapshot resources by using the Mirroring option.
With Mirroring, each time data is written to the Snapshot Resource, the same data is
also written to another disk which maintains an exact copy of the Snapshot
Resource. If the primary Snapshot Resource disk fails, the storage server
seamlessly swaps to the mirrored copy.
To mirror a Snapshot Resource, right-click on the SAN resource and select
Snapshot Resource --> Mirror --> Add.
Refer to the Mirroring section for more information.
Options for Snapshot Resources
When you right-click on a logical resource that has a Snapshot Resource, you will
see a Snapshot Resource menu with the following options:
Reinitialize
Expand
Shrink
Reinitialize allows you to refresh your Snapshot Resource and start over. You will
only need to reinitialize your Snapshot Resource if you are not mirroring it and it has
gone offline but is now back online.
Expand allows you to manually expand the size of your Snapshot Resource.
Shrink allows you to reduce the size of your Snapshot Resource. This is
useful if your snapshot resource does not need all of the space currently allocated to
it.
Based on current usage, when you select the Shrink option, the system calculates
the maximum amount of space that can be used to shrink the Snapshot Resource.
The amount of disk space saved by this operation is calculated from the last block of
data where data is written. If there are gaps between blocks of data, the gaps are not
included in the amount of space saved.
Note: Be sure to stop all I/O to the source resource before starting this operation.
If you have I/O occurring during the shrinking process, the space used for the
Snapshot Resource may increase and the operation may fail.
Delete
Properties
Mirror
Delete allows you to delete the Snapshot Resource for this logical resource.
Properties allows you to change the snapshot resource automatic expansion policy
and snapshot notification policies.
Mirror allows you to protect your Snapshot Resource by creating a mirror of it.
Reclaim
Reclaim allows you to free available space in the snapshot resource. Enable the
reclamation policy to automatically free up space when a TimeMark Snapshot is
deleted. Once the snapshot is deleted, space will be reclaimed at the next
scheduled reclamation.
Snapshot Resource shrink and reclamation policies
The reclamation policy allows you to save space by reclaiming previously used
storage areas. In the regular course of running your business, TimeMarks are added
and deleted. However, the amount of space used up by the deleted TimeMark does
not automatically return to the available resource pool until the space is reclaimed.
Space can be reclaimed automatically by setting a schedule or manually. For
manual reclamation, you can select a TimeMark to be reclaimed one at a time. For
scheduled reclamation, you can reclaim all the deleted TimeMarks on that device.
Scheduling allows you to set the reclamation policy to automatically free up space
when a TimeMark Snapshot is deleted.
Enable Reclamation Policy
The global reclamation policy is enabled by default and scheduled to run at 12:00
a.m. every seven days, automatically removing obsolete TimeView data and
conserving space. You can also enable the reclamation option for an individual SAN
resource.
While setting the reclamation policy for automatic reclamation works to conserve
space in most instances, there are some cases where you may need to manually
reclaim space. For example, if you delete a TimeMark Snapshot other than the first
or the last one, space will not automatically be available.
In this case, you can manually reclaim the space by right-clicking on the SAN
resource in the FalconStor Management Console and selecting Snapshot Resource
--> Reclaim --> Start.
Highlight the TimeMark(s) to start the reclamation process and click OK.
Notes:
If auto-expansion occurs on the Snapshot Resource while the reclamation
process is in progress, the reclamation operation will not succeed. The auto-expansion will be skipped as well.
Delete TimeMark and Rollback TimeMark operations are not supported during
reclamation. You must stop reclamation before attempting either operation.
You can stop a reclaim process by right-clicking on the SAN resource in the
FalconStor Management Console and selecting Snapshot Resource --> Reclaim
--> Stop.
To enable a reclamation policy for a particular SAN resource:
1. Right-click on the SAN resource in the FalconStor Management Console and
select Snapshot Resource --> Reclaim --> Enable.
The Enable Reclamation Policy screen displays.
2. Enter the following reclamation policy parameters:
Set the Reclaim threshold - Reclaim space from deleted TimeMarks if
there is at least 2 MB of data to be reclaimed. The default is 2 MB;
however, you can set your own threshold (in MB or percentage) for the
minimum amount of space to be reclaimed per TimeMark.
Set the Reclaim schedule - Enter the date and time to start the reclamation
schedule, along with the repeat interval.
Set the maximum processing time for reclamation - Specify the maximum
time for the reclamation process. Once this threshold is reached, the
reclamation process will stop. Specify 0 to set an unlimited processing
time. It is recommended that you schedule lengthy reclamation processing
during non-peak operation periods.
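As a rough illustration of how these parameters interact, the following Python sketch walks a list of deleted TimeMarks and selects for reclamation only those that meet the threshold, stopping once the maximum processing time elapses. The function and field names are hypothetical and only restate the policy described above; the real server logic is not shown here, and the threshold is assumed to be expressed in MB.

    import time

    def select_timemarks_to_reclaim(deleted_timemarks, threshold_mb=2, max_minutes=0):
        """Pick deleted TimeMarks whose reclaimable space meets the threshold,
        stopping once the maximum processing time (0 = unlimited) elapses."""
        start = time.time()
        selected = []
        for tm in deleted_timemarks:
            if max_minutes and (time.time() - start) / 60 >= max_minutes:
                break  # maximum processing time reached; remaining TimeMarks wait
            if tm["reclaimable_mb"] >= threshold_mb:
                selected.append(tm)
        return selected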
Global reclamation policy and retention schedule
You can set and/or edit the global reclamation policy and the TimeMark retention
schedule via server properties by right-clicking on the server, and selecting
Properties --> TimeMark Maintenance tab.
Note: If reclamation is in progress and failover occurs, the reclamation will fail
gracefully. After failover, the global reclamation policy will use the setting on the
primary server. For example, if the global reclamation schedule has been disabled on the primary server but is enabled on the secondary server (its failover pair), the global reclamation schedule will not be triggered on the device(s) on the primary server after failover.
Once the reclamation policy has been configured, at-a-glance information regarding
reclamation settings can be obtained from the FalconStor Management Console -->
Snapshot Resource tab.
Disable Reclamation
To disable the reclamation policy, right-click on the SAN resource in the FalconStor
Management Console and select Snapshot Resource --> Reclaim --> Disable.
Note: If the global reclamation schedule is disabled on the primary server but enabled on the secondary server (its failover pair), no global reclamation schedule will be triggered on the device(s) on the primary server after failover.
Check reclamation status
You can check the status of a reclaim process by highlighting the appropriate node
under SAN Resources in the console.
Shrink Policy
Just as you can set your snapshot resource to automatically expand when it requires
more space, you can also set it to "shrink" when it can reclaim unused space.
Setting the shrink policy for your snapshot resources is another way to conserve
space.
The shrink policy allows you to shrink the size of a Snapshot Resource after each
successful scheduled reclamation. The shrink policy can be set for multiple SAN
resources as well as for individual resources.
In order to set a shrink policy, a global or individual reclamation policy must be
enabled for the SAN resource. Shrinkage amounts depend upon the minimum
amount of disk space you set to trigger the shrink policy. When the shrink policy is
triggered, the system calculates the maximum amount of space that can be used to
shrink the snapshot resource. The amount of disk space saved by this operation is
calculated from the last block of data where data is written. When the specified
amount of space to be gained is equal to, or greater than the number entered,
shrinkage occurs. The snapshot resource can shrink down to the minimum size you
set for the resource.
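To make the trigger conditions concrete, here is a minimal Python sketch of the decision described above. It assumes sizes in GB, assumes the minimum Snapshot Resource size acts as a floor that the shrink cannot go below, and uses hypothetical names; it is not the product's actual implementation.

    def should_shrink(reclaimable_gb, current_size_gb,
                      min_space_to_trigger_gb=1, min_resource_size_gb=1):
        """Shrink only when the space that can be gained meets the trigger
        amount and the resource would stay at or above its minimum size."""
        return (reclaimable_gb >= min_space_to_trigger_gb and
                current_size_gb - reclaimable_gb >= min_resource_size_gb)

    def new_size_after_shrink(reclaimable_gb, current_size_gb, min_resource_size_gb=1):
        """The resource can shrink down to, but not below, the minimum size."""
        return max(current_size_gb - reclaimable_gb, min_resource_size_gb)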
To set the shrink policy:
1. Right-click on SAN Resources and select Snapshot Resource --> Properties.
2. Click the Advanced button.
3. Set the minimum amount of disk space and the minimum snapshot resource size
that will trigger the shrink policy.
When the amount of space to be reclaimed is equal to, or greater than the
minimum disk space specified here and the minimum Snapshot Resource size is
reached, the shrink policy will be triggered. By default, the Enable this Snapshot
Resource to Shrink option is disabled. The minimum Amount of Disk Space to
Trigger Policy is set to 1 GB.
4. Set the minimum Snapshot Resource size. Enter the amount of space to keep.
The Snapshot Resource will remain equal to or greater than this size. The minimum
Snapshot Resource size is 1 GB by default.
Once the shrink policy has been enabled, at-a-glance information regarding shrink
policy settings can be obtained from the FalconStor Management Console -->
Snapshot Resource tab.
Shrink a snapshot resource
1. Highlight the Replication node in the navigation tree.
2. Right-click on the replica resource that needs shrinking and select Snapshot
Resource --> Shrink.
The shrink option will be unavailable if the TimeMark option is not enabled on the
replica resource.
3. Enter the amount of space to be reclaimed from the current snapshot space and
enter YES to confirm.
By default, the maximum amount of space that can be reclaimed within the
snapshot resource will be calculated.
If there are no TimeMarks on the replica resource, the size will automatically
be calculated as 50% of the actual snapshot resource space.
Use Snapshot to copy a SAN resource
FalconStor's Snapshot Copy option allows you to create a duplicate, independent
point-in-time copy of a SAN resource without impacting application servers. The
entire resource is copied to another drive, overwriting any data on the target drive.
The source must have a Snapshot Resource in order to create a Snapshot Copy. If it
does not have one, you will be prompted to create one. Refer to Create a Snapshot
Resource for more information.
Note: We recommend that if a Snapshot Copy is being taken of a large database
without the use of a FalconStor Snapshot Agent, the database should reside on a
journaling file system (JFS). Otherwise, under heavy I/O, there is a slight possibility that the file system could be changed, resulting in the need to run a file system
check (fsck) in order to repair the file system.
1. Right-click on the SAN resource that you want to copy and select Copy.
2. Select how you want to create the target resource.
Custom lets you select which physical device(s) to use and lets you
designate how much space to allocate from each.
Express automatically creates the target for you from available hard disk
segments.
Select Existing lets you select an existing resource. There are several
restrictions as to what you can select:
- The target must be the same type as the source.
- The target must be the same size as or larger than the source.
Note: All data on the target will be overwritten.
If you select Custom, you will see the physical device selection windows. Only one
disk can be selected at a time from this dialog. To create a target resource from
multiple physical disks, you will need to add the disks one at a time; after selecting
the parameters for the first disk, you will have the option to add more disks. You will
need to do this if the first disk does not have enough space.
Indicate how much space to allocate from the selected disk. Click Add More if you
need to add another physical disk to this target resource; you will go back to the
physical device selection screen, where you can select another disk.
If you selected Select Existing in step 2, you will see the following window from
which you can select an existing resource:
3. Enter a name for the target resource.
The name is not case sensitive.
4. Confirm that all information is correct and then click Finish to perform the
Snapshot Copy.
Note: If a failover or recovery occurs when snapshot copy is taking place, the
snapshot copy will fail. You must resubmit the snapshot copy afterwards.
5. Assign the snapshot copy to a client.
Note: If you attempt to assign a snapshot copy of a virtual disk multiple times to
the same Windows SAN Client, the snapshot copy will fail to import. This is
because the import of the foreign disk uses the same disk group name as that of
the current computer's disk group. This is a problem with Dynamic Disks; Basic
Disks will not have this issue.
Check Snapshot Copy status
You can see the current status of your Snapshot Copy by checking the General tab
of either the virtual drive you copied from or the one you copied to.
Snapshot Copy events are also written to the server's Event Log, so you can check
there for status information, as well as any errors.
Groups
The Group feature allows virtual drives and service-enabled drives to be grouped
together. Groups can be created for different reasons: for CDP purposes, for
snapshot synchronization, for organizational purposes, or for caching using the
SafeCache option.
Snapshot synchronization builds on FalconStor's snapshot technology, which
ensures point-in-time consistency for data recovery purposes. Snapshots for all
resources in a group are taken at the same time whenever a snapshot is triggered.
Working in conjunction with the database-aware Snapshot Agents, groups ensure
transactional integrity for database or messaging files that reside on multiple disks.
You can create up to 64 groups. When you create a group, you can configure
TimeMark/CDP, Backup, Replication, and SafeCache (and, indirectly, asynchronous
mirroring) for the entire group. All members of the group get configured the same
way.
Create a group
To create a group:
1. In the FalconStor Management Console, right-click on Groups and select New.
Depending upon which options you enable, the subsequent screens will let you
set group policies for those options. Refer to the appropriate section(s)
(Replication, ZeroImpact Backup, TimeMarks and CDP, or SafeCache) for
details on configuration.
Note that you cannot enable CDP and SafeCache for the same group.
2. Indicate if you would like to add SAN resources to this group.
Refer to the following sections for limitations as to which SAN resources can/
cannot join a group.
Groups with TimeMark/CDP enabled
The following notes affect groups configured for TimeMark/CDP:
You cannot add a resource to a group configured for either TimeMark or
CDP if the resource is already configured for CDP.
You cannot add a resource to a group configured for CDP if the resource is
already configured for SafeCache.
CDP can only be enabled for an existing group if members of the group do
not have CDP or SafeCache enabled.
TimeMark can be enabled for an existing group if members of the group
have TimeMark enabled.
The group will have only one CDP journal. You will not see a CDP tab for the
individual resources.
If you want to remove a resource from a group with CDP enabled, you must
first suspend the CDP journal for the entire group and wait until it finishes
flushing.
Groups with TimeMark/CDP enabled: If a member of a group has its own
TimeMark that needs to be updated, it must leave the group, make the
TimeMark updates individually, and then rejoin the group.
Groups with SafeCache enabled
The following notes affect groups configured for SafeCache:
You cannot add a resource to a group configured for SafeCache if the
resource is already configured for SafeCache.
SafeCache can only be enabled for an existing group if members of the
group do not have CDP or SafeCache enabled.
The group will have only one SafeCache resource. You will not see a
SafeCache tab for the individual resources.
If you want to remove a resource from a group with SafeCache enabled,
you must first suspend SafeCache for the entire group.
Groups with replication enabled
The following notes affect groups configured for replication:
When you create a group on the primary server, the target server gets a
group also.
When you add resources to a group configured for replication, you can
select any resource that is already configured for replication on the target
server or any resource that does not have replication configured at all. You
cannot select a resource if it is configured for replication to a different server.
If a watermark policy is used for replication, the retry delay value configured
affects each group member individually rather than the group as a whole.
For example, if replication starts for the group and a group member fails
during the replication process, the retry delay value will take effect. In the
meantime, if another resource in the group reaches its watermark, a group
replication will be triggered for all group members and the retry delay will
become irrelevant.
If you are using continuous replication, the group will have only one
Continuous Replication Resource.
If a group is configured for continuous replication, you cannot add a
resource to the group if the resource has continuous replication enabled.
Similarly, continuous replication can only be enabled for an existing group if
members of the group do not have continuous replication enabled.
If you add a resource to a group that is configured for continuous replication,
the system switches to periodic replication mode until the next regularly scheduled replication takes place.
Grant access to a group
By default, only the root user and IPStor administrators can manage SAN resources,
groups, or clients. While IPStor users can add new groups, if you want a CDP/NSS
user to manage an existing group, you must grant that user access. To do this:
1. Right-click on a group and select Access Control.
2. Select which user can manage this group.
Each group can only be assigned to one IPStor user. This user will have rights to
perform any function on this group, including assigning, joining, and configuring
storage services.
Add resources to a group
Each group can be comprised of multiple SAN resources. Each resource can only
join one group, and you cannot mix both types of resources (virtual drives and service-enabled drives) in the same group.
Note: There is a limit of 128 resources per group. If the group is enabled for replication, the recommended limit is 50.
There are several ways to add resources to a group. After you create a group, you
will be prompted to add resources. At any time afterwards, you can:
1. Right-click on any group and select Join.
You can also right-click on any SAN resource and select Group --> Join.
2. Select the type of resources that will join this group.
If this is a group with existing members, you will see a list of members instead.
3. Determine if you want to use Express Mode.
If you select Express Mode, you will be able to select multiple resources to join
this group at one time. After you finish selecting resources, they will
automatically be synchronized with the options and settings configured for the
group.
If you do not select Express Mode, you will need to select resources one-by-one.
For each resource, you will be taken through the applicable Replication and/or
Backup wizard(s) and you will have to manually configure each option.
(TimeMark is always configured automatically.)
4. Select resources to join this group.
If you started the wizard from a SAN resource instead of from a group, you will
see the following window and you will select a group, instead of a resource:
When you click Next, you will see the options that must be activated. You will be
taken through the applicable Replication and/or Backup wizard(s) so you can
manually configure each option. (TimeMark is always configured automatically.)
5. Confirm all information and click Finish to add the resource(s) to the group.
Each resource will now have a tab for each configured option except CDP and
SafeCache which share a CDP journal or SafeCache resource as a group.
By default, group members are not automatically assigned to clients. You must
still remember to assign your group members to the appropriate client(s).
Remove resources from a group
Note that if you want to remove a resource from a group with CDP or SafeCache
enabled, you must first suspend the CDP journal for the group and wait for it to finish
flushing or suspend SafeCache. To suspend the CDP journal, right-click on the
group and select TimeMark/CDP --> CDP Journal --> Suspend. Afterwards, you will
need to resume the CDP journal. To suspend SafeCache, right-click on the group
and select SafeCache --> Suspend.
To remove resources from a group:
1. Right-click on any group and select Leave.
2. Select resources to leave this group.
For groups enabled with Backup or Replication, leaving the group does not
disable Backup or Replication for the resource.
TimeMarks and CDP
Overview
FalconStor's TimeMark and CDP options protect your mission-critical data, enabling
you to recover data from a previous point-in-time.
TimeMarks are point-in-time images of any SAN virtual drive. Using FalconStor's
Snapshot technology, TimeMarks track multiple virtual images of the same disk
marked by "time". If you need to retrieve a deleted file or "undo" data corruption, you
can recreate/restore the file instantly based on any of the existing TimeMarks.
While the TimeMark option allows you to track changes to specific points in time,
with Continuous Data Protection (CDP) you can roll back data to any point-in-time.
TimeMark/CDP guards against soft errors, non-catastrophic data loss, including the
accidental deletion of files and software/virus issues leading to data corruption.
TimeMark/CDP protects where high availability configurations cannot, since in
creating a redundant set of data, high availability configurations also create a
duplicate set of soft errors by default. TimeMark/CDP protects data from your slip-ups, from the butter fingers of employees, from unforeseen glitches during backup, and
from the malicious intent of viruses.
The TimeMark/CDP option also provides an undo button for data processing.
Traditionally, when an administrator performed operations on a data set, a full
backup was required before each dangerous step, as a safety net. If the step
resulted in undesirable effects, the administrator needed to restore the data set and
start the process all over again. With FalconStor's TimeMark/CDP option, you can
easily rollback (restore) a drive to its original state.
FalconStor's TimeView feature is an extension of the TimeMark/CDP option and
allows you to mount a virtual drive as of a specific point-in-time. Deleted files can be
retrieved from the drive or the drive can be assigned to multiple application servers
for concurrent, independent processing, all while the original data set is still actively
being accessed/updated by the primary application server. This is useful for "what if"
scenarios, such as testing a new payroll application on your actual, but not live,
data.
Configure TimeMark properties by right-clicking on the TimeMark/CDP option and
selecting Properties.
Setup
You will need a Snapshot Resource for the logical resource you are going to
configure. If you do not have one, you will create it through the wizard. Refer to
Create a Snapshot Resource for more information.
1. Right-click on a SAN resource, incoming replica resource, or a Group and select
TimeMark/CDP --> Enable.
For multiple SAN resources, right-click on the SAN Resources object and select
TimeMark/CDP --> Enable.
The Enable TimeMark/CDP Wizard launches.
2. Indicate if you want to enable CDP. Select the checkbox to enable CDP.
CDP enhances the benefits of using TimeMark by recording all changes made to
data, allowing you to recover to any point in time.
Note: If you enable CDP on the replica, it is recommended that you perform
replication synchronization. CDP journaling will not begin until the next
successful replication. You can wait until the next scheduled replication
synchronization or manually trigger synchronization. To manually trigger
replication synchronization, right-click on the primary server and select
Replication --> Synchronization.
3. (CDP only) Select the storage pool or physical device that should be used to
create the CDP journal.
4. (CDP only) Select how you want to create the CDP journal.
The minimum size required for the journal is 1 GB, which is the default size.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically
creates a CDP journal using an available device.
Select different drive - look for space on another hard disk.
Select drives from different adapter/channel - look for space on another
hard disk only if it is on a separate adapter/channel.
Select any available drive - look for space on any disk, including the
original. This option is useful if you have mapped a device (such as a
RAID device) that looks like a single physical device.
Note: The CDP Journal performance level is set to Moderate by default. You
can modify this setting (to aggressive) by right-clicking on the SAN resource
and selecting TimeMark/CDP --> CDP Journal --> Performance.
5. Determine how often TimeMarks should be created.
6. Select the Retention Policy.
You can select the following retention policies:
Keep the maximum number of TimeMarks. This number can vary
depending upon your system resources and license. The maximum
number of TimeMarks allowed, set in the previous screen, will limit this
number.
Keep the ___ most recent TimeMarks. (The maximum can be up to 1000,
depending on your license. The default is 8.)
Keep TimeMarks based on the following rule:
Keep all TimeMarks for the past [1-168 hours or 1-365 days] hours /
days. (default is 1 day)
Keep hourly TimeMarks for the past [0 - 365] days and use the
TimeMark closest to [0 - 59] as the hourly TimeMark. (default is 1 day
and 0 as the hour)
Keep daily TimeMarks for the past [0 - 365] days and use the
TimeMark closest to [0 - 23] as the daily TimeMark.
Keep weekly TimeMarks for the past [0 - 110] weeks and use the
TimeMark closest to [Monday - Sunday] as the weekly TimeMark.
Keep monthly TimeMarks for the past [0 - 120] months and use the
TimeMark closest to [1 - 31] as the monthly TimeMark.
Specifying the number of TimeMarks to keep for each level allows you to define
snapshot preserving patterns to organize your TimeMarks. By indicating the
number of TimeMarks to keep at each level, you can specify how many
TimeMarks to keep for any or all of the categories. The categories are hourly,
daily, weekly, and monthly.
This feature allows you to save a pre-determined number of TimeMarks and
delete the rest. The TimeMarks that are preserved are the result of the pruning
process. This method allows you to keep only meaningful snapshots.
When defining the TimeMark retention policy, you are prompted to specify the
offset of the moment to keep, i.e. Use the TimeMark closest to___. For example,
for daily TimeMarks, you are asked to specify which hour of the day to use for
the TimeMark. For weekly TimeMarks, you are asked which day of the week to
keep. If you set an offset for which there is no TimeMark, the closest one to that
time is taken.
The default offset values correspond to typical usage based on the fact that the
older the information, the less valuable it is. For instance, you can take
TimeMarks every 20 minutes, but keep only those snapshots taken at the minute
00 each hour for the last 24 hours.
7. Select the Trigger replication after TimeMark is taken checkbox if TimeMark and
Replication are both enabled for this device/group in order to have replication
triggered automatically after each TimeMark event.
If TimeMark is enabled for a group, replication must also be enabled at the group
level. You should manually suspend the replication schedule when using this
option to avoid a scheduling conflict.
8. Confirm that all information is correct and then click Finish to enable TimeMark/
CDP.
You now have a TimeMark tab for this resource or group. If you enabled CDP,
you also have a separate CDP tab. If you are using CDP, the TimeMarks will be
points within the CDP journal.
In order for a TimeMark to be created, you must select Create an initial
TimeMark on... policy. Otherwise, you will have enabled TimeMark, but not
created any. You will then need to manually create them using TimeMark/CDP
--> Create.
If you are configuring TimeMark for an incoming replica resource, you cannot
select the Create an initial TimeMark on... policy. Instead, a TimeMark will be
created after each scheduled replication job finishes.
Depending upon the version of your system, the maximum number of
TimeMarks that can be maintained is 1000. The maximum does not include the
snapshot images that are associated with TimeView resources. Once the
maximum is reached, the earliest TimeMarks will be deleted depending upon
priority. Low priority TimeMarks are deleted first, followed by Medium, High, and
then Critical. When a TimeMark is deleted, journal data is merged together with
a previous TimeMark (or a newer TimeMark, if no previous TimeMarks exist).
Note:
If CDP is enabled, only 256 TimeMarks are supported. This is
because CDP can only allow 256 snapshot markers, regardless of
whether they are flushed or not.
A temporary TimeMark does not count toward the maximum
TimeMark count within the list.
The first TimeMark that is created when CDP is used will have a Medium priority.
Subsequent TimeMarks will have a Medium priority by default, but can be
changed manually. Refer to Add a comment or change priority of an existing
TimeMark for more information.
Note:
A TimeView cannot be created from the CDP journal if the TimeMark
already has TimeView data or is a VSS TimeMark.
When a TimeView is created from the CDP journal, it is
recommended that you change the default 32 MB setting to a larger
size to accommodate the large amount of data.
Snapshot Notification works with FalconStor Snapshot Agents to initiate a
snapshot request to a SAN client. When used, the system notifies the client to
quiet activity on the disk before a snapshot is taken. Using snapshot notification
guarantees that you will get a transactionally consistent image of your data.
This might take some time if the client is busy. You can speed up processing by
skipping snapshot notification if you know that the client will not be updating data
when a TimeMark is taken. Use the Trigger snapshot notification for every n
scheduled TimeMark(s) option to select which TimeMarks should use snapshot
notification.
Note: Once you have successfully enabled CDP on the replica, perform
Replication synchronization.
Check TimeMark status
You can see a list of TimeMarks for this virtual drive, along with your TimeMark
policies, by clicking the TimeMark tab.
TimeMarks displayed in orange are pending, meaning there is unflushed data in the
CDP journal. Unflushed TimeMarks cannot be selected for rollback or TimeView.
To re-order the list of TimeMarks, click on a column heading to sort the list.
The Quiescent column indicates whether or not snapshot notification occurred when
the TimeMark was created. When a device is assigned to a client, the initial value is
set to No. A Yes in the Quiescent column indicates there is an available agent on the
client to handle the snapshot notification, and the snapshot notification was
successful.
If a device is assigned to multiple clients, such as nodes of a cluster, the Quiescent
column displays Yes only if the snapshot notification is successful on all clients; if
there is a failure on one of the clients, the column displays No.
However, in the case of a VSS cluster, the Quiescent column displays Yes with VSS
when the entire VSS process has successfully completed on the active node and
the snapshot has been created.
If you are looking at this tab for a replica resource, the status will be carried from the
primary resource. For example, if the TimeMark created on the primary virtual
device used snapshot notification, Quiescent will be set to Yes for the replica.
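A simplified sketch of how the Quiescent value can be thought of when a device is assigned to several clients follows. It only restates the rules above; the names are hypothetical and this is not the console's actual code.

    def quiescent_value(notification_results, vss_completed=False):
        """notification_results: per-client True/False snapshot notification outcomes.
        The column shows Yes only if notification succeeded on every client;
        for a VSS cluster it shows 'Yes with VSS' once the VSS process completes."""
        if vss_completed:
            return "Yes with VSS"
        return "Yes" if notification_results and all(notification_results) else "No"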
The TimeView Data column indicates whether TimeView data or a TimeView
resource exists on the TimeMark.
The Status column indicates the TimeMark state.
Note: A vdev expanded TimeMark is created automatically when a source
device with CDP is expanded.
Right-click on the virtual drive and select Refresh to update the TimeMark Used Size
and other information on this tab. To see how much space TimeMark is using, check
the Snapshot Resource tab.
Check CDP journal status
You can see the current size and status of your CDP journal by checking the CDP
tab.
Protect your CDP journal
This section applies only to CDP.
You can protect your CDP journal by using FalconStor's Mirroring option. With
Mirroring, each time data is written to the journal, the same data is also written to
another disk which maintains an exact copy of the journal. If the primary journal disk
fails, CDP seamlessly swaps to the mirrored copy.
To mirror a journal, right-click on the SAN resource and select TimeMark/CDP -->
CDP Journal --> Mirror --> Add.
Add a tag to the CDP journal
You can manually add a tag to the CDP journal. The tag will be used to notate the
journal when the next I/O occurs. Adding a tag with a meaningful comment is useful
for marking special situations, such as system maintenance or software upgrades.
With these tags, it is easy to find the point just prior to when the system maintenance
or software upgrade began, making rollback easy and accurate.
1. Highlight a SAN resource and select TimeMark/CDP --> CDP Journal
--> Add tag.
2. Type in a tag and click OK.
Add a comment or change priority of an existing TimeMark
You can add a comment to an existing TimeMark to make it easy to identify later. For
example, you might add a known good recovery point, such as an application
checkpoint to identify a TimeMark for easy recovery.
You can also change the priority of a TimeMark. Priority eases long term
management of TimeMarks by allowing you to designate importance, aiding in the
preservation of critical point-in-time images.
Priority affects how TimeMarks will be deleted once the maximum number of
TimeMarks to keep has been reached. Low priority TimeMarks are deleted first,
followed by Medium, High, and then Critical.
Note: Groups with TimeMark/CDP enabled: If a member of a group has its own
TimeMark that needs to be updated, it must leave the group, make the TimeMark
updates individually, and then rejoin the group.
1. Right-click on the TimeMarked SAN resource that you want to update and select
TimeMark/CDP --> Update.
2. Click in the Comment or Priority field to make/change entries.
3. Click Update when done.
Manually create a TimeMark
1. To create a TimeMark that is not scheduled, select TimeMark/CDP --> Create.
2. If desired, add a comment for the TimeMark that will make it easily identifiable
later if you need to locate it.
3. Set the priority for this TimeMark.
Once the maximum number of TimeMarks to keep has been reached, the
earliest TimeMarks will be deleted depending upon priority. Low priority
TimeMarks are deleted first, followed by Medium, High, and then Critical.
4. Indicate if you want to use Snapshot Notification for this TimeMark.
Snapshot Notification works with FalconStor Snapshot Agents to initiate a
snapshot request to a SAN client. When used, the system notifies the client to
quiet activity on the disk before a snapshot is taken. Using snapshot notification
guarantees that you will get a transactionally consistent image of your data.
This might take some time if the client is busy. You can speed up processing by
skipping snapshot notification if you know that the client will not be updating data
when this TimeMark is taken.
The use of this option overrides the Snapshot Notification setting in the snapshot
policy.
Copy a TimeMark
The Copy feature works similarly to FalconStor's Snapshot Copy option. It allows
you to take a TimeMark image of a drive (for example, how your drive looked at 9:00
this morning) and copy the entire drive image to another virtual drive or SAN
resource. The virtual drive or SAN resource can then be assigned to clients for use
and configured for FalconStor storage services.
1. Right-click on the TimeMarked SAN resource that you want to copy and select
TimeMark/CDP --> Copy.
Note: Do not initiate a TimeMark Copy while replication is in progress. Doing
so will result in the failure of both processes.
2. Select the TimeMark image that you want to copy.
To copy the TimeMark and TimeView data, select the Copy the TimeMark and
TimeView data checkbox at the bottom left of the screen.
This option is only available if there is TimeView data available. This option is not
available if the TimeView data is in use/mounted or if there is no TimeView. In
this case, you will only be able to create a copy of the disk image at the time of
the timestamp (without new data that has been written to the TimeView). To
capture the new data in this case, see the example below.
For example, if you have assigned a TimeView to a disaster recovery (DR) host
and have started writing new data to the TimeView, when you use TimeMark
Copy you will have a copy of the point in time without the "new" data that was
written to the TimeView. In order to create a full disk copy to include the data in
the TimeView, you will need to unassign the TimeView from the DR host, delete
the TimeView and select the keep the TimeView data persistent option.
Afterwards, TimeMark Copy will include the new data. You can recreate the
TimeView again with the new data and assign back to the DR host.
To revert back to the original TimeMark, you must delete the TimeView again,
but do not select the keep the TimeView data persistent option. This will remove
the new data from the TimeMark.
3. Select how you want to create the target resource.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the target for you from available hard disk
segments. You will only have to select the storage pool or physical device that
should be used to create the copy.
Select Existing lets you select an existing resource. There are several
restrictions as to what you can select:
The target must be the same type as the source.
The target must be the same size as, or larger than, the source.
The target cannot have any Clients assigned or attached.
Note: All data on the target will be overwritten.
4. Enter a name for the target resource.
5. Confirm that all information is correct and then click Finish to perform the
TimeMark Copy.
You can see the current status of your TimeMark Copy by checking the General
tab of either virtual drive. You can also check the server's Event Log for status
information.
Recover data using the TimeView feature
TimeView allows you to mount a virtual drive as of a specific point-in-time, based on
your existing TimeMarks or your CDP journal.
Use TimeView if you need to restore individual files from a drive but you do not want
to rollback the entire drive to a previous point in time. Simply use TimeView to mount
the virtual drive and then copy the files you need back to your original virtual drive.
TimeView also enables you to perform "what if" scenarios, such as testing a new
payroll application on your actual, but not live, data. After mounting the virtual drive,
it can be assigned to an application server for independent processing without
affecting the original data set. A TimeView cannot be configured for any of
FalconStor's storage services.
Why should you use TimeView instead of Copy? Unlike Copy, which creates a new
virtual drive and requires disk space equal to or larger than the original disk, a
TimeView requires very little disk space to mount. It is also quicker to create a
TimeView than to copy data to a new virtual drive.
Note: Clients may not be able to access TimeViews during failover.
1. Highlight a SAN resource and select TimeMark/CDP --> TimeView.
The Create TimeView Wizard displays.
In the wizard, move the slider to select any point in time; you can also type in the
date and time down to the millisecond and microsecond. Click Zoom In to see
greater detail for the selected time period, or click the Select Tag button to choose a
CDP journal tag that was manually added.
If this resource has CDP enabled, the top section contains a graph with marks
that represent TimeMarks.
The graph is a relative reflection of the data changing between TimeMarks within
the available journal range. The vertical y axis represents data usage per
TimeMark; the height of each mark represents the Used Size of each TimeMark.
The horizontal x axis represents time. Each mark on the graph indicates a single
TimeMark. You will not see TimeMarks that have no data.
Because the graph is a relative reflection of data, and the differences in data
usage can be very large, the proportional height of each TimeMark might not be
very obvious.
For example, if you have one TimeMark with a size of 500 MB followed by
several much smaller TimeMarks, the 500 MB TimeMark will be much more
visible.
Similarly, if the maximum number of TimeMarks has been reached and older
TimeMarks have been deleted to make way for newer ones, journal data is
merged together with a previous TimeMark (or a newer TimeMark, if no previous
exist). Therefore, it is possible that you will see one large TimeMark containing
all of the merged data.
Also, since the length of the x axis can reflect a range anywhere from one hour to 30
days, the location of an actual data point is approximate. Zooming in and using
the Search button will allow you to get a more accurate location of a particular
data point.
If CDP is enabled, you can use the visual slider to create a TimeView from any
point in the CDP journal or you can create a TimeView from a scheduled
TimeMark.
You can also click the Select Tag button to select a CDP journal tag that was
manually added or was automatically added by CDP after a rollback occurred.
Note that you will only see the tags for which there was subsequent I/O.
If CDP is not enabled, you will only be able to create a TimeView from a
scheduled TimeMark.
2. To create a TimeView from a scheduled TimeMark, select Create TimeView from
TimeMark Snapshots, highlight the correct TimeMark, and click OK.
If this is a replica server, the timestamp of a TimeMark is the timestamp of the
source (not the replica's local time).
3. To create a TimeView from the CDP journal, use the slider or type in an
approximate time.
For example, if you are trying to find a deleted file, select a time prior to when the
file was deleted. If this was an active file, aim for a time just prior to when the file
was deleted so that you can recover the most up-to-date version.
If you are positive that the time you selected is correct, you can click OK to
create a TimeView. If you are unsure of the exact time, you can zoom into an
approximate time period to see greater detail, such as seconds, milliseconds,
and even microseconds.
4. If you need to see greater detail, click Zoom In.
You can see the I/O that occurred during this five minute time frame displayed in
seconds.
If you zoomed in and don't see what you are looking for, you can click the Scroll
button. It will move forwards or backwards by five minutes within the period of
this TimeMark.
You can also click the Search button to locate data or a period with limited or no
I/O.
At any point, if you know what time you want to select, you can click OK to return
to the main dialog so that you can click OK to create a TimeView. Otherwise, you
can zoom in further to see greater detail, such as milliseconds and
microseconds.
You can then use the slider to select a time just before the file was deleted.
It is best to select a quiet time without I/O to get the most stable version of the
file.
5. After you have selected the correct point in time, click OK to return to the main
dialog and then click OK to create a TimeView.
6. Select the physical resource for SAN TimeView Resource.
7. Select a method for TimeView Creation.
Notes:
The TimeView only uses physical space when I/O is written to the TimeView
device. New write I/O may trigger expansion to allocate more physical
space for the TimeView when no more space is available. Read I/O does
not require additional physical space.
The maximum size to which a TimeView device can be allocated is 5%
more than the primary device. For example: Maximum TimeView body size
= 1.05 X primary device size. The allocated size will be checked for both
policy and user triggers to expand when necessary.
The formula for allocating the initial size of the physical space for the
TimeView is as follows:
If the primary device size is less than 5GB, the initial TimeView
size = primary size X 1.05 (the maximum TimeView size)
If the primary device size is greater than 5GB, the initial TimeView
size = 5GB
If creating a TimeView from a VSS TimeMark, the initial TimeView
size = 32MB (as shown in the screen above)
For best performance, it is recommended that you do not lower the default
initial size of the TimeView if you intend to write to the TimeView device (i.e.
when using HyperTrac).
Once the TimeView is deleted, the space becomes available. TimeViews
cannot be shrunk once the space is allocated.
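The sizing rules above can be summarized in a short Python sketch (sizes in GB; the VSS case uses 32 MB). This is only an illustration of the stated formulas, not product code, and the function name is hypothetical.

    def timeview_sizes_gb(primary_size_gb, from_vss_timemark=False):
        """Return (initial_size, maximum_size) for a TimeView device."""
        maximum = primary_size_gb * 1.05          # maximum TimeView body size
        if from_vss_timemark:
            initial = 32 / 1024                   # 32 MB initial size for VSS TimeMarks
        elif primary_size_gb < 5:
            initial = primary_size_gb * 1.05      # small devices start at the maximum
        else:
            initial = 5                           # larger devices start at 5 GB
        return initial, maximum

    # Example: a 100 GB primary device starts at 5 GB and can grow to 105 GB.
    print(timeview_sizes_gb(100))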
8. Enter a name for the TimeView and click OK to finish.
The Set TimeView Storage Policy screen displays.
9. Verify and create the TimeView resource.
10. Assign the TimeView to a client.
The client can now recover any files needed.
Remap a TimeView
With TimeViews, you can mount a virtual drive as of a specific point-in-time, based
on your existing TimeMarks or your CDP journal. If you are finished with one
TimeView but need to create another for the same virtual device, you can remap the
TimeView to another point-in-time. When remapping, a new TimeView is created
and all of the client connections are retained. To remap a TimeView, follow the steps
below:
Note: It is recommended that you disable the TimeView from the client (via the
Device Manager on Windows machines) before remapping it.
1. Right-click on an existing TimeView and select Remap.
You must have at least one additional TimeMark available.
2. Select a TimeMark or a point in the CDP journal.
3. Enter a name for the new TimeView and click Finish.
Delete a TimeView
Deleting a TimeView also involves deleting the SAN resource. To delete a
TimeView:
1. Right-click on the TimeView and select Delete.
2. Select whether you want to Keep the TimeView data to be persistent when recreated with the same TimeMark.
This option allows you to save the TimeView data on the TimeMark and restore
the data when it is recreated with the same TimeMark.
3. Type Yes in the box and click OK to confirm the deletion.
Remove TimeView Data
Obsolete TimeView data is automatically removed for all devices after a successful
scheduled reclamation.
Use the Remove TimeView Data option to manually delete obsolete data. You may
want to use this option after you have deleted a TimeMark and you want to clean up
TimeView data.
This option can be triggered in batch mode, by right-clicking on the Logical
Resources node in the FalconStor Management Console and selecting Remove
TimeView Data.
To remove TimeView data on an individual device, right-click on the SAN resource
and select Remove TimeView Data.
The first option allows you to remove all TimeView data from selected
virtual device(s)
The second option allows you to select specific TimeView data for
deletion.
Set TimeView Policy
TimeView uses its own storage, separate from the snapshot resource. The
TimeView Storage Policy can be set during TimeView creation. After a TimeView is
created, the storage (auto-expansion) policy can be modified from the properties
option.
To set the TimeView storage policy:
1. Right-click on a TimeView device and select Properties.
2. Select the storage policy to be used when space starts to run low.
Specify the threshold as a percentage of the space used (1 - 99%). The
default is the same value as the snapshot resource threshold. Once the
specified threshold is met, automatic expansion is triggered.
Automatically allocate more space for the TimeView device. Check this
option to allow the system to allocate additional space (according to the
following settings) once the threshold is met.
Enter the percentage to Increment space by. The default is the same
value as the snapshot resource threshold.
Enter the maximum size (in MB) allowed for the TimeView device. This
is the maximum size limit used by automatic expansion. The default is
0, which means maximum TimeView size.
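A minimal sketch of how these settings fit together follows, assuming hypothetical values with sizes in MB. It only restates the auto-expansion policy above and is not the actual implementation.

    def next_timeview_size_mb(allocated_mb, used_mb, threshold_pct,
                              increment_pct, max_size_mb):
        """Expand the TimeView allocation once usage crosses the threshold,
        growing by the increment percentage but never past the maximum size
        (a max_size_mb of 0 means 'up to the maximum TimeView size')."""
        if used_mb / allocated_mb * 100 < threshold_pct:
            return allocated_mb                       # below threshold: no change
        expanded = allocated_mb * (1 + increment_pct / 100)
        return min(expanded, max_size_mb) if max_size_mb else expanded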
Rollback or roll forward a drive
Rollback restores your drive to a specific point in time, based on your existing
TimeMarks, TimeViews, or your CDP journal. After rollback, your drive will look
exactly like it did at that point in time.
After rolling a drive back, TimeMarks made after that point in time will be deleted but
all of the CDP journal data will be available, if CDP is enabled. Therefore it is
possible to perform another rollback and select a journal date ahead of the previous
time, essentially rolling forward.
Group rollback allows you to rollback up to 32 (the default) disks to a TimeMark or
CDP data point. To perform a group rollback, right-click on the group and select
Rollback. TimeMarks that are common to all devices in a group will display in the
wizard.
1. Unassign the Client(s) from the virtual drive before rollback.
For non-Windows Clients, type ./ipstorclient stop from /usr/local/
ipstorclient/bin.
Note: To avoid the need to reboot a Windows 2003 client, unassign the SAN
resource from the client now and then reassign it just before re-attaching your
client using the FalconStor Management Console.
2. Right-click on the virtual drive and select TimeMark/CDP --> Rollback.
To enable preservation of all timestamps, check Preserve all TimeMarks with
more recent timestamps.
Do not initiate a TimeMark rollback to a raw device while data is currently being
written to the raw device. The rollback will fail because the device will fail to
open.
If you have already created a TimeView from the CDP journal and want to roll
back your virtual device to that point in time, right-click on the TimeView and
select Rollback to.
3. Select a specific point in time or select the TimeMark to which you want to
rollback.
If CDP is enabled and you have previously rolled back this drive, you can select
a future journal date.
If you selected a TimeView in the previous step, you will not have to select a
point in time or a TimeMark.
4. Confirm that you want to continue.
A TimeMark will be taken automatically at the point of the rollback and a tag will
be added into the journal. The TimeMark will have the description !!XX-- POST
CDP ROLLBACK --XX!! This way, if you later need to create a TimeView, it will
contain data from the new TimeMark forward to the TimeView time. This means
you will see the disk as it looked immediately after rollback plus any data written
to the disk after the rollback occurred until the time of the TimeView.
It is recommended that you remove the POST CDP ROLLBACK TimeMark after a
successful rollback because it counts towards the TimeMark count for that
member.
5. When done, re-attach your Client(s).
Note: If DynaPath is running on a Windows client, reboot the machine after rollback.
Change your TimeMark/CDP policies
You can change your TimeMark schedule, and enable/disable CDP on single
devices.
Note: You cannot enable/disable CDP by updating TimeMark properties in batch
mode.
To change a policy:
1. Right-click on the virtual drive and select TimeMark/CDP --> TimeMark -->
Properties.
2. Make the appropriate changes and click OK.
Note: If you uncheck the Enable Continuous Data Protection box, this will
disable CDP and will delete the CDP journal. It will not delete TimeMarks. If
you want to disable TimeMark and CDP, refer to the Disable TimeMark and
CDP section below.
In addition, you can update TimeMark properties in batch mode.
To update multiple SAN resources:
1. Right-click on the SAN resources object and select Properties.
The Update TimeMark Properties screen displays.
2. Select all of the resources you want to update and click Next.
3. Make the desired policy changes and click OK.
TimeMark retention policy
The TimeMark retention policy allows you to specify which TimeMark snapshots to
keep over time.
The number of TimeMarks you keep and the frequency with which you take them
determines how far back you can retrieve data. For example, if you limit the number
of TimeMarks to 24, and you take a TimeMark every day, you can retrieve any data
from the past 24 days. If you take TimeMarks once a month, you can retrieve any
data from the past two years.
To set the retention policy, right-click on the SAN resource and select TimeMark/
CDP --> Properties. Then select the TimeMark Retention tab.
You can select the following retention policies:
Keep the maximum number of TimeMarks. This number can vary depending
upon your system resources and license.
Keep the ___ most recent TimeMarks. (The maximum is 1000. The default
is 8.)
Keep TimeMarks based on the following rule:
Keep all TimeMarks for the past [1-168 hours or 1-365 days] hours /
days. (default is 1 day)
Keep hourly TimeMarks for the past [0 - 365] days and use the
TimeMark closest to [0 - 59] as the hourly TimeMark. (default is 1 day
and 0 as the hour)
Keep daily TimeMarks for the past [0 - 365] days and use the TimeMark
closest to [0 - 23] as the daily TimeMark.
Keep weekly TimeMarks for the past [0 - 110] weeks and use the
TimeMark closest to [Monday - Sunday] as the weekly TimeMark.
Keep monthly TimeMarks for the past [0 - 120] months and use the
TimeMark closest to [1 - 31] as the monthly TimeMark.
Specifying the number of TimeMarks to keep for each level allows you to define
snapshot preserving patterns to organize your TimeMarks. By indicating the number
of TimeMarks to keep at each level, you can specify how many TimeMarks to keep
for any or all of the categories. The categories are hourly, daily, weekly, and monthly.
This feature allows you to save a pre-determined number of TimeMarks and delete
the rest. The TimeMarks that are preserved are the result of the pruning process.
This method allows you to keep only meaningful snapshots.
When defining the TimeMark retention policy, you are prompted to specify the offset
of the moment to keep, i.e. Use the TimeMark closest to___. For example, for daily
TimeMarks, you are asked to specify which hour of the day to use for the TimeMark.
For weekly TimeMarks, you are asked which day of the week to keep. If you set an
offset for which there is no TimeMark, the closest one to that time is taken.
The default offset values correspond to typical usage based on the fact that the
older the information, the less valuable it is. For instance, you can take TimeMarks
every 20 minutes, but keep only those snapshots taken at the minute 00 each hour
for the last 24 hours.
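The following Python sketch illustrates one level of the pruning described above: keeping, for each of the last N days, only the TimeMark closest to a chosen hour (so, for example, 24 daily TimeMarks cover roughly 24 days of history). It is a simplified illustration of the retention rules, with hypothetical names, and is not the server's algorithm.

    from datetime import datetime, timedelta

    def keep_daily_timemarks(timemarks, days, hour_offset):
        """timemarks: list of datetime objects. Keep one TimeMark per day for
        the last `days` days, choosing the one closest to `hour_offset`."""
        now = datetime.now()
        kept = []
        for d in range(days):
            day = (now - timedelta(days=d)).date()
            target = datetime.combine(day, datetime.min.time()) + timedelta(hours=hour_offset)
            candidates = [t for t in timemarks if t.date() == day]
            if candidates:
                kept.append(min(candidates, key=lambda t: abs(t - target)))
        return kept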
Delete TimeViews in batch mode
You can delete multiple TimeViews on a device. To do this, select the SAN device
containing the TimeViews you want to delete.
Suspend/resume CDP
[For CDP only]
You can suspend/resume CDP for an individual resource. If the resource is in a
group, you can suspend/resume CDP at the group level. Suspending CDP does not
delete the CDP journal and it does not delete any TimeMarks. When CDP is
resumed, data resumes going to the journal.
To suspend/resume CDP, right-click on the resource or group and select TimeMark/
CDP --> CDP Journal --> Suspend (or Resume).
Delete TimeMarks
The Delete option lets you delete one or more TimeMark images for a virtual drive.
Depending upon which TimeMark(s) you delete, this may or may not free up space
in your Snapshot Resource. A general rule is that you will only free up Snapshot
Resource space if the earliest TimeMark is deleted. If other TimeMarks are deleted,
you will need to run reclamation to free up space. Refer to Snapshot Resource
shrink and reclamation policies.
1. Right-click on the virtual drive and select TimeMark/CDP --> Delete.
2. Highlight one or more TimeMarks and click Delete.
3. Type yes to confirm and click OK to finish.
Disable TimeMark and CDP
If you ever need to disable TimeMark and CDP, you can select TimeMark/CDP -->
Disable. In addition to disabling TimeMark and CDP, this will delete the CDP journal
and all existing TimeMarks.
For multiple SAN resources, right-click on the SAN Resources object and select
TimeMark/CDP --> Disable.
If you only want to disable CDP and delete the CDP resource, refer to the Change
your TimeMark/CDP policies section.
Replication and TimeMark/CDP
The timestamp of a TimeMark on a replica is the timestamp of the source.
You cannot manually create any TimeMarks on the replica, even if you
enable TimeMark/CDP on the replica.
If you are using TimeMark with CDP, you must use Continuous Mode
replication (not Delta Mode).
NIC Port Bonding
NIC Port Bonding is a load-balancing/path-redundancy feature available for Linux.
This feature enables you to configure your storage server to load-balance network
traffic across two or more network connections, creating redundant data paths
throughout the network.
NIC Port Bonding offers a new level of data accessibility and improved performance
for storage systems by eliminating the single point of failure represented by one
input/output (I/O) path between servers and storage systems, and by permitting I/O
to be distributed across multiple paths.
NIC Port Bonding allows you to group up to eight network interfaces into a single
group.
NIC Port Bonding supports the following scenarios:
2 port bond
4 port bond
Dual 2 port bond
8 port bond
Dual 4 port bond
You can think of this group as a single virtual adapter that is actually made up of
multiple physical adapters. To the system and the network, it appears as a single
interface with one IP address. However, throughput is increased by a factor equal to
the number of adapters in the group. Also, NIC Port Bonding detects faults
anywhere from the NIC out into the network and provides dynamic failover in the
event of a failure.
You can define a virtual network interface (NIC) which sends and receives traffic to/
from multiple physical NICs. All interfaces that are part of a bond have SLAVE and
MASTER definitions.
Enable NIC Port Bonding
To enable NIC Port Bonding with less than four NICs:
1. Right-click on the server.
2. Select System Maintenance --> Bond NIC Port.
The NIC Port Bonding screen displays.
3. Enter the IP Address and Netmask for the bonded interfaces: eth0 and eth1.
Then click OK.
A bonding interface bond0 with slaves eth0 and eth1 is created.
To enable NIC Port Bonding with four or more NICs:
1. Right-click on the server.
2. Select System Maintenance --> Bond NIC Port.
The NIC Port Bonding screen displays.
3. Select the number of bonded teams you are setting up.
You can choose to bond the Ethernet interfaces into one group or two groups, or
bond only the first two interfaces (eth0 and eth1) into one group.
For one team containing four to eight NICs, enter the IP Address and
Netmask of the master and click OK.
For two teams, enter the IP Address and Netmask of each Master and
click OK.
For one team containing only eth0 and eth1, enter the IP Address and
Netmask of the master and click OK.
NIC Port Bonding can be configured to use round robin load-balancing, so the
first frame is sent on eth0, the second on eth1, the third on eth0 and so on. The
bonding choices are shown below:
Bonding choices:
No Bonding
Eth0/Eth1 (1 group), 2 port
Eth0/Eth1/Eth2/Eth3 (1 group), 4 port
Eth0/Eth1/Eth2/Eth3/Eth4/Eth5/Eth6/Eth7 (1 group), 8 port
Eth0/Eth2, Eth1/Eth3 (2 groups), 4 port
Eth0/Eth2/Eth4/Eth6, Eth1/Eth3/Eth5/Eth7 (2 groups), 8 port
Mode=0 (Sequential) - transmission of data in round-robin mode; this is the
default mode option. There is no switch involved.
Mode=4 (Link Aggregation) - transmission of data in a more dedicated, tuned
mode where the NIC ports work together with the switch. This mode requires
an LACP (802.3ad) capable switch.
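As a rough illustration of the round-robin behavior of mode=0 described above, the following Python sketch (not product code; the function is hypothetical) alternates outgoing frames across the slave NICs of a bond:

from itertools import cycle

def round_robin(frames, slaves):
    # Alternate frames across the bonded slave interfaces in order.
    nics = cycle(slaves)
    return [(next(nics), frame) for frame in frames]

print(round_robin(["frame1", "frame2", "frame3", "frame4"], ["eth0", "eth1"]))
# [('eth0', 'frame1'), ('eth1', 'frame2'), ('eth0', 'frame3'), ('eth1', 'frame4')]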
Remove NIC Port Bonding
To remove NIC Port Bonding, right-click on the server, select System Maintenance,
and click Yes to confirm the NIC Port Bonding removal.
Change IP address
During the bonding process, you will have the option to select a new IP address.
Replication
Overview
Replication is the process by which a SAN resource maintains a copy of itself either
locally or at a remote site. The data is copied, distributed, and then synchronized to
ensure consistency between the redundant resources. The SAN resource being
replicated is known as the primary disk. The changed data is transmitted from the
primary to the replica disk so that they are synchronized. Under normal operation,
clients do not have access to the replica disk.
If a disaster occurs and the replica is needed, the administrator can promote the
replica to become a SAN resource so that clients can access it. Replica disks can be
configured for CDP or NSS storage services, including backup, mirroring, or
TimeMark/CDP, which can be useful for viewing the contents of the disk or
recovering files.
Replication can be set to occur continuously or at set intervals (based on a schedule
or watermark). For performance purposes and added protection, data can be
compressed or encrypted during replication.
Remote replication
Remote replication allows fast data synchronization of storage volumes from one
CDP or NSS appliance to another over the IP network.
With remote replication, the replica disk is located on a separate CDP or NSS
appliance, called the target server.
Local replication
Local replication allows fast data synchronization of storage volumes within one
CDP or NSS appliance. It can be used within metropolitan area Fibre Channel
SANs, or can be used with IP-based Fibre Channel extenders.
With local replication, the replica disk is connected to the CDP or NSS appliance via
a gateway using edge routers or protocol converters. Because there is only one
CDP or NSS appliance, the primary and target servers are the same server.
How replication works
Replication works by transmitting changed data from the primary disk to the replica
disk so that the disks are synchronized. How frequently replication takes place
depends on several factors.
Delta replication
With standard, delta replication, a snapshot is taken of the primary disk at prescribed
intervals based on the criteria you set (schedule and/or watermark value).
Continuous replication
With FalconStor's Continuous Replication, data from the primary disk is continuously
replicated to a secondary disk unless the system determines it is not practical or
possible, such as when there is insufficient bandwidth. In these situations, the
system automatically switches to delta replication. After the next regularly-scheduled
replication takes place, the system automatically switches back to continuous replication.
For continuous replication to occur, a Continuous Replication Resource is used to
stage the data being replicated from the primary disk. Similar to a cache, as soon as
data comes into the Continuous Replication Resource, it is written to the replica
disk. The Continuous Replication Resource is created during the replication
configuration.
There are several events that will cause continuous replication to switch back to
delta replication, including when:
The Continuous Replication Resource is full due to insufficient bandwidth
The CDP or NSS appliance is restarted
Failover occurs
You perform the Replication --> Scan option
You add a resource to a group configured for continuous replication
The Continuous Replication Resource is offline
The target server IP address is changed
Replication configuration
Requirements
The following are the requirements for setting up a replication configuration:
(Remote replication) You must have two storage servers.
(Remote replication) You must have write access to both servers.
You must have enough space on the target server for the replica and for the
Snapshot Resource.
The clocks on both servers should be synchronized so that timestamps match.
In order to replicate to a disk with Thin Provisioning, the size of the SAN
resource must be equal to or greater than 10GB (the minimum permissible
size of a thin disk).
Setup
You can enable replication for a single SAN resource or you can use the batch
feature to enable replication for multiple SAN resources.
You need Snapshot Resources for the primary and replica disks. If you do not have
them, you can create them through the wizard. Refer to Create a Snapshot
Resource for more information.
1. For a single SAN resource, right-click on the resource and select Replication -->
Enable.
For multiple SAN resources, right-click on the SAN Resources object and select
Replication --> Enable.
The Enable Replication for SAN resources wizard launches. Each primary disk
can only have one replica disk. If you do not have a Snapshot Resource, the
wizard will take you through the process of creating one.
2. Select the server that will contain the replica.
For local replication, select the Local Server.
For remote replication, select any server but the Local Server.
If the server you want does not appear on the list, click the Add button.
3. (Remote replication only) Confirm/enter the target server's IP address.
4. Specify if you want to use Continuous Replication mode or Delta mode.
Note: If you are using TimeMark with CDP, you must use Continuous Mode
replication.
Continuous Mode - Select if you want to use FalconStor's Continuous
Replication. After the replication wizard completes, you will be prompted to
create a Continuous Replication Resource for the primary disk.
The TimeMark options listed below for continuous mode are primarily used for
devices assigned to a VSS-enabled client to maintain the TimeMark
synchronization on both the primary and replica disks.
Create Primary TimeMark - This option allows you to create the primary
TimeMark when a replica TimeMark is created by a user or the replication
schedule and the primary TimeMark option is enabled.
Synchronize Replica TimeMark - This option allows you to synchronize
the replica TimeMark when a primary TimeMark is created by a user or
TimeMark schedule.
Delta Mode - Select if you want replication to occur at set intervals (based on
schedule or watermark).
The TimeMark options for delta mode are as follows:
Use existing TimeMark - Determine if you want to use the most current
TimeMark on the primary server when replication begins or if the
replication process should create a TimeMark specifically for the
replication. In addition, using an existing TimeMark reduces the usage of
your Snapshot Resource. However, the data being replicated may not be
the most current.
Preserve Replication TimeMark - If you did not select the Use Existing
TimeMark option, a temporary TimeMark is created when replication
begins. This TimeMark is then deleted after the replication has completed.
Select Preserve Replication TimeMark to create a permanent TimeMark
that will not be deleted when replication has completed (if the TimeMark
option is enabled). This is a convenient way to keep all of the replication
TimeMarks without setting up a separate TimeMark schedule.
Notes about using an existing TimeMark:
While using an existing TimeMark reduces the usage of your Snapshot
Resource, the data being replicated may not be the most current.
For example, your replication is scheduled to start at 11:15 and your most recent
TimeMark was created at 11:00. If you have selected Use Existing TimeMark,
the replication will occur with the 11:00 data, even though additional changes
may have occurred between 11:00 and 11:15.
Therefore, if you select Use Existing TimeMark, you must coordinate your
TimeMark schedule with your replication schedule.
Even if you select Use Existing TimeMark, a new TimeMark will be created
under the following conditions:
The first time replication occurs.
Each existing TimeMark will only be used once. If replication occurs
multiple times between the creation of TimeMarks, the TimeMark will be
used once; a new TimeMark will be created for subsequent replications
until the next TimeMark is created.
The most recent TimeMark has been deleted, but older TimeMarks exist.
After a manual rescan.
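The conditions above can be summarized in the following minimal Python sketch (illustrative only; the function and its parameters are hypothetical and not part of the product):

def must_create_new_timemark(first_replication, most_recent_deleted,
                             newest_already_used, manual_rescan):
    # A fresh TimeMark is created even with "Use Existing TimeMark" selected
    # under any of the documented conditions.
    if first_replication:        # the first time replication occurs
        return True
    if most_recent_deleted:      # newest TimeMark deleted, older ones remain
        return True
    if newest_already_used:      # each existing TimeMark is used only once
        return True
    if manual_rescan:            # after a manual rescan
        return True
    return False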
5. Configure how often, and under what circumstances, replication should occur.
An initial replication for individual resources begins immediately upon setting the
replication policy. Then replication occurs according to the specified policy.
You must select at least one policy but you can have multiple. You must specify
a policy even if you are using continuous replication so that if the system
switches to delta replication, it can automatically switch back to continuous
replication after the next regularly-scheduled replication takes place.
Any number of continuous replication jobs can run concurrently. However, by
default, 20 delta replication jobs can run, per server, at any given time. If there
are more than 20, the highest priority disks begin replication first while the
remaining disks wait in the queue in the order of their priority. As soon as one of
the jobs finishes, the disk with the next highest priority in the queue begins.
Note: Contact Technical Support for information about changing this value.
Additional replication jobs will increase the load and bandwidth usage of your
servers and network, and may be limited by individual hardware specifications.
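As a rough sketch of the queuing behavior described above (illustrative Python only, not FalconStor code; the limit is taken from the text, the names are hypothetical), pending delta replication jobs can be modeled as a priority queue drained in priority order as slots free up:

import heapq

MAX_CONCURRENT_DELTA_JOBS = 20   # default per-server limit; disks beyond it wait in the queue

def start_order(waiting_jobs):
    # waiting_jobs: (priority, disk) pairs; a lower number means higher priority.
    # Returns the order in which queued disks begin replicating as jobs finish.
    heap = list(waiting_jobs)
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(start_order([(3, "diskC"), (1, "diskA"), (2, "diskB")]))
# ['diskA', 'diskB', 'diskC'] -- the highest-priority disk starts first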
Start replication when the amount of new data reaches - If you enter a
watermark value, when the value is reached, a snapshot will be taken and
replication of that data will begin. If additional data (more than the watermark
value) is written to the disk after the snapshot, that data will not be replicated
until the next replication. If a replication that was triggered by a watermark fails,
the replication will be re-started based on the retry value you enter, assuming the
system detects any write activity to the primary disk at that time. Future
watermark-triggered replications will not start until after a successful replication
occurs.
If you are using continuous replication and have set a watermark value, make
sure that it is a value that can actually be reached; otherwise snapshots will
rarely be taken. Continuous replication does not take snapshots, but you will
need a recent, valid snapshot if you ever need to roll back the replica to an earlier
TimeMark during promotion.
If you are using SafeCache, replication is triggered when the watermark value of
data is moved from the cache resource to the disk.
Start initial replication on mm/dd/yyyy at hh:mm and then every n hours/
minutes thereafter - Indicate when replication should begin and how often it
should be repeated.
If a replication is already occurring when the next time interval is reached, the
new replication request will be ignored.
Note: If you are using the FalconStor Snapshot Agent for Microsoft
Exchange 5.5, the time between each replication should be longer than the
time it takes to stop and then re-start the database.
6. Select a replication protocol: TCP or RUDP.
Note: All new installations of CDP or NSS default to TCP.
7. Indicate which options you want to use for this device.
The Compress Data option provides enhanced throughput during replication by
compressing the data stream. It takes advantage of multi-processor machines by
using more than one thread for data compression/decompression during
replication. By default, two (2) threads are used; the number can be increased
to eight (8).
This reduces the size of the transmission, thereby maximizing network
bandwidth.
Note: Compression requires 64K of contiguous memory. If the memory in the
storage server is very fragmented, it will fail to allocate 64K. When this
happens, replication will fail.
The Encrypt Data option provides an additional layer of security during
replication by securing data transmission over the network. Initial key distribution
is accomplished using the authenticated Diffie-Hellman exchange protocol.
Subsequent session keys are derived from the master shared secret, making it
very secure.
Enable MicroScan - MicroScan analyzes each replication block on-the-fly during
replication and transmits only the changed sections of the block. This is
beneficial if the network transport speed is slow and the client makes small
random updates to the disk. If the global MicroScan option is turned on, it
overrides the MicroScan setting for an individual virtual device. Also, if the virtual
devices are in a group configured for replication, the group policy always overrides
the individual device's policy. This option is selected by default.
8. Select how you want to create the replica disk on the target server.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express automatically creates the replica for you from available hard disk
segments. You will only have to select the storage pool or physical device that
should be used to create the replica resource. This is the default setting.
Select Existing lets you select an existing resource. There are several
restrictions as to what you can select:
The target must be the same type as the primary.
The target must be the same size as the primary.
The target can have Clients assigned to it but they cannot be connected
during the replication configuration.
Note: All data on the target will be overwritten.
If you select Custom, you will see the following windows:
Indicate the type of replica disk you are creating.
Select the storage pool or device to use to create the replica resource. Only one
disk can be selected at a time from this dialog. To create a replica disk from
multiple physical disks, add the disks one at a time; after selecting the first disk,
you will have the option to add more disks. You will need to do this if the first
disk does not have enough space.
Indicate how much space to allocate from this disk.
Click Add More if you need to add another physical disk to this replica disk. You
will return to the physical device selection screen, where you can select another
disk.
9. Enter a name for the replica disk.
The name is not case sensitive.
10. Confirm that all information is correct and then click Finish to create the
replication configuration.
Note: Once you create your replication configuration, you should not change the
hostname of the source (primary) server. If you do, you will need to recreate your
replication configuration.
When will replication begin?
If you have configured replication for an individual resource, the system will begin
synchronizing the disks immediately after the configuration is complete if the disk is
attached to a client and is receiving I/O activity.
Replication for a group
If you have configured replication for a group, synchronization will not start until one
of the replication policies (time or watermark) is triggered. If replication fails for one
group member, it is skipped and replication continues for the rest of the group. After
successful replication, group members will have a TimeMark created on their
replica. In order for the group members that were skipped to have the same
TimeMark on their replicas, you will need to remove them from the group, use the
same TimeMark to replicate again, and then re-join them to the group.
If you configured continuous replication
If you are using continuous replication, you will be prompted to create a Continuous
Replication Resource for the primary disk and a Snapshot Resource for the replica
disk. If you are not using continuous replication, the wizard will only ask you to
create a Snapshot Resource on the replica.
Because old data blocks are moved to the Snapshot Resource as new data is
written to the replica, the Snapshot Resource should be large enough to handle the
amount of changed data that will be replicated. Since it is not always possible to
know how much changed data will be replicated, it is a good idea for you to enable
expansion on the target server's Snapshot Resource. You then need to decide what
to do if your Snapshot Resource runs out of space (reaches the maximum allowable
size or does not have expansion enabled). The default is to preserve all TimeMarks.
This option stops writing data to the source SAN resource if there is no more space
available or there is a disk failure in order to preserve all TimeMarks.
Protect your replica resource
For added protection, you can mirror or TimeMark an incoming replica resource by
highlighting the replica resource and right-clicking on it.
Create a Continuous Replication Resource
This is needed only if you are using continuous replication.
1. Select the storage pool or physical device that should be used to create this
Continuous Replication Resource.
2. Select how you want to create this Continuous Replication Resource.
Custom lets you select which physical device(s) to use and lets you designate
how much space to allocate from each.
Express lets you designate how much space to allocate and then automatically
creates the resource using an available device.
Note: The Continuous Replication Resource maximum size is 1 TB and
cannot be expanded. Therefore, you should allocate enough space for the
resource. By default, the size will be 256 MB or 5% of the size of your primary
disk (or 5% of the total size of all members of this group), whichever is larger.
If the primary disk regularly experiences a large number of writes, or if the
connection to the target server is slow, you may want to increase the size,
because if the Continuous Replication Resource should become full, the
system switches to delta replication mode until the next regularly-scheduled
replication takes place. If you outgrow your resource, you will need to
disable continuous replication and then re-enable it.
3. Verify the physical devices you have selected, confirm that all information is
correct, and then click Finish.
On the Replication tab, you will notice that the Replication Mode is set to Delta.
Replication must be initiated once before it switches to continuous mode. You
can either wait for the first scheduled replication to occur or you can right-click
on your SAN resource and select Replication --> Synchronize to force replication
to occur.
Check replication status
There are several ways to check replication status:
The Replication tab on the primary disk displays information about a specific
resource.
The Incoming and Outgoing objects under the Replication object display
information about all replications to or from a specific server.
The Event Log displays a list of replication information and errors.
The Delta Replication Status Report provides a centralized view for
displaying real-time replication status for all drives enabled for replication.
Replication tab
The following are examples of what you will see by checking the Replication tab for
a primary disk, with Continuous Replication enabled and with Delta Replication.
All times shown on the Replication tab are based on the primary server's clock.
Accumulated Delta Data is the amount of changed data. Note that this value will not
display accurate results after a replication has failed. The information will only be
accurate after a successful replication.
Replication Status / Last Successful Sync / Average Throughput - You will only see
these fields if you are connected to the target server.
Transmitted Data Size is based on the actual size transmitted after compression or
with MicroScan performed.
Delta Sent represents the amount of data sent (or processed) based on the
uncompressed size.
If compression and MicroScan are not enabled, the Transmitted Data Size will be
the same as Delta Sent and the Current/Average Transmitted Data Throughput will
be the same as Instantaneous/Average Throughput.
If compression or MicroScan is enabled, and the data can be compressed or
unchanged blocks of data are not sent, the Transmitted Data Size will differ from
Delta Sent, and the Current/Average Transmitted Data Throughput will be based on
the actual size of the data (compressed or MicroScanned) sent over the network.
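The relationship between these fields can be illustrated with a small Python sketch (hypothetical numbers; not product code): Delta Sent and its throughput are based on the uncompressed size, while the transmitted figures are based on the bytes actually sent.

def replication_stats(delta_sent_mb, transmitted_mb, elapsed_sec):
    # Summarize the size and throughput figures shown on the Replication tab.
    return {
        "Delta Sent (MB)": delta_sent_mb,
        "Transmitted Data Size (MB)": transmitted_mb,
        "Average Throughput (MB/s)": delta_sent_mb / elapsed_sec,
        "Average Transmitted Throughput (MB/s)": transmitted_mb / elapsed_sec,
    }

print(replication_stats(1000, 1000, 100))  # compression/MicroScan off: sizes match
print(replication_stats(1000, 400, 100))   # compression/MicroScan on: less is sent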
Event Log
Replication events are also written to the primary server's Event Log, so you can
check there for status and operational information, as well as any errors.
Replication object
The Incoming and Outgoing objects under the Replication object display information
about each server that replicates to this server or receives replicated data from this
server. If the server's icon is white, the partner server is "connected" or "logged in". If
the icon is yellow, the partner server is "not connected" or "not logged in".
Delta Replication Status Report
The Delta Replication Status Report can be run from the Reports object. It provides
a centralized view for displaying real-time replication status for all drives enabled for
replication. It can be generated for an individual drive, multiple drives, source server
or target server, for any range of dates. This report is useful for administrators
managing multiple servers that either replicate data or are the recipients of
replicated data.
This report only provides statistics for delta replication activity. Continuous
Replication statistics are not available from the report but can be monitored in real time within the FalconStor Management Console. The report can display information
about existing replication configurations only or it can include information about
replication configurations that have been deleted or promoted (you must select to
view all replication activities in the database).
The following is a sample Delta Replication Status Report:
Replication performance
Set global replication options
You can set global replication options that affect system performance during
replication. While the default settings should be optimal for most configurations, you
can adjust the settings for special situations.
To set global replication properties for a server:
1. Right-click on the server and select Properties.
2. Select the Performance tab.
Click the Configure Throttle button to configure throttle settings by target site(s)/
server(s) to limit the maximum replication speed, thus minimizing the potential
impact to network traffic.
Enable MicroScan - MicroScan analyzes each replication block on-the-fly during
replication and transmits only the changed sections of the block. This is
beneficial if the network transport speed is slow and the client makes small
random updates to the disk. This global MicroScan option overrides the
MicroScan setting for each individual virtual device.
Tune replication parameters
You can run a test to discover maximum bandwidth and latency for remote
replication within your network.
1. Right-click on a server under Replication --> Outgoing and select Replication
Parameters.
2. Click the Test button to see information regarding the bandwidth and latency of
your network.
While this option allows you to measure the bandwidth and latency of the
network between the two servers (replication source and target), it is not a tool to
test the connectivity of the network. Therefore, if there is a network connection
issue or connection failure, the Test button will not work (and should not be used
for testing the network connection between the servers).
Assign clients to the replica disk
You can assign Clients to the replica disk in preparation for promotion or reversal.
Clients will not be able to connect to the replica disk and the Client's operating
system will not see the replica disk until after the promotion or reversal. After the
replica disk is promoted or a reversal is performed, you can restart the SAN Client to
see the new information and connect to the promoted disk.
To assign Clients:
1. Right-click on an incoming replica resource under the Replication object and
select Assign.
2. Select the Client to be assigned.
If the Client you want to assign does not appear in the list, you will need to exit
the wizard and add the client by right-clicking on SAN Client and selecting Add.
3. Confirm all of the information and then click Finish to assign the Client.
Switch clients to the replica disk when the primary disk fails
Because the replica disk is used for disaster recovery purposes, clients do not have
access to the replica. If a disaster occurs and the replica is needed, the
administrator can promote the replica to become the primary disk so that clients can
access it. The Promote option promotes the replica disk to a usable resource. Doing
so breaks the replication configuration. Once a replica disk is promoted, it cannot
revert back to a replica disk.
You must have a valid replica disk in order to promote it. For example, if a problem
occurred (such as a transmission problem or the replica disk failing) during the first
and only replication, the replicated data would be compromised and therefore could
not be promoted to a primary disk. If a problem occurred during a subsequent
replication, the data from the Snapshot resource will be used to recreate the replica
from its last good state.
Note:
You cannot promote a replica disk while a replication is in progress.
If you are using continuous replication, you should not promote a replica
disk while write activity is occurring on the replica.
If you just need to recover a few files from the replica, you can use the
TimeMark/TimeView option instead of promoting the replica. Refer to
Use TimeMark/TimeView to recover files from your replica for more
information.
To promote a replica:
1. In the Console, right-click on an incoming replica resource under the Replication
object and select Replication --> Promote.
If the primary server is not available, you will be prompted to roll back the replica
to the last good TimeMark, assuming you have TimeMark enabled on the
replica. When this occurs, the wizard will not continue with the promotion and
you will have to check the Event Log to make sure the rollback completes
successfully. Once you have confirmed that it has completed successfully, you
need to re-select Replication --> Promote to continue.
2. Confirm the promotion and click OK.
3. Assign the appropriate clients to this resource.
4. Rescan devices or restart the client to see the promoted resource.
Recreate your original replication configuration
Your original primary disk became unusable due to a disaster and you have
promoted the replica disk to a primary disk so that it can service your clients. You
have now fixed, rebuilt, or replaced your original primary disk. Do the following to
recreate your original replication configuration:
1. From the current primary disk, run the Replication Setup wizard and create a
configuration that replicates from the current resource to the original primary
server.
Make sure a successful replication has been performed to synchronize the data
after the configuration is completed. If you select the Scan option, you must wait
for this to complete before running another scan or replication.
2. Assign the appropriate clients to the new replica resource.
3. Detach all clients from the current primary disk.
For Unix clients, type ./ipstorclient stop from /usr/local/ipstorclient/bin.
4. Right-click on the appropriate primary resource or replica resource and select
Replication --> Reversal to switch the roles of the disks.
Afterwards, the replica disk becomes the new primary disk while the original
primary disk becomes the new replica disk. The existing replication configuration
is maintained but clients will be disconnected from the former primary disk.
For more information, refer to Reverse a replication configuration.
Use TimeMark/TimeView to recover files from your replica
While the main purpose of replication is for disaster recovery purposes, the
TimeMark feature allows you to access individual files on your replica without
needing to promote the replica. This can be useful when you need to recover a file
that was deleted from the primary disk. You can simply create a TimeView of the
replica, assign it to a client, and copy back the needed file.
Using TimeMark with a replica is also useful for "what if" scenarios, such as testing a
new application on your actual, but not live, data.
In addition, using HyperTrac Backup with Replication and TimeMark allows you to
back up your replica at your disaster recovery site without impacting any application
servers.
For more information about using TimeMark and HyperTrac, refer to your HyperTrac
Backup Accelerator User Guide.
Change your replication configuration options
You can change the following for your replication configuration:
Static IP address of a remote target server
Policies that trigger replication (watermark or schedule)
Replication protocol
Use of compression, encryption, or MicroScan
Replication mode
To change the configuration:
1. Right-click on the primary disk and select Replication --> Properties.
The Replication Setup Options screen displays.
2. Select the appropriate tab to make the desired changes:
The Target Server Parameters tab allows you to modify the host name or
IP address of the Target server.
The Replication Policy tab allows you to modify the policies that trigger
replication.
The Replication Protocol tab allows you to modify the replication protocol.
The Throughput Control tab allows you to enable throughput control.
The Data Transmission Options tab allows you to select the following options:
Compress Data
Encrypt Data
Enable MicroScan
The Replication Transfer Mode and TimeMark tab allows you to modify the
Continuous mode and TimeMark options for the replication.
3. Make the appropriate changes and click OK.
Notes:
If you are using continuous replication and you enable or disable
encryption, the change will take effect after the next delta replication.
If you are using continuous replication and you change the IP
address of your target server, replication will switch to delta
replication mode until the next regularly-scheduled replication takes
place.
Suspend/resume replication schedule
You can suspend future replications from automatically being triggered by your
replication policies (watermark, interval, time) for an individual virtual device. Once
suspended, all of the device's replication policies will be put on hold, preventing any
future policy-triggered replication from starting. This will not stop a replication that is
currently in progress and you can still manually start the replication process while
the schedule is suspended.
When replication is resumed, replication will start at the normally scheduled interval
based on the device's replication policies.
To suspend/resume replication, right-click on the primary disk and select Replication
--> Suspend (or Resume).
You can see the current settings by checking the Replication Schedule field on the
Replication tab of the primary disk.
Stop a replication in progress
You can stop a replication that is currently in progress.
To stop a replication, right-click on the primary disk and select Replication --> Stop.
Manually start the replication process
To force a replication that is not scheduled, select Replication --> Synchronize.
Note: If replication is already occurring, this request will fail.
Set the replication throttle
Configuring the throttle allows you to limit the amount of bandwidth replication will
use. This is useful when the WAN is shared among many applications and you do
not want replication traffic to dominate the link. Setting this parameter affects the
server to server relationship, which includes remote delta and remote continuous
replication. Throttle does not affect local replication.
Throttle configuration involves three factors:
Percentage - the amount of throttle relative to the selected Link-Type.
Leaving the Throttle field set to 0 (zero) means the throttle is disabled. Setting
the field to 100 percent means that the maximum bandwidth available with the
selected link type will be used. Besides 0, valid input is 1 - 100%.
Link-Type - the link type to be used.
Window - the window can be scheduled by hours in the day.
Setting the throttle instructs the application to keep the network speed constant.
Although network traffic bursts may still occur, depending on the environment, the
throttle tries to remain at the set speed.
Throttle configuration settings are retained for each server even after replication has
been disabled. When replication is enabled again, the previous throttle settings will
be present.
Once you have set up replication and/or a target site, you can configure your throttle
settings.
The throttle can be set and edited from various locations in the console as well as
from the command line interface.
To set the throttle, navigate to the Replication node in the console, right-click
on Outgoing and select Throttle --> Configure.
Set the throttle via Server Properties --> Performance tab --> click the
Configure Throttle button.
Highlight the server or target site that you want to edit and click the Edit
button. (Target sites are indicated by a T icon.)
Add a Target Site
Another way to set the throttle is by adding a target site. A target site is a group of
sites that share the same throttle configuration. Target sites can contain existing
target servers or can be empty.
Navigate to the Replication node in the console, right-click on Outgoing and select
Target Site --> Add.
Enter a name for the target site.
Select the target servers by checking the boxes next to their host name.
Optional: You can also add a target server for future use by clicking the Add
button and entering the new target server name. Any throttle configuration
existing on the new target server will be replaced with the Target Site throttle
configuration settings.
Link Types: Select the link type for this target site. To add a custom link type,
refer to Manage Link Types.
The default throttle is zero (disabled). You may change the default throttle to
a percentage (1 - 100) of the link type. This setting takes effect immediately
when the default throttle is in use. If the window throttle is in use, the new
default setting takes effect the next time throttle is triggered outside of the
window.
The Throttle Window contains the throttle schedule for business hours and
the backup window. You can select one of these built-in schedules or add a
custom window via Throttle --> Manage Throttle Window.
Once a target site has been added, it displays, along with the individual servers, in
the FalconStor Management Console under the Replication --> Outgoing node. You
can right-click on the target site in the console to delete, edit or export it.
Manage Throttle windows
Throttle windows allow you to limit read activity to the primary disk during peak
processing times to avoid significant performance impact. Two throttle windows
have been pre-populated for you - Business Hours and Backup Window. You can
edit the pre-defined times to fit your business needs. You can also add custom
throttle windows as needed. Throttle configuration settings persist when replication
is disabled and re-enabled on the same server-to-server relationship.
For example, if you have a production server disk with replication enabled that
experiences heavy I/O between 9:00 AM and 5:00 PM, replication adds to the read/
write load because replication must read from the primary disk. Since this may
impact disk performance while replication is accessing the disk, you can define a
throttle window between 9:00 AM and 5:00 PM to throttle the replication speed
down. With a lower replication speed, the disk is accessed less for replication,
resulting in a reduced read load on the disk.
To manage throttle windows, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Throttle Windows.
Edit a Throttle window
To edit throttle window times, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Throttle Windows. Then click the
Edit button.
Add a throttle window
To add a new throttle window, click the Add button.
The Add Throttle Window screen displays, allowing you to add a unique name, start
time and end time.
Time is entered in 24-hour format. For example, 5:00 p.m. would be entered
as 17:00. Make sure the times do not overlap with an existing window. For example,
if one window has an end time of 12:00, the next window must start at 12:01.
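A minimal sketch of the overlap rule (illustrative Python only; the helper names are hypothetical):

def to_minutes(hhmm):
    hours, minutes = map(int, hhmm.split(":"))
    return hours * 60 + minutes

def windows_overlap(windows):
    # windows: (start, end) pairs in 24-hour "HH:MM" format.
    spans = sorted((to_minutes(s), to_minutes(e)) for s, e in windows)
    return any(spans[i][1] >= spans[i + 1][0] for i in range(len(spans) - 1))

print(windows_overlap([("09:00", "12:00"), ("12:01", "17:00")]))  # False: valid
print(windows_overlap([("09:00", "12:00"), ("12:00", "17:00")]))  # True: next window must start at 12:01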
Delete a throttle window
You can also delete any custom (user-created) throttle window to cancel the
schedule. Built-in throttle windows cannot be deleted. To delete a custom throttle
window, navigate to the Replication node in the console, right-click on Outgoing and
select Throttle --> Manage Throttle Windows. Then click the Delete button.
Throttle tab
Right-click on the target server or target site and click the Throttle tab for information
on Link Type, default throttle, and any selected throttle windows.
Throttle and failover
Setting up throttle on a failover pair requires some additional considerations. Refer
to the Throttle and Failover section for details.
Manage Link Types
To manage link types and speed, navigate to the Replication node in the console,
right-click on Outgoing and select Throttle --> Manage Link Types.
The Manage Link Types screen displays all link types, along with the description and
speed.
Throttle speed displays the maximum speed, not necessarily the actual speed. For
example, a throttle speed of 30 Mbps indicates a speed of 30 Mbps or less. The
speed is determined by multiplying the throttle percentage by the link type speed.
For example, a default throttle of 30% of a 100Mbps Link Type would be (30%) x
(100Mbps) = 30Mbps.
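The same calculation expressed as a small Python sketch (illustrative only; the function is hypothetical):

def max_replication_speed(link_speed_mbps, throttle_percent):
    # 0 means the throttle is disabled, so the full link speed applies.
    if throttle_percent == 0:
        return link_speed_mbps
    return link_speed_mbps * throttle_percent / 100

print(max_replication_speed(100, 30))   # 30.0 Mbps, matching the example above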
Actual speed may or may not be evenly distributed across all Target Sites and
servers. Actual speed depends on many factors, such as disk performance, network
traffic, functions enabled (encryption, compression, MicroScan), and other
processes in progress (TimeMark, Mirror, etc).
Add link types
If your link type is not listed in the pre-populated/built-in list, you can add a custom
link type by navigating to the Replication node in the console, right-clicking on
Outgoing and selecting Throttle --> Manage Link Types. Then click the Add button
and enter the Link Type, a brief description, and the speed in megabits per
second (Mbps).
Edit link types
Custom link types can be modified by clicking the Edit button. However, built-in link
types cannot be edited.
To edit a custom link type, navigate to the Replication node in the console, right-click
on Outgoing and select Throttle --> Manage Link Types. Then click the Edit button.
Delete link types
Link Types can be deleted as long as they are not currently in use by any target site
or server. Custom link types can be deleted when no longer needed. Built-in link
types cannot be deleted.
To delete a custom link type, navigate to the Replication node in the console, rightclick on Outgoing and select Throttle --> Manage Link Types. Then click the Delete
button.
Set replication synchronization priority
To set the synchronization priority for pending replications, select Replication -->
Priority.
This allows you to prioritize the order in which devices/groups begin replicating if
they are scheduled to start at the same time. This option can be set for a single
resource or group via the Replication submenu, or for multiple resources or groups
from the context menu of the Replication Outgoing node.
Reverse a replication configuration
Reversal switches the roles of the replica disk and the primary disk. The replica disk
becomes the new primary disk while the original primary disk becomes the new
replica disk. The existing replication configuration is reset to the default. After the
reversal, clients will be disconnected from the former primary disk.
To perform a role reversal:
1. Right-click on the appropriate primary resource or replica resource and select
Replication --> Reversal.
Notes:
The primary and replica must be synchronized in order to reverse a
replica. If needed, you can manually start the replication from the
Console and re-attempt the reversal after the replication is
completed.
If you are using continuous replication, you have to disable it before
you can perform the reversal.
If you are performing a role reversal on a group, we recommend that
the group have 40 or fewer resources. If there are more than 40
resources in a group, we recommend that multiple groups be
configured to accomplish this task.
2. Enter the New Target Server host name or IP address to be used by the new
primary server to connect to the new target server for replication.
Reverse a replica when the primary is not available
Replication can be reversed from the replica server side even if the primary server is
offline or is not accessible. When you reverse this type of replica, the replica disk will
be promoted to become the primary disk and the replication configuration will be
removed.
Afterwards, when the original primary server becomes available, you must repair the
replica in order to re-establish a replication configuration. The original replication
policy will be used/maintained after the repair.
Notes:
If a primary disk is in a group but the group doesn't have replication
enabled, the primary resource must leave the group before the
replica repair can be performed.
If you have CDP enabled on the replica and you want to perform a
rollback, you can roll back before or after reversing the replica.
Forceful role reversal
When the primary server is down and the replica is up, or when the primary server is
up but corrupted and the replica is not synchronized, you can force a role reversal as
long as there are no replication processes running.
Notes:
The forceful role reversal operation can be performed even if the CDP
journal has unflushed data.
The forceful role reversal operation can be performed even if data is not
synchronized between the primary and replica server.
The snapshot policy, TimeMark/CDP, and throttle control policy settings
are not swapped after the repair operation for replication role reversal.
To perform a forceful role reversal:
1. Suspend the replication schedule.
If you are using Continuous Mode, disable it by right-clicking on the disk,
selecting Replication --> Properties, and unchecking Continuous Mode on the
Replication Transfer Mode and TimeMark tab of the Replication Setup
Options screen.
2. Right-click on the primary or replica server and select Replication --> Forceful
Reversal.
3. Type YES to confirm the operation and then click OK.
4. Once the forceful role reversal is done, Repair the promoted replica to establish
the new connection between the new primary and replica server.
The replication repair operation must be performed from the NEW primary
server.
Note: If the SAN resource is assigned to a client in the original primary server,
it must be unassigned in order to perform the repair on the new primary.
5. Confirm the IP address and click OK.
The current primary disk remains as the primary disk and begins replicating to
the recovered server.
After the repair operation is complete, replication will synchronize again either by
schedule or manual trigger. A full synchronization is performed if the replication
was not synchronized prior to the forceful role reversal, and the replication policy
from the original primary server will be used/updated on the new primary server.
If you want to recreate your original replication configuration, you will need to
perform another reversal so that your original primary becomes the primary disk
again.
Repair a replica
When performing a repair, the following status conditions may display:
Repair status (after forceful role reversal):
Valid - The server performing the repair has verified that the server is OK for
repair. If there is a problem with the replica server, respective errors will display
after the repair is initiated.
Invalid - The server performing the repair has reported that the repair cannot be
processed. Make sure all devices involved are online and have no missing
segments.
TimeMark Rollback in Progress - The repair cannot be processed because one of
the devices involved in the repair is currently performing a rollback.
Not Configured for Replication - The repair cannot be processed because there is
a problem with a device which is a member of a group. Make sure there are no
extra members or missing members of the group.
Relocate a replica
The Relocate feature allows replica storage to be moved from the original replica
server to another server while preserving the replication relationship with the
primary server. Relocating reassigns ownership to the new server and continues
replication according to the set policy. Once the replica storage is relocated to the
new server, the replication schedule can be immediately resumed without the need
to rescan the disks.
Before you can relocate the replica, you must import the disk to the new CDP or
NSS appliance. Refer to Import a disk if you need more information.
Once the disk has been imported, open the source server, highlight the virtual
resource that is being replicated, right-click and select Relocate.
Notes:
You cannot relocate a replica that is part of a group.
If you are using continuous replication, you must disable it before
relocating a replica. Failure to do so will keep replication in delta mode,
even after the next manual or scheduled replication occurs. You can reenable continuous replication after relocating the replica.
Remove a replication configuration
Right-click on the primary disk and select Replication --> Disable. This allows you to
remove the replication configuration on the primary and either delete or promote the
replica disk on the target server at the same time.
Expand the size of the primary disk
The primary disk and the replica disk must be the same size. If you expand the
primary disk, you will enlarge the replica disk to the same size.
Note: Do not attempt to expand the primary disk during replication. If you do, the
disk will expand but the replication will fail.
Replication with other CDP or NSS features
Replication and TimeMark
While enabling TimeMarks, you can set the Trigger Replication after TimeMark is
taken option. This option is applicable if TimeMark and Replication are both enabled
for that device/Group. If TimeMark is enabled for a Group, replication must also be
enabled at the group level.
When this option is set, replication synchronization triggers automatically for that
device or group when the TimeMark is created. If SafeCache or CDP is enabled,
replication synchronization is triggered when the cache marker is flushed.
Since you cannot create TimeMarks on a replica device, if you enable this option for
replica devices, it will only take effect after a role reversal.
Note: The timestamp of a TimeMark on a replica is the timestamp of the source.
Replication and Failover
If replication is in progress and a failover occurs at the same time, the replication will
fail. After failover, replication will start at the next normally scheduled interval. The
same is true in reverse: the replication will fail if it is in progress when a recovery
occurs.
Replication and Mirroring
When you promote the mirror of a replica resource, the replication configuration is
maintained.
Depending upon the replication schedule, when you promote the mirror of a replica
resource, the mirrored copy may not be an identical image of the replication source.
In addition, the mirrored copy may contain corrupt data or an incomplete image if the
last replication was not successful or if replication is currently occurring. Therefore, it
is best to make sure that the last replication was successful and that replication is
not occurring when you promote the mirrored copy.
Replication and Thin Provisioning
A disk with Thin Provisioning enabled can be configured to replicate to a normal
SAN resource or another disk with Thin Provisioning enabled. The normal SAN
resource can replicate to a disk with Thin Provisioning as long as the size of the
SAN resource is equal to or greater than 10GB (the minimum permissible size of the
thin disk).
Near-line Mirroring
Near-line mirroring allows production data to be synchronously mirrored to a
protected disk that resides on a second storage server. You can enable near-line
mirroring for a single SAN resource or multiple resources.
With near-line mirroring, the primary disk is the disk that is used to read/write data
for a SAN Client and the mirrored copy is a copy of the primary. Each time data is
written to the primary disk, the same data is simultaneously written to the mirror disk.
TimeMark or CDP can be configured on the near-line server to create recovery
points. The near-line mirror can also be replicated for disaster recovery protection.
If the primary disk fails, you can initiate recovery from the near-line server and roll
back to a valid point-in-time.
[Diagram: application servers write to the service-enabled disk on the production IPStor server, which is synchronously mirrored to the near-line IPStor server.]
Near-line mirroring requirements
The following are the requirements for setting up a near-line mirroring configuration:
The primary server cannot be configured to replicate to the near-line server.
At least one protocol (FC or iSCSI) must be enabled on the near-line server.
If you are using the FC protocol for your near-line mirror, zone the
appropriate initiators on your primary server with the targets on your near-line server. For recovery purposes, zone the appropriate initiators on your
near-line server with the targets on your primary server.
Near-line mirroring setup
You can enable near-line mirroring for a single SAN resource or multiple resources.
To enable and set up near-line mirroring on one resource, follow the steps
described below. To enable near-line mirroring for multiple resources, refer to
Enable Near-line Mirroring on multiple resources.
1. Right-click on the resource and select Near-line Mirror --> Add.
The Welcome screen displays.
2. If you are enabling one disk, specify if you want to enable near-line mirroring for
the primary disk or just prepare the near-line disk.
When you create a near-line disk, the primary server performs a rescan to
discover new devices. If you are configuring multiple near-line mirrors, the scans
can become time consuming. Instead, you can select to prepare the near-line
disk now and then manually rescan physical resources and discover new
resources on the primary server. Afterwards, you will have to re-run the wizard
and select the existing, prepared disk.
If you are enabling near-line mirroring for multiple disks, the above screen will
not display.
3. Select the storage pool or physical device(s) for the near-line mirror's virtual header information.
4. Select the server that will contain the near-line mirror.
5. Add the primary server as a client of the near-line server.
You will go through several screens to add the client:
Confirm or specify the IP address the primary server will use to connect to
the near-line server as a client. This IP address is used for iSCSI; it is not
used for Fibre Channel.
Determine if you want to enable persistent reservation for the client
(primary server). This allows clustered clients to take advantage of
Persistent Reserve/Release to control disk access between various
cluster nodes.
Select the client's protocol(s). If you select iSCSI, you must indicate if this
is a mobile client.
(FC protocol) Select or add WWPN initiators for the client.
(FC protocol) Specify if you want to use Volume Set Addressing (VSA).
VSA is used primarily for addressing virtual buses, targets, and LUNs. If
your client requires VSA to access a broader range of LUNs, you must
enable it for the client.
(iSCSI protocol) Select the initiator that this client uses. If the initiator does
not appear, you may need to rescan. You can also manually add it, if
necessary.
(iSCSI protocol) Add/select users who can authenticate for this client.
6. Confirm the IP address of the primary server.
Confirm or specify the IP address the near-line server will use to connect to the
primary server when a TimeMark is created, if snapshot notification is used. If
needed, you can specify a different IP address from what you used when you
added the primary server as a client of the near-line server.
7. Determine if you want to monitor the mirroring process.
If you select to monitor the mirroring process, the I/O performance will be
checked to decide if I/O to the mirror disk is lagging beyond an acceptable limit.
If it is, mirroring will be suspended so it does not impact the primary storage.
Monitor mirroring process every n seconds - Specify how frequently the system
should check the lag time (delay between I/O to the primary disk and the mirror).
Checking more or less frequently will not impact system performance. On
systems with very low I/O, a higher number may help get a more accurate
representation.
Maximum lag time for mirror I/O - Specify an acceptable lag time (1 - 1000
milliseconds) between I/Os to the primary disk and the mirror.
Suspend mirroring - If the I/O to the mirror disk is lagging beyond the specified
level of acceptance, mirroring will be suspended when the following conditions
are met:
When the failure threshold reaches n% - Specify the percentage of I/Os that can exceed the maximum lag time before mirroring is suspended. For example, you set the percentage to 10% and the maximum lag time to 15 milliseconds. During the test period, 100 I/Os occurred and 20 of them took longer than 15 milliseconds to update the mirror disk. With a 20% failure rate, mirroring would be suspended.
When the outstanding I/Os reach n - Specify the threshold for outstanding I/Os. When the number of outstanding I/Os rises above the specified number, mirroring is suspended.
Note: If a mirror becomes out of sync because of a disk failure or an I/O error
(rather than having too much lag time), the mirror will not be suspended.
Because the mirror is still active, re-synchronization will be attempted based
on the global mirroring properties that are set for the server. Refer to Set
global mirroring options for more information.
8. If mirroring is suspended, specify when re-synchronization should be attempted.
Re-synchronization can be started based on time (every n minutes/hours) and/or
I/O activity (when I/O is less than n KB/MB). If you select both, the time will be
applied first before the I/O activity level. If you do not select either, the mirror will
stay suspended until you manually synchronize it.
If you select one or both re-synchronization methods, you must also specify how
often the system should retry the re-synchronization if it fails to complete. If you
only select the second resync option, the default will be 10 minutes.
When the system initiates re-synchronization, it does not check lag time and
mirroring will not be suspended if there is too much lag time.
If you manually resume mirroring, the system will monitor the process during
synchronization and check lag time. Depending upon your monitoring policy,
mirroring will be suspended if the lag time gets above the acceptable limit.
Note: If CDP/NSS is restarted or the server experiences a failover while
attempting to resynchronize, the mirror will remain suspended.
9. Select how you want to create this near-line mirror resource.
Custom lets you select which physical device(s) and which segments to
use and lets you designate how much space to allocate from each.
Express lets you select which physical device(s) to use and automatically
creates the near-line resource from the available hard disk segments.
Select existing lets you select an existing virtual device that is the same
size as the primary or a previously prepared (but not yet created) near-line
mirror resource. (The option to only prepare a near-line disk appeared on
the first Near-line Mirror wizard dialog.)
10. Enter a name for the near-line resource.
Note: Do not change the name of the near-line resource if the server is a near-line mirror or configured with near-line mirroring.
11. (iSCSI protocol) Select the iSCSI targets to assign.
12. Confirm that all information is correct and then click Finish to create the near-line
mirroring configuration.
To set the near-line mirror throughput speed/throttle for near-line mirror
synchronization, refer to Set mirror throttle.
Enable Near-line Mirroring on multiple resources
You can enable near-line mirroring on multiple SAN resources.
1. Right-click on SAN Resources and select Near-line Mirror --> Add.
The Enable Near-line Mirroring wizard launches.
2. Click Next at the Welcome screen.
The list of available resources displays.
3. Select the resources to be Near-line Mirror resources or click the Select All button.
4. Select the storage pool or physical device(s) for the near-line mirrors' virtual header information.
5. Select the server that will contain the near-line mirrors.
6. Continue to set up near-line mirroring as described in Near-line mirroring setup.
What's next?
Near-line disks are prepared but not created
If you prepared one or more near-line disks and are ready to create near-line
mirrors, you must manually rescan physical resources and discover new devices on
the primary server. Afterwards, you must re-run the Near-line Mirror wizard for each
primary disk and select the existing, prepared disk. This will create a near-line mirror
without re-scanning the primary server.
Near-line mirror is created
After creating your near-line mirror, you should enable TimeMark or CDP on the
near-line server. This way your data will have periodic snapshots and you will be
able to roll back your data when needed.
For disaster recovery purposes, you can also enable replication for a near-line disk
to replicate the data to another location.
Check near-line mirroring status
You can see the current status and properties of your mirroring configuration by
checking the General tab for a mirrored resource.
Near-line recovery
The following is required before recovering data:
If you are using the FC protocol, zone the appropriate initiators on your
near-line server with the targets on your primary server.
You must unassign the primary disk from its client(s).
If enabled, disable mirroring for the near-line disk.
If enabled, suspend replication for the near-line disk.
All SAN resources must be online and accessible.
If the near-line mirror is part of a group, the near-line mirror must leave the
group prior to recovery.
TimeMark must be enabled on the near-line resource and the near-line
replica, if one exists.
At least one TimeMark must be available to roll back to during recovery.
If you have been using CDP and want to roll back to a specific point-in-time, you may want to create a TimeView first and view it to make sure it contains the data that you want.
Note: If the near-line recovery fails due to a TimeMark rollback failure, device
discovery failure, etc., you can retry the near-line recovery by selecting Near-line
Mirror Resources --> Retry Recovery on the Near-line Disk.
Recover data from a near-line mirror
Recovery is done in the console from the near-line resource.
1. Right-click on the near-line resource and select Near-line Mirror Resource --> Start Recovery.
You can also start recovery by selecting TimeMark --> Rollback.
2. Add the near-line server as a client of the primary server.
You will go through several screens to add the client:
Confirm or specify the IP address the near-line server will use to connect
to the primary server as a client. This IP address is used for iSCSI; it is not
used for Fibre Channel.
Determine if you want to enable persistent reservation for the client (near-line server). This allows clustered clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.
Select the client's protocol(s). If you select iSCSI, you must indicate if this
is a mobile client.
(FC protocol) Select or add WWPN initiators for the client.
(FC protocol) Specify if you want to use Volume Set Addressing (VSA).
VSA is used primarily for addressing virtual buses, targets, and LUNs. If
your client requires VSA to access a broader range of LUNs, you must
enable it for the client.
If your storage devices use VSA, you must enable it.
(iSCSI protocol) Select the initiator that this client uses. If the initiator does
not appear, you may need to rescan. You can also manually add it, if
necessary.
(iSCSI protocol) Add/select users who can authenticate for this client.
3. Click OK to begin the recovery process.
4. Select the point-in-time to which you want to roll back.
Rollback restores your drive to a specific point in time, based on an existing
TimeMark or your CDP journal. After rollback, your drive will look exactly like it
did at that point in time.
You can select to roll back to any TimeMark. If this resource has CDP enabled
and you want to select a specific point-in-time, type in the exact time.
Once you click OK, the system will roll back the near-line mirror to the specified
point-in-time and will then synchronize the data back to the primary server.
When the process is completed, your screen will look similar to the following:
5. When the Mirror Synchronization Status shows the status as Synchronized, you
can select Near-line Mirror Resource --> Resume Config to resume the
configuration of the near-line mirror.
This re-sets the original near-line configuration so that the primary server can
begin mirroring to the near-line mirror.
6. Re-assign your primary disk to its client(s).
Recover data from a near-line replica
Another type of recovery is recovering from the TimeMark of the near-line replica
disk. The following is required before recovering data from a near-line replica:
All of the clients assigned to the primary disk must be removed.
The near-line disk and the replica disk must be in sync as required for role
reversal.
If the Near-line Disk is already enabled with mirror, the mirror must be
removed first.
Recovery is performed via the console from the near-line resource as described
below:
1. Right-click on the near-line resource and select Replication -->
Recovery --> Prepare/Start
2. Click OK to update the configuration for recovery.
3. Click OK to perform role reversal.
The Recovery from Near-line Replica TimeMark screen displays.
4. Select the TimeMark to roll back to in order to restore your drive to a specific point in time.
Once you click OK, the system will roll back the near-line mirror to the specified
point-in-time.
5. Perform Replication synchronization from the REVERSED Near-line Replica Disk to the near-line disk after a successful rollback.
This synchronizes the rollback data from the REVERSED replica to the Near-line Disk and the primary disk, since the near-line disk is now the replica and the primary disk is the mirror of the near-line disk.
To do this:
Right-click on the REVERSED Near-line replica disk.
Select Replication-> Synchronize.
6. Perform Role Reversal to switch Near-line Disk back as Replication Primary
Disk and resume the Near-line Mirroring configuration.
To do this:
Right-click on the REVERSED Near-line Replica Disk.
Select Replication-> Recovery-> Resume Config.
The Resume Near-line Mirroring from Near-line Replica Recovery screen
displays.
7. Click OK to switch the role of the Near-line disk and the Near-line replica and
resume near-line mirroring.
8. Re-assign your primary disk to its client(s).
Recover from a near-line replica TimeMark using forceful role reversal
Recovery from a near-line replica TimeMark with forceful role reversal can be used
when the near-line server is not available. However, this only works if both Near-line
Disk and Near-line Replica are enabled with TimeMark.
To prepare for recovery:
Suspend replication on the near-line server
Unassign all of the clients from the primary disk on the primary server
Suspend near-line mirror on the primary disk to prevent mirror
synchronization of the near-line disk.
Suspend CDP on the near-line disk and the near-line replica.
To recover using this method:
1. Perform forceful role reversal on the Near-line Replica.
Right-click on the Replica disk and select Replication --> Reversal.
The procedure will fail because the near-line server is not available. Click OK.
Click the Cancel button at the login screen to exit the login dialog.
The Forceful Replication Role Reversal screen displays.
Type Yes to confirm and click OK.
Click OK to switch the roles of the replica disk and primary server.
The Replica is promoted.
2. Perform TimeMark rollback on the reversed Near-line Replica.
Right-click on the reversed Near-line Replica and select TimeMark -->
Rollback.
Select the TimeMark you are rolling back to and click OK.
3. Perform Repair Replica from the reversed Near-line Replica after the near-line
server is online.
Note: You must set the near-line disk to Recovery Mode before repairing the
replica.
4. Right-click on the reversed Near-line Replica and select Replication --> Repair
5. Perform Synchronization on the reversed Near-line Replica.
Right-click on the reversed Near-line Replica and select Replication -->
Synchronize
6. Once synchronization is finished, perform role reversal from the reversed near-line replica.
Right-click on the reversed Near-line Replica and select Replication --> Reversal
7. When the Mirror Synchronization Status shows the status as Synchronized, you
can select Near-line Mirror Resource --> Resume Config to resume the
configuration of the near-line mirror.
This re-sets the original near-line configuration so that the primary server can
begin mirroring to the near-line mirror.
8. Once near-line mirror configuration has resumed, you can resume the Near-line
Mirror, Replication, and CDP.
9. Re-assign your primary disk to its client(s).
Swap the primary disk with the near-line mirrored copy
Right-click on the primary SAN resource and select Near-line Mirror --> Swap to
reverse the roles of the primary disk and the mirrored copy. You will need to do this if
you are going to perform maintenance on the primary disk or if you need to remove
the primary disk.
Note: When swapping the primary disk with the near-line mirrored copy, the mirror will swap back to the primary disk if the mirror is in sync and a set period of
time has passed. This is done to reduce the amount of load on the disk from the
Near-line server. The time to swap back is based on the global sync option in the
console.
Manually synchronize a near-line mirror
The Synchronize option re-synchronizes a mirror and restarts the mirroring process
once it is synchronized. This is useful if one of the mirrored disks has a minor failure,
such as a power loss.
1. Fix the problem (turn the power back on, plug the drive in, etc.).
2. Right-click on the primary resource and select Near-line Mirror --> Synchronize.
During the synchronization, the system will monitor the process and check lag time.
Depending upon your monitoring policy, mirroring will be suspended if the lag time
gets above the acceptable limit.
Rebuild a near-line mirror
The Rebuild option rebuilds a mirror from beginning to end and starts the mirroring
process once it is synchronized.
To rebuild the mirror, right-click on a primary resource and select Near-line Mirror --> Rebuild. After rebuilding the mirror, swap the mirror so that the primary server can service clients again. You can see the current settings by checking the Mirror Synchronization Status field on the General tab of the resource.
Expand a near-line mirror
Use the Expand SAN Resource Wizard to expand the near-line mirror. Make sure
the near-line server is up and running. If the near-line server is down, you will not be
able to expand the primary disk or the near-line mirror disk. However, if the primary
server is down, you can still expand the near-line mirror and the primary disk will be
expanded in the next mirror expansion.
You can expand the near-line mirror with or without the near-line replica server. If a
near-line replica server exists, both the near-line mirror and the replica disk will be
expanded at the same time.
To expand a virtualized disk:
1. Right-click on the primary disk or near-line mirror and select Expand.
If you want to enlarge the primary disk, you will need to enlarge the mirrored
copy to the same size. The Expand SAN Resource Wizard will automatically
lead you through expanding the near-line mirror disk first.
The Expand SAN Resource Wizard screen displays.
2. Select the physical storage.
3. Select an allocation method and specify the size to allocate.
The near-line mirror and the replica are expanded.
4. Click Finish to confirm the expansion of the near-line mirror and the replica.
You are automatically routed back to the beginning of the Expand SAN
Resource Wizard to expand the primary server.
Note: Thin provisioning is not supported with near-line mirroring.
Expand a service-enabled disk
To expand the service-enabled disk, the near-line mirror expand size must be
greater than or equal to the primary disk expand size. You must expand the
storage size on the physical disk first. Then go to the console and rescan the
physical disk.
Once you have performed a rescan of the physical disk, follow the same steps
described above to expand the disk.
Suspend/resume near-line mirroring
When you manually suspend a mirror, the system will not attempt to re-synchronize,
even if you have a re-synchronization policy. You will have to resume the mirror in
order to synchronize.
When you resume mirroring, the mirror is synchronized before mirroring is resumed.
During the synchronization, the system will monitor the process and check lag time.
Depending upon your monitoring policy, mirroring will be suspended if the lag time
gets above the acceptable limit.
To suspend/resume mirroring for a resource:
1. Right-click on a primary resource and select Near-line Mirror --> Suspend (or
Resume).
You can see the current settings by checking the Mirror Synchronization Status
field on the General tab of the resource.
Change your mirroring configuration options
Set global
mirroring
options
You can set global mirroring options that affect system performance during all types
of mirroring (near-line, synchronous, or asynchronous). While the default settings
should be optimal for most configurations, you can adjust the settings for special
situations.
To set global mirroring properties for a server:
1. Right-click on the server and select Properties.
2. Select the Performance tab.
Synchronize Out-of-Sync Mirrors - Determine how often the system should
check and attempt to resynchronize active out-of-sync mirrors, how often it
should retry synchronization if it fails to complete, and whether or not to include
replica mirrors. These settings will only be used for active mirrors. If a mirror is
suspended because the lag time exceeds the acceptable limit, that resynchronization policy will apply instead.
The mirrored devices must be the same size. If you want to enlarge the primary
disk, you will need to enlarge the mirrored copy to the same size. When you use
the Expand SAN Resource Wizard, it will automatically lead you through
expanding the near-line mirror disk first.
Change properties for a specific primary resource
You can change the following near-line mirroring configuration for a primary
resource:
Policy for monitoring the mirroring process
Conditions for re-synchronization
Throughput control policies
To change the configuration:
1. Right-click on a primary resource and select Near-line Mirror --> Properties.
2. Make the appropriate changes and click OK.
Change properties for a specific near-line resource
For a near-line mirroring resource, you can only change the IP address that is used
by the near-line server to connect to the primary server.
To change the configuration:
1. Right-click on a near-line resource and select Near-line Mirror Resource -->
Properties.
2. Make the appropriate change and click OK.
Remove a near-line mirror configuration
You can remove a near-line mirror configuration from the primary or near-line mirror
resource(s).
From the primary server, right-click on the primary resource and select Near-line
Mirror --> Remove.
From the near-line server, right-click on the near-line resource and select Near-line
Mirror Resource --> Remove.
Recover from a near-line mirroring hardware failure
Replace a failed disk
If one of the mirrored disks has failed and needs to be replaced:
1. Right-click on the resource and select Near-line Mirror --> Remove to remove
the mirroring configuration.
2. Physically replace the failed disk.
Important: To replace the disk without having to reboot your storage server, refer
to Replace a failed physical disk without rebooting your storage server.
3. Re-run the Near-line Mirroring wizard to create a new mirroring configuration.
If both disks fail
If a disaster occurs at the site where the primary and near-line server are housed, it
is possible to recover both disks if you had replication configured for the near-line
disk to a remote location.
In this case, after removing the mirroring configuration and physically replacing the
failed disks, you can perform a role reversal to replicate all of the data back to the
near-line disk.
Afterwards, you can recover the data from the near-line mirror back to the primary
disk.
Fix a minor disk failure
If one of the mirrored disks has a minor failure, such as a power loss:
1. Fix the problem (turn the power back on, plug the drive in, etc.).
2. Right-click on the primary resource and select Near-line Mirror --> Synchronize.
This re-synchronizes the disks and restarts the mirroring.
If the near-line server is set up as a failover pair and is in a failed state
If you are performing a near-line recovery and the near-line server is set up as a
failover pair, always add the first and second nodes of the failover set to the primary
for recovery.
1. Select the proper initiators for recovery
2. Assign both nodes back to the primary for recovery.
Note: There are cases where the server may not show up in the list because the machine may be down and the particular port is not logged into the switch. In this situation, you must know the complete WWPN of your recovery initiator(s). This is important in cases where you need to manually enter the WWPN into the recovery wizard to avoid any adverse effects during the recovery process.
Replace a disk that is part of an active near-line mirror
If you need to replace a disk that is part of an active near-line mirror storage, take
the primary disk offline first. Then follow the steps below. If the primary server is part
of a High Availability (HA) set, take the disks offline for both servers before
proceeding.
1. If you need to replace the primary disk, right-click on the primary resource and
select Near-line Mirror --> Swap to reverse the roles of the disks.
2. Select Near-line Mirror --> Replace Primary Disk.
3. Replace the disk.
Important: To replace the disk without having to reboot your storage server, refer
to Replace a failed physical disk without rebooting your storage server.
4. Swap the disks to reverse their roles.
Replace a failed physical disk without rebooting your storage server
Do the following if you need to replace a failed physical disk without rebooting your
storage server.
1. If you are not sure which physical disk to remove, execute the following to access the drive and cause the disk's light to blink:
hdparm -t /dev/sd#
where # represents a, b, c, d, and so on, depending on the order of the disks.
2. You MUST remove the SCSI device from the Linux OS.
Type the following for Linux (2.4 kernel):
echo "scsi remove-single-device A C S L" > /proc/scsi/scsi
A C S L stands for: Adapter, Channel, SCSI ID, and LUN. These values can be found in the Console.
Type the following for Linux (2.6 kernel):
echo 1 > /sys/class/scsi_device/DeviceID/device/delete
Where DeviceID is obtained from ls /sys/class/scsi_device
For example:
echo "1" > /sys/class/scsi_device/1:0:0:0/device/delete
3. Execute the following to re-add the device so that Linux can recognize the drive:
echo "scsi add-single-device A C S L" > /proc/scsi/scsi
where A C S L stands for the Adapter, Channel, SCSI ID, and LUN numbers.
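For example, using the same adapter, channel, SCSI ID, and LUN values as the deletion example above (adjust the numbers to match your own device):
echo "scsi add-single-device 1 0 0 0" > /proc/scsi/scsi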
4. Rescan the adapter to which the device has been added.
In the Console, right-click on AdaptecSCSI Adapter.x and select Rescan, where
x is the adapter number the device is on.
Set Recovery Mode
The Set Recovery Mode option should only be used when recovering data from a
near-line replica TimeMark using forceful role reversal.
ZeroImpact Backup
FalconStor's ZeroImpact Backup Enabler allows you to perform a local raw device
tape backup/restore of your virtual drives.
A raw device backup is a low-level backup or full copy request for block information
at the volume level. The Linux dd command generates such a low-level request.
Examples of Linux applications that have been tested with the storage server to
perform raw device backups include BakBone's NetVault version 7.42 and
Symantec Veritas NetBackup version 6.0.
Using the FalconStor ZeroImpact Backup Enabler with raw device backup software
eliminates the need for the application server to play a role in backup and restore
operations. Application servers on the SAN benefit from better performance and the
elimination of overhead associated with backup/restore operations because the
command and data paths are rendered exclusively local to the storage server. This
results in optimal data transfer between the disks and the tape, and is the only way to achieve net transfer rates that are limited only by the disk or tape engines. The backup process automatically leverages the FalconStor snapshot
engine to guarantee point-in-time consistency.
To ensure full transactional integrity, this feature integrates with FalconStor
Snapshot Agents and the Group Snapshot feature.
Configure ZeroImpact backup
You must have a Snapshot Resource for each virtual device you want to back up. If
you do not have one, you will be prompted to create one. Refer to Create a
Snapshot Resource for more information.
1. Right-click on the SAN resource that you want to back up and select Backup -->
Enable.
Note: There is a maximum of 255 virtual devices that can be enabled for
ZeroImpact backup.
2. Enter a raw device name for the virtual device that you want to back up.
3. Configure the backup policy.
Use an existing TimeMark snapshot - (This option is only valid if you are using
FalconStor's TimeMark option on this SAN resource.) If a TimeMark exists for
this virtual device, that image will be used for the backup. It may or may not be a
current image at the time backup is initiated. If a TimeMark does not exist, a
snapshot will be taken.
Create a new snapshot - A new snapshot will be created for the backup,
ensuring the backup will be made from the most current image.
4. Determine how long to maintain the backup session.
Each time a backup is requested by a third-party backup application, the storage
server creates a backup session. Depending upon the snapshot criteria set on
the previous window, a snapshot may be taken at the start of the backup
session. (If the resource is part of a group, snapshots for all resources in the
group will be taken at the same time.) Subsequently, each raw device is opened
for backup and then closed. Afterwards, the backup application may verify the
backup by comparing the data on tape with that of the snapshot image created
for this session. Therefore, it is important to maintain the backup session until
the verification is complete. The storage server cannot tell how long your backup
application needs to rewind the tape and compare the data, so you must select
an option on this screen indicating how long the storage server is to maintain the
session. The session length only applies to backups (reading from a raw device),
not restores (writing to a raw device). The actual session will end within 60
seconds of the session length specified.
Absolute session length - This option maintains the backup session for a set
period of time from the start of the backup session. Use this option when you
know approximately how long the backup operation will take. This option can
also be used to limit the length of time that a backup can run. The Backup
operation will terminate when the Absolute Session Length timeout is reached
(whether or not Backup has completed). An Event message is logged that the
backup terminated when the Absolute Session Length timeout was reached.
Relative session length - This option maintains the backup session for a period
of time after the backup completes (the last raw device is opened and closed).
This is more flexible than the absolute session length since it may be difficult to
estimate how long a back up will take for all devices. With relative time, you can
estimate how long to wait after the last device is backed up. If there is a problem
during the backup, and the backup cannot complete, the Inactivity timeout tells
the storage server how long to wait before ending the backup session.
5. Confirm all information and click Finish to enable backup.
Back up a CDP/NSS logical resource using dd
Below are procedures for using the Linux dd command to perform a raw device
backup. Refer to the documentation that came with your backup software if you are
using a backup application to perform the backup.
1. Determine the raw device name of the virtual device that you want to back up.
You can find this name from the FalconStor Management Console. It is
displayed on the Backup tab when you highlight a specific SAN resource.
2. Execute the following command on the storage server:
dd if=/dev/isdev/kisdev# of=/dev/st0 bs=65536
where kisdev# refers to the raw device name of the logical resource.
st0 is the tape device. If you have multiple tape devices, substitute the correct
number in place of the zero. You can verify that you have selected the right tape
device by using the command: tar -xvf /dev/st0 where 0 is a variable.
bs=65536 sets the block size to 64K to achieve faster performance.
You can also back up a logical resource to another logical resource. Before doing so, all target logical resources must be detached from the client machine(s) and have backup enabled so that the raw device name of the logical resource can be used instead of specifying st0 for the tape device.
When the backup is finished, you will see only one logical resource listed in the Console. When you reserve a hard drive for use as a virtual device, the storage server writes partition information to the header and the Console uses this information to recognize the hard drive. Since a Linux dd makes an exact copy of the hard drive, this partition information will also exist on the second hard drive, will be read by the Console, and only one drive will be shown. If you need to make a usable copy of a virtual drive, use FalconStor's Snapshot Copy option.
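As an illustration of such a disk-to-disk backup, assume two backup-enabled logical resources whose raw device names are kisdev3 (source) and kisdev4 (target); the device numbers here are hypothetical, so check the Backup tab in the Console for the actual names. The command would be similar to:
dd if=/dev/isdev/kisdev3 of=/dev/isdev/kisdev4 bs=65536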
Restore a volume backed up using ZeroImpact Backup Enabler
You will need to do the following in order to restore an entire volume that was
backed up with the ZeroImpact Backup Enabler.
1. Unassign the volume you will be restoring from the SAN client to which it
attaches.
This ensures that the client cannot change data while the restore is taking place.
2. Before you start the restore, suspend replication and disable TimeMark.
These can hamper the performance of the restore. Before you disable TimeMark, be sure to record the current policies. This can be done by right-clicking on the virtual drive and selecting TimeMark/CDP --> Properties.
3. Once the restore is complete, resume replication and re-enable TimeMark, if
necessary.
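The restore itself takes place between steps 2 and 3 and is performed with your backup application or, for a dd backup, by reversing the input and output devices of the backup command shown earlier in this chapter. The following is a minimal sketch that assumes the volume was backed up to the first tape device and that kisdev# is the raw device name shown on the Backup tab; verify both before writing to the raw device:
dd if=/dev/st0 of=/dev/isdev/kisdev# bs=65536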
Multipathing
The Multipathing option may not be available in all IPStor, CDP, and NSS versions.
Check with your vendor to determine the availability. This option allows the storage
server to intelligently distribute I/O traffic across multiple Fibre Channel (FC) ports to
maximize efficiency and enhance system performance.
Because it uses parallel active storage paths between the storage server and
storage arrays, CDP/NSS can transparently reroute the I/O traffic to an alternate
storage path to ensure business continuity in the event of a storage path failure.
Multipathing is possible due to the existence of multiple HBAs in the storage server
and/or multiple storage controllers in the storage systems that can access the same
physical LUN.
The multiple paths cause the same LUN to have multiple instances in the storage
server.
Load distribution
Automatic load distribution allows for two or more storage paths to be
simultaneously used for read/write operations, enhancing performance by
automatically and equally dispersing data access across all of the available active
paths.
Preferred paths
Some storage systems support the concept of preferred paths, which means the
system determines the preferred paths and provides the means for the storage
server to discover them.
Path management
From the FalconStor Management Console, you can specify a preferred path for
each physical device. Right-click on the device and select Alias.
The Path Status can be Standby - Active (passive) or load-balancing (Active).
Changes to the active path configuration become effective immediately, but are not
saved permanently until you use the System Preferred Path --> Save option.
Each path has either a good or bad state. In most cases when the deployment is an
active/passive clustered pair of an NSS Gateway or NSS HC acting as a gateway,
there are two load-balancing groups.
Single load-balancing group: Once the path is determined to be defective, it
will be removed from the load-balanced group and will not be re-used after
the path is restored unless there are no more good paths available or a
manual rescan is performed. If either occurs, the path will be added back to
the load-balanced group.
Two load-balancing groups: If there are two load-balanced groups (one is
active and the other is passive) for the physical device, then when there are
no more good paths left in the active load-balanced group, the device will
fail over to the passive load-balancing group.
You can see multipathing information from the console by checking the Alias tab for
a LUN (under Fibre Channel Devices). For each device, you see the following:
Path Status: Current, Standby Active, Standby Passive, or load-balancing
Current: Displays if only one path is being used.
Standby Active: Displays when a path is in the active group and is ready.
A rescan from the console will make it load-balanced.
Standby Passive: Displays for all passive paths.
Load-balancing: Displays for all active paths across which the I/O is being balanced.
Standby Passive path(s) cannot be used until the LUN is trespassed. The
load is then balanced across the standby passive paths and the earlier
load-balanced paths now become standby passive.
Connectivity status - indicates whether the device is connected or
disconnected.
Command Line Interface
The Command Line Interface (CLI) is a simple interface that allows client machines
to perform some of the more common functions currently performed by the
FalconStor Management Console. Administrators can use the CLI to automate
many tasks, as well as integrate CDP/NSS with their existing management tools.
The CLI utility can be downloaded from the FalconStor website (on the customer
support portal and TSFTP) under the SAN client category.
Installation and configuration
The CLI is installed as part of the CDP/NSS Client installation. Once installed, a path
must be set up for Windows clients in order to be able to use the CLI. The path can
be set up from the Windows Desktop by performing the following steps:
1. Right-click My Computer and select Properties --> Advanced system settings -->
Environment Variables button.
2. Highlight the Path variable in the System Variables box, click the Edit button and
add the following to the end of the existing path.
;c:\Program Files\FalconStor\IPStor\Client
3. Click OK to save and exit.
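If you only need the CLI for the current Command Prompt session, you can set the path for that session instead of editing the system variable (a session-scoped alternative to the steps above; the change is lost when the window is closed):
set PATH=%PATH%;c:\Program Files\FalconStor\IPStor\Client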
For Linux, Solaris, AIX, and HP-UX clients, the path is automatically set during the Client installation. To use the CLI on these platforms, users must exit the current shell and open a new one at least once after installing the client software so that the new environment takes effect.
Using the CLI
CLI command usage help can be obtained by typing: iscli [help] [<command>]
[server parameters]. To run a CLI command, type: iscli <command> <parameters>
Note: You should not have a console connected to your storage server when you
run CLI commands; you may see errors in the syslog if a console is connected.
Type iscli at a command line to display a list of the existing commands.
For example: c:\iscli
These commands must be combined with the appropriate long or short arguments
(ex. Long: --server-name servername Short: -s servername).
If you type the command name (for example, c:\iscli getvdevlist), a list of
arguments will be displayed for that command.
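For example, the following two commands are equivalent ways of listing the virtual devices on a server; the server address and credentials shown are placeholders:
iscli getvdevlist -s 10.1.1.10 -u root -p password
iscli getvdevlist --server-name 10.1.1.10 --server-username root --server-password password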
Common arguments
The following arguments are used throughout the CLI. For each, a long and short
variation is included. You can use either one. The short arguments ARE case
sensitive. For arguments that are specific to each command, refer to the section for
that command.
Short argument, Long argument: Value/Description
-s, --server-name: Storage server name (hostname or IP address). In order to use the hostname, the server name has to be resolvable on the client side and server side.
-u, --server-username: Storage server username
-p, --server-password: Storage server user password
-S, --target-name: Storage target server name (hostname or IP address)
-U, --target-username: Storage target server username (for replication commands)
-P, --target-password: Storage target server user password (for replication commands)
-c, --client-name: Storage client name
-v, --vdevid: Storage virtual device ID
-v, --source-vdevid: Storage server source virtual device ID
-V, --target-vdevid: FalconStor target virtual device ID
-a, --access-mode: Client access mode to virtual device
-f, --force: Force the deletion of the virtual device
-n, --vdevname: Virtual device name
-X, --rpc-timeout: Specify a number between 1 and 30000 seconds for the RPC timeout. The default is 30 seconds if not specified.
Note: You only need to use the --server-username (-u) and --server-password (-p) arguments when you log into a server. You do not need them for subsequent commands on the same server during your current session.
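For example, you can log in once and then omit the username and password for later commands in the same session (the server name and credentials are placeholders):
iscli login -s server1 -u root -p password
iscli getvdevlist -s server1
iscli logout -s server1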
Commands
Below is a list of commands you can use to perform CDP/NSS functions from the
command line. You should be aware of the following as you enter commands:
Type each command on a single line, separating arguments with a space.
You can use either the short or long arguments (as described above).
For details and a list of arguments for each command, type iscli and the
command. For example c:\iscli getvdevlist
Variables are listed in <> after each argument.
Arguments listed in brackets [ ] are optional.
The order of the arguments is irrelevant.
Arguments separated by | are choices. Only one can be selected.
For a value entered as a literal, it is necessary to enclose the value in
quotes (double or single) if it contains special characters such as *, <, >, ?, |,
%, $, or space. Otherwise, the system will interpret the characters as having special meaning before the value is passed to the command (see the example after this list).
Literals cannot contain leading or trailing spaces. Leading or trailing spaces
enclosed in quotes will be removed before the command is processed.
In order to use the hostname of the storage server instead of its IP address,
the server name has to be resolvable on the client side and server side.
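For instance, a virtual device name that contains spaces must be quoted. The command below is illustrative only; type iscli renamevdev to confirm the exact arguments that command accepts:
iscli renamevdev -s server1 -v 5 -n "Exchange Logs 2011"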
The following table provides a summary of the command line interface options along with a description.
Command Line Interface (CLI) description table
Command
Description
Login/Logout of the storage server
iscli login
This command allows you to log into the specified storage server
with a given username and password.
iscli logout
This command allows you to log out of the specified storage
server.
If the server was not logged in or you have already logged out from
the server when this command is issued, error 0x0902000f will be
returned. After logging out from the server, the -u and -p arguments will not be optional for subsequent server commands.
Client Properties
iscli setfcclientprop
This command allows you to set Fibre Channel client properties.
<client-name> is required.
iscli getclientprop
This command allows you to get client properties.
iscli setiscsiclientprop
This command allows you to set iSCSI client properties. <user-list>
is in the following format: user1,user2,user3
iSCSI Targets
iscli createiscsitarget
This command creates an iSCSI target. <client-name>, <ip-address>, and <access-mode> are required. A default iSCSI target
name will be generated if <iscsi-target-name> is not specified.
iscli deleteiscsitarget
This command deletes an iSCSI target. <client-name> and <iscsi-target-name> are required.
iscli assigntoiscsitarget
This command assigns a virtual device or group to an iSCSI target.
A virtual device or group (either ID or name) and iSCSI target are
required. All virtual devices in the same group will be assigned to
the specified iSCSI target if group is specified. If a virtual device ID
is specified and it is in a group, an error will be returned.
iscli unassignfromiscsitarget
This command unassigns a virtual device or group from an iSCSI target. Virtual device and iSCSI target are required. The -f (--force) option is required when the iSCSI target is assigned to the client and the client is connected, or when the virtual device is in a group. An error will be returned if the client is connected and the force option is not specified.
iscli getiscsitargetinfo
This command retrieves information for iSCSI targets. The iSCSI
target ID or iSCSI target name can be specified to get the specific
iSCSI target information. The default is to get the information for all
iSCSI targets.
iscli setiscsitargetprop
This command sets the iSCSI target properties. Refer to Create
iSCSI target above for details about the options.
Users and Passwords
iscli adduser
This command allows you to add a CDP/NSS user. You must log in
to the server as "root" in order to perform this operation.
iscli setuserpassword
This command allows you to change a CDP/NSS user's password.
You must log in to the server as "root" in order to perform this
operation if the user is not an iSCSI user.
Mirroring
iscli createmirror
This command allows you to create a mirror for the specified virtual
device. The virtual device can be a SAN or Replica resource.
iscli getmirrorstatus
This command shows the mirror status of a virtual device. The
resource name, ID and synchronization status will be displayed if
there is a mirror disk configured for the virtual device.
iscli syncmirror
This command synchronizes the mirrored disks.
iscli swapmirror
This command reverses the roles of the primary disk and the
mirrored copy.
iscli promotemirror
This command allows you to promote a mirror disk to a regular
virtual device. The mirror cannot be promoted if the
synchronization is in progress or when it is out-of-sync and the
force option is not specified.
iscli removemirror
This command allows you to remove a mirror for the specified
virtual device.
iscli enablealternativereadmirror
This command enables virtual devices to read from an alternative
mirror.
iscli disablealternativereadmirror
This command disables virtual devices so they no longer read from
an alternative mirror.
iscli
getalternativereadmirroroption
This command retrieves and displays information about all virtual
devices with the alternative mirror option.
iscli migrate
This command allows you to copy a virtual device without a
snapshot. The original virtual device becomes a new virtual device
with a new virtual device ID. The original virtual device name and
ID will be kept, but with segments allocated from different storage.
If the virtual device does not have a mirror, the command creates a mirror, syncs the mirror, swaps the mirror, and then promotes the mirror. If the virtual device already has a mirror, it swaps the mirror, syncs the mirror, promotes the mirror, and then re-creates the mirror for the original VID.
iscli getmirrorpolicy
The following is an example of the output of the command if Mirror
Health Monitoring Option is enabled:
Mirror Health Monitoring Option Enabled=Yes
Monitoring Interval=1 seconds
Maximum Acceptable Lagging Time=15 milliseconds
Threshold to Report Error=5 %
Minimum outstanding IOs to Report Error=20
Mirror Sync Control Policy:
Sync Control Policy Enabled=Yes
Sync Control Max Sync Time=4 Minute(s)
Sync Control Max Resync Interval=1 Minute(s)
Sync Control Max IOs for Resync=N/A
Sync Control Max IO Size for Resync=20 MB
Sync Control Max Resync Retry=0
iscli setmirrorpolicy
The Mirror policy is for resources enabled with the mirroring option.
You can set the options to check the mirror health status, suspend,
resume and re-synchronize the mirror when it is necessary.
iscli suspendmirror
This command allows you to suspend mirroring.
iscli resumemirror
This command allows you to resume mirroring.
Server Commands for Virtual Devices and Clients
iscli createvdev
This command allows you to create a SAN resource on the
specified server. A SAN resource can be created in one of
following categories: virtual or service-enabled. The default
category is virtual if the category is not specified.
iscli getvdevlist
This command retrieves and displays information about all virtual
devices or a specific virtual device from the specified server.
iscli getclientvdevlist
This command retrieves and displays information about all virtual
devices assigned to the client from the specified server.
iscli renamevdev
This command allows you to rename a virtual device. However, only SAN resources and SAN replicas can be renamed. Specify the ID and new name of the resource to be renamed.
iscli assignvdev
This command allows you to assign a virtual device or a group on a
specified server to a SAN client. If this is an iSCSI client, you can
use this command to assign an iSCSI target to a client, but not a
device. Use CLI assigntoiscsitarget to assign a device.
iscli unassignvdev
This command allows you to unassign a virtual device or a group
on the specified server from a SAN client. If the client is an iSCSI
client, iSCSI target should be specified. Otherwise, virtual device
should be specified.
iscli expandvdev
This command allows you to expand the size of a virtual device on
the specified server. SAN resources can be expanded but not a
replica disk by itself or a TimeView resource.
iscli deletevdev
This command allows you to delete a SAN resource, or SAN
TimeView Resource on the specified server. If the resource is
assigned to a SAN client, the assignment(s) will be removed first. If
a Snapshot Resource is created for the virtual device, it will be
removed.
iscli setassignedvdevprop
This command allows you to set properties for assigned virtual
devices. Device properties can only be changed when the client is
not connected.
iscli addclient
This command allows you to add a client to the specified server.
iscli deleteclient
This command allows you to delete a client. <client-name> is the
client to be deleted.
iscli enableclientprotocol
This command allows you to add a protocol to a client.
iscli disableclientprotocol
This command allows you to remove a protocol from a client.
iscli getvidbyserialno
This command allows you to get the corresponding virtual device
ID when you enter a serial number (a 12-character alphanumeric string).
iscli addthindiskstorage
This command allows you to add additional storage to the resource
configured for Thin Provisioning without changing the maximum
disk size seen by the client host. The resource can be SAN, or a
replica.
iscli setthindiskproperties
This command allows you to set the thin disk properties.
iscli getthindiskproperties
This command allows you to get thin disk properties.
iscli getvdevserial
This command retrieves the serial number of the specified devices
from the server.
iscli replacefcclientwwpn
This command allows you to replace the Fibre Channel Client
World Wide Port Name (WWPN).
iscli rescanfcclient
This command allows you to notify the Fibre Channel client to
rescan the devices.
Email Alerts
iscli enablecallhome
This command allows you to enable Email Alerts.
iscli disablecallhome
This command allows you to disable Email Alerts.
Failover
iscli getfailoverstatus
This command shows you the current status of your failover
configuration. It also shows all Failover settings, including which IP
addresses are being monitored for failover.
Replication
iscli createreplication
This command allows you to set up a replication configuration.
iscli startreplication
This command allows you to start replication on demand for a
virtual device or a group. You can only specify one identifier, -v
<vdevid>, -g <group-id>, or -G <group-name>.
iscli stopreplication
This command allows you to stop the replication that is in progress
for a virtual device or a group. If a group is specified, and the group
is enabled with replication, the replication for all resources in the
group will be stopped. If replication is not enabled for the group,
but some of the resources in the group are configured for
replication, replication for the resources in the group will be
stopped.
iscli suspendreplication
This command allows you to suspend scheduled replications for a
virtual device or a group that will be triggered by your replication
policy. It will not stop a replication that is currently in progress.
iscli resumereplication
This command allows you to resume replication for a virtual device
or a group that was suspended by the suspendreplication
command. The replication will then be triggered by the replication
policy once it is resumed.
iscli promotereplica
This command allows you to promote a replica to a regular virtual
device if the primary disk is available and the replica disk is in a
valid state.
iscli removereplication
This command allows you to remove the replication configuration
from the primary disk on the primary server and delete the replica
disk on the target server. Either a primary server with a primary
disk or a target server with a replica disk can be specified.
iscli getreplicationstatusinfo
This command shows the replication status. The target server
name and the replica disk ID are required to get the replication
status.
iscli setreplicationproperties
This command allows you to set the replication policy for a virtual
device or group configured for replication.
iscli getreplicationproperties
This command allows you to get the replication properties for a
virtual device or group configured for replication.
iscli relocate
This command relocates a replica after the replica disk has been
physically moved to a different server.
iscli scanreplica
This command scans a replica server.
iscli getreplicationthrottles
This command allows you to view the throttle configuration
information.
iscli setreplicationthrottles
This command allows you to configure the throttle level for target
sites or windows. Can accept a file. The path of the file in the
command must be the full path
iscli getthrottlewindows
This command allows you to view the information of a particular
Target Site.
iscli setthrottlewindows
This command allows you to change the window start/end time.
Can accept a file. The path of the file in the command must be the
full path
iscli removethrottlewindows
This command removes a custom window. Can accept a file. The
path of the file in the command must be the full path.
iscli addthrottlewindows
Creates a custom throttle window with a specific time duration.
Can accept a file. The path of the file in the command must be the
full path.
iscli addlinktypes
This command allows you to create a custom Link Type.
iscli gettargetsitesinfo
This command allows you to view the information of a particular
Target Site.
iscli addtargetservertotargetsite
This command allows you to add a target server to an existing Target Site. Can accept a file. The path of the file in the command must be the full path.
iscli deletereplicationtargetsite
This command deletes (removes) a target site from the server.
iscli createreplicationtargetsite
This command creates a target site. You can create a target site with multiple target servers at once by listing their host names in the command or by using a file. The format of the file is one server per line. The path of the file in the command must be the full path.
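For example, a server list file for createreplicationtargetsite is plain text with one target server host name per line, and is referenced by its full path in the command (for instance /home/admin/targetservers.txt, a hypothetical location). The host names below are also hypothetical:
    nss-target-01
    nss-target-02
    nss-target-03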
iscli removetargetserverfromtargetsite
This command allows you to remove a target server from an existing Target Site. Can accept a file. The path of the file in the command must be the full path.
iscli removelinktypes
This command allows you to remove a custom Link Type.
iscli setlinktypes
This command allows you to configure an existing custom Link
Type.
iscli getlinktypes
This command allows you to view the available Link Types on the server.
Server configuration
iscli getserverversion
This command allows you to view the storage server version and build number.
iscli saveconfig
This command saves the configuration of your storage server. You
should save the configuration any time you change it, including any
time you add/change/delete a client or resource, assign a client, or
make any changes to your failover/mirroring/replication
configuration.
iscli restoreconfig
This command restores the configuration of your storage server. Specify the configuration file name that was saved with the saveconfig command.
Restoring a configuration overwrites existing virtual device and client configurations for that server. Storage server partition information will not be restored. This feature should only be used if your configuration is lost or corrupted.
Snapshot Copy
iscli snapcopy
This command allows you to issue a snapshot copy between two
virtual devices of the same size.
iscli getsnapcopystatus
This command allows you to get the status of snapshot copy.
Physical Device
iscli getpdevinfo
This command provides you with physical device information.
iscli getadapterinfo
This command allows you to get HBA information on a selected
adapter.
iscli rescandevices
This command allows you to rescan the physical resource(s) on
the specified server to get the proper physical resource
configuration.
The adapter number can be specified to rescan only the devices
on that adapter. If an adapter is not specified, all adapters will be
rescanned. In addition to the adapter number, you can also
specify the SCSI range to be rescanned. If the range is not
specified, all SCSI IDs of the specified adapter(s) will be
rescanned. Furthermore, the LUN range can be specified to
narrow down the rescanning range. The range is specified in this
format: #-#, e.g. 1-10.
If you want the system to rescan the devices sequentially, you can specify the --sequential option. The default is not to rescan sequentially.
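A minimal sketch of a sequential rescan follows. It uses only the --sequential option named above; the adapter, SCSI ID, and LUN range selectors, as well as any server-connection arguments your environment requires, are omitted.
    # rescan all adapters, walking the devices one at a time
    iscli rescandevices --sequential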
iscli importdisk
This command allows you to import a foreign disk to the specified
server. A foreign disk is a virtualized physical device containing
CDP/NSS logical resources previously set up on a different
storage server.
If the previous server is no longer available, the disk can be set up
on a new storage server and the resources on the disk can be
imported to the new server to make them available to clients.
Either the GUID or SCSI address can be specified for the physical
device to be imported. This information can be retrieved through
the getpdevinfo command.
iscli preparedisk
This command allows you to prepare a physical device to be used
by a CDP/NSS server or reserve a physical device for other usage.
The <guid> is the unique identifier of the physical device. <ACSL>
is the SCSI address of the physical device in this format: #:#:#:#
(adapter:channel:scsi id:lun). You can specify either the <guid> or
<ACSL> for the disk to be prepared.
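As a worked example of the ACSL format, assuming a hypothetical device on adapter 2, channel 0, SCSI ID 1, LUN 0:
    2:0:1:0    (adapter:channel:scsi id:lun)
Either this SCSI address or the device GUID reported by getpdevinfo identifies the disk to be prepared.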
iscli renamephysicaldevice
This command allows you to rename a physical device. (When a
device is renamed on a server in a failover pair, the device gets
renamed on the partner server also.)
iscli deletephysicaldevice
This command allows you to remove a physical device.
iscli restoresystempreferredpath
This command allows you to restore the system preferred path for
a physical device.
TimeMark/CDP
iscli enabletimemark
This command allows you to enable the TimeMark option for an
individual resource or for a group. TimeMark can be enabled for a
resource as long as it is not yet enabled.
iscli createtimemark
This command allows you to create a TimeMark for a virtual device
or a group. A timestamp will be associated with each TimeMark. A
notification will be sent to the SAN client to stop writing data to its
virtual devices before the TimeMark is created. The new TimeMark
is not immediately available after a successful createtimemark
command. The TimeMark creation status can be retrieved with the
gettimemarkstatus command. The TimeMark timestamp
information can be retrieved with the gettimemark command.
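A minimal sketch of that sequence for a single virtual device follows. The -v <vdevid> identifier is borrowed from the convention shown for startreplication and is an assumption here, as is the device ID 457; server-connection and login arguments are omitted.
    # request a new TimeMark for the virtual device
    iscli createtimemark -v 457
    # poll the creation status, then list the TimeMark timestamps
    iscli gettimemarkstatus -v 457
    iscli gettimemark -v 457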
iscli disabletimemark
This command allows you to disable the TimeMark option for a
virtual device or a group.
iscli updatetimemarkinfo
This command is only available in version 5.1 or later and lets you
add a comment or change the priority of an existing TimeMark. A
TimeMark timestamp is required to update the TimeMark
information.
iscli deletetimemark
This command allows you to delete a TimeMark for a virtual device
or a group. <timemark-timestamp> is the TimeMark timestamp to
be selected for the deletion in the following format:
YYYYMMDDhhmmss.
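For example, a TimeMark created at 2:30:00 PM on December 22, 2011 (a hypothetical date) would be identified by the timestamp 20111222143000 in the YYYYMMDDhhmmss format above.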
iscli copytimemark
This command allows you to copy the specified TimeMark to an
existing or newly created virtual device with the same size. The
copying status can be retrieved with the gettimemarkstatus
command.
iscli selecttimemark
This command allows you to select a TimeMark and create a raw
device on the server to be accessed directly. Only one raw device
can be created per TimeMark. The corresponding deselecttimemark command should be issued to release the raw device when it is no longer needed.
iscli deselecttimemark
This command allows you to release the raw device associated
with the TimeMark previously selected via the selecttimemark
command.
iscli rollbacktimemark
This command allows you to roll back a virtual device to a specific
point-in-time. The rollback status can be retrieved with the
gettimemarkstatus command.
iscli gettimemark
This command allows you to enumerate the TimeMarks and view
the TimeMark information for a virtual device or for a group.
iscli settimemarkproperties
This command allows you to change the TimeMark properties,
such as the automatic TimeMark creation schedule and maximum
TimeMarks allowed for a virtual device or a group.
iscli gettimemarkproperties
This command allows you to view the current TimeMark properties
associated with a virtual device or a group. When the virtual device
is in a group, the TimeMark properties can only be retrieved for the
group.
iscli gettimemarkstatus
This command allows you to retrieve the TimeMark creation state
and TimeMark rollback or copying status.
iscli createtimeview
This command allows you to create a TimeView virtual device associated with the specified virtual device and TimeMark.
iscli remaptimeview
This command remaps a TimeView associated with a specified
virtual device and TimeMark. The original TimeView is deleted and
all changes to it are gone. A new TimeView is created with the new
TimeMark using the same TimeView device ID. All of the
connection assignments are retained.
iscli suspendcdpjournal
This command suspends the CDP journal. After the CDP journal is suspended,
data will not be written to it until it is resumed.
iscli resumecdpjournal
This command resumes the CDP journal after it has been suspended.
iscli getcdpjournalstatus
This command gets the current size and status of your CDP
journal, including all policies.
iscli removetimeviewdata
This command allows you to remove TimeView data resources
individually or by source virtual devices.
iscli getcdpjournalinfo
This command allows you to retrieve CDP Journal information.
iscli createcdpjournaltag
This command lets you manually add a tag to the CDP journal. The
-A (--cdp-journal-tag) tag can be up to 64 characters long and
serves as a bookmark in the CDP journal. Instead of specifying the
timestamp, the tag can be used when creating a TimeView.
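A minimal sketch of tagging the journal follows. The -A (--cdp-journal-tag) option is documented above; the tag text and the -v <vdevid> device identifier (borrowed from the convention shown for startreplication) are assumptions, and server-connection arguments are omitted.
    # bookmark the current point in the CDP journal before a risky change
    iscli createcdpjournaltag -v 457 -A "before-os-patch"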
iscli getcdpjournaltags
This command allows you to retrieve CDP Journal tags.
Snapshot Resource
iscli createsnapshotresource
This command allows you to create a snapshot resource for a
virtual device. A snapshot resource is required in order for a virtual
device to be enabled with the TimeMark or Backup options. It is
also required for replication, snapshot copy, and for joining a
group.
iscli deletesnapshotresource
A snapshot resource is not needed if the virtual device is not enabled for the TimeMark or Backup options, is not configured for replication, and is not in a group. You can delete the snapshot resource to free up space when it is not needed.
iscli expandsnapshotresource
This command allows you to expand the snapshot resource on
demand. The maximum size allowed that is specified in the
snapshot policy only applies to the automatic expansion. The size
limit does not apply when the snapshot resource is expanded on
demand.
iscli setsnapshotpolicy
This command allows you to modify the existing snapshot policy
for the specified resource. The new policy will take effect with the
next snapshot operation.
iscli getsnapshotpolicy
This command allows you to view the snapshot policy settings for
the specified resource.
iscli enablereclamationpolicy
This command allows you to set the reclamation policy settings for
the specified resource.
iscli disablereclamationpolicy
This command allows you to disable the reclamation policy
settings for the specified resource.
iscli startreclamation
This command allows you to manually start the reclamation
process for the specified resource.
iscli stopreclamation
This command allows you to manually stop the reclamation
process for the specified resource.
iscli updatereclaimpolicy
This command allows you to update the reclamation policy settings
for the specified resource.
iscli getreclamationstatus
This command allows you to retrieve and view the reclamation
status for the specified resource.
iscli reinitializesnapshotresource
This command reinitializes a snapshot resource. A snapshot resource cannot be reinitialized when the virtual device is in a Snapshot Group or when the snapshot is online.
iscli getsnapshotresourcestatus
This command allows you to view snapshot resource status
information. The output will be similar to the following:
Virtual Device Name=Sarah-00457
ID=457
Type=SAN
Snapshot Resource Size=58827 MB
Snapshot Resource Status=Accessible
Used Size=47.54 GB(82%)
iscli setreclamationpolicy
This command allows you to set the reclamation policy on a
selected virtual device.
iscli setglobalreclamationpolicy
This command allows you to set the global reclamation policy.
iscli getsnapshotgroups
This command allows you to retrieve group information for all
groups or a specific group on the specified server.
The default output format is a list of groups and a list of group
members in each group.
iscli createsnapshotgroup
This command allows you to create a group, where <group-name> is the name for the group.
The maximum length for the group name is 64 characters. The following characters are invalid for the group name: <>"&$/\
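A minimal sketch follows. The group name SalesGroup is hypothetical, the -G <group-name> identifier is an assumption borrowed from the convention shown for startreplication, and server-connection arguments are omitted.
    # valid: within 64 characters and free of the reserved characters <>"&$/\
    iscli createsnapshotgroup -G SalesGroup
    # a name such as Sales&Group or Sales/Group would be rejected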
iscli deletesnapshotgroup
This command allows you to delete a group. A group can only be
deleted when there are no group members in it.
If the group is configured for replication, both the primary group
and replica group have to be deleted. The force option is required if
one of the following conditions applies:
Deleting the replica group on the target server when the
primary server is not available.
Deleting the primary group on the primary server when
the target server is not available.
An error will be returned if the force option is not specified for these
conditions.
iscli joinsnapshotgroup
This command allows you to add a virtual device to the specified
group. <vdevid> is the virtual device to join the group.
Either <group-id> or <group-name> can be specified for the group.
iscli leavesnapshotgroup
This command allows you to remove a virtual device from a group.
If the group is configured for replication, both the primary and
target servers need to be available because the system will
remove the primary disk from the group on the primary server and
the replica disk from the group on the target server.
You can use the force option to allow the primary disk to leave the
group on the primary server without connecting to the target
server, or allow the replica disk to leave the group on the target
server without connecting to the primary server. The force option
should only be used when the primary disk is no longer in the primary group or the replica disk is no longer in the replica group.
iscli enablereplication
This command allows you to enable replication for a group. Specify
the <group-id> or <group-name> for the group that should have
replication enabled. All of the resources in the group have to be
configured with replication in order for the group to be enabled for
replication.
Use the -E (--enable-resource-option) option to allow the system to
configure the non-eligible resources with replication first before
enabling the group replication option.
A target server must be specified. A group for the replica disks will
be created on the target server. You can specify the <target-group-name> or use the default. The default is to use the same group name.
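A minimal sketch of enabling group replication follows. The -E (--enable-resource-option) flag is documented above; the group name and the -G identifier convention (borrowed from startreplication) are assumptions. The required target server argument is not shown because its option is not spelled out in this table, and login arguments are omitted.
    # enable replication for the whole group, configuring any
    # not-yet-eligible members first via -E
    iscli enablereplication -G SalesGroup -E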
iscli disablereplication
This command allows you to disable replication for a group. All
replica disks will leave the replica group and the replica group on
the target server will be deleted. The replication configuration of all
resources in the group will remain the same, but TimeMarks will
not be taken for all resources together anymore. All replication
operations will be applied to the individual resource only.
Cache resources
iscli createcacheresource
This command creates a cache resource for a virtual device or a
group.
iscli getcacheresourcestatus
This command gets the status of a cache resource.
iscli setcacheresourceprop
This command sets the properties of a cache resource.
iscli getcacheresourceprop
This command displays the properties of a cache resource.
iscli suspendcacheresource
This command suspends a cache resource. After the cache
resource is suspended, no more new data will be written to it. The
data on the cache resource will be flushed to the source resource.
iscli resumecacheresource
This command resumes a suspended cache resource.
iscli deletecacheresource
This command deletes a cache resource. The data on the cache
resource has to be flushed before the cache resource can be
deleted. The system will suspend the cache resource first if it is not
already suspended.
Report data
iscli getreportdata
This command allows you to get report data from the specified
server and save the data to an output file in csv or text file format.
Event log
iscli geteventlog
This command retrieves the event log from the specified server. A date range can be specified to get the event log for a specific period. The default is to get all event log messages if a date range is not specified.
Backup
iscli enablebackup
This command allows you to enable the backup option for an
individual resource or for a group. Backup can be enabled for a
resource as long as it is not already enabled.
iscli disablebackup
This command allows you to disable backup for a virtual device or
a group. Backup of a resource cannot be disabled if the resource is
in a group enabled for backup.
A group's backup can be disabled as long as there is no group activity using the snapshot resource. Individual resources in the group will remain backup-enabled after the group's backup is disabled.
iscli stopbackup
This command allows you to stop the backup activity for a virtual
device or a group.
If a group is specified and the group is enabled for backup, the
backup activity for all resources in the group is stopped. If the
backup option is not enabled for the group, but some of the
resources in the group are enabled for backup, the backup activity
for the resources in the group is stopped.
iscli setbackupproperties
This command allows you to change the backup properties, such as the inactivity timeout, closing grace period, backup window, and backup life span, for a virtual device or a group.
When the virtual device is in a group, the backup properties can
only be set for the group. To remove the inactivity timeout or
backup life span, specify 0 as the value.
iscli getbackupproperties
This command allows you to view the current backup properties
associated with a virtual device or a group enabled for backup.
When the virtual device is in a group, the backup properties can
only be retrieved for the group.
Xray
iscli getxray
This command allows you to get X-ray information from the
storage server for diagnostic purposes. Each X-ray contains
technical information about your server, such as server messages
and a snapshot of your server's current configuration and
environment. You should not create an X-ray unless you are
requested to do so by your Technical Support representative.
Command Line Interface (CLI) error codes
The following table contains command line error codes. For any error not listed in
this table, please contact FalconStor Technical Support.
CDP-NSS Command Line Interface Error Messages
Error code
Text
0x90020001
Invalid arguments.
0x90020002
Invalid Virtual Device ID.
0x90020003
Invalid Client access mode.
0x90020004
Connecting to $ISPRODUCTSHORT$ server failed.
0x90020005
You are connected to $ISPRODUCTSHORT$ server with read-only
privileges.
0x90020006
Connecting to SAN client failed.
0x90020007
Getting SAN client state failed.
0x90020008
The requested Virtual Device is already attached.
0x90020009
Attaching to Virtual Device failed.
0x9002000a
Disconnecting from SAN client failed.
0x9002000b
Detaching from Virtual Device failed.
0x9002000c
Invalid size.
0x9002000d
Invalid X Ray options.
0x9002000e
Logging in to $ISPRODUCTSHORT$ server failed.
0x9002000f
User has already logged out from $ISPRODUCTSHORT$ server.
0x90020010
Invalid client.
Note: Make sure you use the SAN client names that are created on the server. These
names may be different from the actual hostname or the ones in /etc/hosts.
0x90020011
Replication policy is not specified.
0x90020012
Memory allocation error.
0x90020013
Failed to get configuration file from server.
0x90020014
Failed to get dynamic configuration from server.
0x90020015
Failed to parse configuration file.
0x90020016
Failed to parse dynamic configuration file.
0x90020017
Failed to connect to the target server.
0x90020018
You are connected to the target server with read-only privileges.
0x90020019
Failed to get the configuration file from the target server.
0x9002001a
Failed to get the dynamic configuration file from target server.
0x9002001b
Failed to parse the configuration file from target server.
0x9002001c
Failed to parse the dynamic configuration file from the target server.
0x9002001d
Invalid source virtual device.
0x9002001e
Invalid target virtual device.
0x9002001f
Invalid source resource type.
0x90020020
Invalid target resource type.
0x90020021
The virtual device is a replica disk.
0x90020022
The virtual device is a replication primary disk.
0x90020023
Failed to delete virtual device from client.
0x90020024
Failed to delete virtual device.
0x90020025
Failed to delete remote client.
0x90020026
Failed to save the file.
0x90020027
Remote client does not exist.
0x90020028
You have to run the login command with a valid user ID and password, or provide the server user ID and password through the command.
0x90020029
You have to run the login command with a valid user ID and password, or provide the target server user ID and password through this command.
0x9002002a
Virtual Device ID %1 is not assigned to the client %2.
0x9002002b
The size of the source disk and target disk does not match.
0x9002002c
The virtual device is not assigned to the client.
0x9002002d
Replication is already suspended.
0x9002002e
Replication is not suspended.
0x9002002f
Rescanning Devices failed.
0x90020030
The requested Virtual Device is already detached.
0x90020031
$ISPRODUCTSHORT$ server is not added to the client.
0x90021000
?CLI_RPC_FAILED.
0x90021001
?CLI_RPC_COMMAND_FAILED.
0x90022000
Failed to start a transaction for this command.
0x90022001
Failed to start a transaction on the primary server for this command.
0x90022002
Failed to start a transaction on the target server for this command.
0x90022003
$ISPRODUCTSHORT$ server specified is an invalid IP address.
0x90022004
Failed to resolve $ISPRODUCTSHORT$ server to a valid IP
address.
Note: For the CLI to work with a server name instead of an IP address, the server name has to be resolvable on both the client side and the server side. This error can occur, for example, when the server hostname is not in DNS or the /etc/hosts file.
0x90022005
Failed to create a connection.
Note: Check that the network interface on the server is not down, to make sure RPC calls go through.
0x90022006
Failed to secure the connection.
0x90022007
User authentication failed.
0x90022008
Failed to login to $ISPRODUCTSHORT$ server.
0x90022009
Failed to get the device statistics from client.
0x9002200a
Device is not ready.
0x9002200b
Device is not detached.
0x9002200c
Failed to get device status from client.
0x9002200d
The source virtual device is already a snapcopy source.
0x9002200e
The source virtual device is already a snapcopy target.
0x9002200f
The target virtual device is already a snapcopy source.
0x90022010
The target virtual device is already a snapcopy target.
0x90022011
The source virtual device is a replica disk.
0x90022012
The target virtual device is a replica disk.
0x90022013
Invalid category for source virtual device.
0x90022014
Invalid category for target virtual device.
0x90022015
The category of source virtual device is different from category of
the target virtual device.
0x90022016
The size of the primary disk does not match the size of the replica
disk. The minimum size for the expansion is %1 MB in order to
synchronize them.
0x90022017
Getting $ISPRODUCTSHORT$ server information failed. It's
possible that the server version is prior to version 1.02.
0x90022018
The Command Line Interface and the $ISPRODUCTSHORT$
Server are running different software versions:\n\t<CLI version %1
(build %2) and $ISPRODUCTSHORT$ Server version %3 (build
%4)>\nPlease update these components to the same version in
order to use the Command Line Interface.
0x90022019
Invalid client list information.
0x9002201a
Invalid resource list information.
0x9002201b
Getting report data timeout.
0x9002201c
There is no report data.
0x9002201d
Failed to open the output file: %1.
0x9002201e
Invalid Report Data.
0x9002201f
Output file: %1 already exists.
0x90022020
The target server name cannot be resolved on the primary server.
Please make sure your DNS is set up properly or use static IP
address for the target server.
0x90022021
Failed to promote mirror due to virtual device creation error. The
mirror is not recovered.
0x90022022
Invalid physical segment information.
0x90022023
Failed to open file: %1.
0x90022024
Physical segment section not defined.
0x90022025
Some physical segment information are overlapped.
0x90022026
Invalid segment size.
0x90022027
Invalid segment section.
0x90022028
Invalid TimeMark.
0x90022029
The virtual device is in a snapshot group. You have to enable the
TimeMark before joining the virtual device to the snapshot group.
0x90022030
The virtual device is in a snapshot group. Please use force option to
disable the TimeMark option for this virtual device that is in a
snapshot group.
0x90022031
The virtual device is in a snapshot group. All the virtual devices in the same snapshot group have to be unassigned as well. Please use the force option to unassign the virtual device, or the -N (--no-group-client-assignment) option to unassign the virtual device only.
0x90022032
Failed to write to the output file: %1. Please check to see if you
have enough space.
0x90022033
The client is currently connected and the virtual device is attached. We recommend disconnecting the client before unassigning the virtual device. You must use the <force> option to unassign the virtual device from the client while the client is connected.
0x90022034
Failed to connect to the replication primary server. Please use
<force> option to promote the replica.
0x90022035
TimeMark cannot be disabled when the virtual device is in a
snapshot group. Please remove the virtual device from the
snapshot group first.
0x90022036
The virtual device is in a snapshot group; the individual TimeMark policy for the virtual device cannot be updated. Please specify the group ID or group name to update the TimeMark policy for the snapshot group.
0x90022037
Please specify at least one Snapshot property to be updated.
0x90022038
Replica disk does not exist. Therefore, there is no new resource promoted from the replica disk, but the replication configuration is removed from the primary disk.
0x90022039
TimeView virtual device exists for this TimeMark.
0x9002203a
After rollback, some of the TimeMarks will no longer exist, and there are TimeView resources created for those TimeMarks. Please delete the TimeView resources first if you want to roll back the TimeMark.
0x9002203b
There are TimeView virtual devices associated with this virtual
device. Please delete the TimeView virtual devices first.
0x9002203c
Replica disk does not exist. Only the replication configuration is
removed from the primary disk.
0x9002203d
Invalid adapter number: %1.
0x9002203e
Total number of Snapshot Group reaches the maximum groups:
%1.
0x9002203f
Total number of Snapshot Group on the target server reaches the
maximum groups: %1.
0x90022040
The resource is in a Snapshot Group. Please set the backup
properties through the Snapshot Group.
0x90022042
Replication is not configured for this resource.
0x90022043
The resource is in a Snapshot Group. Please set the replication
properties through the Snapshot Group.
0x90022044
Please specify at least one replication option to be updated.
0x90022045
Failed to get Server Time information.
0x90022046
Invalid resource type for deleting TimeMark.
0x90022047
Invalid resource type for rolling back TimeMark.
0x9002204
The virtual device is in a Snapshot Group enabled with TimeMark.
Please perform the group TimeMark operation.
0x90022048
The Snapshot Group is not enabled for TimeMark. Please perform
the TimeMark operation through the virtual device.
0x9002204a
TimeView virtual device already exists for this TimeMark.
0x9002204b
There is no Snapshot Image created for this Snapshot Group.
0x9002204c
The virtual device is in a replication-enabled snapshot group.
0x9002204d
The snapshot group is not enabled with replication. If the virtual
device in the snapshot group is enabled with replication, please
perform the replication operations through the virtual device.
0x9002204e
Failed to create connection for failover partner server.
0x9002204f
Failed to start transaction for failover partner server.
0x90022050
You are connected to the $ISPRODUCTSHORT$ failover partner server with read-only privileges.
0x90022051
Failed to parse the configuration from failover partner server.
0x90022052
Replication feature is not supported on this server: %1.
0x90022053
Backup feature is not supported on this server: %1.
0x90022054
TimeMark feature is not supported on this server: %1.
0x90022055
Snapshot Copy feature is not supported on this server: %1.
0x90022056
Mirroring feature is not supported on this server: %1.
0x90022057
Copy Manager feature is not supported on this server: %1.
0x90022058
Fibre Channel feature is not supported on this server: %1.
0x90022059
The specified TimeMark is the latest TimeMark on the replica disk,
which cannot be deleted.
0x9002205a
Unable to get NAS write access.
0x9002205b
Failed to parse NAS configuration.
0x9002205c
The primary disk is not available. The replication configuration on
the primary disk will not be removed.
0x9002205d
There are SAN clients connected to the resource. You have to disconnect the client(s) first before deleting the resource.
0x9002205e
There are active SMB connections associated with this NAS
resource. Please disconnect them first or use force option.
0x9002205f
Snapshot Group feature is not supported on this server: %1.
0x90022060
NAS feature is not supported on this server: %1.
0x90022061
Timeout while disabling cache resource.
0x90022062
?CLI_ERROR_DIR_EXIST
0x90022063
?CLI_PARSE_NAS_USER_CONF_FAILED
0x90022064
Invalid NAS User.
0x90022065
The IP address of the replication target server for this configuration
has to be in the range of %1.
0x90022066
Local Replication feature is not supported on this server: %1.
0x90022067
The specified replica disk is the same as the primary disk
0x90022068
The batch mode processing is not completed for all the requested
virtual devices.
0x90022069
The server \"%1\"is not configured for failover.
0x9002206a
Unable to get server name.\nPlease check that the environment
variable ISSERVERNAME has been set properly.
0x9002206b
Unable to get user name.\nPlease check that the environment
variable ISUSERNAME has been set properly.
0x9002206c
Unable to get password.\nPlease check that the environment
variable ISPASSWORD has been set properly.
0x9002206d
Invalid login information format.
0x9002206e
File %1 does not exist.
0x9002206f
Unable to open configuration file: %1.
0x90022070
Error reading configuration file: %1.
0x90022071
There are virtual devices assigned to this client.
0x90022072
NAS resource is not ready.
0x90022073
Invalid Windows User name.
0x90022074
Invalid NAS authentication mode.
0x90022076
Failed to get server name.
0x90022077
The server is not a failover secondary server.
0x90022078
Failover is already enabled on this server.
0x90022079
Failover is already suspended on this server.
0x90022083
?CLI_ERROR_BMR_COMPATIBILITY.
0x90022084
?CLI_ERROR_ISCSI_COMPATIBILITY.
0x90022085
This command \"%1\"is not supported for this server version: %2.
0x90022086
Cache group is not supported for this server version: %1.
0x9002208a
Snapshot Notification Option is not supported for this server
version: %1.
0x9002208e
Compression option is not supported for this server version: %1.
0x9002208f
Encryption option is not supported for this server version: %1.
0x90022090
Timeout policy is not supported for this server version: %1.
0x90022091
Cache parameter is not supported for this version: %1.
0x90022092
Cache parameter <skip-duplicate-write> is not supported for this version: %1.
0x90022093
Reserving service-enabled Disk Inquiry String feature is not
supported for this version: %1.
0x90022094
This is not a valid server configuration to set the server
communication information.
0x90022095
The resource is a NAS resource and it is attached. Please unmount
and detach the NAS resource first before performing TimeMark
rollback.
0x90022096
Invalid iSCSI Target starting lun.
0x90022097
Invalid IPStor user:
0x90022098
iSCSI Initiator %1 is already assigned to other client.
0x90022099
There are no users assigned to this iSCSI client.
0x9002209a
Invalid client type for updating the device properties.
0x900220a7
The client has to support at least one client protocol.
0x900220a8
Generic client protocol is not supported on this server.
0x900220a9
Invalid type for client / resource assignment.
0x900220aa
Invalid Client Type.
0x900220b0
?CLI_NAS_SMB_HOME.
0x900220b1
?CLI_SNMP_MAX_TRAPSINK.
0x900220b2
?CLI_SNMP_NO_TRAPSINK.
0x900220b3
?CLI_SNMP_OUT_INDEX
0x90023000
Invalid BootIP client.
0x90023001
This client is not enabled for BootIP.
0x90023002
This client is already enabled for BootIP.
0x90023003
No BootIP properties were specified.
0x90023004
IP address is needed.
0x90023005
Hardware address is needed.
0x90023006
Invalid <use-static-ip> value.
0x90023007
Invalid <default-boot> value.
0x90023008
Duplicated MAC address.
0x90023009
Duplicated IP address.
0x90023010
This device is a BootIP resource. Disable BootIP before you delete/
unassign it.
0x90023011
-S 1 and <ip-address> should be specified at the same time.
0x90023012
BootIP feature is not supported on this server: %1.
0x90023013
DHCP is not enabled on this server; a static IP cannot be used.
0x90023100
Failed to connect to the server. Please make sure the server is
running and the version of the server is 4.01 or later.
0x90023101
Use existing TimeMark for replication option is not supported on this
server: %1.
0x90023103
Invalid share information for batch mode NAS share creation.
0x90023104
Replica size exceeds the licensed WORM size limit on the target server: %1 GB.
0x90023105
NAS resource will exceed the licensed WORM size limit: %1 GB.
0x90023102
?CLI_TARGET_SERVER_NOT_WORM_KERNEL.
0x90023106
Compliance time can only be set when compliance clock option is
set.
0x90023107
WORM is not supported by the kernel of this server.
0x90023108
Stop Write option is no longer supported in this version of server:
%1.
0x90023109
Invalid replica disk.
0x9002310a
The compliance clock between failover servers is more than 5
minutes apart. Please use force option to continue.
0x9002310b
The compliance clock between replication servers is more than 5
minutes apart. Please use force option to continue.
0x9002310c
Local replication is not supported for WORM resource.
0x9002310d
You do not have the license for WORM resource.
0x9002310e
Invalid iSCSI user password length (12 - 16).
0x9002310f
The login user is not authorized for this operation.
0x90023110
The specified user name already exists.
0x90023111
Invalid user name.
0x90023112
Continuous Replication is not supported.
0x90023113
Replication is still in progress. Please wait for replication to
complete before disabling the TimeMark option.
0x90023114
This resource is in a snapshot group. Snapshot notification will be
determined at the group level and cannot be updated at the
resource level.
0x90023115
This server is configured as Symmetric failover server. In
Symmetric failover setup, the same target WWPN will be used on
the secondary server during failover instead of the standby WWPN
as in Asymmetric failover setup. It's not necessary and not allowed
to configure Fibre channel client protocol for the same client on the
failover partner server. This client is already enabled with Fibre
Channel protocol on the failover partner server. The operation
cannot proceed.
0x90023116
Replication protocol is not supported for this version of server: %1.
0x90023117
TCP protocol is not supported for continuous mode replication on
this target server. It is supported on a target server of version 5.1 or
later.
0x90023118
It is required to assign all the Fibre Channel devices to \"all-to-all\" for Symmetric failover setup.
0x90023119
Invalid CDP journal timestamp.
0x9002311a
CDP option is not supported for this server version: %1.
0x9002311b
CDP journal is not available.
0x9002311c
TimeMark priority is not supported on this server: %1.
0x9002311d
TimeMark information update is not supported on this server: %1.
0x9002311e
TimeMark information cannot be updated for a replica group.
0x9002311f
TimeMark comment cannot be updated for TimeMark group.
0x90023120
TimeMark priority cannot be updated for TimeMark group member.
0x90023121
CDP journal was suspended at the specified timestamp.
0x90023122
This virtual device is still valid for cross mirror setup. Manual
swapping is not allowed.
0x90023123
Notification Frequency option is not supported on this version of
server: %1
0x90023124
This operation is not supported for cross mirror configuration.
0x90023125
This virtual device is in a TimeMark group. Rollback is currently in
progress for one of the group members: %1. Please wait until the rollback is completed before starting rollback for this virtual device.
0x90023126
Invalid CDP journal tag.
0x90023127
Clients have to be unassigned before rollback is performed.
0x90023128
The specified data point is not valid for post rollback TimeView
creation.
0x90023129
The specified data point is not valid for recurring rollback.
0x90023130
This virtual device is still valid for cross mirror setup. Manual
swapping is not allowed.
0x90023131
Group replication schedule has to be suspended first before joining
the resources to the group.
0x90023132
Replication schedule of the specified resource has to be suspended
first before joining the replication group.
0x90023133
Replication schedule for all the group members has to be
suspended first before joining the resources to the group or
enabling the group replication.
0x90023134
MicroScan option for individual resource is not supported on this server: %1.
0x90023135
Source virtual device is not on a FalconStor SED.
0x90023136
Resource has STP enabled.
0x90023137
?CLI_ERROR_MULTI_STAGE_REPL_NOT_SUPPORTED.
0x90023138
Suspend / Resume Mirror option is not supported on this server:
%1.
0x90023139
Mirror of this resource is already suspended.
0x9002313a
Mirror of this resource is already suspended.
0x9002313b
This resource is in the replication disaster recovery state; the operation is not allowed.
0x9002313c
This group is in the replication disaster recovery state; the operation is not allowed.
0x90024001
The target virtual device is in a SafeCache group or a CDP group.
Please remove the resource from the group first if you need to copy
the data to the resource.
0x90024002
Mirror Policy feature is not supported on this server: %1.
0x90024003
?CLI_ERROR_REPL_TRANSMITTED_INFO_NOT_SUPPORTED.
0x90024004
The virtual device info serial number is not supported for this
version of server: %1.
0x90024005
Fast replication synchronization is not supported for this version of
server: %1.
0x90024006
Mirror Swap option is already disabled.
0x90024007
Mirror Swap option is already enabled.
0x90024008
Disable Mirror Swap option is not supported for this server version:
%1.
0x90024101
A server is in a cross-mirror setup.
0x90024102
Virtual device is a Near-line Disk.
0x90024103
Virtual device is a Primary Disk enabled with Near-line Mirror.
0x90024104
Rescan did not find the newly assigned virtual device.
0x90024105
The newly assigned virtual device has been allocated.
0x90024106
The remote client does not have an iSCSI target.
0x90024107
The virtual device is not a Primary Disk enabled with Near-line Mirror.
0x90024108
The virtual device is not a Near-line Disk.
0x90024109
The servers are not Near-line Mirroring partners for the specified
virtual device.
0x9002410a
There is an error in Near-line Mirroring configuration.
0x9002410b
Mirror license is required to perform this operation.
0x9002410c
Please swap the Primary Disk with its mirror first.
0x9002410d
All segments of the Primary Disk are in online state.
0x9002410e
Cannot join a Near-line Disk to a group that contains a Near-line Disk with a different Near-line server.
0x90024201
The virtual device has been assigned to a client and the virtual
device's userACL doesn't match the snapshot group's userACL.
0x90024202
Near-line Mirror option is not supported on this server: %1.
0x90024203
The operation is not allowed when Near-line Recovery is initiated
for the specified Primary Disk.
0x90024204
The operation is not allowed when Near-line Recovery is initiated
for the specified Near-line Disk.
0x90024205
TimeMark rollback is not supported for a Near-line Disk.
0x90024206
The specified resource is a Near-line Disk. Please remove the
Near-line Mirroring configuration first.
0x90024207
The specified resource is enabled with Near-line Mirror. Please
remove the Near-line Mirroring configuration first.
0x90024208
The specified iSCSI target is assigned to a Near-line server.
0x90024209
The operation is not allowed when Near-line Replica Recovery is initiated for the specified Near-line Replica Disk.
0x90024301
Cannot disable InfiniBand, since there are targets/devices assigned
to InfiniBand client.
0x90024302
InfiniBand is not supported in this build.
0x90024303
iSCSI isn't enabled.
0x90024304
No InfiniBand license.
0x90024305
Command is not allowed because FailOver is enabled.
0x90024306
Failed to convert IP address to integer.
0x90024307
The given IP address is not bound to an InfiniBand NIC.
0x90024308
InfiniBand isn't enabled.
0x90024351
Each zone's size cannot be bigger than the HotZone resource's
size.
0x90024401
Problem vdev command is not supported by this version of server.
0x90024402
Virtual device signature is not supported by this version of server.
0x90024403
Invalid physical device name.
0x90024404
There are no Fibre Channel devices to perform this operation.
0x90024501
The virtual device specified by <timeview-vid> is not a TimeView.
0x90024502
The TimeView does not belong to the given virtual device or snapshot group.
0x90024601
The CLI command is not allowed because the server is in failover state.
0x90024602
There is a virtual device allocated on the physical device.
0x90024603
The physical device is in a storage pool.
0x90024604
The server isn't the owner of the physical device.
0x90024605
Physical device is online.
0x90024606
Invalid initiator WWPN.
0x90024607
The specified new initiator WWPN is invalid or already exists.
0x90024608
Replacing Fibre Channel client WWPN operation is not supported
on this server.
0x90024609
The specified target disk is a thin disk.
0x9002460a
Thin provisioning feature is not supported on this server: %1.
0x9002460b
Minimum outstanding IOs for mirror policy is not supported on this
server: %1
0x9002460c
Mirror throughput control policy is not supported on this server: %1.
0x9002460d
Replication throughput control policy is not supported on this
server: %1.
0x9002460e
iSCSI Mutual Chap Secret option is not supported on this server:
%1.
0x9002460f
Host Apps Info is not supported on this server: %1
0x90024610
The version of the primary server has to be the same or later than
the version of the Near-line server for Near-line mirroring setup.
0x90024611
The version of the primary server has to be the same or later than
the version of the replica server for replication setup.
0x90024612
Saving persisted timeview data information is not supported in this
version of server: %1
0x90024613
Replication using specific TimeMark is not supported in this version
of server: %1.
0x90024614
iSCSI mobile user update is not supported in this version of server.
0x90024615
Service Enable Device license is required to perform this operation.
0x90024616
Primary server is configured for symmetric failover. Near-line
Recovery was already triggered for the Primary failover partner server.
Please resume the configuration first.
0x90024617
Primary server is configured for symmetric failover. Near-line server
client exists on the primary failover partner server. Please remove the
client from the primary failover partner server first.
0x90024618
Near-line disk is enabled with mirror. Please remove the mirror from
Near-line disk before performing Near-line recovery.
0x90024619
Near-line Resource is not the mirror of the Primary Disk. Please
swap the mirror first before performing the Near-line Recovery.
0x9002461a
Invalid Near-line client for iSCSI protocol.
0x9002461b
Failed to discover device on Near-line server.
0x9002461c
Failed to discover device on the Near-line server failover partner.
0x9002461d
There is not enough space available for virtual header allocation.
0x9002461e
Near-line server client does not exist on the Primary server.
0x9002461f
Near-line server failover partner client does not exist on the Primary
server
0x90024620
Near-line server client properties are not configured for assignment.
0x90024621
Near-line server failover partner client properties are not configured for assignment.
0x90024622
Near-line recovery is not supported on the specified server.
0x90024623
Timeout waiting for sync status for thin disk packing.
0x90024624
Thin disk is out-of-sync for packing.
0x90024625
Timeout waiting for swap status for thin disk packing.
0x90024626
The data copying program is missing.
0x90024627
Thin disk copy is not supported on the specified server.
0x90024628
Global cache resource is not supported on this server: %1.
0x90024629
Thin disk relocation is not supported on the specified server.
0x9002462a
Near-line Disk is already configured on Near-line server, but the
configuration does not match with primary disk.
0x9002462b
Primary Disk is already configured on Primary server, but the
configuration does not match with the specified Near-line Server.
0x9002462c
The specified Primary Disk is already configured for Near-line
Mirroring on the specified Near-line server.
0x9002462d
The Primary server is configured for failover, but the failover partner
server is not configured properly for Near-line Mirroring.
0x9002462e
The Primary server is not configured as client on the Near-line
server.
0x9002462f
The Primary failover partner server is not configured as client on the
Near-line server.
0x90024630
Near-line Disk is not assigned to the Primary server client on the Near-line server.
0x90024631
Near-line Disk cannot be found on the specified Near-line server.
0x90024632
service-enabled Device of Near-line Disk cannot be found on the
Primary failover partner server.
0x90024633
Failed to get the serial number for the Primary Disk.
0x90024634
Suspend mirror from the Primary first before performing conversion.
0x90024635
Failed to discover device on Near-line primary server.
0x90024636
Invalid Primary resource type.
0x90024637
Invalid Near-line resource type.
0x90024638
CDP Journal is enabled and active for replica group. Please
suspend CDP Journal and wait for the data to be flushed.
0x90024639
SafeCache is enabled and active for replica group. Please suspend SafeCache and wait for the data to be flushed.
0x9002463a
CDP Journal is enabled and active for primary group. Please
suspend CDP Journal and wait for the data to be flushed.
0x9002463b
SafeCache is enabled and active for primary group. Please suspend SafeCache and wait for the data to be flushed.
0x9002463c
CDP Journal is enabled and active for replica disk. Please suspend
CDP Journal and wait for the data to be flushed.
0x9002463d
SafeCache is enabled and active for replica disk. Please suspend SafeCache and wait for the data to be flushed.
0x9002463e
CDP Journal is enabled and active for primary disk. Please
suspend CDP Journal and wait for the data to be flushed.
0x9002463f
SafeCache is enabled and active for primary disk. Please suspend SafeCache and wait for the data to be flushed.
0x90024640
Primary disk is enabled with Near-line Mirroring, the operation is not
allowed.
0x90024641
Primary disk is a Near-line disk, the operation is not allowed.
0x90024642
Primary disk is a NAS resource. Please unmount and detach the resource first.
0x90024643
HotZone is enabled and active for the primary disk. Please suspend
the HotZone first.
0x90024644
There is no member in the group for the operation.
0x90024645
CDR is enabled for the primary disk. Please disable CDR first.
0x90024646
CDR is enabled for the primary group. Please disable CDR first.
0x90024647
The group configuration is invalid. The operation cannot proceed.
0x90024648
Replication configuration between the primary and replica is
inconsistent. The operation cannot proceed.
0x90024649
Forceful Role Reversal can only be performed from replica server
for disaster recovery when the primary server is not available.
0x9002464a
Forceful Role Reversal cannot be performed when the primary
server is still available and operational.
0x9002464b
Forceful Role Reversal is not supported in this version of server: %1
0x9002464c
The replica disk is not loaded for the operation.
0x9002464d
Updating umap timestamp for the new primary resource(s) failed.
0x9002464e
The operation can only be performed after forceful role reversal.
0x9002464f
HotZone is enabled on new replica, repair cannot proceed.
0x90024650
CDR is enabled for the original primary disk. Please disable CDR
first.
0x90024651
CDR is enabled for the original primary group. Please disable CDR
first.
0x90024652
?CLI_MICRSCAN_COMPRESSION_CONFLICT_ON_TARGET.
0x90024653
Snapshot resource cannot be reinitialized when it is accessible.
0x90024654
The option for discardable changes for the timeview is not enabled.
0x90024655
Snapshot resource is offline.
0x90024655
?CLI_INVALID_NEARLINE_CONFIG.
0x90024656
?CLI_INVALID_NEARLINE_DISK.
0x90024657
The option for discardable changes is not supported for this type of
resource.
0x90024658
Failed to enable cache for the TimeView to keep discardable changes.
0x90024659
Failed to enable cache for the TimeView to keep discardable changes, and the TimeView cannot be removed.
0x9002465a
There is still cached data that has not been flushed to the TimeView. Please flush the changes first if you do not want to discard them before deleting the TimeView.
0x9002465b
The option for discardable changes for the timeview is not enabled.
0x9002465c
This operation can only be performed on failover secondary server
in failover state.
0x9002465f
The option for discardable TimeView changes is not supported for
this version of server: %1.
0x90024665
Primary server user id and password are required for the target
server to establish the communication information.
0x90024666
The resource is a Near-line Disk and the Primary Disk is a thin disk.
Expansion is not supported for a thin disk.
0x90024667
The options for snapshot resource error handling are not supported.
0x90024668
Failed to connect to the primary server.
0x90024669
There are no iSCSI targets configured on the specified server.
0x9002466a
iSCSI initiator connection information is not available on this version
of server: %1.
0x9002466b
Your password is expired. Please change your password first.
0x9002466c
TimeView replication option is not supported on this version of
server: %1.
0x9002466e
TimeView replication option is not supported on this version of
server: %1.
0x9002466f
The replica disk of the source resource is invalid for TimeView
replication.
0x90024670
TimeMark option is not enabled on the replica disk of the source
resource of the TimeView.
0x90024671
The TimeMark of the TimeView is not available on the replica disk of
the source resource.
0x90024672
Failed to get the TimeMarks of the replica disk to validate the
TimeMark timestamp for TimeView replication.
0x90024673
TimeView replication can only be performed for source resource
enabled with remote replication. Local replication is enabled for the
source resource. TimeView replication cannot proceed.
0x90024674
TimeView replication option is not enabled for the source resource.
0x90024675
This operation can only be performed for a Near-line disk as a
reversed replica.
0x90024676
TimeView resource of the source TimeMark exists on the primary
server. TimeView data replication cannot proceed.
0x90024677
TimeView resource of the replica TimeMark exists on the target
server. TimeView data replication cannot proceed.
0x90024678
TimeView data exists on the replica TimeMark. Please specify the -vf (--force-to-replicate) option to force the replication.
0x90024679
Remote replication is not enabled for the resource for TimeView
data replication.
0x9002467a
Inquiry page retrieval is not supported for this version of server: %1
0x9002467b
TimeView rollback is not supported on this version of server: %1.
0x9002467c
TimeView copy is not supported on this version of server: %1.
0x9002467d
CDP Journal rollback and TimeView data rollback are mutually
exclusive.
0x9002467e
There is no TimeView data associated with this TimeMark to
perform TimeView copy.
0x9002467f
Virtual device MicroScan option is not supported in this version of
server: %1.
0x90024680
The specified target device is enabled with global SafeCache.
Please disable global SafeCache first if you need to copy data to
this resource.
0x90024681
There is no unflushed cache marker.
0x90024682
There is no unflushed cache marker and the cache is not full.
Please create a cache marker to flush the data to it first.
0x90024683
There is no TimeView data associated with this TimeMark to
perform TimeView rollback.
0x90024684
Sync priority setting is not supported for this version of server: %1
0x90024685
Failed to get the TimeMarks of the source disk to validate the
TimeMark timestamp for TimeView replication.
0x90024686
There is no TimeView data for the specified TimeMark on the
source resource.
0x90024687
The TimeMark of the TimeView is not available on the source
resource.
0x90024688
Failed to parse the configuration file from primary server.
0x90024689
The primary disk of the source resource is invalid.
0x90024690
Failed to get the TimeMarks of the primary disk to validate the
TimeMark timestamp for TimeView replication status.
0x90024691
TimeView data replication is in progress for the specified TimeMark.
0x90024692
TimeView data of the specified replica TimeMark is invalid.
0x90024693
Physical device cannot be found on failover partner server.
0x90024694
Physical device is already owned by the failover partner server.
0x90024695
Sync CDR replica TimeMark setting is not supported for this version
of server: %1.
0x90024696
Preserve CDR primary TimeMark setting is not supported for this
version of server: %1.
0x90024697
CDR-related parameters were specified without CDR enabled.
0x90024698
It appears that the nearline disk is still available. Please login into
the nearline server before removing the configuration.
0x90024699
Keep TimeMarks setting is not supported for this version of server:
%1
0x9002469a
To keep Timemarks, the TimeView resources have to be
unassigned before rollback is performed.
0x9002469b
CDP journal is active. To keep Timemarks, please suspend CDP
journal and wait for the data to be flushed.
0x9002469c
SafeCache is active. To keep Timemarks, please suspend
SafeCache and wait for the data to be flushed.
0x9002469d
Group CDP journal is active. To keep Timemarks, please suspend
Group CDP journal and wait for the data to be flushed.
0x9002469e
Group SafeCache is active. To keep Timemarks, please suspend
Group SafeCache and wait for the data to be flushed.
0x9002469f
Specified throughput control related parameters without throughput
control option enabled.
0x90024700
Specified mirror monitoring related parameters without mirror
monitoring option enabled.
0x90024701
Read the partition from inactive path option is not supported for this
version of server: %1.
0x90024702
Use report luns option and lun ranges option are mutually exclusive.
0x90024703
Specified discover new devices options while in scan existing
devices mode.
0x90024704
BTOS feature is not supported for this version of server: %1.
0x90024705
Select TimeMark with timeview data is not supported for this version
of server: %1.
0x90024706
Fibre channel client rescan is not supported for this version of
server: %1.
0x90024707
Configuration Repository can not be disabled when failover is
enabled.
0x90024708
Configuration Repository is not enabled.
0x90024709
Configuration Repository has already been enabled.
0x9002470a
Configuration Repository can not be enabled when failover is
enabled.
0x9002470b
Only administrators have the privilege for the operation.
0x9002470c
TimeView replication is in progress. Please wait until the timeview
replication is completed.
0x9002470d
Please specify -F to allow forceful role reversal.
0x90024719
Backup is enabled. Recovery cannot proceed.
0x9002471a
Replication job queue is not supported in this version of server: %1.
0x9002471b
Replication schedule is not allowed for this resource.
0x9002471c
Replication schedule is not allowed for this group.
0x9002471d
Virtual Device name cannot be renamed for this resource.
0x9002471e
I/O latency retrieval is not supported in this version of server: %1.
0x9002471f
The new client OS types is not supported on this server.
0x90024720
Group rollback is only supported for SAN resources.
0x90024721
Group rollback is not supported for group with Near-line disks.
0x90024722
No Timemark available for selected CDP journal timestamp.
0x90024723
Group rollback is not supported in this version of server: %1.
0x90024724
The specified virtual device does not have CDP enabled for the
journal related options.
0x09021000
RPC call failed:
RPC encoding arguments error.
RPC decoding results error.
RPC sending error.
RPC receiving results error.
RPC timeout error. (Note: Check that the server is not disconnected
from the network; an RPC call can time out after 30 seconds.)
RPC version mismatch.
RPC authentication error.
RPC program not available.
RPC program version mismatch.
0x80020500
Cannot parse XML configuration
0x800B0100
Cannot allocate memory
0x8023040b
Cannot find openssl library or the public key file.
0x80230406
Cannot reach the registration server on the Internet.
0x80230403
Cannot connect to the registration database.
0x80230404
Cannot find the keycode in the registration database.
0x80230405
License registration limit has been reached.
0x80230406
Cannot find host while attempting register keycode.
Note: The FalconStor license server cannot be reached; make
sure the server has Internet access.
0x80230407
Failed to register keycode because system call timed out.
0x80230408
Server is in failover state.
0x80020600
Failed to read config file.
0x8023040c
ISHOME isn't defined.
SNMP Integration
CDP/NSS provides SNMP support to integrate CDP/NSS management into an
existing enterprise management solution such as HP OpenView, HP Network Node
Manager (NNM), Microsoft System Center Operations Manager (SCOM), CA
Unicenter, IBM Tivoli NetView, and BMC Patrol.
For Dell appliances, SNMP integration with Dell OpenManage is supported.
Information can be obtained via your MIB browser (e.g., query Dell's OID with
OpenView) or via the Dell OpenManage software.
For HP appliances, SNMP integration with HP Advanced Server Management
(ASM) is supported. Information can be obtained via your MIB browser or from the
HP Systems Insight Manager (SIM).
CDP/NSS uses the MIB (Management Information Base) to determine what data
can be monitored. The MIB is a database of information that you can query from an
SNMP agent.
A MIB module contains actual specifications and definitions for a MIB. A MIB file is
just a text file that contains one or more MIB modules.
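For example, once the MIB file has been loaded on a management host, the CDP/NSS data can be queried with any standard SNMP tool. The following sketch assumes the Net-SNMP command-line utilities are installed on the management host and that the default read-only community (public) is still in use; the storage server host name is illustrative only.

    # Walk the FalconStor (IPStor) enterprise subtree; 7368 is the enterprise OID
    # used by the CDP/NSS SNMP agent (compare the trap OID 1.3.6.1.4.1.7368.0.9
    # referenced later in this chapter).
    snmpwalk -v 2c -c public nss-server.example.com 1.3.6.1.4.1.7368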
There are three major areas of management:
Accounting management (including discovery)
Locates all storage servers and Windows clients. It shows how all the
resources are aggregated, virtualized, and provisioned, including the
number of adapters, physical devices, and virtual devices attached to a
Server. Most of the information comes from the storage server's
configuration file (ipstor.conf).
Performance management (including statistics)
Shows information about your storage servers and clients, including the
number of clients being serviced by a server, server memory used, CPU
load, and the total MB transferred. Most of the information comes from the
/proc/ipstor directory on the servers or the client monitor on the clients. For
more information about each of the statistics, please refer to the
IPSTOR-MIB.txt file in the Server's /usr/local/ipstor/etc/snmp/mibs directory.
Fault management
This allows a trap to be generated when certain conditions occur.
SNMP Traps
Simple Network Management Protocol (SNMP) is used to monitor systems for fault
conditions, such as disk failures, threshold violations, etc.
Essentially, SNMP agents expose management data on the managed systems as
variables. The variables accessible via SNMP are organized in hierarchies. These
hierarchies, and other metadata (such as type and description of the variable), are
described by Management Information Bases (MIBs).
An SNMP-managed network consists of three key components:
Managed device
Agent software which runs on managed devices
Network management system (NMS) software which runs on the
manager
An SNMP trap is an asynchronous notification indicating that a significant event has
occurred.
There are statistic traps, disk-full traps, failover/recovery traps, and process-down
traps. Statistics traps allow you to set a threshold for an Object Identifier (OID) so
that a trap is sent when the threshold is met. In order to integrate with some
third-party SNMP managers, you may need to load the MIB file. To load the MIB file,
copy $ISHOME/etc/snmp/mibs/IPSTOR-MIB.TXT to the machine running the SNMP
manager.
An SNMP trap message is sent when triggered by an event. The message contains
the OID, time stamp, and specific information for each trap.
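For example, on a Linux-based SNMP manager the MIB file can be copied from the storage server with scp. The manager-side destination directory shown below is a common Net-SNMP MIB location and, like the host name, is only an assumption for illustration.

    # Copy the CDP/NSS MIB file from the storage server ($ISHOME is /usr/local/ipstor)
    scp root@nss-server.example.com:/usr/local/ipstor/etc/snmp/mibs/IPSTOR-MIB.TXT \
        /usr/share/snmp/mibs/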
Process down traps allow you to monitor the status of the CDP/NSS modules (or
processes) so that a trap is sent when a CDP/NSS component is down. The
following table lists the name and description of the modules (or processes) that can
be configured to be monitored:
CDP/NSS
processes
The following table lists the names and descriptions of the CDP/NSS modules.
CDP/NSS module
Description
IPStor SNMPD
An agent that processes SNMP requests and returns the
information to the requester (i.e., the SNMP management
software).
IPStor Configuration
Provides backward compatibility to the service start up script
for CDP/NSS.
IPStor Base
QLogic FC initiator modules provide configuration and
interaction between the CDP/NSS server and the FC
environment/storage.
IPStor HBA
QLogic FC initiator modules provide configuration and
interaction between the CDP/NSS server and the FC
environment/storage.
IPStor Authentication
Security authentication module for connections.
IPStor Block Device
Generic block-to-SCSI driver that provides the SCSI interface
for CDP/NSS to access non-SCSI block devices.
storage server
(Compression)
Compression driver; it uses the LZO open-source
compression algorithm.
storage server
(FSNBase)
Provides basic IO services to the kernel modules.
storage server
(Upcall)
Handles interactions between kernel and user mode
components.
storage server
(Transport)
Provides support for replication.
storage server (Event)
Provides message logging interface to the syslog.
storage server (Path
Manager)
Manages the IO paths to the storage.
storage server
(Application)
Provides core IO services to the rest of the application.
IPStor Advanced
Backup
Provides a raw device interface from a CDP/NSS virtualized disk
for full, differential, or incremental backup.
IPStor Target
Provides Fibre Channel target functionality.
IPStor iSCSI Target
Provides iSCSI target functionality that links the network
adapter to the I/O core
IPStor iSCSI
(Daemon)
User daemon that handles the login process to the CDP/NSS
iSCSI target initiated from an iSCSI initiator.
IPStor Communication
Handles console-to-server communication and manages
overall system configuration information.
IPStor CLI Proxy
Facilitates communication between CLI utility and a CDP/NSS
server.
IPStor Logger
Provides the logging function for CDP/NSS reports.
IPStor Central Client
Manager
Provides integration with Central Client Manager.
IPStor Local Client
(VBDI)
Block device driver that provides a block device interface to a
CDP/NSS virtual device.
IPStor Self Monitor
Self-monitor process which checks the server's own health.
CDP/NSS Event Log messages can be sent to your SNMP manager. By default,
Event Log messages (informational, warnings, errors, and critical errors) will not be
sent. From the FalconStor Management Console, you can determine which type of
messages should be sent. To select the Trap level:
1. Right-click on the server and select Properties --> SNMP Maintenance --> Trap
Level.
2. After selecting a Trap Level, click Add to enter the name of the server receiving
the traps (or IP address if the name is not resolvable), and a Community name.
Five levels are available:
None (Default) - No messages will be sent.
Critical - Only critical errors that stop the system from operating properly
will be sent.
Error - Errors (failures, such as a resource not being available or an
operation having failed) and critical errors will be sent.
Warning - Warnings (something occurred that may require maintenance
or corrective action), errors, and critical errors will be sent.
Informational - Informational messages, errors, warnings, and critical error
messages will be sent.
Implement SNMP support
The SNMP software is installed on the Server and Windows clients during the CDP/
NSS installation.
Note: CDP/NSS installs an SNMP module that stops the native SNMP agent on
the storage server. The CDP/NSS SNMP module is customized for use with your
SNMP manager. If you do not want to use the CDP/NSS SNMP module, you can
stop it by executing: ./ipstor stop snmpd. However, the next time the
server is rebooted, it will start again. Contact technical support if you do not want
it to restart on boot up.
To complete the implementation, you must install software on your SNMP manager
machine and then configure the manager to support CDP/NSS.
Since this process is different for each SNMP manager, please refer to the
appropriate section below.
Microsoft System Center Operations Manager (SCOM)
Microsoft SCOM is a Microsoft management server with SNMP functionality. CDP/
NSS supports SNMP trap integration with Microsoft SCOM 2007 R2. SNMP
integration requires that you manually create a rule and discover the SNMP device
from the Microsoft SCOM console. To do this:
1. From the Microsoft SCOM console, navigate to Authoring --> Management Pack
Object --> Rules. Right-click and select Create a new rule.
The Create Rule Wizard displays.
2. Select the type of rule to create: Alert Generating Rules --> Event based -->
SNMP Trap (Alert), and click Next.
3. Enter the rule name and description, and select the rule target for the SNMP
network device.
The Select a Target Type screen displays, allowing you to select from the
populated list, use the Look for field to filter down to a specific target, or sort the
targets by Management Pack.
4. In the Configure the trap OIDs to collect screen, select the Use discovery
community string option and enter the OID. For example: 1.3.6.1.4.1.7368.0.9
5. Configure Alerts by specifying the information that will be generated by the alert
and click Create.
Once the rule is created, you will be able to discover the SNMP network device.
6. Discover the SNMP network device.
From the Administration node, navigate to Device Management --> Network
Devices and select Discovery Wizard from the right-click menu.
7. Click Next at the Computer and Device Management Wizard screen, then
select Advanced discovery.
Select network device in the Computer & Device Types field.
8. Select the discovery method. Specify the IP address range (e.g. 172.11.22.333 to
172.11.22.333), type the community string (e.g. public), select the SNMP version
(e.g. v2), and click Discover.
After discovery you should see the network device. You can right-click on it and
select Open --> Alert View to see trap information on the Alert properties screen.
HP Network Node Manager (NNM) i9
CDP/NSS provides SNMP trap integration and MIB upload for the HP management
server - Network Node Manager i 9 (NNMi9).
NNMi9 trap
The trap configuration can be set by logging into the NNMi9 console from the web and
following the steps below:
1. From the HP Network Node Manager console, navigate to Workspaces -->
Configuration, and select Incident Configuration.
2. Select the New icon under the SNMP Traps tab.
Enter the Basics and then click Save and Close:
Name : IPSTOR-information
SNMP Object ID : .1.3.6.1.4.1.7368.0.9
Category : IPStor
Family : IPStor
Severity : Critical
Message Format: $oid
Navigate to Incident Browsing --> SNMP Traps to see the trap collection
information
Upload MIB
The MIB browser can be launched from the HP Network Node Manager console by
selecting Tools --> MIB Browser.
1. Upload the MIB file from the HP Network Node Manager console by selecting
Tools --> Upload Local MIB File.
The Upload Local MIB File window launches.
2. Browse to select the MIB file from the CDP/NSS storage server and click Upload
MIB.
The Upload MIB File Data Results screen displays an upload summary.
HP OpenView Network Node Manager 7.5
Installation
The software installation media includes software that must be installed on your HP
OpenView Network Node Manager (NNM) machine. This software adds several
CDP/NSS menu options into your NNM and adds a CDP/NSS MIB tree so that you
can set traps.
1. Launch the software installation package.
2. Select Install Products --> Install SNMP for HP OpenView.
If not automatically launched, navigate to the \SNMP\OpenView directory and
run setup.exe to launch the SNMP install program.
3. Start the NNM when the installation is finished.
Under the Tools menu you will see a new CDP/NSS menu option.
Configuration
You need to define which hosts will receive traps from your storage server(s) and
determine which CDP/NSS components to monitor. To do this:
1. In the NNM, highlight a storage server and select Tools --> SNMP MIB Browser.
2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer --> trapReg
and highlight trapSinkSettingTable.
The default read-only community is public. The default read-write community is
falcon.
Set the Community name to "falcon" so that you will be allowed to change
the configuration.
Click the Start Query button to query the configuration.
From the MIB values field, select a host to receive traps. You can set up to
five hosts to receive traps. If the value is 0, the host is invalid or not set.
In the SNMP set value field, enter the IP address or machine name of the
host that will receive traps.
Click the Set button to save the configuration in snmpd.conf.
3. In the SNMP MIB Browser, select private --> enterprises --> ipstor -->
ipstorServer --> alarmTable.
Click the Start Query button to query the alarms.
In the MIB values field, select which CDP/NSS components to monitor.
You will be notified any time the component goes down. A description of
each is listed in the SNMP Traps section.
In the SNMP set value field, enter enable or 1 to enable.
Click the Set button to enable the trap you selected.
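If you prefer the command line over the MIB browser, the same trapSinkSettingTable and alarmTable objects can be written with the Net-SNMP snmpset utility using the read-write community. The index values, data types, and the IPSTOR-MIB module name below are illustrative assumptions; verify the exact object names and types against the IPSTOR-MIB.txt file before use.

    # Point the first trap sink at the NNM host (falcon is the default read-write community)
    snmpset -v 2c -c falcon -m +IPSTOR-MIB nss-server.example.com trHost.1 s "nnm-host.example.com"

    # Enable a trap for one monitored component in the alarm table
    snmpset -v 2c -c falcon -m +IPSTOR-MIB nss-server.example.com alarmStatus.1 i 1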
Statistics in NNM
In addition to monitoring CDP/NSS components and receiving alerts, you can view
CDP/NSS statistics in NNM. There are two ways to do this:
CDP/NSS
menu
1. Highlight a storage server or Client and select Tools --> IPStor.
2. Select the appropriate menu option.
These reports are provided by CDP/NSS as a convenient way to view statistical
information without having to go through the MIB browser.
You can add your own reports to the menu by selecting Options --> MIB
Application Builder: SNMP. Refer to OpenView's documentation for details on
using the MIB Application Builder.
MIB browser
1. Highlight a storage server or Client and select Tools --> SNMP MIB Browser.
2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer.
If this is a Client, select ipstorClient.
From here you can view information about this storage server. If you run a query
at the ipstorServer level, you will get a superset of all of the information from all
of the sub-categories.
For more specific information, expand the sub-categories.
For more information about each of the statistics, you can click the Describe
button or refer to the IPSTOR-MIB.txt file that is in your \\OpenView\snmp_mibs
directory.
CA Unicenter TNG 2.2
Installation
The software installation media includes software that must be installed on your CA
Unicenter TNG 2.2 machine. This software creates a CDP/NSS SNMP class in
Unicenter and adds a CDP/NSS MIB tree so that you can set traps.
1. Launch the software installation media.
2. Select Install Products --> Install SNMP for CA Unicenter.
If not automatically launched, navigate to the \SNMP\Unicenter directory and run
setup.exe to launch the SNMP install program.
Configuration
You need to define which hosts will receive traps from your storage server(s) and
determine which CDP/NSS components to monitor. To do this:
1. Run Unicenter's Auto Discovery.
If you have a repository with existing machines and then you install the storage
server software, Unicenter will not automatically re-classify the machine and
mark it as a storage server.
2. If you need to re-classify a machine, open the Unicenter TNG map, highlight the
machine, select Reclassify Object, select Host --> IPStor SNMP and then
change the Alarmset Name to IPStorAlarm.
If you want to re-align the objects on the map after re-classification, select
Modes --> Design --> Folder --> Arrange Objects and then the appropriate
network setting.
3. Restart the Unicenter TNG map.
4. To define hosts, right-click on storage server --> Object View.
5. Click Object View, select Configure Toolbar, set the Get Community and Set
Community to falcon, and set the Model to ipstor.mib.
falcon is the default community name (password). If it was changed in the
snmpd.conf file (on the storage server), enter the appropriate community name
here.
6. Expand Vendor Information and highlight trapSinkSettingEntry.
7. To define a host to receive traps, highlight the trHost field of an un-defined host,
right-click and select Attribute Set.
You can set up to five hosts to receive traps.
8. In the New Value field, enter the IP address or machine name of the host that will
receive traps (such as your Unicenter TNG server).
Your screen will now show that machine.
9. Highlight alarmEntry.
10. Highlight the alarmStatus field for a component, right click and select Attribute
Set.
11. Set the value to enable for on or disable for off.
View traps
1. From your Start --> Programs menu, select Unicenter TNG --> Enterprise
Management --> Enterprise Managers.
2. Double-click on the Unicenter machine.
3. Double-click on Event.
4. Double-click on Console Logs.
Statistics in TNG
You can view statistics about CDP/NSS directly from the ObjectView screen.
To do this, highlight a category in the tree and the CDP/NSS information will be
displayed in the right pane.
Launch the FalconStor Management Console
If the FalconStor Management Console is installed on your Unicenter TNG machine,
you can launch it directly from the Unicenter map by right-clicking on a storage
server and selecting Launch FalconStor Management Console.
IBM Tivoli NetView 6.0.1
Installation
The software installation media includes software that must be installed on your
Tivoli NetView machine. This software adds several CDP/NSS menu options into
NetView and adds a CDP/NSS MIB tree so that you can set traps.
1. Launch the software installation media.
2. Select Install Products --> Install SNMP for IBM Tivoli.
If not automatically launched, navigate to the \SNMP\Tivoli directory and run
setup.exe to launch the SNMP install program.
3. Start NetView when the installation is finished.
You will see a new CDP/NSS menu option on NetView's main menu.
Configuration
You need to define which hosts will receive traps from your storage server(s). To do
this:
1. In NetView, highlight a storage server on the map and click the Browse MIBs
button.
2. In the tree, expand enterprises --> ipstor --> ipstorServer --> trapReg -->
trapSinkSettingTable --> trHost.
The default read-only community is public. The default read-write community is
falcon.
3. Set the Community Name so that you will be allowed to change the
configuration.
4. Click the Get Values button.
5. Select a host to receive traps. You can set up to five hosts to receive traps. If the
value is 0, the host is invalid or not set.
6. In the New Value field, enter the IP address or machine name of the Tivoli host
that will receive traps.
7. Click the Set button to save the configuration in snmpd.conf.
Statistics in Tivoli
In addition to monitoring CDP/NSS components and receiving alerts, you can view
CDP/NSS statistics in NetView. There are two ways to do this:
CDP/NSS
menu
1. Highlight a storage server or Client and select IPStor from the menu.
2. Select the appropriate menu option.
For a Server, you can view:
Memory used
CPU load
SCSI commands
MB read/written
Read/write errors
For a Client, you can view:
SCSI commands
Error report
These reports are provided by CDP/NSS as a convenient way to view statistical
information without having to go through the MIB browser.
You can add your own reports to the menu by using NetView's MIB builder. Refer
to NetView's documentation for details on using the MIB builder.
MIB browser
1. Highlight a storage server or Client and click Tools --> MIB --> Browser.
2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer.
If this is a Client, select ipstorClient.
3. Select a category.
4. Click the Get Values button.
The information is displayed in the bottom section of the dialog.
BMC Patrol 3.4.0
Installation
The software installation media includes software that must be installed on your
BMC Patrol machine. This software adds several CDP/NSS icon options into Patrol
and adds several CDP/NSS MIB items so that you can retrieve information and set
traps.
1. Launch the software installation media.
2. Select Install Products --> Install SNMP for BMC Patrol.
If not automatically launched, navigate to the \SNMP\Patrol directory and run
setup.exe to launch the SNMP install program.
3. Start Patrol when the installation is finished.
4. Click Hosts --> Add on the Patrol main menu and enter the Host Name (IP
preferred), Username (Patrol administrator name of the storage server),
Password (Patrol administrator password of the storage server), and Verify
Password fields to add the storage server.
5. Click Hosts --> Add on the Patrol main menu and input the Host Name,
Username (administrator name of the Patrol machine), Password (administrator
password of the Patrol machine), and Verify Password fields to add the Patrol
Console machine.
6. Click File --> Load KM on the Patrol main menu and load the
IPSTOR_MODULE.kml module.
7. Click File --> Commit KM --> To All Connected Hosts on the Patrol main menu to
send changed knowledge (IPSTOR_MODULE.kml) to all connected agents,
including the storage server and Patrol Console machine.
8. Expand the storage server tree.
You will see three new CDP/NSS sub-trees with several icons on the Patrol
console.
Configuration
You need to define which hosts will receive traps from your storage server(s) and
determine which CDP/NSS components to monitor. To do this:
1. In the Patrol Console, on the Desktop tab, right-click the ServerInfo item in the
IPS_Server subtree of one storage server and select KM Commands -->
trapReg --> trapSinkSettingEntry.
The default read-only community is public. The default read-write community is
falcon.
2. Select a host to receive traps. You can set up to five hosts to receive traps. If the
value is '0', the host is invalid or not set.
3. In the Host fields, enter the IP address or machine name of the host that will
receive traps.
4. In the Community fields, enter the community.
5. Click the Set button to save the configuration in snmpd.conf.
6. In the Patrol Console, on the Desktop tab, right-click the ServerInfo item in the
IPS_Server subtree of one storage server and select KM Commands -->
alarmTable --> alarmEntry.
Set the status value to enable(1) for on or disable(0) for off.
View traps
1. In the Patrol Console, on the Desktop tab, right-click the IPS_Trap_Receiver -->
SNMPTrap_Receiver of the Patrol Console machine and select KM Commands -->
Start Trap Receiver to let the Patrol Console machine start receiving traps.
2. After turning the trap receiver on, you can double-click the SNMP_Traps icon in
the SNMPTrap_Receiver subtree of the Patrol Console machine to get the
results of the traps that have been received.
Statistics in Patrol
In addition to monitoring CDP/NSS components and receiving alerts, you can view
storage server statistics in Patrol. There are two ways to do this:
IPStor icon
1. Highlight a storage server and fully expand the IPS_ProcessMonitor subtree
and the IPS_Server subtree from the storage server.
2. Select the appropriate icon option.
For a Server, you can view:
- Processes Status (Authentication Process, Communication Process, Logger
Process, Self Monitor Process, SNMPD Process, etc.).
To monitor more processes, switch to the KM tab on the Patrol Console,
right-click a process under Knowledge Module --> Application Classes -->
IPS_ProcessMonitor --> Global --> Parameters, and click Properties on the
menu. Check the Active option to have the specified process monitored. Then
switch back to the Desktop tab; the specified process is now visible in the
IPS_ProcessMonitor subtree.
- Server Status (ipsLaAvailLoad, ipsMemAvailSwap and ipsMemAvaiReal)
These reports are provided by CDP/NSS as a convenient way to view statistical
information without having to go through the MIB browser.
MIB browser
1. Highlight a storage server, right-click ServerInfo in the IPS_Server subtree,
and select KM commands.
Several CDP/NSS integrated MIB items are available under KM commands.
2. Click one of the MIB items to retrieve the information related to the storage
server.
Advanced topics
This information applies to all SNMP managers.
The snmpd.conf file
The snmpd.conf file is located in the /usr/local/ipstor/etc/snmp directory of
the storage server and contains SNMP configuration information, including the CDP/
NSS community name and the network over which you are permitted to use SNMP
(the default is the network where your storage server is located).
If your SNMP manager resides on a different network, you will have to modify the
snmpd.conf file before you can implement SNMP support through your SNMP
manager.
In addition, you can modify this file if you want to limit SNMP communication to a
specific subnet or change the community name. The default read-write community is
falcon. This is the only community you should change.
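After editing snmpd.conf (and restarting the SNMPD module so the change takes effect), you can verify that the storage server answers SNMP requests from the manager's subnet; the host name below is illustrative.

    # Query a standard object using the default read-only community
    snmpget -v 2c -c public nss-server.example.com sysName.0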
Use an SNMP configuration for multiple storage servers
To re-use your SNMP configuration for multiple storage servers, go to /usr/
local/ipstor/etc/snmp and copy the following files to the same directory on
each storage server.
snmpd.conf - contains trapSinkSettings
IPStorSNMP.conf - contains trapSettings
Note: In order for the configuration to take effect, you must restart the SNMPD
module on each storage server to which you copied these files.
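A minimal sketch of this procedure from the first storage server, assuming root SSH access to the second server (both host names are illustrative):

    # Copy the SNMP configuration files to a second storage server
    scp /usr/local/ipstor/etc/snmp/snmpd.conf \
        /usr/local/ipstor/etc/snmp/IPStorSNMP.conf \
        root@nss-server2.example.com:/usr/local/ipstor/etc/snmp/
    # Then restart the SNMPD module on nss-server2 so the copied configuration takes effect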
IPSTOR-MIB tree
Once you have loaded the IPSTOR-MIB file, the MIB browser parses it into a tree
hierarchy. The table below describes many of the tables and fields. Refer
to the IPSTOR-MIB.txt file that is in your \\OpenView\snmp_mibs directory for a
complete list.
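As a quick way to explore the tree described below, you can dump the parsed hierarchy or walk a subtree by name once the MIB has been loaded. The host name and the IPSTOR-MIB module name are assumptions for illustration; public is the default read-only community.

    # Print the parsed IPSTOR-MIB tree below the FalconStor enterprise OID
    snmptranslate -m +IPSTOR-MIB -Tp 1.3.6.1.4.1.7368

    # Walk the ipstorServer subtree to retrieve all server information at once
    snmpwalk -v 2c -c public -m +IPSTOR-MIB nss-server.example.com ipstorServer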
Table / Field descriptions
Server Information
serverName
The hostname of the host on which the storage server is running.
loginMachineName
Identifies which storage server you are logged into.
serverVersion
The storage server version and build number.
osVersion
The operating system version of the host on which the storage server is running.
kernelVersion
The kernel version of the host on which the storage server is running.
processorTable
A table containing information about all processors in the host on which the
storage server is running.
processorInfo: The specification of a processor type and power.
memory
The amount of memory in the host on which the storage server is running.
swap
The swap space of the host on which the storage server is running.
netInterfaceTable
A table containing information about all network interfaces in the host on which
the storage server is running.
netInterfaceInfo: The specification containing the MAC address, IP address, and
MTU of a network interface.
FailoverInformationTable
A table containing the failover information which is currently configured of the
storage server.
foName: The property of a failover configuration.
foValue: The setting value of a failover configuration.
foConfType: The Configuration Type of a failover configuration.
foPartner: The Failover Partner of a failover configuration.
foPrimaryIPRsource: The Primary Server IP Resource of a failover
configuration
foSecondaryIPResource: The Secondary Server IP Resource of a failover
configuration.
foCheckInterval: The Self Check Interval of a failover configuration.
foHearbeatInterval: The Heartbeat Interval of a failover configuration.
foRecoverySetting: The Recovery Setting of a failover configuration.
foState: The Failover State of a failover configuration.
foPrimaryCrossLinkIP: The Primary Server CrossLink IP of a failover
configuration.
foSecondaryCrossLinkIP: The Secondary Server CrossLink IP of a failover
configuration.
foSuspended: The Suspended status of a failover configuration
foPowerControl: The Power Control of a failover configuration
fofcWWPN: The Fibre Channel WWPN of a failover configuration.
serverOption
nasOption: Indicates whether the NAS option is enabled or disabled on the
storage server.
fibreChannelOption: Indicates whether the Fibre Channel option is enabled
or disabled on the storage server.
replicationOption: Indicates whether the Replication option is enabled or
disabled on the storage server.
syncMirroringOption: Indicates whether the Synchronized Mirroring option is
enabled or disabled on the storage server.
timemarkOption: Indicates whether the TimeMark option is enabled or
disabled on the storage server.
zeroimpactOption: Indicates whether the Zero Impact Backup option is
enabled or disabled on the storage server.
MTCPVersion
The MTCP Version which the storage server uses.
performanceTable
A table containing information about the performance settings of the host on
which the storage server is running.
performanceMirrorSyncTh: The Mirror Synchronization Throttle of the
performance table.
performanceSyncMirrorInterval: The Synchronize out-of-sync mirrors interval
of the performance table.
performanceSyncMirrorRetry: The Synchronize out-of-sync mirrors retry
times of the performance table.
performanceSyncMirrorUpnum: The number of out-of-sync mirrors
synchronized at each interval of the performance table.
performanceInitialMirrorSync: The option to start an initial synchronization
when a mirror is added.
performanceIncludeReplicaMirror: The option to include replica mirrors in
the automatic synchronization process.
performanceReplicationMicroScan: Indicates whether the Replication
MicroScan option is enabled or disabled on the storage server.
serverRole
The storage server role.
smioption
The storage server SMI-S option.
ServerIPaliasTable
A table containing the information of the IP alias in the host which the storage
server is running.
ServerIPAliasIP: The storage server IP Alias
PhysicalResources
numOfAdapters
The amount of physical adapters configured by the storage server.
numOfDevices
The amount of physical devices configured by the storage server.
scsiAdapterTable
A table containing the information of all the installed SCSI adapters of the
storage server.
adapterNumber: The SCSI adapter number.
adapterInfo: The model name of the SCSI adapter.
scsiDeviceTable
A table containing all the SCSI devices of the storage server.
deviceNo: The sequential digit number used as an index key of the device table.
deviceType: Represents the access type of the device attached to the
storage server.
vendorID: The product vendor ID.
produtcID: The product model name.
firmwareRev: The firmware version of the device.
adapterNo: The configured SCSI adapter number.
channelNo: The configured SCSI channel number.
scsiID: The configured SCSI ID.
lun: The configured SCSI LUN number.
totalSectors: The amount of sectors or blocks of the device.
sectorSize: The size of bytes for each sector or block.
totalSize: The size of the device represented in megabytes.
configStatus: Represents the attaching status of the device.
totalSizeQuantity: The quantity size of the device.
totalSizeUnit: The size unit of the device. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
totalSectors64: The amount of sectors or blocks of the device.
totalSize64: The size of the device represented in megabytes.
StoragePoolTable
A table containing the information of Storage Pool of the storage server.
PoolName: The name of the Storage Pool.
PoolID: The Pool ID of the Storage Pool.
PoolType: The Pool Type of the Storage Pool.
DeviceCount: The device Count in the Storage Pool.
PoolCount: The Storage Pool counts.
PoolTotalSize: The total Size of the Storage Pool.
PoolUsedSize: The amount of Storage Pool space used.
PoolAvailableSize: The available Size of the Storage Pool.
PoolTotalSizeQuantity: The total Size quantity of the Storage Pool.
PoolTotalSizeUnit: The total Size unit of the Storage Pool. 0 = KB. 1 = MB. 2
= GB. 3 = TB.
PoolUsedSizeQuantity: The quantity used of the Storage Pool.
PoolUsedSizeUnit: The amount of space used unit of the Storage Pool. 0 =
KB. 1 = MB. 2 = GB. 3 = TB.
PoolAvailableSizeQuantity: The available Size quantity of the Storage Pool.
PoolAvailableSizeUnit: The available Size unit of the Storage Pool. 0 = KB. 1
= MB. 2 = GB. 3 = TB.
PoolTatalSize64: The total Size of the Storage Pool.
PoolUsedSize64: The amount of Storage Pool space used.
PoolAvailableSize64: The available Size of the Storage Pool.
LogicalResources
numOfLogicalResources
The number of logical resources, including SAN, NAS, and Replica devices,
available in the storage server.
SnapshotReservedArea
numOfSnapshotReserved: The amount of the shareable snapshot reserved
areas.
snapshotReservedTable : Table containing the snapshot reserved areas
information.
ssrName : The name of the snapshot reserved area.
ssrDeviceName : The physical device name of the snapshot reserved area.
ssrSCSIAddress : The SCSI address of the physical device which the
snapshot reserved area created.
ssrFirstSector : The first sector of the snapshot reserved area.
ssrLastSector : The last sector of the snapshot reserved area.
ssrTotalSectors : The amount of sectors that the snapshot reserved area
created.
ssrSize : The amount of resource size which is representing with megabyte
unit of the snapshot reserved area.
ssrSizeQuantity : The amount quantity of resource size of the snapshot
reserved area.
ssrSizeUnit : The resource size unit of the snapshot reserved area. The size
unit of the device. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
ssrFirstSector64 : The first sector of the snapshot reserved area.
ssrLastSector64 : The last sector of the snapshot reserved area.
ssrTotalSector64 : The amount of sectors that the snapshot reserved area
created.
ssrSize64 : The amount of resource size which is representing with
megabyte unit of the snapshot reserved area.
Logical Resources --> SANResources
numOfSANResources
The number of SAN resources available on the storage server.
SANResourceTable
A table containing the SAN resources information.
sanResourceID : The SAN resource ID assigned by the storage server.
sanResourceName : The SAN resource name created by the user.
srAllocationType : Represents the resource type when user allocating the SAN
device.
srTotalSectors : The amount of sectors allocated by the SAN resource.
srTotalSize : The amount of device size which is representing with megabyte
unit of the SAN resource.
srConfigStatus: Represents the attaching status of the SAN resource.
srMirrorSyncStatus: Represents the mirror synchronization status of the SAN
resource.
srReplicaDevice : Represents the target replica server and device as the
format <hostname of target>:<virtual device id>, if the replication option is
enabled of the SAN resource
srReplicatingSchedule: Represents the current status of the replicating
schedule(On-schedule, Suspended, or N/A) set for the SAN resource.
srSnapshotCopyStatus : The snapshot copy status of the SAN resource.
srPhysicalAllocLayoutTable : Table containing the physical layout information
for the SAN resources.
srpaSanResourceName : The SAN resource name created by the user.
srpaSanResourceID : The SAN resource ID assigned by the storage server.
srpaName : The physical device name.
srpaType: Represents the type(Primary, or Mirror) of the physical layout.
srpaAdapterNo : The SCSI adapter number of the physical device.
srpaChannelNo : The SCSI channel number of the physical device.
srpaScsiID : The SCSI ID of the physical device.
srpaLun : The SCSI LUN number of the physical device.
srpaFirstSector : The first sector of the physical device which is allocated by
the SAN resource.
srpaLastSector : The last sector of the physical device which is allocated by
the SAN resource.
srpaSize : The amount of the allocated size which is representing with
megabyte unit within a physical device.
srpaSizeQuantity : The amount of the allocated size quantity within a
physical device.
srpaSizeUnit : The amount of the allocated size unit within a physical device.
The size unit of the device. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srpaFirstSector64 : The first sector of the physical device which is allocated
by the SAN resource.
srpaLastSector64 : The last sector of the physical device which is allocated
by the SAN resource.
srpaSize64 : The amount of the allocated size which is representing with
megabyte unit within a physical device.
srClientInfoTable : Table containing the SAN clients information.
srClientNo : The SAN client ID assigned by the storage server.
srcName : The SAN client name assigned by the storage server.
srcSANResourceID : SAN resource ID assigned by the storage server.
srcSANResourceName : The SAN resource name created by the user.
srcAdapterNo : The adapter number of the SAN client.
srcChannelNo : The channel number of the SAN client.
srcScsiID : The SCSI ID of the SAN client.
srcLun : The SCSI LUN number of the SAN client.
srcAccess : SAN resource accessing mode assigned to the SAN client.
srcConnAccess : Identifies the connecting and accessing status with a
resource of the SAN client.
srFCClientInfoTable : Table containing the Fibre Channel clients
information.
srFCClientNo : Fibre Channel client ID assigned by the storage server.
srFCName : Fibre Channel client name assigned by the storage server.
srFCSANResourceID : SAN resource ID assigned by the storage server.
srFCSANResourceName : The SAN resource name created by the user.
srFCInitatorWWPN : The world wide port name (WWPN) of the Fibre
Channel client's initiator HBA.
srFCTargetWWPN : The world wide port name (WWPN) of the Fibre
Channel client's target HBA.
srFCLun : The SCSI LUN number of the Fibre Channel client.
srFCAccess : The SAN resource accessing mode assigned to the Fibre
Channel client.
srFCConnAccess : Identifies the connecting and accessing status with a
resource of the Fibre Channel client.
srSnapShotTable : Table containing the snapshot resources created by the
SAN resource.
srSnapShotResourceID : SAN resource ID assigned by storage server.
srSnapShotResourceName : SAN resource name created by the user.
srSnapShotOption : The status represents the snapshot option is enable
or disable of the SAN resource.
srSnapShotSize : The allocated size when creating the SAN resource at
first time.
srSnapShotThreshold : The value represents the threshold setting which
is in percentage(%) format of the SAN resource.
srSnapShotReachTh : The policy is setting for expanding resource
automatically or manually while reaching the threshold.
srSnapShotIncSize : The incremental size for each time when is running
out the resource. This is meaningful when expanding resource is
automatically.
srSnapShotMaxSize : The maximum resource size which is represented
in megabyte unit is allowed for allocating.
srSnapShotUsedSize64 : The resource size which is representing in kilobyte
unit have been used.
srSnapShotFreeSize64 : The free resource size which is representing in
megabyte unit before reaching the threshold.
srSnapShotReclaimPolicy : The status represents the snapshot Reclaim
option is enabled or disabled of the SAN resource.
srSnapShotReclaimTime : The initial time when the snapshot Reclaim
option is enabled of the SAN resource.
srSnapShotReclaimInterval : The schedule interval to start the snapshot
Reclaim of the SAN resource.
srSnpaShotReclaimWaterMark : The threshold for the minimum amount of
space that can be reclaimed per TimeMark of the SAN resource.
srSnapShotReclaimMaxTime : The maximum time for the reclaim process of
the SAN resource.
srSnapShotShrinkPolicy : The status represents the snapshot Shrink option
is enabled or disabled of the SAN resource.
srSnapShotShrinkThresHold : The minimum disk space to shrink the
snapshot resource of the SAN resource.
srSnapShotShrinkMinSize : The minimum size for the snapshot resource to
shrink.
srSnapShotShrinkMinSizeQuantity : The minimum size quantity for the
snapshot resource to shrink.
srSnapShotShrinkMinSizeUnit : The minimum size unit for the snapshot
resource to shrink. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srSnapShotShrinkMinSize64 : The minimum size for the snapshot resource
to shrink.
srSnapShotResourceStatus : The snapshot resource status of the SAN
resource.
srTimeMarkTable : Table containing the TimeMark resources created by the
SAN resource.
srTimeMarkResourceID : The SAN resource ID assigned by the storage
server.
srTimeMarkResourceName : The SAN resource name created by the user.
srTimeMarkOption : The status represents the timemark option is enable or
disable of the SAN resource.
srTimeMarkCounts : The maximum timemarks that is allowed to create of
the SAN resource.
srTimeMarkSchedule : The time interval at which a new TimeMark is created.
srTimeMarkLastTimeStamp : The timestamp of the most recently created
TimeMark.
srTimeMarkSnapshotImage : The time of day at which a snapshot image is
created automatically.
srTimeMarkSnapshotNotificationOption : This option triggers the snapshot
notification schedule.
srTimeMarkReplicationOption : The replication option after the timemark is
taken.
srBackupTable : Table containing the backup resources created by the SAN
resource.
srBackupResourceID : The SAN resource ID assigned by the storage
server.
srBackupResourceName : The SAN resource name created by the user.
srBackupOption : The status represents the backup option is enable or
disable of the SAN resource.
srBackupWindow : The time of day during which a backup session can be opened.
srBackupSessionLen : The time interval allowed for each backup session.
srBackupRelativeTime : The time interval to wait before closing a backup
session that is inactive.
srBackupWaitTime : The time interval, represented in minutes, to wait
before closing the backup session after completion.
srBackupSelectCriteria : The snapshot image selection criteria that could be
new or latest for the backup session. New represents that it always creates
new snapshot image for backup, and latest represents that it uses the last
created snapshot image for backup.
srBackupRawDeviceName : The SAN Backup Resource Raw Device Name
created by the user.
srReplicationTable : Table containing the replication resources created by
the SAN resource.
srReplicationResourceID : The SAN resource ID assigned by the storage
server.
srReplicationResourceName : The SAN resource name created by the user.
srReplicationOption : The status represents the replication option is enable
or disable of the SAN resource.
srReplicaServer : The target replica server name.
srReplicaDeviceID : The target replica device ID.
srReplicaSchedule : Represents Current status of the replicating
schedule(On-schedule, Suspended, or N/A) set for the SAN resource.
srReplicaWatermark : The watermark sets to generate one new replication
automatically.
srReplicaWatermarkRetry : The retry interval, represented in minutes, if the
replication failed.
srReplicaTime : The time of day at which a new replication is created.
srReplicaInterval : The time interval at which a new replication is created.
srReplicationContinuousMode : The status represents the Continuous Mode
of Replication is enable or disable.
srReplicationCreatePrimaryTimeMark : Allows you to create the primary
TimeMark when a replica TimeMark is created.
srReplicaSyncTimeMark : Allows you to synchronize the replica TimeMark
when a primary TimeMark is created.
srReplicationProtocol : The protocol used for replication.
srReplicationCompression : The status represents the Compression option
is enable or disable of Replication.
srReplicationEncryption : The status represents the Encryption option is
enable or disable of Replication.
srReplicationMicroScan : The status represents the MicroScan option is
enable or disable of Replication.
srReplicationSyncPriority : The Priority setting when Replication
Synchronize of the SAN resource.
srReplicationStatus : The Replication status of the SAN resource.
srReplicationMode : The Replication mode of the SAN resource.
srReplicationContinuousResourceID : The Continuous Replication
Resource ID of the SAN resource.
srReplicationContinuousResourceUsage : The Continuous Replication
Resource Usage of the SAN resource.
srReplicationDeltaData : The Accumulated Delta Data of replication of the
SAN resource.
srReplicationUseExistTM : When Continuous Mode is disabled, the option
to use an existing TimeMark for the replication.
srReplicationPreserveTm : When Continuous Mode is disabled, the option
to preserve the TimeMark for the replication.
srReplicaLastSuccessfulSyncTime : The last successful synchronize time of
the replication.
srReplicaAverageThroughput : The average throughput (MB/s) of the
replication.
srReplicaAverageThroughputQuantity : The average throughput quantity of
the replication.
srReplicaAverageThroughputUnit : The average throughput unit of the
replication. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srCacheTable : Table containing the cache resource created by the SAN
device.
srCacheResourceID : The SAN resource ID assigned by the storage server.
srCacheResourceName : The SAN resource name created by the user.
srCacheOption : The status represents the cache option is enable or disable
of the SAN resource.
srCacheSuspend : The cache resource is currently suspended or not.
srCacheTotalSize : The allocated size when creating the cache resource.
srCacheFreeSize : The free resource size which is representing in
megabyte unit before reaching the maximum resource size.
srCacheUsage : The percentage of the used resource size.
srCacheThresHold : The data needs to be in the cache before beginning
flushing the cache.
srCacheFlushTime : The number of milliseconds before cache begins to
flush when below the data threshold level.
srCacheFlushCommand : The outstanding commands will be sent at one
time during the flush process.
srCacheSkipWriteCommand : This option allows the system to skip multiple
pending write commands targeted for the same block.
srCacheFlushSpeed : The flush speed will be sent at one time during the
flush process.
srCacheTotalSizeQuantity : The allocated size quantity when creating the
cache resource.
srCacheTotalSizeUnit : The allocated size unit when creating the cache
resource. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srCacheFreeSizeQuantity : The free resource size quantity before reaching
the maximum resource size.
srCacheFreeSizeUnit : The free resource size unit before reaching the
maximum resource size. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srCacheOwnResourceID : The Cache resource ID assigned by the storage
server.
srCacheTotalSize64 : The allocated size when creating the cache resource.
srCacheFreeSize64 : The free resource size which is representing in
megabyte unit before reaching the maximum resource size.
srCacheStatus : Current safecache device's status of the SAN resource.
srWriteCacheproperty : The property represents the write cache is enabled
or disabled of the SAN resource.
srMirrorTable : Table containing the mirror property created by the SAN
device.
srMirrorResourceID : The SAN resource ID that enables the mirror property.
srMirrorType : The mirror type when a SAN resource enable the mirror
property.
srMirrorSyncPriority : The mirror synchronization priority when a SAN
resource enable the mirror property.
srMirrorSuspended : Whether the mirror is suspended.
srMirrorThrottle : The mirror throttle value for SAN resource.
srMirrorHealthMonitoringOption : The status represents the mirror health
monitoring option is enable or disable.
srMirrorHealthCheckInterval : The Interval to Check and report mirror health
status.
srMirrorMaxLagTime : The Maximum acceptable lag time for mirror I/O.
srMirrorSuspendThPercent: Suspends mirroring when the threshold of
failure reaches the percentage of the failure conditions.
srMirrorSuspendThIOnum: Suspends mirroring when the outstanding IOs is
greater than or equal to the threshold.
srMirrorRetryPolicy : The status represents the mirror synchronization retry
policy is enable or not.
srMirrorRetryInterval : The mirror synchronization retry at specified interval.
srMirrorRetryActivity : The mirror synchronization retry when I/O activity is
below or at threshold.
srMirrorRetryTimes : The maximum mirror synchronization retry times.
srMirrorSychronizationStatus : Represents the mirror synchronization status
of the SAN resource.
srMirrorAlterReadMirror : Represents the alternative read mirror option of
the SAN resource.
srMirrorAverageThroughput : The average throughput (MB/s) of the mirror
synchronization operation.
srMirrorAverageThroughputQuantity : The average throughput quantity of
the mirror synchronization operation.
srMirrorAverageThroughtputUnit : The average throughput unit of the mirror
synchronization operation. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srThinProvisionTable : Table containing the Thin Provision of the SAN
device.
srThinProvisionOption : Represents the Thin Provisioning option is enable or
disable of the resource.
srThinProvisionCurrAllocSize : Current Allocated Size of the Thin Provision
resource on the storage server.
srThinProvisionUsageSize : Current usage size of the Thin Provision
resource.
srThinProvisionUsagePercentage : Current usage percentage of the Thin
Provision resource.
srThinProvisionCurrAllocSizeQuantity : Current Allocated Size quantity of
the Thin Provision resource on the storage server.
srThinProvisionCurrAllocSizeUnit : Current Allocated Size unit of the Thin
Provision resource on the storage server. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srThinProvisionUsageSizeQuantity : Current usage size quantity of the Thin
Provision resource.
srThinProvisionUsageSizeUnit : Current usage size unit of the Thin
Provision resource. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srThinProvisionCurrAllocSize64 : Current Allocated Size of the Thin
Provision resource on the storage server.
srThinProvisionUsageSize64 : Current usage size of the Thin Provision
resource.
srCDPJournalTable : Table containing the CDP Journal resources created by
the SAN resource.
srCDPJournalResourceID : The CDP Journal ID assigned by the storage
server.
srCDPJournalSANResourceID : The CDP Journal SAN resource ID
assigned by the storage server.
srCDPJournalOption : The status represents the CDP Journal option is
enable or disable of the SAN resource.
srCDPJournalTotalSize : The CDP Journal Total size of the SAN resource.
srCDPJournalStatus : The status represents current CDP Journal of the
SAN resource.
srCDPJournalPerformanceLevel : The setting Performance level for the
CDP Journal of the SAN resource.
srCDPJournalTotalSizeQuantity : The CDP Journal Total size quantity of the
SAN resource.
srCDPJournalTotalSizeUnit : The CDP Journal Total size unit of the SAN
resource. The size unit of the device. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srCDPJournalTotalSize64 : The CDP Journal Total size of the SAN
resource.
srCDPJournalAvalibleTimerange : The CDP Journal available time range of
the SAN resource.
srCDPJournalUsageSize : The CDP Journal usage size(MB) of the SAN
resource.
srCDPJournalUsagePercentage : The CDP Journal usage percentage of
the SAN resource.
srCDPJournalUsageQuantity : The CDP Journal Usage size quantity of the
SAN resource.
srCDPJournalUsageUnit : The CDP Journal Usage size unit of the SAN
resource. The size unit of the device. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srCDPJournalUsageSize64 : The CDP Journal Usage size of the SAN
resource.
srNearLineMirrorTable : Table containing the Near-Line Mirror property of
the SAN device.
srNearLineMirrorRemoteServerName : The remote server name of NearLine mirror resource sets on the storage server.
srNearLineMirrorRemoteServerAlias : The remote server Alias of Near-Line
mirror resource sets on the storage server.
srNearLineMirrorRemoteID : The remote resource ID of Near-Line mirror
resource sets on the storage server.
srNearLineMirrorRemoteGUID : The remote resource GUID of Near-Line
mirror resource sets on the storage server.
srNearLineMirrorRemoteSN : The remote resource serial number of Near-Line mirror resource sets on the storage server.
srTotalSizeQuantity : The device size quantity of the SAN resource.
srTotalSizeUnit : The device size unit of the SAN resource.
srTotalSectors64 : The amount of sectors allocated by the SAN resource.
srTotalSize64 : The device size, in megabytes, of the SAN resource.
srISCSIClientInfoTable : Table containing the iSCSI clients information.
srISCSIClientNO : The iSCSI client ID assigned by the storage server.
srISCSIName : The iSCSI client name assigned by the storage server.
srISCSISANResourceID : The SAN resource ID assigned by the storage
server.
srISCSISANResourceName : The SAN resource name created by the user.
srISCSIAccessType : The resource access type of the iSCSI client.
srISCSIConnectAccess : Identifies the iSCSI client's connection and access status with a resource.
srPhysicalTotalAllocLayoutTable : Table containing the total physical layout
information for the SAN resources.
srpaAllocSANResourceName : The SAN resource name created by the
user.
srpaAllocName : The physical device name.
srpaAllocType : Represents the type (Primary or Mirror) of the physical layout.
srpaAllocAdapterNo : The SCSI adapter number of the physical device.
srpaAllocChannelNo : The SCSI channel number of the physical device.
srpaAllocScsiID : The SCSI ID of the physical device.
srpaAllocLun : The SCSI LUN number of the physical device.
srpaAllocFirstSector : The first sector of the physical device which is
allocated by the SAN resource.
srpaAllocLastSector : The last sector of the physical device which is
allocated by the SAN resource.
srpaAllocFirstSector64 : The first sector of the physical device which is
allocated by the SAN resource.
srpaAllocLastSector64 : The last sector of the physical device which is
allocated by the SAN resource.
srpaAllocSize : The allocated size, in megabytes, within a physical device.
srpaAllocSizeQuantity : The allocated size quantity within a physical device.
srpaAllocSizeUnit : The allocated size unit within a physical device. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srpaAllocSize64 : The allocated size, in megabytes, within a physical device.
srHotZonePrefetchInfoTable : Table containing the HotZone Prefetch
information.
srHotZonePrefetchSANResourceID : The SAN resource ID assigned by the storage server.
srHotZonePrefetchMaximumChains : The maximum number of sequential read chains to detect.
srHotZonePrefetchMaximumReadAhead : The maximum size, in KB, to read ahead.
srHotZonePrefetchReadAhead : The size, in KB, of the read command issued when reading ahead.
srHotZonePrefetchChainTimeout : The time before the chain is removed and
the readahead buffers are freed.
srHotZoneReadCacheInfoTable : Table containing the HotZone Read Cache
information.
srHotZoneCacheResourceID : The resource ID assigned by the storage server.
srHotZoneCacheSANResourceID : The SAN resource ID assigned by the storage server.
srHotZoneCacheTotalSize : The device size, in megabytes, of the HotZone read cache resource.
srHotZoneCacheStatus : The current status of the HotZone read cache of the SAN resource.
srHotZoneCacheSuspended : Indicates whether the HotZone read cache of the SAN resource is currently suspended.
srHotZoneCacheAccesType : The zone's access type policy of the SAN resource.
srHotZoneCacheAccessIntensity : The access intensity that determines how the zone is accessed.
srHotZoneCacheMinimumStayTime : The minimum time a zone stays in the HotZone before it can be swapped out.
srHotZoneCacheEachZoneSize : The size of each zone setting.
srHotZoneCacheTotalZones : The total number of zones allocated for the SAN resource.
srHotZoneCacheUsedZones : The number of zones currently used by the SAN resource.
srHotZoneCacheHitRatio : The current hit ratio of the HotZone read cache of the SAN resource.
srHotZoneCacheTotalSizeQuantity : The device size quantity of the HotZone read cache resource.
srHotZoneCacheTotalSizeUnit : The device size unit of the HotZone read cache resource. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
srHotZoneCacheTotalSize64 : The device size, in megabytes, of the HotZone read cache resource.
Logical Resources --> replicaResources
numOfReplica
The amount of replica resources created by the storage server.
ReplicaResourceTable
A table containing the replica resources.
rrVirtualID : The resource ID assigned by the storage server.
rrVirtualName : The resource name created by the user.
rrAllocationType : Represents the resource type selected when the user allocated the resource.
rrSectors : The amount of sectors allocated by the resource.
rrTotalSize : The device size, in megabytes, of the resource.
rrConfigurationStatus : Represents the attaching status of the resource.
rrGUID : The GUID string of the replica resource.
rrPrimaryVirtualID : Represents the source replication server and device in the format <hostname of source>:<virtual device id>, if the replication option is enabled for the resource.
rrReplicationStatus : Represents the current status (Replication failed, New, Idle, or Merging) of the replication schedule.
rrLastStartTime : The latest timestamp of the replication.
rrMirrorSyncStatus : Represents the mirror synchronization status of the
resource.
rrWriteCache : Indicates whether the write cache option is enabled or disabled for the resource.
rrThinProvisionOption : Indicates whether the Thin Provisioning option is enabled or disabled for the resource.
rrThinProvisionCurrAllocSize : Current Allocated Size of the resource which
enables Thin Provisioning.
rrThinProvisionUsageSize : Current usage size of the resource which
enables Thin Provisioning.
rrThinProvisionUsagePercentage : Current usage percentage of the
resource which enables Thin Provisioning.
rrTotalSizeQuantity : The amount of device size quantity of the resource.
rrTotalSizeUnit : The device size unit of the resource. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
rrThinProvisionCurrAllocSizeQuantity : Current Allocated Size quantity of
the resource which enables Thin Provisioning.
rrThinProvisionCurrAllocSizeUnit : Current Allocated Size unit of the
resource which enables Thin Provisioning. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
rrThinProvisionUsageSizeQuantity : Current usage size quantity of the
resource which enables Thin Provisioning
rrThinProvisionUsageSizeUnit : Current usage size unit of the resource
which enables Thin Provisioning. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
rrSectors64 : The amount of sectors allocated by the resource.
rrTotalSize64 : The device size, in megabytes, of the resource.
rrThinProvisionCurrAllocSize64 : Current Allocated Size of the resource which
enables Thin Provisioning.
rrThinProvisionUsageSize64 : Current usage size of the resource which
enables Thin Provisioning.
rrLastSuccessSyncTime : The last successful synchronize timestamp of the
replication.
rrAverageThroughput : The average throughput (MB/s) of the replication.
rrAverageThroughputQuantity : The average throughput quantity of the
replication.
rrAverageThroughputUnit : The average throughput unit of the replication. 0 =
KB. 1 = MB. 2 = GB. 3 = TB
ReplicaPhyAllocLayoutTable
A table containing the physical layout information for the replica resources.
rrpaVirtualID : The replica resource ID assigned by the storage server.
rrpaVirtualName : The replica resource name created by the user.
rrpaName : The physical device name.
rrpaType : Represents the type (Primary or Mirror) of the physical layout.
rrpaSCSIAddress : The SCSI address with <Adapter:Channel:SCSI:LUN>
format of the replica resource.
rrpaFirstSector : The first sector of the physical device which is allocated by the
replica resource.
rrpaLastSector : The last sector of the physical device which is allocated by the
replica resource.
rrpaSize : The allocated size, in megabytes, within a physical device.
rrpaSizeQuantity : The allocated size quantity within a physical device.
rrpaSizeUnit : The allocated size unit within a physical device. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
rrpaFirstSector64 : The first sector of the physical device which is allocated by the replica resource.
rrpaLastSector64 : The last sector of the physical device which is allocated by the replica resource.
rrpaSize64 : The allocated size, in megabytes, within a physical device.
Logical Resources --> Snapshot Group Resources
numOfGroup
The amount of snapshot groups created by the storage server.
snapshotgroupInfoTable
snapshotgroupName : The user-created snapshot group resource name.
snapshotgroupType : The property of the snapshot group, which can be one of the following types: timemark, backup, replication, timemark + backup, timemark + replication, backup + replication, or timemark + backup + replication.
snapshotgroupTimeMarkInfoTable : Table containing the timemark
properties of snapshot groups.
snapshotgroupTimeMarkGroupID : The snapshot group resource ID
assigned by the storage server.
snapshotgroupTimeMarkOption : Indicates whether the timemark option is enabled or disabled for the snapshot group resource.
snapshotgroupTimeMarkCounts : The maximum number of timemarks that can be created for the snapshot group resource.
snapshotgroupTimeMarkSchedule : The time interval at which a new timemark is created.
snapshotgroupTimeMarkSnapshotImage : The time of day at which a snapshot image is created automatically.
snapshotgroupTimeMarkSnapshotNotificationOption : The option of
triggering snapshot notification schedule.
snapshotgroupTimeMarkReplicationOption : The replication option after the
timemark is taken.
snapshotgroupBackupInfoTable : Table containing the backup properties of snapshot groups.
snapshotgroupBackupGroupID : The snapshot group resource ID assigned by the storage server.
snapshotgroupBackupOption : Indicates whether the backup option is enabled or disabled for the snapshot group resource.
snapshotgroupBackupWindow : The time of day during which a backup session can be opened.
snapshotgroupBackupSessionLen : The time interval allowed for each backup session.
snapshotgroupBackupRelativeTime : The time to wait before closing a backup session that is inactive.
snapshotgroupBackupWaitTime : The time, in minutes, to wait before closing the backup session after completion.
snapshotgroupBackupSelectCriteria : The snapshot image selection criteria
that could be new or latest for the backup session. New represents that it
always creates new snapshot image for backup, and latest represents that it
uses the last created snapshot image for backup.
snapshotgroupReplicationInfoTable : Table containing the replication properties of snapshot groups.
snapshotgroupReplicationGroupID : The snapshot group resource ID assigned by the storage server.
snapshotgroupReplicationOption : Indicates whether the replication option is enabled or disabled for the snapshot group resource.
snapshotgroupReplicaServer : The target replica server name.
snapshotgroupReplicaGroupID : The target replica group ID.
snapshotgroupReplicaWatermark : The watermark at which a new replication is generated automatically.
snapshotgroupReplicaTime : The time of day at which a new replication is created.
snapshotgroupReplicaInterval : The time interval at which a new replication is created.
snapshotgroupReplicawatermarkRetry : The retry interval, in minutes, if the replication fails.
snapshotgroupReplicaContinuousMode : Indicates whether the Continuous Mode of Replication is enabled or disabled.
snapshotgroupReplicaCreatePrimaryTimeMark : Allows you to create the primary TimeMark when a replica TimeMark is created.
snapshotgroupReplicaSyncTimeMark : Allows you to synchronize the replica TimeMark when a primary TimeMark is created.
snapshotgroupReplicaProtocol : The protocol that Replication uses.
snapshotgroupReplicaCompression : Indicates whether the Compression option is enabled or disabled for Replication.
snapshotgroupReplicaEncryption : Indicates whether the Encryption option is enabled or disabled for Replication.
snapshotgroupReplicaMicroScan : Indicates whether the MicroScan option is enabled or disabled for Replication.
snapshotgroupReplicaSyncPriority : The priority setting for Replication synchronization of the SAN resource.
snapshotgroupReplicaMode : The Replication mode of the SAN resource.
snapshotgroupReplicaUseExistTM : When Continuous Mode is disabled, the option to use an existing TimeMark for the replication.
snapshotgroupReplicaPreserveTM : When Continuous Mode is disabled, the option to preserve the TimeMark of the replication.
snapshotgroupCDPInfoTable
A table containing the CDP properties of snapshot groups.
snapshotgroupCDPInfoGroupID : The snapshot group resource ID assigned by
the storage server.
snapshotgroupCDPInfoOption : Indicates whether the snapshot group CDP Journal option is enabled or disabled on the storage server.
snapshotgroupCDPInfoTotalSize : The total size of snapshot group CDP
Journal of the storage server.
snapshotgroupCDPInfoStatus : The status of the snapshot group CDP Journal
of the storage server.
snapshotgroupCDPInfoPerformanceLevel : The performance level setting of
the snapshot group CDP Journal of the storage server.
snapshotgroupCDPInfoAvailableTimerange : The available time range of the
snapshot group CDP Journal of the storage server.
snapshotgroupCDPInfoUsageSize : The usage size(MB) of snapshot group
CDP Journal of the storage server.
snapshotgroupCDPInfoUsagePercent: The usage percentage of snapshot
group CDP Journal of the storage server.
snapshotgroupCDPInfoTotalSizeQuantity : The total size quantity of snapshot
group CDP Journal of the storage server.
snapshotgroupCDPInfoTotalSizeUnit : The total size unit of snapshot group
CDP Journal of the storage server. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
snapshotgroupCDPInfoTotalSize64 : The total size 64 bit long of snapshot
group CDP Journal of the storage server.
snapshotgroupCDPInfoUsageSizeQuantity : The usage size quantity of
snapshot group CDP Journal of the storage server.
snapshotgroupCDPInfoUsageSizeUnit : The usage size unit of snapshot group
CDP Journal of the storage server. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
snapshotgroupCDPInfoUsageSize64 : The usage size 64 bit long of snapshot
group CDP Journal of the storage server.
snapshotgroupSafeCacheInfoTable
A table containing the safecache properties of snapshot groups.
snapshotgroupSafeCacheInfoGroupID : The snapshot group resource ID
assigned by the storage server.
snapshotgroupSafeCacheInfoOption : Indicates whether the snapshot group SafeCache option is enabled or disabled on the storage server.
snapshotgroupSafeCacheInfoSuspend : Indicates whether the group SafeCache resource is currently suspended.
snapshotgroupSafeCacheInfoTotalSize : The size allocated when creating the cache resource.
snapshotgroupSafeCacheInfoFreeSize : The free resource size, in megabytes, before reaching the maximum resource size.
snapshotgroupSafeCacheInfoUsage : The percentage of the resource size that is used.
snapshotgroupSafeCacheInfoThreshold : The amount of data that needs to be in the cache before the cache begins flushing.
snapshotgroupSafeCacheInfoFlushTime : The number of milliseconds before the cache begins to flush when below the data threshold level.
snapshotgroupSafeCacheInfoSkeipWriteCommands : This option allows the system to skip multiple pending write commands targeted for the same block.
snapshotgroupSafeCacheInfoFlushSpeed : The flush speed; this determines how much is sent at one time during the flush process.
snapshotgroupSafeCacheInfoTotalSizeQuantity : The allocated size quantity
when creating the cache resource.
snapshotgroupSafeCacheInfoTotalSizeUnit : The allocated size unit when
creating the cache resource. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
snapshotgroupSafeCacheInfoFreeSizeQuantity : The free resource size
quantity before reaching the maximum resource size.
snapshotgroupSafeCacheInfoFreeSizeUnit : The free resource size unit before
reaching the maximum resource size. 0 = KB. 1 = MB. 2 = GB. 3 = TB.
snapshotgroupSafeCacheInfoResourceID : The Cache resource ID assigned
by the storage server.
snapshotgroupSafeCacheInfoTotalSize64 : The allocated size when creating
the cache resource.
snapshotgroupSafeCacheInfoFreeSize64 : The free resource size, in megabytes, before reaching the maximum resource size.
snapshotgroupSafeCacheInfoStatus : The status of the snapshot group
safecache of the storage server.
snapshotgroupMembers
The number of snapshot group members on the storage server.
snapshotgroupAssignClients : The number of clients assigned to the snapshot group on the storage server.
snapshotgroupCacheOption : Indicates whether the snapshot group cache option is enabled or disabled on the storage server.
snapshotgroupReplicationOption : Indicates whether the snapshot group replication option is enabled or disabled on the storage server.
snapshotgroupTimeMarkOption : Indicates whether the snapshot group timemark option is enabled or disabled on the storage server.
snapshotgroupCDPOption : Indicates whether the snapshot group CDP option is enabled or disabled on the storage server.
snapshotgroupBackupOption : Indicates whether the snapshot group backup option is enabled or disabled on the storage server.
snapshotgroupSnapShotOption : Indicates whether the snapshot group snapshot notification option is enabled or disabled on the storage server.
snapshotgroupMemberTable
snapshotgroupMemberTableGroupID: The snapshot group resource ID
assigned by the storage server.
snapshotgroupMemberTableName : Virtual resource name created by the user.
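As a quick check that these objects are being served, you can query the storage server with a standard SNMP client such as the Net-SNMP snmpwalk utility. The following sketch is illustrative only: the server address (10.1.1.1) and community string (public) are placeholders for your own SNMP settings, and object names such as srThinProvisionUsagePercentage resolve symbolically only if the FalconStor MIB file shipped with your release has been copied into the Net-SNMP MIB directory (otherwise numeric OIDs are shown).
# snmpwalk -v 2c -c public 10.1.1.1 enterprises
# snmpwalk -v 2c -c public -m ALL 10.1.1.1 enterprises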
Email Alerts
Email Alerts is a unique FalconStor customer support utility that proactively identifies
and diagnoses potential system or component failures and automatically notifies
system administrators via email.
With Email Alerts, the performance and behavior of servers can be monitored so
that system administrators are able to take corrective measures within the shortest
amount of time, ensuring optimum service uptime and IT efficiency.
Using pre-configured scripts (called triggers), Email Alerts monitors a set of predefined, critical system components (SCSI drive errors, offline device, etc.).
With its open architecture, administrators can easily register new elements to be
monitored by these scripts. When an error is triggered, Email Alerts uses the built-in
CDP/NSS X-ray feature to capture the appropriate information. This includes the
CDP/NSS event log, as well as a snapshot of the CDP/NSS appliance's current
configuration and environment. The technical information needed to diagnose the
reported problem is then sent to a system administrator.
Configuration
Email Alerts can be configured to meet your business needs. You can specify who
should be notified about which events. The triggers can be defined to combine any
of the scripts listed below. For example, a trigger can be used to monitor a particular Thin disk or all Thin disks.
To configure Email Alerts:
1. In the Console, right-click on your storage server and select Options --> Enable
Email Alerts.
2. Enter general information for your Email Alerts configuration.
SMTP Server - Specify the mail server that Email Alerts should use to
send out notification emails.
SMTP Port - Specify the mail server port that Email Alerts should use.
SMTP Username/Password - Specify the user account that will be used by
Email Alerts to log into the mail server.
User Account - Specify the email account that will be used in the From
field of emails sent by Email Alerts.
Target Email - Specify the email address of the account that will receive
emails from Email Alerts. This will be used in the To field of emails sent
by Email Alerts.
CC Email - Specify any other email accounts that should receive emails
from Email Alerts.
Subject - Specify the text that should appear on the subject line. The
general subject defined during setup will be followed by the trigger specific
subject. If the trigger does not have a subject, the trigger name and
parameters are appended to the general email subject. For the
syslogcheck.pl trigger, the first alert category is appended to the
general email subject. If the email is sent based on event severity, the
event ID will be appended to the general email subject.
Interval - Specify the time period between each activation of Email Alerts.
The Test button allows you to test the configuration by sending a test
email.
3. Enter the contact information that should appear in each Email Alerts email.
4. Set the triggers that will cause Email Alerts to send an email.
Triggers are the scripts/programs that perform various types of error checking
when Email Alerts activates. By default, FalconStor includes scripts/programs
that check for low system memory, changes to the CDP/NSS XML configuration
file, and relevant new entries in the system log.
Note: If the system log is rotated before the Email Alerts checking interval and the rotated log contains trigger matches but the new log does not, no email will be sent. This is because only the current log is checked, not the previous log.
The following are some of the default scripts that are provided:
activity.pl - (Activity check) - This script checks to see if an fsstats activity
statistics file exists. If it does, an email alert is sent with the activity file
attached.
cdpuncommiteddatachk.pl -t 90 - This script checks for uncommitted
data on CDP and generates an email alert message if the percentage of
uncommitted data is more than that specified. By default, the trigger gets
activated when the percentage of uncommitted data is 90%.
chkcore.sh 10 (Core file check) - This script checks to see if a new core
file has been created by the operating system in the bin directory of CDP/
NSS. If a core file is found, Email Alerts compresses it, deletes the original,
and sends an email report but does not send the compressed core file
(which can still be large). If there are more than 10 (variable) compressed core files under the $ISHOME/bin directory, it keeps the latest 10 compressed core files and deletes the oldest ones.
defaultipchk.sh eth0 10.1.1.1 (NIC IP address check) - This script checks
that the IP address for the specified NIC matches what is specified here. If
it does not, Email Alerts sends an email report. You can add multiple
defaultipcheck.sh triggers for different NICs (for example eth1 could
be used in another trigger). Be sure to specify the correct IP address for
each NIC.
diskusagechk.sh / 95 (Disk usage check) - This script checks the disk
space usage at the root of the file system. If the percentage is over the
specified percentage (default is 95), Email Alerts sends an email report.
You can add multiple diskusagechk.sh triggers for different mount
points (for example, /home could be used in another trigger).
fccchk.pl - (QLogic HBA check) - This script checks each QLogic adapter initiator port and sends an email alert if there is a status change from Online to Not Online. The script also checks QLogic link status and sends an email alert if the status of FC Link Down changes from OK to Not OK.
fmchk.pl and smchk.pl - These scripts (for checking if the fm and ipstorsm modules are responding) are disabled.
ipstorstatus.sh (IPStor status check) - This script checks if any module of
CDP/NSS has stopped. If so, Email Alerts sends an email report.
kfsnmem.sh 10 (CDP/NSS memory management check) - This script
checks to see if the maximum number of memory pages has been set. If
not, Email Alerts sends an email report. If it is set, the script checks the
available memory pages. If the percentage is lower than specified
percentage (default is 10), Email Alerts sends an email report.
memchk.sh (Memory check) - This script takes in a percentage as the
parameter and checks whether the available system memory is below this
percentage. If yes, Email Alerts sends an email report.
netconfchk.pl - (Inactive network interfaces/invalid broadcasts check) This script uses the ifconfig command to check network configuration once
a day (by default) and sends an email alert if there are any network
devices set to '_tmp' or any broadcast addresses that do not match the IP
and netmask rules.
neterrorchk.pl - (Network configuration check) - This script uses the
ifconfig command to check network configuration and sends an email alert
if there are any network errors, overruns, dropped events, or network
collisions.
powercontrolchk.pl - This script checks the system configuration file and reports absent power control in a failover setup once a day, by default.
processchk.pl - (System process check) This script checks system
processes (via the ps command) and sends an email alert if there are
processes using more than 1 GB of non-swapped physical memory. This
script also sends an email alert if there are processes using more than
90% of CPU usage.
promisecheck.pl - (Promise storage check) - This script checks events
reported by Promise storage hardware every 10 minutes (by default) and
sends an email alert if there is an event with a category other than Info.
This trigger needs to be enabled on-site and requires the IP address and
user/password account needed to access the storage via ssh. The ssh
service must be enabled and started on the Promise storage.
repmemchk.sh (Memory check) - This script checks memory usage by continuous replication resources. If data in the CDR resource is using more than 1 GB of kernel memory, it triggers an email alert.
reportheartbeat.pl (Heartbeat check) - This script checks to see if the
server is active. If a server heartbeat is detected, an email alert is sent.
reposit_check.pl - This script checks the configuration repository's current configuration. If it is not updated, an email alert is generated. However, this trigger works only for a failover pair; it does not generate an email alert for a CDP/NSS server that has a quorum repository but is not in failover mode.
serverstatus.sh (Server status check) - This script checks server module
status. If any module has stopped, an email alert is sent.
snapshotreschk.pl (Snapshot resource area usage check) - This script checks the snapshot resource area usage. If the usage reaches the actual percentage threshold minus the margin value (default 10%), an email alert is sent to warn users to take remedial action before the actual threshold is reached.
swapcheck.pl 80 (Memory swap usage check) - This script checks
available swap memory. If the percentage is below the specified value
(default 80), an email alert is sent with the total swap space and the swap
usage.
syslogchk.pl (System log check) - This script looks at the last 20 MB of
messages in the system log. If any message matches what was defined
on the System Log Check dialog and does not match what was defined on
the System Log Ignore dialog, an email alert is sent with an attachment
that includes all files in $ISHOME/etc and $ISHOME/log.
If you want to limit the number of email alerts for the same System log
event or category of events, set the -memorize parameter to the number of
minutes to remember each event. If the same event is detected in the
previous Email Alerts interval, no email alert is sent for that event. If an
event is detected several times during the current interval, the first
occurrence is reported in the email that is sent for that interval and the
number of repetitions is indicated at the end of the email body with the last
occurrence of the message. The default value is the same as the Email
Alerts interval that was set on the first dialog (or the General tab if Email
Alerts is already configured). Some of the common events in Syslogchk
are as follows:
Fail over to the partner
Take over the partner
Replication failure
Mirrored primary device failure
Mirror device failure
Mirror swap
SCSI Error
Stack
Abandoned commands
FC pending commands
Busy FC
Storage logout
iSCSI client reset because of commands stuck in IO Core
Kernel error
Kernel memory swap
thindevchk.pl -t 200 -s 200 -n 48 - This script monitors total free storage,
storage pool free space, free space for thin device expansion, and number
of segments of a thin device. The trigger parameters are:
-t threshold of percentage of global free space: if the (global free
storage space/global total storage space) is less than the given
percentage, send an alert.
-i threshold of percentage of free space of each storage pool: if the
(free storage space/total storage space) of any storage pool is less
than the given percentage, send an alert.
-s threshold of free space for expansion of thin-provisioning devices: if
the available GB storage to expand each thin-provisioning device is
less than the given value, send an alert. if the thin device VID is
provided by "-v", then only check that device.
-v vid: The vid of a thin-provisioning device that needs to be checked
for free storage for expansion.
-n threshold of number of segments of a thin-provisioning disk: If the number of segments on the primary disk or mirror disk of a thin-provisioning device exceeds the given threshold, send an alert.
-interval: enter this parameter followed by the number of minutes to
trigger this script every n minutes. This parameter applies to all
triggers. This interval overrides the global setting.
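For example, a single trigger entry that combines these parameters might look like the following; all values shown are illustrative, so substitute thresholds and a VID that match your environment:
thindevchk.pl -t 10 -i 10 -s 200 -v 5 -n 48 -interval 60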
tmkusagechk - This script monitors TimeMark memory usage. It checks
the values of 'Low Total Memory' and 'Total Memory reserved by IOCore'.
When TimeMark memory usage goes over the lower of these two values,
by the percentage defined in the trigger, an Email Alert is generated.
xfilechk.sh - This script checks and notifies changes in executable files on
the server. If an executable file is added, removed, renamed, or modified,
it sends an email alert. It does not monitor non-executable files.
zombiechk.pl (Defunct process check) - This script checks system
processes once a day (by default) and sends an email alert if there are 10
(default) or more defunct processes.
5. Select the components that will be included in the X-ray.
Note: Because of its size (minimum of 2 MB), the X-ray file is not sent by
default with each notification email. It will, however, be available, should the
system administrator require it.
The following options are available to customize your x-ray. Regardless of which
option you choose, the bash_history file is created containing a history of the
commands typed. This is useful in obtaining the history of commands typed
before an issue occurred.
System Information - When this option is selected, the X-ray creates a file
called info which contains information about the entire system,
including: host name, disk usage, operating system version, mounted file
systems, kernel version, CPU, running processes, IOCore information,
uptime, and memory. In addition, if an IPMI device is present in the server,
the X-ray info file will also include the following files for IPMI:
ipmisel - IPMI system event log
ipmisensor - IPMI sensor information
ipmifru - IPMI built-in FRU information
IPStor Configuration - This information is retrieved from the /usr/local/ipstor/etc/<hostname> directory. All configuration information (ipstor.conf, ipstor.dat, IPStorSNMP.conf, etc.), except for shared secret information, is collected.
SCSI Devices - SCSI device information included in the info file.
IPStor Virtual Device - Virtual device information included in the info file.
Fibre Channel - Fibre Channel information.
Log File - The Linux system message file, called messages, is located in the /var/log directory. All storage server messages, including status and error messages, are stored in this file.
Loaded Kernel - Loaded kernel modules information is included in the
info file.
Network Configuration - Network configuration information is included in
the info file.
Kernel Symbols - This information is collected in the event it will need to be
used for debugging purposes.
Core File - The /usr/local/ipstor path will be searched for any core files that
might have been generated to further help in debugging reported
problems.
Scan Physical Devices - Physical devices will be scanned and information
about them will be included. You can select to Scan Existing Devices or
Discover New Devices.
6. Indicate the terms that should be tracked in the system log by Email Alerts.
The system log records important events or errors that occur in the system,
including those generated by CDP/NSS. This dialog allows you to rule out
entries in the system log that have nothing to do with CDP/NSS, and to list the
types of log entries generated by CDP/NSS that Email Alerts needs to examine.
Entries that do not match the entries entered here are ignored, regardless of
whether or not they are relevant to CDP/NSS.
The trigger for monitoring the system log is syslogchk.pl. To inform the
trigger of which specific log entries need to be captured, you can specify the
general types of entries that need to be inspected by Email Alerts. On the next
dialog, you can enter terms to ignore, thereby eliminating entries that match
these general types, yet can still be disregarded. The resulting subset contains
all entries for which Email Alerts needs to send out email reports.
Each line is a regular expression. The regular expression rules follow the pattern
for AWK (a standard Unix utility).
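For illustration only, entries such as the following could be used to capture SCSI errors, replication failures, and kernel errors reported in the system log; adjust the patterns to the messages you actually want to match:
SCSI Error
Replication.*fail
kernel:.*error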
Note: By default, the system log file is included in the X-ray file which is not
sent with each notification email.
7. Indicate which categories of internal messages should not be included.
By default, all categories are disabled except the syslog.ignore.customized. If a
category is checked, it will ignore any messages related to that category.
Select the Customized System Log Ignore tab to add customized ignore entries.
You can enter terms to ignore, thereby eliminating entries that will cause Email
Alerts to send out email reports.
Each line is a regular expression. The regular expression rules follow the pattern
for AWK (a standard Unix utility).
8. Select the severity level of server events for which you want to receive an email
alert.
By default, the alert severity level is set to None. You can select one of the following severity levels:
Critical - checks only the critical severity level
Error - checks the error and any severity level higher than error.
Warning - checks the warning and any severity level higher than warning.
Informational - checks all severity levels.
9. Confirm all information and then click OK to enable Email Alerts.
Modifying Email Alerts properties
Once Email Alerts is enabled, you can modify the information by right-clicking on
your storage server and selecting Email Alerts.
Click on the appropriate tab to update the desired information.
The General tab displays server and message configuration and allows you
to send a test email.
The Signature tab allows you to edit the contact information that appears in
each Email Alerts email.
The Trigger tab allows you to set triggers that will cause Email Alerts to send
an email as well as set up an alternate email.
The Attachment tab allows you to select the information (if any) to send with
the email alert. You can send log files or X-Ray files.
The System Log Check tab allows you to add, edit, or delete syntax from the
log entries that need to be captured. You can also specify the general types
of entries that need to be inspected by Email Alerts.
The System Log Ignore tab allows you to select system log entries to ignore,
thereby eliminating entries that will cause Email Alerts to send out email
reports.
Email format
The email body contains the messages returned by the triggers. The alert text starts with the category followed by the actual message coming from the system log. The first 30 lines are displayed. If the email body is more than 16 KB, it is compressed and sent as an attachment to the email. The signature defined during Email Alerts setup appears at the end of the email body.
Limiting repetitive emails
To limit repetitive emails, you have the option to limit the number of email alerts for
the same event ID. By using the -memorize parameter for the syslogcheck.pl
trigger, you can have the Email Alerts module memorize IDs and timestamps of
events for which an alert is sent.
In this case, an event detected with the same event ID as an event in the previous
interval, will not trigger an email alert for that same event. However, if an event is
detected several times during the current checking interval, all those events are
reported in the email that is sent for that interval.
The parameter -memorize for the syslogcheck.pl trigger allows you to set the
trigger memorization logic and set the number of hours to remember each event.
The default value is 24 hours, which results in sending alerts for the same event once a day.
Script/program trigger information
Email Alerts uses script/program triggers to perform various types of error checking.
By default, FalconStor includes several scripts/programs that check for low system
memory, changes to the CDP/NSS XML configuration file, and relevant new entries
in the system log.
Custom email destination
You can specify an email address to override the default Target Email or a text
subject to override the default Subject. To do this:
1. Right-click on your storage server and select Email Alerts --> Trigger tab.
2. For an existing trigger, highlight the trigger and click Edit.
The alternate email address along with the Subject is saved to the $ISHOME/etc/callhome/trigger.conf file when you have finished editing.
Note: If you specify an email address, it overrides the return code. Therefore, no
attachment will be sent, regardless of the return code.
New script/program
The trigger can be a shell script or a program (Java, C, etc.). If you create a new script/program, you must add it to the $ISHOME/etc/callhome/trigger.conf file so that Email Alerts knows of its existence.
Return codes
Return codes determine what happens as a result of the script/program's execution. The following return codes are valid:
0: No action is required and no email is sent.
1: Email Alerts sends email without any attachments.
2: Email Alerts attaches all files in $ISHOME/etc and $ISHOME/log to the
email.
3: Email Alerts sends the X-ray file as an attachment (which includes all files
in $ISHOME/etc and $ISHOME/log). Because of its size (minimum of 2
MB), it is recommended that you do not attach the X-ray file with each
notification email.
The $ISHOME/etc directory contains a CDP/NSS configuration file (containing
virtual device, physical device, HBA, database agent, etc. information). The
$ISHOME/log directory contains Email Alerts logs (containing events and output of
triggers).
Output from trigger
In order for a trigger to send useful information in the email body, it must redirect its output to the environment variable $IPSTORCLHMLOG.
Sample script
The following is the content of the storage server status check trigger, ipstorstatus.sh:
#!/bin/sh
RET=0
if [ -f /etc/.is.sh ]
then
    . /etc/.is.sh
else
    echo "Installation is not complete. Environment profile is missing in /etc."
    echo
    exit 0 # don't want to report error here so have to exit with error code 0
fi
$ISHOME/bin/ipstor status | grep STOPPED >> $IPSTORCLHMLOG
if [ $? -eq 0 ] ; then
    RET=1
fi
exit $RET
If any CDP/NSS module has stopped, this trigger generates a return code of 2 and
sends an attachment of all files under $ISHOME/etc and $ISHOME/log.
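If you write your own trigger, it only needs to follow the return code and $IPSTORCLHMLOG conventions described above. The following is a minimal sketch of such a custom trigger; the mount point, threshold, and behavior are hypothetical and shown only to illustrate the conventions, and any real trigger must be registered in $ISHOME/etc/callhome/trigger.conf.
#!/bin/sh
# Hypothetical custom trigger: alert when /home usage exceeds a threshold.
THRESHOLD=90
USAGE=`df -P /home | awk 'NR==2 { gsub("%", "", $5); print $5 }'`
if [ "$USAGE" -gt "$THRESHOLD" ] ; then
    # Write the message into the email body and return 1
    # (send email without attachments).
    echo "/home is ${USAGE}% full (threshold is ${THRESHOLD}%)" >> $IPSTORCLHMLOG
    exit 1
fi
exit 0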
BootIP
FalconStor's boot-over-IP service, powered by IPStor, for Windows- and Linux-based storage servers allows you to maximize business continuity and return on investment (ROI). BootIP enables IT managers to provision disk storage and its related services to achieve maximum ROI.
BootIP leverages the proven SAN management infrastructure and storage services available in FalconStor's network storage infrastructure to ensure business continuity, high availability, and effective disaster recovery planning.
BootIP setup
Setting up BootIP involves several steps, which are outlined below:
1. Prepare a sample computer with the operating system and all the applications
installed.
2. Install CDP or NSS on a server computer.
3. Install Microsoft iSCSI initiator boot version and DiskSafe on the sample
computer.
The Microsoft iSCSI Software Initiator enables connection of a Windows host to an external iSCSI storage array using Ethernet NICs. The boot version supports configurations that boot Windows Server 2003/Vista/2008 hosts.
When installing Microsoft iSCSI Software Initiator, check the item Configure
iSCSI Network Boot Support and select the network interface driver for the NIC
that will be used to boot via iSCSI.
4. Install the FalconStor Management Console.
You can also create a boot image for client computers that do not have disks. To do
this, you need to prepare a computer to be used for your boot image.
1. Make sure everything is installed on the computer, including the operating
system and the applications that the client computers will use.
2. Once you have prepared the computer, use DiskSafe to back up the computer to create a boot image for diskless client computers.
3. After preparing the boot image, create TimeMarks from the boot image, and then
mount the TimeMarks as individual TimeViews and respectively assign them to
the diskless computers.
4. Configure the diskless computers to boot up from the network.
DiskSafe can help you clone a boot image from the sample computer and put the image on an IPStor-managed virtual disk. You can then set up BootIP from the server and use the boot image to boot the diskless client computers.
Prerequisites
A valid Operating System (OS) image must be prepared for iSCSI remote boot. The
conditions of a valid OS image for an iSCSI boot client are listed below:
The OS must be one of the following:
Windows 2003 with Microsoft iSCSI initiator boot version installed.
Windows Vista SP1 with Microsoft iSCSI initiator enabled manually.
Windows 2008 with Microsoft iSCSI initiator enabled.
The network adapter used by remote boot must be certified by Microsoft for
iSCSI Boot Component Test.
In Local Area Connection Properties of this network adapter, Internet
Protocol (TCP/IP) must be checked.
In Windows 2003, make sure the iSCSI Boot (BootIP) sequence is correct using the command c:\iscsibcg /verify /fix
Make sure the network interface card is the first boot device in the client machine's BIOS.
In addition to a valid OS image and client BIOS configuration, the mirrored
iSCSI disk should set the following properties before remote boot:
Assign LUN 0 to the iSCSI disk used for remote boot.
The iSCSI disk must be assigned to the first iSCSI target with the
smallest target ID.
If the iSCSI disk contains the Windows 2008 or Windows Vista OS, the iSCSI disk's signature, which is changed by DiskSafe during backup, must be changed back to the original signature to match the local disk backed up by DiskSafe. You can use the following IPStor iscli command to
change the disk signature:
# iscli setvdevsignature -s 127.0.0.1 -v VID F
Note: The VID should be the virtual device ID of the iSCSI disk.
Creating a boot image for a diskless client computer
To create a boot image that can be used to boot up a single diskless client computer,
follow the steps below:
1. Prepare the storage and user access for the storage server from the FalconStor
Management Console. For details, see Initializing the configuration of the
storage server.
2. Enable the IPStor BootIP via the FalconStor Management Console. For details,
see Enabling the BootIP from the FalconStor Management Console.
3. Create a boot image by using DiskSafe to clone a virtual disk and set up the BootIP properties from the FalconStor Management Console. For details, see Using DiskSafe to clone a boot image, Setting the BootIP properties, and Setting the Recovery Password.
4. Shut down the sample computer and remove the system disk.
5. Boot up the iSCSI disk remotely on the original client computer. For details, see Remote boot the diskless computer.
6. Use the System Preparation Tool to configure automatic deployment of the Windows OS on your remote boot client computer. For details, refer to Setting the BootIP properties.
7. Create a TimeMark of boot image. For details, see Creating a TimeMark.
8. Create a TimeView from the TimeMark. For details, see Creating a TimeView.
9. Assign the TimeView to this SAN client. For details, see Assigning a TimeView
to a diskless client computer.
10. Set up the BootIP properties from the FalconStor Management Console. For
details, see Setting the BootIP properties.
11. Boot up the diskless client computer remotely.
Initializing the configuration of the storage server
Initializing the configuration of the storage server involves several steps, including:
Entering the license keycodes
Preparing the storage and adding your virtual device to the storage pool
Creating an IPStor user account
Selecting users who will have access to the storage pool you have created.
Enabling the BootIP from the FalconStor Management Console
You will need to enable the BootIP function before you can use it. You must also set
the BootIP properties of the SAN clients. To do this:
1. Log into the storage server from the FalconStor Management Console.
2. Right-click on the [HostName] and select Options --> Enable BootIP.
3. If you have external DHCP, DHCP will not be enabled on the storage server.
Therefore, keep the Enable DHCP option unchecked.
4. Click OK to start the BootIP daemon.
Using DiskSafe to clone a boot image
You can use DiskSafe to clone a boot image to be used at a later date. To do this:
1. While running DiskSafe on the sample computer, right-click on Disks and select
Protect.
2. Click Next to launch the Protect Disk Wizard.
3. Choose the system disk and click Next.
4. Click New Disk.
5. Click Add Server.
6. Enter the storage server name (or IP), user name, and password; check the iSCSI protocols. Then click OK.
7. Click OK to allocate the disk.
8. Click Next to continue and finish the remaining wizard settings.
9. After synchronization finishes, right-click on the disk you protected and select Advanced --> Take Snapshot.
When the disk is protected by DiskSafe, an IPStor-managed virtual disk containing the boot image is generated and assigned to the sample computer from the FalconStor Management Console.
Setting the BootIP properties
To set the BootIP properties, follow the instructions below.
1. From FalconStor Management Console, navigate to SAN Clients.
2. Right-Click on the Client host name and select Boot properties.
The Boot Properties dialog box appears.
3. Select the Boot type as BootIP.
The options become available.
4. Uncheck Boot from the local disk.
5. Optional: Select the Boot from Local Disk check box if you want the computer to
boot up locally by default.
6. Type the MAC address of the remote boot client and click OK.
Setting the Recovery Password
Once you have finished setting DiskSafe protection, you can set two authentication
modes for remote boot:
Unauthenticated mode.
CHAP mode.
Set the Recovery password from the iSCSI user management
To set the Recovery password from the iSCSI user management, follow the
instructions below:
1. Right-Click on the [Server Host Name], select iSCSI Users.
An iSCSI user management window displays.
2. Select the appropriate iSCSI user.
3. Click Reset CHAP secret, type the secret, confirm it and click OK.
Set the authentication and Recovery password from iSCSI client properties
You can also set the authentication and Recovery password from iSCSI client
properties. To do this:
1. Navigate to [Client Host Name] and expand it.
2. Right-Click on iSCSI and select Properties.
An iSCSI Client Properties window displays.
3. Select User Access to set authentication.
4. Optional: Select Allow unauthenticated access. The user does not need to authenticate for remote boot.
5. Optional: Select users who can authenticate for the client.
You will be prompted to enter the user name, CHAP secret and confirm CHAP
secret. You will also be prompted to type the Recovery password for remote
boot.
6. Click OK.
Note: Mutual CHAP secret is not currently supported for iSCSI authentication.
Remote boot the diskless computer
For Windows 2003
To enable your client computer to boot remotely, you need to configure the BIOS of
the computer and set the network interface card (NIC) as the first boot device. For details about configuring the BIOS, refer to the user documentation of your mainboard.
1. After shutting down the sample computer, remove the system disk.
2. Boot up the diskless sample computer.
3. The client will boot from the network and get its IP from the DHCP server.
4. Press F8 to enter the boot menu.
5. If you did not press F8, the default auto-selection should be Remote Boot (gPXE); press Enter.
6. The computer then starts booting remotely.
For Windows Vista/2008
If the iSCSI disk contains the Windows 2008 or Windows Vista OS, the iSCSI disk's signature, which is changed by DiskSafe during backup, must be changed back to the original signature so that it is the same as the local disk backed up by DiskSafe.
You can use the following IPStor iscli command to change the disk signature:
# iscli setvdevsignature -s 127.0.0.1 -v VID F
VID is the virtual device ID of the mirror disk or TimeView device. You can confirm the VID on the General tab of the SAN Resource mirror disk or of the TimeView you assigned for remote boot in the FalconStor Management Console.
Using the Sysprep tool
Sysprep is a Microsoft tool that allows you to automate a successful Windows
operating system deployment on multiple computers. Once you have performed the
initial setup steps on a single machine, you can run Sysprep to prepare the sample
computer for cloning.
The Factory mode of Sysprep is a method of pre-configuring installation options to
reduce the number of images to maintain. You can use the Factory mode to install
additional drivers and applications at the stage after the reboot that follows Sysprep.
Normally, running Sysprep as the last step in the pre-installation process prepares
the computer for delivery. When rebooted, the computer displays Windows
Welcome or MiniSetup.
By running Sysprep with the factory option, the computer reboots in a network
enabled state without starting Windows Welcome or MiniSetup. In this state,
Factory.exe processes its answer file, Winbom.ini, and performs the following
actions:
1. Copies drivers from a network source to the computer.
2. Starts Plug and Play enumeration.
3. Stages, installs, and uninstalls applications on the computer from source files
located on either the computer or a network source.
4. Adds customer data.
For Windows 2003:
To prepare a reference computer for Sysprep deployment in Windows 2003, follow
these steps:
1. On a reference computer, install the operating system and any programs that
you want installed on your destination computers.
2. Click Start, click Run, type cmd, and then click OK.
3. At the command prompt, change to the root folder of drive C, and then type md Sysprep.
4. Open the Deploy.cab file and copy the Sysprep.exe file and the Setupcl.exe file to the Sysprep folder.
If you are using the Sysprep.inf file, copy this file to the Sysprep folder. In order for the Sysprep tool to function correctly, the Sysprep.exe file, the Setupcl.exe file, and the Sysprep.inf file must all be in the same folder.
For remote boot, add LegacyNic=1 to the Sysprep.inf file under the [Unattended] section, as shown below.
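For example, the relevant portion of the Sysprep.inf answer file would contain the following (any other [Unattended] entries required by your deployment are omitted here):
[Unattended]
LegacyNic=1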
5. To run the Sysprep tool, type the following command at the command prompt:
Cmd: Sysprep /optional parameter
Note: For a list of parameters, see the "Sysprep parameters" section. http://technet.microsoft.com/en-us/library/cc758953.aspx
If you run the Sysprep.exe file from the %systemdrive%\Sysprep folder, the
Sysprep.exe file removes the folder and the contents of the folder.
6. In the System Preparation Tool, choose Shutdown as the shutdown mode and click Reseal to prepare the computer.
The computer should shut down by itself.
7. Optional: You can use Snapshot Copy or TimeView to assign the boot image to other clients and remote boot them to initialize additional Windows 2003 systems.
Using the Setup Manager tool to create the Sysprep.inf answer file
Once you have automated the deployment of Windows 2003, you can use the sysprep.inf file to customize the initial Windows settings, such as user name, organization, host name, product key, networking components, workgroup, time zone, etc.
To install the Setup Manager tool and to create an answer file, follow these steps:
1. Navigate to the Deploy.cab file that you replaced and double-click on it to open it.
2. On the Edit menu, click Select All
3. On the Edit menu, click Copy to Folder.
4. Click Make New Folder and enter a name for the Setup Manager folder. For
example, type setup manager, and then press Enter.
5. Click Copy.
6. Open the new folder that you created, and double-click the Setupmgr.exe file.
The Windows Setup Manager Wizard launches.
7. Follow the instructions in the wizard to create a new answer file.
8. Select the Sysprep setup to generate the sysprep.inf file.
9. Select Yes, fully automate the installation.
Later, you will be prompted to enter the license key code.
10. Select to automatically generate computer name or specify a computer name.
11. Save the sysprep.inf file to the C:\Sysprep\ folder.
12. Click Finish to exit the Setup Manager wizard.
For Windows Vista/2008
Use the Windows System Image Manager to create the Sysprep.xml answer file
In order to begin creating a Sysprep.xml file, you will need to load a Windows Image File (WIM). Install the Automated Installation Kit (AIK) for Windows Vista SP1 and Windows Server 2008:
http://www.microsoft.com/downloads/details.aspx?FamilyID=94bb6e34-d890-493281a5-5b50c657de08&DisplayLang=en
Prepare a reference computer for Sysprep deployment
1. On a reference computer, install the operating system and any programs that
you want installed on your destination computers.
2. Use DiskSafe to clone the system disk to the storage server
3. Boot the mirror disk remotely (Setting related BootIP configuration)
4. Open the Windows System Image Manager (Start --> All Programs --> Microsoft
Windows AIK --> Windows System Image Manager).
5. Copy Install.wim from the product installation package (source) to your disk.
6. Create a catalog on the WAIK.
7. On the File menu, click Select Windows Image.
8. Navigate to the location where you saved install.wim, and then click Open.
You are prompted to select an image.
9. Select the appropriate version of Windows Vista/2008, and then click OK.
10. On the File menu, click New Answer File.
11. If a message displays that a catalog does not exist, click OK to create one.
12. From the Windows image, choose the proper components.
13. From the Answer file, you can set the following options:
Auto-generate a computer name
Add or edit Organization and Owner Information
Set the language and locale
Set the initial tasks screen not to show at logon
Set server manager not to show at logon
Set the Administrator password
Create a second administrative account and set the password
Run a post-image configuration script under the administrator account at
logon
Set automatic updates to not configured (to be configured post-image)
Configure the network location
Configure screen color/resolution settings
Set the time zone
1. Press Ctrl+S, choose C:\windows\system32\sysprep\ as the save location, and
enter sysprep.xml as the file name.
2. Click Save to continue.
3. Navigate to C:\Windows\System32\Sysprep and enter one of the following:
sysprep /generalize /oobe /shutdown /unattend:sysprep.xml
or
sysprep /generalize /audit /shutdown /unattend:sysprep.xml
Note: /generalize must be run. After reboot, a new SID is created and the clock
for Windows activation resets.
To apply the settings in auditSystem and auditUser, boot to Audit mode by using the
sysprep /audit command. The machine will shut down, and you can use Snapshot
Copy from the FalconStor Management Console to clone the mirror disk and remote
boot to initialize the other Windows Vista/2008 systems.
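For illustration only, a minimal Sysprep.xml (unattend) fragment for the specialize
pass is shown below. The component attributes (for example, processorArchitecture)
are assumptions and must match what Windows System Image Manager generates
for your image; values such as the organization name are placeholders:
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="x86"
               publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
      <ComputerName>*</ComputerName>
      <RegisteredOrganization>Example Organization</RegisteredOrganization>
      <TimeZone>Eastern Standard Time</TimeZone>
    </component>
  </settings>
</unattend>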
Creating a TimeMark
Once your boot image has been created and is on the storage server, it can be used
as a base image for your diskless client computers. You will need to create separate
boot images for each computer that you want to boot up remotely.
In order to create a separate boot image for a computer, you need to create a
TimeMark of the base image first, then create a TimeView from the TimeMark. The
TimeView can be assigned to an individual client computer for remote boot.
To create a TimeMark of the base boot image:
1. Launch the FalconStor Management Console if you have not done so yet.
2. Select your virtual disk under SAN Resources.
3. Right-click on the disk and select TimeMark --> Enable.
A message box appears, prompting you to create the SnapShot Resource for
your virtual disk.
4. Click OK and follow the instructions of the Create SnapShot Resource Wizard to
create the SnapShot Resource.
5. Click Finish when you are done with the creation process. The Enable TimeMark
Wizard appears.
6. Click Next and specify the schedule information if you want to create TimeMarks
regularly.
You can skip the next two steps if you have specified the schedule information
as TimeMarks will be created automatically based on your schedule.
7. Click Finish when you are done.
The Wizard closes and you are returned to the main window of FalconStor
Management Console.
8. From the FalconStor Management Console, right-click your virtual disk and
select TimeMark --> Create.
The Create TimeMark dialog box appears.
9. Type a comment for the TimeMark and click OK.
The TimeMark is created.
Creating a TimeView
After creating a TimeMark of your base boot image, you can create a TimeView from
the TimeMark, and then assign the TimeView to a diskless computer for remote
boot.
To create a TimeView from a TimeMark:
1. Start the FalconStor Management Console if it is not already running.
2. Right-click your virtual disk and select TimeMark --> TimeView. The Create
TimeView dialog box appears.
3. Select the TimeMark from which you want to create a TimeView and click OK.
4. Type a name for the TimeView in the TimeView Name box and click OK.
The TimeView is created.
Note: Only one TimeView can be created per TimeMark. If you want to create
multiple TimeViews for multiple diskless computers, you will need to create
multiple TimeMarks from the base boot image first.
Assigning a TimeView to a diskless client computer
After creating a TimeView from your base boot image, you can assign it to a specific
diskless client computer so that the computer can be booted up remotely from the
TimeView.
To assign a TimeView to a client computer for remote boot, you must perform the
following tasks in FalconStor Management Console:
1. Add a SAN Client.
2. Assign the TimeView to the SAN Client.
3. Associate the SAN Client with a diskless computer and configure it for remote
boot.
Adding a SAN Client
1. Start the FalconStor Management Console if you have not done so yet.
2. Right-click SAN Clients and select Add.
The Add Client Wizard appears.
3. Click Next and enter a name in the Client Name box.
4. Select SAN/IP as the protocol for the client and then click Next.
5. Review the settings and click Finish.
The SAN Client is added.
Assigning a TimeView to the SAN Client
1. Start the FalconStor Management Console if you have not done so yet.
2. Right-click the TimeView and select Assign.
The Assign a SAN Resource Wizard appears.
3. Click Next and assign LUN0 to the target.
4. Click Next and review the settings, then click Finish.
Note: Only LUN 0 is supported for iSCSI remote boot.
The BootIP boots the image that is assigned to the smallest target ID with LUN 0.
Recovering Data via Remote boot
DiskSafe is used to protect the client's system and data disks/partitions. In the event
of system failure, the client can boot up from the iSCSI disk or selected TimeView,
including the OS image, and restore the system or disk data to the local disk or new
disk using DiskSafe.
A valid operating system image is prepared for DiskSafe to clone to the iSCSI disk
for remote boot.
To recover data using DiskSafe when the client boots up from an iSCSI disk, refer
to Remote boot the diskless computer on page 485.
1. After remotely booting, hot-plug the local disk (original disk) that you want to restore.
2. Rescan the disks from disk management.
3. Open the DiskSafe console and remove the existing system disk protection on
DiskSafe.
4. Create a new DiskSafe protection to the recovery disk by right-clicking on the
disk and selecting Protect.
5. Select the remote boot disk (disk 0) to be the primary disk, then click Next.
For Windows Vista/2008 only: Before recovering the system to the local disk, you
must flip the disk signature first for local boot. To do so, type the IPStor
command # iscli setvdevsignature -s 127.0.0.1 -v VID -F, where VID is the
virtual device ID of the remote boot disk.
6. Check Allow mirror disks with existing partitions to restore to the original disk,
and then click Yes.
7. Select the original primary disk from the eligible mirror disks list and click Next.
The system will warn you that the mirror disk is a local disk.
8. Click Yes.
9. Finish the Protect Disk wizard; DiskSafe starts to synchronize the current
data to the local disk.
10. Once synchronization has finished and the restore process succeeds, you can
shut down the server normally.
11. Disable BootIP for the iSCSI client or set the Boot from local disk option.
12. Local boot the client with the disk you restored.
13. Once the system successfully boots up, open the DiskSafe Management Console
and remove the protection that you just created for recovery.
14. Re-protect the disk.
Note: After the remote boot, verify the status of services and applications to
make sure everything is up and ready after start up.
Make sure your boot-up disk is the FALCON IPSTOR DISK SCSI Disk Device.
To verify this, navigate to Disk Management and right-click on the first disk (Disk 0). It
should show FALCON IPSTOR DISK SCSI Disk Device.
Remotely booting the Linux Operating System
Remotely installing CentOS to an iSCSI disk
Remote boot the iSCSI disk and install CentOS 5.x on it. Before you begin, make
sure you have a CentOS 5.x installation package and have prepared a diskless
client computer with a PXE boot-capable NIC adapter.
Remote boot from the FalconStor Management Console
From the FalconStor Management Console:
1. Right-click on SAN Clients and select Add to add a client with a customized name.
The Add Client Wizard displays.
2. Select the iSCSI protocol and click Next.
3. Click Add to add an iSCSI initiator name and click OK.
4. Check the iSCSI initiator name you created and click Next.
5. Set the authentication for the client to Allow unauthenticated access and click Next.
6. Leave the client IP address empty and finish the Add Client Wizard.
7. Create a new (empty) SAN resource with a size of 6 to 10 GB (depending upon the
size of the Linux system).
8. Assign the new SAN resource to the client machine.
9. From the FalconStor Management Console, navigate to SAN Clients, right-click on
the client host name you added, and select Boot properties.
The Boot Properties dialog box appears.
10. Select BootIP as the Boot type.
The options become available.
11. Keep the Boot from local disk option unchecked.
12. Type the MAC address of the diskless client and click OK.
Remote boot from the Client
1. For the diskless client, set the boot sequence in the BIOS to boot from PXE first
and then from the DVD ROM.
2. Boot up the client machine remotely and launch the CentOS 5.x installation
package at the same time.
After the remote boot, the installation package starts loading.
3. Select Advanced storage configuration when prompted to select the drive(s) to
use for installation.
4. Select Add iSCSI target and click Add drive.
The Enable network interface wizard appears.
5. Keep the default setting, and click OK.
The Configure iSCSI Parameters wizard appears.
6. Enter the Target IP Address (your storage server IP) and click Add target.
7. Click Yes to initialize the iSCSI disk and erase all data.
An sda disk (FALCON IPSTOR DISK) displays in the drive list.
8. If you would like to review and modify the partitioning layout, check that option and click Next.
9. Finish the installation setup wizard and install the OS.
10. Once the installation finishes, click Reboot and remote boot again to boot up
CentOS on the iSCSI disk.
Note: Per Microsoft (http://technet.microsoft.com/zh-tw/library/
ee619722%28WS.10%29.aspx), PXE boot from an iSCSI disk is not supported on
client versions of Windows, such as Windows Vista or Windows 7.
BootIP and DiskSafe
If you plan to perform BootIP before using DiskSafe to protect a system running in a
Windows 2008 R2 environment, refer to Microsoft knowledge base article KB 976042
(http://support.microsoft.com/kb/976042) to unbind the WFP Lightweight Filter for the
NIC before protecting your system disk.
Remote boot and DiskSafe
To perform remote boot for DiskSafe version 3.7 snapshot images, there is no need
to perform a flip disk signature operation. You can simply mount the snapshot as a
TimeView, assign it to the corresponding SAN Client, and perform a remote boot.
Troubleshooting / FAQs
This section helps you through some frequently asked questions and issues that
may be encountered when setting up and running the CDP/NSS storage network,
including the following topics:
Logical resources
Storage Server
Failover
Network connectivity
Replication
TimeMark
SafeCache
Service-Enabled Devices
Cross-mirror failover on a virtual appliance
NIC Port Bonding
SNMP
Event log
Virtual devices
Multipathing method: MPIO vs. MC/S
BootIP
SCSI adapters and devices
Fibre Channel Target Mode and storage
Replication
iSCSI Downstream Configuration
Windows client debug information
Storage server X-ray
Error codes
Frequently Asked Questions (FAQ)
The following tables contain some general and specific questions and answers that
may arise while managing your CDP or NSS servers.
Question
Answer
Why did my storage server not automatically
start after rebooting? What should I do?
If your CDP or NSS server detects a configuration
change during startup, autostart will not occur
without user interaction.
Typing YES allows the server to continue to start.
Typing NO prevents the server from starting.
Typing nothing (no user input) results in the server
aborting the auto start process.
If the server does not automatically start after a
reboot, you can manually start it from the command
line using the ipstor start all command.
Why are my snapshot resources marked offline?
Snapshot resources will be marked offline if the
physical resource they were created from is
disconnected from a single server in a failover set
prior to failing over to the secondary server.
Why does it take so long (several minutes) for
my Solaris SAN client to load?
When the Client starts, it reads all of the LUN
entries in the /kernel/drv/sd.conf file. It can take
several minutes for the client to load if there are a
lot of entries. It is recommended that the
/kernel/drv/sd.conf file only contain entries
for LUNs that are physically present so that time is
not spent scanning LUNs that may not be present.
I used the rpm -e command to uninstall the
storage server. How do I now remove the
IPStor directory and its subdirectories?
In order to remove them, execute the following
command from the /usr/local directory:
rm -rf IPStor
How can I make sure information is updated
correctly if I change storage server IP
addresses using a third-party utility, like yast?
You can change storage server IP addresses
through the FalconStor Management Console using
System Maintenance --> Network Configuration.
I changed the hostname of the storage server.
Why are all block devices now marked offline
and appear as foreign devices?
You cannot change the hostname of the storage
server if you are using block devices.
My IPStor directory and its subdirectories are
still visible after using rpm -e to uninstall the
storage server. How do I remove them?
In order to remove them, execute the following
command from the /usr/local directory:
rm -rf IPStor
I changed a storage server IP address
using yast. Why was the information not
updated correctly?
Changing a storage server IP address using a third-party
utility is not supported. You will need to
change storage server IP addresses via the console
under System Maintenance --> Network Configuration.
NIC Port Bonding
Question
Answer
What if I need to change an IP address for NIC
port bonding?
During the bonding process, you will have the
option to enter/select a new IP address. Right-click
on the server and select System Maintenance -->
Bond NIC Port.
SNMP
Question
Answer
The trap ucdShutdown appears as a raw
message at the management console. Is this
something to be concerned about?
When stopping the SNMP daemon, the daemon
itself will issue a ucdShutdown trap. You can ignore
the extra trap.
How do I load the MIB file?
To load the MIB file, navigate to $ISHOME/etc/
snmp/mibs/ and copy the IPSTOR-MIB.TXT file to
the machine running the SNMP manager.
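For example, assuming the SNMP manager runs on a Linux host named snmphost
(a hypothetical name) and uses the default net-snmp MIB directory, the file could be
copied with a command such as the following; adjust the destination path to
wherever your SNMP manager expects MIB files:
scp $ISHOME/etc/snmp/mibs/IPSTOR-MIB.TXT root@snmphost:/usr/share/snmp/mibs/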
Event log
Question
Why is the event log displaying event
messages as numbers rather than text?
Answer
You may be low on space. Check to make sure that
there is at least 5 MB of free space on the file
system on which the console is installed. If not, free
up some space.
Virtual devices
Question
Why won't my virtual device expand?
Answer
You may have exceeded your quota. If you have a
set quota and have allocated a disk greater than
your quota, then if you enable any type of feature
that uses auto-expansion, such as Snapshot
Resource or CDP, those specific resources will not
expand, because the quota has been exceeded.
Multipathing method: MPIO vs. MC/S
Question
When should I use Microsoft Multipath I/O
(MPIO) vs. Multiple Connections per Session
(MC/S) for multipathing?
Answer
While MPIO is usually the preferred method for
multipathing, there are a number of things to
consider when deciding whether to use MC/S or
Microsoft MPIO.
If your configuration uses hardware iSCSI HBA
then Microsoft MPIO should be used.
If your target does not support MCS, then
Microsoft MPIO should be used. (Most iSCSI
target arrays support Microsoft MPIO.)
If you need to specify different load balance
policies for different LUNs then Microsoft MPIO
should be used.
Reasons for using MCS include the following:
If your target does support MCS and you are
using the Microsoft software initiator driver then
MCS is the best option. There may be some
exceptions where you desire a consistent
management interface among multipathing
solutions and already have other Microsoft MPIO
solutions installed that may make Microsoft MPIO
an alternate choice in this configuration.
If you are using Windows XP or Windows Vista,
MCS is the only option since Microsoft MPIO is
only available with Windows Server SKUs.
What are the advantages and disadvantages
of using each method?
The advantages of using Microsoft MPIO are that
MPIO is a tried-and-true method of multipathing
that supports software and hardware iSCSI
initiators (HBAs) and that MPIO allows you to mix
protocols (iSCSI/FC). In addition, each LUN can
have its own load balance policy. The disadvantage
is that an extra multipathing technology layer is
required.
The advantages of using MCS are that MCS is part
of the iSCSI specification and there is no extra
vendor multipathing software required.
The disadvantages of using MCS are that this
method is not currently supported by iSCSI initiator
HBAs or for MS software initiator boot. Another
shortfall is that the load balance policy is set on a
per-session basis; thus all LUNs in an iSCSI session
share the same load balance policy.
What is the default MPIO timeout and how do I
change it?
The default MPIO timeout is 20 seconds. This is
usually enough time, but there are certain situations
where you may want to increase the timeout value.
For example, when configuring multipathing with
MPIO in a Windows 2008 environment, you may
need additional time to enable Windows 2008 to
survive a failover taking more than 20 seconds.
To increase the timeout value, you will need to
modify the PDORemovePeriod, the setting that
controls the amount of time (in seconds) that the
multipath pseudo-LUN will continue to remain in
system memory, even after losing all paths to the
device.
To increase the timeout, set the following registry values:
// increase disk timeout from default 60 seconds to 5 minutes
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue: 300
// increase iSCSI timeout from default 60 seconds to 5 minutes
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-xxxxxxxxxxxxxxxx}\xxxx\Parameters\MaxRequestHoldTime: 300
// due to the increased disk timeout, enable NOPOut to detect connection failure early
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-xxxxxxxxxxxxxxxx}\xxxx\Parameters\EnableNOPOut: 1
BootIP
Question
Answer
Why does Windows keep logging off during
remote boot?
This happens when you remote boot the mirror disk
and keep the local disk inside. Try to re-protect (or
re-sync) the local disk.
How do I confirm if the system has booted
remotely?
Go to Disk Management and right-click on disk 0. It
should show that the disk is a FalconStor IPStor disk,
not the local disk.
Can I change the IP address after remote
boot?
No, you cannot change the IP address because
iSCSI needs the original IP address for
communication.
Why do I sometimes see a blue screen (error
code 0x0000007B) after a remote boot?
Check if the boot sequence is correct by typing the
following command on the sample computer before
remotely booting:
#iscsibcg /verify /fix
Why do the following messages display on the
screen during a PXE boot and not allow a boot
to the iSCSI disk?
Registered as BIOS driver 0x80
Booting from BIOS drive
Boot failed
Unregistering BIOS drive 0x80
No more network devices
These messages show that the system cannot boot
the disk successfully. Check to make sure you are
using the boot disk, make sure the mirror disk
has been synced completely, and make sure that you
have protected the correct system disk or system
partition.
Is iSCSI boot supported in a UEFI
environment?
No, this version does not support iSCSI boot in a
Unified Extensible Firmware Interface (UEFI)
environment.
SCSI adapters and devices
Since CDP and NSS rely on SCSI devices for storage, it is often helpful to be able
to discover the state of the SCSI adapters and devices locally attached to the
storage server. Verification requires that the administrator be logged into the storage
server. Refer to Log into the CDP/NSS appliance.
Question
Answer
How do I verify the health of the SCSI
adapters when the storage server
is up?
If you do not see the appropriate driver for your SCSI
adapter, it may not have been loaded properly or it may
have been unloaded.
Once it is determined that the SCSI adapter and driver are
properly installed, the next step is to check to see if the
individual SCSI devices are accessible on the SCSI bus. To
check to see what devices are recognized by the storage
server, execute the following command on a CDP/NSS
Server.
cat /proc/scsi/scsi
This command displays the SCSI devices attached to the
storage server. For example, you will see something similar
to the following:
[0:0:0:0]  disk  3ware    Logical Disk 0   1.2    /dev/sda
[0:0:1:0]  disk  3ware    Logical Disk 1   1.2    /dev/sdb
[2:0:1:0]  disk  IBM-PSG  ST318203FC !#    B324   -
[2:0:2:0]  disk  IBM-PSG  ST318203FC !#    B324   -
[2:0:3:0]  disk  IBM-PSG  ST318304FC !#    B335   -
If the operating system cannot see a device, it may not have
been installed properly or it may have been replaced while
the storage server was running. If the Server was not
rebooted, Linux will not recognize the drive because it does
not have plug-and-play capabilities.
How do I replace a physical disk?
Remove the SCSI device from the Linux OS by executing:
echo "scsi remove-single-device x x x x" > /proc/scsi/scsi
(where x x x x stands for the A C S L numbers: Adapter,
Channel, SCSI ID, and LUN number.)
Then execute the following to re-add the device so that
Linux can recognize the drive:
echo "scsi add-single-device x x x x" > /proc/scsi/scsi
(where x x x x stands for the A C S L numbers: Adapter,
Channel, SCSI ID, and LUN number.)
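For example, assuming the device sits at adapter 2, channel 0, SCSI ID 1, LUN 0
(as in the listing above), the remove and re-add commands would be:
echo "scsi remove-single-device 2 0 1 0" > /proc/scsi/scsi
echo "scsi add-single-device 2 0 1 0" > /proc/scsi/scsi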
Question
How do I ensure that the SCSI drivers
are loaded on a Linux SAN Client?
Answer
To ensure that the SCSI drivers are loaded on a Linux
machine, type the following command for Turbo Linux:
modprobe <SCSI card name>
For example: modprobe aic7xxx
For Caldera Open Linux, type: insmod scsi_mod
What if I have LUNs greater than
zero?
By default, Linux will not automatically discover devices with
LUNs greater than zero. You must either manually add
these devices or edit your modprobe.conf file to
automatically scan them. To do this:
1. Type the following command to edit the modprobe.conf
file: vi /etc/modprobe.conf
2. If necessary, add the following line to modprobe.conf:
options scsi_mod max_luns=x
where x is the LUN number that you want the server to
scan up to.
3. After exiting from vi, make a new image file.
mkinitrd newimage.img X
where 'X' is the kernel version (such as 2.4.21-IPStor)
and newimage can be any name.
4. In /boot/grub/grub.conf, add a new entry that points to the new .img file you
created in the step above and make this entry your default (an example entry
is shown at the end of this answer).
5. Save and close the file.
6. Reboot the machine so that the scan will take place.
7. Verify that all LUNs have been scanned by typing: cat
/proc/scsi/scsi
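As a hypothetical illustration only (the title, kernel path, and root device are
assumptions and must match your system), the new grub.conf entry might look like:
title CDP/NSS (max_luns)
    root (hd0,0)
    kernel /vmlinuz-2.4.21-IPStor ro root=/dev/sda2
    initrd /newimage.img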
Failover
Question
Answer
How can I verify the health status of a
server in a failover configuration?
You can verify the health status of a server by connecting to
the server via SSH using the heartbeat address, and
running the sms command.
Fibre Channel Target Mode and storage
Question
What is VSA?
Answer
Some storage devices (such as EMC Symmetric
storage controller and older HP storage) use VSA
(Volume Set Addressing) mode. This addressing
method is used primarily for addressing virtual
buses, targets, and LUNs.
If your client requires VSA to access a broader
range of LUNs, you must enable it for the client.
This can be done via the Fibre Channel Client
Properties screen by selecting the Options tab.
Incorrect use of VSA can lead to problems seeing
the LUNs (disks) at the HBA level. If the HBA
cannot see the disks, the storage server is not able
to access and manage them. This is true both ways:
(1) the storage requires VSA, but it is not enabled
and (2) the storage does not use VSA, but it is
enabled.
For upstream, you can set VSA for the client at the
time it is created, or you can modify the setting
afterwards by right-clicking on the client.
Question
What is Persistent binding?
Answer
Persistent binding is automatically configured for all
QLogic HBAs connected to storage device targets
upon the discovery of the device (via a Console
physical device rescan with the Discover New
Devices option enabled). However, persistent
binding will not be SET until the HBA is reloaded.
You can reload HBAs using the IPStor start
hba or IPStor restart all commands.
The Console will display the Persistent Binding Tab
for QLogic Fibre Channel HBAs even if the HBAs
were not loaded using those commands. In
addition, you will not be able to enable Fibre
Channel target mode on those HBAs. To resolve
this, load the driver using the IPStor start hba
or IPStor restart all commands.
How can I determine the WWPN of my Client?
There are a couple of methods to determine the
WWPN of your clients:
1. Most Fibre Channel switches allow
administration of the switch through an Ethernet
port. These administration applications have
utilities to reveal or allow you to change the
following: Configuration of each port on the
switch, zoning configurations, the WWPNs of
connected Fibre Channel cards, and the current
status of each connection. You can use this
utility to view the WWPN of each client
connected to the switch.
2. When starting up your client, there is usually a
point at which you can access the BIOS of your
Fibre Channel card. The WWPN can be found
there.
3. The first time a new client connects to the
storage server, the following message appears
on the server screen:
FSQLtgt: New Client WWPN Found: 21
00 00 e0 8b 43 23 52
Question
Is ALUA supported?
Answer
Yes, Asymmetric Logical Unit Access (ALUA) is
fully supported for both targets and initiators.
Upstream: ALUA support is included for
QLogic Fibre Channel and iSCSI targets
with implicit mode only.
Downstream: ALUA support is included for
QLogic Fibre Channel and iSCSI initiators
with explicit or implicit modes.
Replication
Question
Is replication supported between
version 7.00 and earlier versions of
CDP/NSS?
Answer
The following replication matrix will help you determine
which versions are supported for replication.
FalconStor Management Console
Question
Answer
Why am I getting an error while attempting to
install the FalconStor Management Console?
If you experience an error installing the FalconStor
Management Console, select the Install Windows
Console link again and select Save Target or Save
link in the browser. Then right-click installation
package name and select Properties. In the
Program Compatibility tab, check Run this program
as administrator.
Now that I have installed the console, why
won't it launch?
The console might not launch under the following
conditions:
Systems with display settings configured to use
16 colors.
The install path contains characters such as !, %,
{, }.
Font specified in font.properties not found
message is displayed. This indicates that the JDK
font.properties file is not properly set for the Linux
operating system. To fix this, change the
font.properties file to use the correct symbol font
name by replacing all lines containing -symbol-medium-r-normal--*-%d-*-*-p-*-adobe-fontspecific
with --standard symbols l-medium-r-normal--*-%d-*-*-p-*-urw-fontspecific.
The console needs to be run from a directory with
write access. Otherwise, the host name
information and message log file retrieved from
the storage server will not be able to be saved to
the local directory. As a result, the console will
display event messages as numbers and console
options will not be able to be saved.
iSCSI Downstream Configuration
Question
Does the CDP/NSS software iSCSI initiator
have a target limitation? Will I have the same
limitation when using a hardware iSCSI
initiator?
Answer
The CDP/NSS software iSCSI initiator has a
limitation of 32 targets.
When using a hardware iSCSI initiator you will not
have this limitation.
Question
Answer
Why is there a 32 target limitation when using
the software iSCSI initiator?
The reason for this limitation is that when the
software iSCSI initiator logs into a target it creates a
new SCSI host per iSCSI target to which it is
connected.
How can I get more information on properly
configuring my CDP/NSS appliance to use
dedicated iSCSI downstream storage using an
iSCSI initiator (software HBA)?
Refer to Configuring iSCSI software initiator for
details regarding the requirements and procedures
needed to properly configure a CDP/NSS device to
use dedicated iSCSI downstream storage using an
iSCSI initiator (software HBA).
How can I get more information on properly
configuring my CDP/NSS appliance to use
dedicated iSCSI downstream storage using a
hardware HBA?
Refer to Configuring iSCSI hardware HBA for
details regarding the requirements and procedures
needed to properly configure a CDP/NSS device to
use dedicated iSCSI downstream storage using a
hardware iSCSI HBA.
Which HBAs can I use on my NSS or CDP
appliance?
Only QLogic iSCSI HBAs are currently supported
on a CDP or NSS appliance.
What utility can I use to configure the iSCSI
HBAs on my NSS or CDP appliance and
where can I get it?
The QLogic "iscli" (SANSurfer CLI) utility is
provided on the appliance to configure the iSCSI
HBA's.
The QLogic SANSurfer CLI is located at "/opt/
QLogic_Corporation/SANsurferiCLI/". To configure
the HBA, run "iscli" from the path as shown below:
[root@demonstration ~]# /opt/
QLogic_Corporation/SANsurferiCLI/iscli
Does the hardware initiator require any special
configuration for multipath support?
The hardware initiator does not require any special
configuration for multipath support. The only
configuration required is to connect multiple HBA
ports to a downstream iSCSI target. The driver used
for the QLogic iSCSI HBA is specially handled by
CDP/NSS for multipath support.
Power control option
Question
What causes a failure to communicate with a
power control device?
Answer
Failure to communicate with your power control
devices may be caused by one of the following
reasons:
Authentication error (password and/or username
is incorrect)
Network connectivity issue
Server power cable is unplugged
Wrong information used for power control device
such as incorrect IP
Protecting data in a Windows environment
Question
How do I protect my data in a windows
environment?
Answer
FalconStor DiskSafe for Windows protects
Windows application servers, desktops, and laptops
(referred to as hosts) by copying the local disks or
partitions to a mirror: another local disk or a
remote virtual disk managed by a storage server
application such as CDP.
Refer to the DiskSafe User Guide for further
information.
Protecting data in a Linux environment
Question
How do I protect my data in a Linux
environment?
Answer
FalconStor DiskSafe for Linux is a disk mirroring
backup and recovery solution designed to protect
data from disaster or accidental loss on the Linux
platform. Local disks and remote virtual disks
managed by a storage server application such as
CDP can be used for protection. However, features
such as snapshots are available only when the mirror
disk is a virtual CDP VA disk. Linux LVM logical
volume protection is also supported by DiskSafe.
Refer to the DiskSafe User Guide for further
information.
Protecting data in an AIX environment
Question
How do I protect my data in an AIX
environment?
Answer
FalconStor provides AIX scripts to simplify and
automate the protection and recovery process of
logical volumes on AIX platforms. Once you have
prepared the AIX host machine, you can:
Install AIX FalconStor Disk ODM Fileset
Install the AIX SAN Client and Filesystem Agent
Download and configure the protection and
recovery scripts for AIX LVM.
Refer to the FalconStor Knowledge Base for
additional information.
Protecting data in an HP-UX environment
Question
Answer
How do I protect my servers/data in an HP-UX
environment?
Protecting your servers in an HP-UX environment
requires that you establish a mirror relationship
between the HP-UX (LVM and VxVM) volume
group's logical volumes and the mirror LUNs from
the CDP/NSS appliance.
To protect your data:
Install the HP-UX file system Snapshot Agent
Confirm that the package installation was
successful by listing system installed packages:
swlist | grep VxFSagent
Authenticate the client to the storage server by
running ipstorclient monitor.
Use the FalconStor-provided HP-UX scripts to
simplify and automate the protection and
recovery process of logical volumes on HP-UX
platforms. Download these scripts to configure
the protection and recovery scripts for HP-UX
LVM.
Use the ssh_setup script to create an ssh public/
private key pair between the HP-UX host and the CDP
server.
Refer to the FalconStor Knowledge Base for
additional information.
Logical resources
The following table describes the icons that are used to show the status of logical
resources:
Icon
Description
This icon indicates a warning, such as:
Virtual device offline (or has incomplete segments)
Mirror is out of sync
Mirror is suspended
TimeMark rollback failed
Replication failed
One or more supporting resources is not accessible (SafeCache,
CDP, Snapshot resource, HotZone, etc.)
This icon indicates an alert, such as:
Replica in disaster recovery state (after forcing a replication
reversal)
Cross-mirror need to be repaired on the virtual appliance
Primary replica is no longer valid as a replica
Invalid replica
If you see one of these icons, check through the tabs to determine the problem.
Network connectivity
Storage servers, clients and consoles are all attached to one another through an
Ethernet network. In order for all of the components to work properly together, their
network connectivity should be configured properly.
To test connectivity between machines (servers, clients, and consoles), there are
several things that can be done. This example shows a user testing connectivity
from a client or console to a server named knox.
To test connectivity from one machine to the storage server, you can execute the
ping utility from a command line prompt. For example, if your storage server is
named knox, execute:
ping knox
If the storage server is running and attached to the network, you should receive a
response like this:
Pinging knox [10.1.1.99] with 32 bytes of data:
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Ping statistics for 10.1.1.99:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
If the Server is not available, you may get a response like this:
Pinging knox [10.1.1.99] with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.
Ping statistics for 10.1.1.99:
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
This means that either the machine is not running, or is not properly attached to the
network. If you get a response like this:
Unknown host knox.
This means that your machine cannot find the storage server by name. There could
be two reasons for this. First, it may be that the storage server is not running or
connected to the network, and therefore has not registered itself to the name service
on your network.
Second, it may be that the storage server is running, but is not known by name,
possibly because the name service, such as DNS, is not running, or your machine is
not referring to the proper name service.
Refer to your network's reference material on how to configure your network's name
service.
If your storage server is available, you can execute the following command on the
Server to verify that the CDP/NSS ports are both up:
netstat -a | more
Both ports 11576 and 11577 should be listed. In addition, port 11576 should be
listening.
Linux SAN
Client
You may see the following message when executing ./IPStorclient start or
./IPStorclient restart if the Linux Client cannot locate the storage server on the
network:
Creating IPStor Client Device [FAILED]
Failed to connect to Storage Server 0, -1
To resolve, restart the services on both the storage server and the Linux Client.
Jumbo frames support
To determine if a machine supports jumbo frames, use the ping utility from a
command line prompt to ping with the packet size. If your storage server is named
knox, execute one of the following commands:
On Linux systems:
ping -s 8000 knox
On Windows 2000 systems:
ping -l 8000 knox
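Note that a large ping can appear to succeed even without jumbo frame support if
the packets are fragmented along the way. As an additional check (these are
standard ping options, not specific to CDP/NSS), you can set the don't-fragment
flag so the test fails when jumbo frames are not supported end to end:
On Linux systems: ping -M do -s 8000 knox
On Windows systems: ping -f -l 8000 knox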
Diagnosing client connectivity issues
Problems connecting clients to their SAN resources may occur due to several
causes, including network configuration and storage server configuration.
Check the General Info tab for the Client in the Console to see if the Client
has been authenticated. In order for a Client to be able to access storage,
you must establish a trusted relationship between the Client and Server and
you must assign storage resources to the Client.
If you make any Client configuration changes in the Console, you must
restart the Client in order for the changes to take effect.
Clients may not achieve the maximum throughput when writing over gigabit.
If you are noticing slower than expected speeds when writing over gigabit,
you can do the following:
Turn on TCP window scaling on the storage server:
/proc/sys/net/ipv4/tcp_window_scaling
1 is on. 0 is off.
On Windows, go to Run and type regedit. Add the following:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"Tcp1323Opts"=dword:00000001
"GlobalMaxTcpWindowSize"=dword:01d00000
"TcpWindowSize"=dword:01d00000
To see if the storage server client has connectivity to the storage server over the
Ethernet network, refer to Network connectivity.
Windows Client
Problem
The SAN Client hangs when the storage
containing its virtual disk goes offline.
Cause/Resolution
To prevent the CDP/NSS Client from hanging when
there is a storage problem on the storage server,
change the default I/O error response sense key
from medium error to unit not ready by running
the following command:
echo "IPStor set-parameter default-ioerror-sense-key 2 4 0" > /proc/IPStor/IPStor
Windows client debug information
You can configure the amount of detail about the storage server Client's activity and
performance that will be written to the Windows Event Viewer.
In addition, you can enable a system tracer. When enabled, the trace information will
be logged to a file called FSNTrace.log located in the \FalconStor\IPStor\Logs
directory.
1. To filter the events and/or configure the tracer, select Tools --> Options.
2. To filter the events being written to the Event Viewer, select one of the levels in
the Log Level field.
Note that regardless of which level you choose, there are several events that will
always be written to the Event Viewer (driver not loaded, service failed to start,
service started, service stopped).
Five levels are available for use:
Off - No activity will be recorded.
Errors only - Only errors will be recorded.
Brief - Errors and warnings will be recorded.
Detailed (Default) - Errors, warnings, and informational messages will be
recorded.
Trace - This is the highest level of activity tracing. Debugging messages
will be written to the trace log. In addition, all errors, warnings, and
informational messages will be recorded in the Event Viewer.
3. If you select the Trace level, specify which portions of the storage server Client
will be traced.
Warning: Adjusting these parameters can impact system performance. They
should not be adjusted unless directed to do so by FalconStor technical support.
Clients with iSCSI protocol
Problem
Cause/Resolution
(iSCSI protocol) After rebooting, the client
loses its file shares.
This is a timing issue. To reconnect to shares:
Open a command prompt and type the following for
commands:
net stop browser
net stop server
net start server
net start browser
You may want to create a batch file to do this.
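For example, a batch file along the following lines (the file name
reconnect_shares.bat is only a suggestion) can be run after the client boots:
@echo off
rem Re-establish file shares after an iSCSI remote boot
net stop browser
net stop server
net start server
net start browser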
(iSCSI protocol) Intermittent iSCSI
disconnections on the client.
or
The client cannot see the disk.
The Microsoft iSCSI initiator has a default retry
period of 60 seconds. Changing it to 300 seconds
will sustain the disk for five minutes during network
disconnection events, meaning applications will not
be disrupted by temporary network problems (such
as during a failover or recovery).
This setting is changed through the registry.
1. Go to Start --> Run and type regedit.
2. Find the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-xxxxxxxxx}\
<iscsi adapter interface>\Parameters\
where iscsi adapter interface corresponds to the
adapter instance, such as 0000, 0001, .....
3. Right-click Parameters and select Export to
create a backup of the parameter values.
4. Double-click MaxRequestHoldTime.
5. Pick Decimal and change the Value data to 300.
6. Click OK.
7. Reboot Windows for the change to take effect.
The Microsoft iSCSI initiator fails to connect to
a target.
The Microsoft iSCSI initiator can only connect to an
iSCSI target if the target name is no longer than 221
characters. It will fail to connect if the target name is
longer than this.
Clients with Fibre Channel protocol
Problem
An initiator times out with the following
message:
FStgt: SCSI command aborted.
Cause/Resolution
Certain Fibre Channel hosts and/or HBAs are not
as aggressive as others, which can affect the
balancing of each host's pending commands. We
recommend that the value for the Execution Throttle
(QLogic) or Queue Depth (Emulex) for all client
initiators using the same target(s) not exceed 240.
If an initiator's Execution Throttle or Queue Depth is
configured too high, it could result in slow response
time from the storage subsequently causing the
initiator to timeout.
To resolve this issue, decrease the value of the
initiator's Execution Throttle or Queue Depth.
Linux SAN Client
Problem
You see the following message when viewing
the Client's current configuration:
On command 12 data received 36 is not
equal to data expected 256.
Cause/Resolution
This is an informational message and can be
ignored.
You see the following message when executing
./IPStorclient start or ./IPStorclient restart:
Creating IPStor Client Device [FAILED]
Failed to connect to Storage Server 0, -1
The SAN Client cannot locate the storage server on
the network. To resolve this, restart the services on
both the storage server and the Linux Client.
You see the following message continuously:
SCSI: Aborting due to timeout: PID
######..
You cannot un-assign devices while a Linux client is
accessing those devices (i.e. mounted partitions).
NetWare SAN Client
The following tools are available for troubleshooting problems with the NetWare
SAN Client:
ISCMD command log: Run the ISCMD command with option DEBUG=2.
The debugging message will be written to the log file ISCMD.LOG located in
the directory SYS:\SYSTEM.
For example: ISCMD Start Server=serverIPAddress Debug=2
IPStor SAN client trace log: Run the command SANDRV +debug +ip3 on
the NetWare System Console. The trace log will be written to the log file
TRACELOG.XML located in the directory SYS:\SYSTEM.
Problem
Cause/Resolution
Perform an iscmd addserver server=
and it shows that it connects and is
communicating, but when an iscmd start
server= is performed, an error message is
received that the server is not authenticated
and it will NOT start. Performing an iscmd
showserver shows it is not connected to the
server.
1. Stop the client service with # sanoff
2. On the NetWare server, run # regedit to enter
the registry editor.
3. # cd Software
4. # rd FalconStor
5. Go to the FalconStor Management Console to
unassign the devices from the client and delete the
client.
6. Add the client to the server with the # iscmd
addserver command. If the client cannot be
authenticated even when a correct password is used,
reset the password with the passwd command in
Linux.
7. Assign devices to the client using the exact same
LUN order as before.
8. Connect to the server from the NetWare client.
Storage Server
Storage server X-ray
The X-ray feature is a diagnostic tool used by your Technical Support team to help
solve system problems. Each X-ray contains technical information about your
server, such as server messages and a snapshot of your server's current
configuration and environment. You should not create an X-ray unless you are
requested to do so by your Technical Support representative.
To create the X-ray file for multiple servers:
1. Right click on the Servers node in the console and select X-ray.
A list of all of your storage servers displays.
2. Select the servers for which you would like to create an X-ray and click OK.
If the server is not listed, click the Add button to add the server to the list.
3. Select the X-ray options based upon the discussion with your Technical Support
representative and set the file name.
One of the options lets you filter out and include only storage server messages
from the System Event Log.
To create an X-ray for an individual server:
1. Right click on the server in the console and select X-ray.
The X-ray options screen (shown above) displays.
2. Select the X-ray options based upon the discussion with your Technical Support
representative and set the file name.
Failover
Problem
Cause/Resolution
After restarting failover servers, CDP/NSS
starts but will not come back online.
This can happen if:
There was a communication problem (i.e., a network
error) between the servers.
Both failover servers were down and then only
one is brought up.
Failover was suspended and you restarted one of
the servers.
To resolve this:
1. At a Linux command prompt, type sms to
determine if the system is in a ready state.
2. As soon as it becomes ready, type the following:
IPStorsm.sh recovery
After failover, when you connect to the newly promoted primary server, the failover status is
not correct.
You are connecting to the server with an IP address
that is not part of the failover configuration or with
the heartbeat IP address and you are seeing the
status from before the failover. You should only use
those IP addresses that are configured as part of
the failover configuration to connect to the Server in
the Console.
You need to perform recovery on the near-line
server when it is set up as a failover pair and
is in a failed state.
When performing a near-line recovery and the
Near-Line server is setup in a failover configuration,
always add the first and second nodes of the
failover set to the primary for recovery.
Select the proper initiators for recovery
Assign both nodes back to the primary for
recovery.
Note: There are cases where the server WWPN
may not show up in the list since the machine
may be down and the particular port is not logged
into the switch. In this situation, you must know the
complete WWPN of your recovery initiator(s). This
is important in cases where you need to manually
enter the WWPN into the recovery wizard to avoid
any adverse effects during the recovery process.
A server failure and failover occurred, and the
standby initiator assumed the failed server's target
WWPN, losing the
connection to the near-line mirror disk.
When adding a near-line mirror to a device, make
sure you do not select initiators that have already
been set as a standby initiator during failover setup.
Doing so will cause loss of connection to the mirror
disk in the event of a failover, causing the mirror to break.
Problem
Cause/Resolution
Failover partner fails to take over when
primary network connection associated with
iSCSI client fails.
The partner server has a network connection failure
on the same subnet preventing it from successfully
taking over.
Failover partner fails to take over when
primary server fails.
You can manually trigger failover from the console
by right-clicking on the server and selecting Failover
--> Start take over <server name>.
The IP address for the primary server conflicts
with the secondary server's IP address.
For example, both Storage Cluster Interlink
ports share the same IP address.
The primary server's network interface is using an
IP address that is being used by the same interface
on the partner server.
Check the IP addresses being used by your
servers. Modify the IP address on one of the
servers by right-clicking on the server and selecting
System Maintenance --> Configure Network. Refer
to the Network configuration section for details.
The Storage Cluster Interlink connection is
broken in a failover pair and both servers
cannot be synchronized.
Use the Sync Standby Devices menu item to
manually synchronize both servers' metadata after
the Storage Cluster Interlink is reconnected.
Failover has been suspended on server B and
server A is restarted. After the server is restarted,
it does not come up automatically, but is in a
ready state.
Attempt to login via the console and bring up the
server. Type YES in the popup window that displays
to forcefully bring up the server.
Failover is suspended on server A and server
B for maintenance. Both servers are powered
off. After the maintenance period both servers
are restarted but they do not come up
automatically.
Cross-mirror failover on a virtual appliance
Problem
Cause/Resolution
During cross-mirror configuration, the system
reports a mismatch of physical disks on the
two appliances even though you are sure that
the configuration of the two appliances is
exactly the same, including the ACSL, disk
size, CPU and memory.
An iSCSI initiator must be installed on the storage
server and is included on FalconStor cross-mirror
appliances. If you are not using a FalconStor cross-mirror
appliance, you must install the iSCSI initiator
RPM from the Linux installation media before
running the IPStorinstall installation script. The
script will update the initiator.
Replication
Problem
Replication is set to use compression but
replication fails. You may see error messages
in the syslog like this:
__alloc_pages: 4-order allocation failed
(gfp=0x20/0)
IOCORE1 expand_deltas, cannot allocate for
65536 bytes
IOCORE1 write_replica, error expanding
deltas, return -EINVAL
IOCORE1 replica_write_parser, server
returned -22
IOCORE1 replica_read_post, stopping
because status is -22
Replication Primary server cannot connect to
the Replica server due to different TCP
Protocols.
The primary server console event log will print
messages like shown below:
Mar 12 10:56:28 fs18626 kernel: MTCP2
ctrl hdr's signature is mismatch with
00000000, please check the network
protocol(MTCP2).
Cause/Resolution
Compression requires 64K of contiguous memory.
If the memory in the storage server is very
fragmented, it will fail to allocate 64K. When this
happens, replication will fail.
Check your replication MTCP version from the
FalconStor Management Console by clicking on
the server name and then the General tab.
Both servers should have the same MTCP version
(either 1 or 2).
If you see two different versions, contact Technical
Support.
You perform a role reversal and get the
following error:
"The group for replica disks on
the target server is no longer
valid. Reversal cannot proceed".
If you attempt to perform a role reversal on a
resource that belongs to a non-replication group,
the action will fail. To resolve this issue, remove the
resource from the group and perform the role
reversal.
Replication fails.
Do not initiate a TimeMark copy while replication is
in progress. Doing so will result in the failure of both
processes.
Replication fails for a group member.
If replication fails for one group member, it is
skipped and replication continues for the rest of the
group. In order for the group members that were
skipped to have the same TimeMark on their replicas,
you will need to remove them from the group, use
the same TimeMark to replicate again, and then re-join the group.
TimeMark
Problem
Cause/Resolution
TimeMark rollback of a raw device fails.
Do not initiate a TimeMark rollback to a raw device
while data is currently being written to the raw
device. The rollback will fail because the device will
fail to open.
TimeMark copy fails.
Do not initiate a TimeMark copy while replication is
in progress. Doing so will result in the failure of both
processes.
SafeCache
Problem
Cause/Resolution
A physical resource has failed (for example,
the disk was unplugged or removed) but the
resources in the SafeCache group are not
marked offline.
If a physical resource has failed prior to the cache
being flushed, the resources in the SafeCache
group will not be marked offline until after a rescan
has been performed.
The primary resource has failed and you
attempt to disable the cache, but the cache is
unable to flush data back to the primary
resource. A dialogue box displays N/A as the
number of seconds needed to flush the cache.
The cache is unable to flush the data due to a
problem with data transfer from the cache to the
primary resource.
The SafeCache resource has failed and you
attempted to resume the SafeCache. The
resume appears to be successful, however,
the client cannot write to the virtual device.
The client can only write to the virtual device when
the SafeCache resource is restored. However, the
SafeCache remains in a suspended state. You
should suspend and resume the cache from the
Console to return the cache status to normal and
operational.
Command line interface
Problem
Failed to resolve storage server to a valid IP
address.
Error: 0x09022004
Cause/Resolution
The storage server hostname is not resolvable on
both the client side and the server side. Add the
server name to the hosts file to make it resolvable
or use the IP address in commands.
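For example, using the server name and address from the connectivity examples
earlier in this chapter (illustrative values only), the entry in /etc/hosts on Linux, or in
%SystemRoot%\system32\drivers\etc\hosts on Windows, would be:
10.1.1.99    knox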
Service-Enabled Devices
Problem
Cause/Resolution
An unassigned physical device does not show
the Service-enabled Device option when you
try to set the device category.
If you see that the GUID for this device is
fa1cff00..., the device cannot be supported as a
Service-enabled Device. This is because the device
does not support the mandatory SCSI page codes
that are used to determine the actual GUID for the
device.
A Service-enabled device (SED) is marked
"Incomplete" on the primary server and the
client that normally connects to the SED
resource has lost access to the disk.
In a failover configuration, you should not change
the properties of a SED used by a primary server to
"Unassigned" on the secondary server. If this
occurs, you should do the following:
1. Delete the offline SAN resource.
2. Service-enable the physical disk.
3. Re-create the SAN resource.
4. Re-assign the SAN resource back to the client.
Error codes
The following table contains a description of some common error codes.
CDP/NSS Error Codes
Code
Type
Text
Probable Cause
Suggested Action
1005
Error
Out of kernel resources.
Failed to get major number
for the SCSI device.
Too many Linux device
drivers installed.
Type cat /proc/devices for a list
and see if any can be removed.
1006
Error
Failed to allocate memory.
Memory leak from
various modules in the
Linux OS, most likely
from network adapter or
other third party
interface drivers.
Check knowledge base for
known memory leak problems in
various drivers.
1008
Error
Failed to set up the
network connection due to
an error in SANRPCListen.
Another application has
port UDP 11577 open.
Confirm using netstat -a and
then remove or reconfigure the
offending application.
1016
Critical
Primary virtual device
[Device number] has failed
and mirror is not in sync.
Cannot perform swap
operation.
Physical device
associated with primary
virtual device may have
had a failure.
Check physical device status
and all connections, including
cables and switches, and
downstream driver log.
1017
Critical
Secondary virtual device
[Device #] has failed.
A mirror device has
failed.
Check drive, cable, and adapter.
1022
Error
Replication has failed for
virtual device [Device
number] -- [Device #].
The network might have
a problem.
Check connectivity between
primary and replica, including
jumbo frame configuration if
applicable.
1023
Error
Failed to connect to
physical device [Device
number]. Switching to alias
to [ACSL].
An adapter/cable might
have a problem.
Check for a loose or damaged
cable on the affected drive.
1030
Error
Failed to start replication -- replication is already in
progress for virtual device
[Device number].
Only one replication at a
time per device is
allowed.
Try later.
1031
Error
Failed to start replication -- replication control area not
present on virtual device
[Device number].
The configuration might
not be valid.
Check configuration, restart the
console, or re-import the
affected drive.
1032
Error
Failed to start replication -- replication control area
has failed for virtual device
[Device number].
A drive may have failed.
Check the physical drive for the
first virtual drive segment.
1033
Error
Failed to start replication -- a snapshot is in progress
for virtual device [Device
number].
There is a raw device
backup or snapshot
copy in progress.
Do not open raw devices or
perform snapshot copy when
replication is occurring.
1034
Error
Replication failed for
virtual device [Device
number] -- the network
transport returned error
[Error].
The network might have
a problem.
Check connectivity between the
primary and replica, including
jumbo frame configuration if
applicable.
1035
Error
Replication failed for
virtual device [Device
number] -- the local disk
failed with error [Error].
There is a drive failure.
Check all physical drives
associated with the virtual drive.
1036
Error
Replication failed for
virtual device [Device
number] -- the local
snapshot used up all of the
reserved area.
Snapshot reserved area
is insufficient on the
primary server.
Add additional snapshot
reserved area.
1037
Error
Replication failed for
virtual device [Device
number] -- the replica
snapshot used up all of the
reserved area.
Snapshot reserved area
is insufficient on the
primary server.
Add additional snapshot
reserved area.
1038
Error
Replication failed for
virtual device [Device
number] -- the local server
could not allocate memory.
Memory is low.
Check system memory usage.
1039
Error
Replication failed for
virtual device [Device
number] -- the replica disk
failed with error [Error].
Replication failed
because of the
indicated error.
Based on the error, remove the
cause, if possible.
1040
Error
Replication failed for
virtual device [Device
number] -- failed to set the
replication time.
The configuration might
not be valid.
Check the ipstor.dat file on the
replica server.
1043
Error
A SCSI command
terminated with a nonrecoverable error condition
that was most likely
caused by a flaw in the
medium or an error in the
recorded data. Please
check the system log for
additional information.
This is most likely
caused by a flaw in the
media or an error in the
recorded data.
Check the system log for
additional information. Contact
the hardware manufacturer for a
diagnostic procedure.
1044
Error
A SCSI command
terminated with a nonrecoverable hardware
failure (for example,
controller failure, device
failure, parity error, etc.).
Please check the system
log for additional
information.
This is a general I/O
error that is not media
related. This can be
caused by a number of
potential failures,
including controller
failure, device failure,
parity error, etc.
Check the system log for
additional information. Contact
the hardware manufacturer for a
diagnostic procedure.
1046
Error
Rescan replica has failed
for virtual device [Device
number] -- the local device
failed with error.
There is a drive failure.
Check all physical drives
associated with the virtual drive.
1047
Error
Rescan replica has failed
for virtual device [Device
number] -- the replica
device failed with error.
There is a drive failure.
Check all physical drives
associated with the virtual drive.
1048
Error
Rescan replica has failed
for virtual device [Device
number] -- the network
transport returned error.
Network problem.
Check connectivity between
primary and replica, including
jumbo frame configuration if
applicable.
1049
Error
Rescan replica cannot
proceed -- replication
control area not present on
virtual device [Device #].
The configuration might
not be valid.
Check configuration, restart GUI
Console, or re-import the
affected drive.
1050
Error
Rescan replica cannot
proceed -- replication
control area has failed for
virtual device [Device #].
There is a drive failure.
Check the physical drive for the
first virtual drive segment.
1051
Error
Rescan replica cannot
proceed -- a merge is in
progress for virtual device
[Device number].
A merge is occurring on
the replica server.
No action is required. A retry will
be performed when the retry
delay expires.
1052
Error
Rescan replica failed for
virtual device [Device
number] -- replica status
returned.
The configuration might
not be valid.
Check the ipstor.dat file on the
replica server.
1053
Error
Rescan replica cannot
proceed -- replication is
already in progress for
virtual device [Device #].
Only one replication is
allowed at a time for a
device.
Try again later.
1054
Error
Replication cannot
proceed -- a merge is in
progress for virtual device
[Device number].
A merge is occurring on
source.
No action is required. A retry will
be performed when the retry
delay expires.
1055
Error
Replication failed for
virtual device [Device
number] -- replica status
returned [Error].
The configuration might
not be valid.
Check the ipstor.dat file on the
replica server.
1056
Error
Replication role reversal
failed for virtual device
[Device number] -- the
error code is [Error].
The configuration might
not be valid for
replication role reversal.
Check the ipstor.dat file on the
replica server.
1059
Error
Replication failed for
virtual device [Device
number] -- start replication
returned [Error].
The configuration might
not be valid.
Check the ipstor.dat file on the
replica server.
1060
Error
Rescan replica failed for
virtual device [Device
number] -- start scan
returned [Error]
The configuration might
not be valid.
Check the ipstor.dat file on the
replica server.
1061
Critical
I/O path failure detected.
Alternate path will be used.
Failed path (A.C.S.L):
[ACSL]; New path
(A.C.S.L): [ACSL].
An alias is in use due to
primary path failure.
Check the primary path from the
server to the physical device.
1066
Error
Replication cannot
proceed -- snapshot
resource area does not
exist for remote virtual
device [Device ID].
The snapshot resource
for the replica is no
longer there. It may
have been removed
accidentally.
From the Console, log into the
replica server and check the
state of the snapshot resource
for the replica. If deleted
accidentally, restore it back.
1067
Error
Replication cannot
proceed -- unable to
connect to replica server
[Server name].
Either the network
connection is down or
the replica server is
down.
From the console, log into the
replica server and check the
state of the server at the replica
site. Determine and correct
either the network or server
problem.
1068
Error
Replication cannot
proceed -- group [Group
name] is corrupt.
The group configuration
is not consistent or is
missing.
Try to restart server modules or
recreate the group.
1069
Error
Replication cannot
proceed -- virtual device
[Device ID] no longer has
a replica or the replica
device does not exist.
The designated replica
device is no longer on
the replica server. Most
likely the replica drive
was either promoted or
deleted while the
primary server was
down.
Check the replica server first. If
the replica exists, the
configuration may be corrupted
and you need to call Technical
Support. If the drive was
promoted or deleted, you have
to remove replication from the
primary and reconfigure. If the
drive was deleted, create a new
replica. If the drive was
promoted, you can assign it
back as a replica but you have to
determine if new data was
written to the drive while it was
promoted, and decide if you
want to preserve the data. Once
assigned back as the replica, it
will be resynchronized with the
original primary drive.
1070
Error
Replication cannot
proceed -- replication is
already in progress for
group [Group name].
The snapshot group is
in the middle of
replication already. Only
one replication
operation can be
running at a given time
for each group.
Wait for the process to complete
or change the replication
schedule.
1071
Error
Replication cannot
proceed -- Remote vid %1
does not exist or is not a
replica device.
The replica was not
valid when replication
was triggered. The
replica might have been
removed without the
primary.
Remove the replication setup
from the primary and reconfigure
the replication.
1072
Error
Replication cannot
proceed -- missing a
remote replica device in
group [Group name].
One of the replica
drives in the snapshot
group is missing.
Replication must be
able to be performed for
the entire snapshot
group or it will not
proceed.
See 1069.
1073
Error
Replication cannot
proceed -- unable to open
configuration file.
Failed to open the
configuration file to get
replication
configuration, possibly
because the system
was busy.
Check system disk status.
Check system status.
1074
Error
Replication cannot
proceed -- unable to
allocate memory.
Memory allocation for
replication information
failed possibly because
the system was busy.
Check system status.
1075
Error
Replication cannot
proceed -- unexpected
error %1.
Replication failed with
the listed error.
Check system status.
1078
Error
Replication cannot
proceed -- mismatch
between our snapshot
group [Group name] and
replica server.
The Snapshot group in
the source server has
different virtual drives
than the replication
server. This may be due
to an altered
configuration when a
server was down.
This is a highly unusual situation.
The cleanest way to fix this is to
remove replication for devices in
the group, remove the group,
recreate the group, and
configure replication again.
1079
Error
Replication for group
[Group name] has failed
due to error on virtual
device [Device ID].
One or more virtual
drives in the group
failed during replication.
Check the log to determine the
nature of the error. In case of a
physical disk failure, the disk
must be replaced, and data must
be restored from the backup. In
case of a communication failure,
replication will continue when
the problem is resolved, and the
schedule starts again.
1080
Error
Replication cannot
proceed -- failed to create
TimeMark on virtual device
[Device ID].
The replication process
was not able to create a
snapshot. This may be
due to various causes,
including low system
memory, low or improper
configuration parameters for
automatic snapshot
resources, or depleted
physical storage.
Check for snapshot resource
issues, check the log for other
errors, and check the maximum
number of TimeMarks configured.
1081
Error
Replication cannot
proceed -- failed to create
common TimeMark for
group [Group name].
One of the virtual drives
in the group failed to
create snapshot for
replication. See 1080
for details.
See 1080.
1082
Warning
Replication for virtual
device [Device ID] has been
manually aborted by the user.
Replication was
stopped by the user.
None.
1083
Warning
Replication for group
[Group name] has been
manually aborted by the user.
Replication was
stopped by the user.
None.
1084
Warning
A SCSI command
terminated with a
recovered error condition.
Please check system log
for additional information.
This is most likely
caused by a flaw in the
media or an error in the
recorded data.
Check the system log for
additional information. Contact
the hardware manufacturer for a
diagnostic procedure.
1085
Error
HotZone for virtual device
[Device ID] has been auto-disabled due to an error.
Physical device failure.
Check physical devices
associated with HotZone.
1087
Error
Primary virtual device
[Device number] has
failed, swap to secondary.
The mirrored device
had a physical error so
a mirror swap occurred.
Check physical device.
1090
Warning
CDR resource %1 set to
delta mode -- replication
configuration changes,
scan, or replication
initiation happened.
Obsolete, replaced by
1096 and 1097
See 1096 and 1097.
1096
Warning
Replication for virtual
device %1 has set to delta
mode -- %2.
Replication for the
virtual device switched
to delta mode due to an
operation triggered by
the user, such as a
configuration change or
replica rescan, or due to
a replication I/O error,
out of space, out of
memory condition, etc.
Check device status for I/O error
or disk space usage error.
Increase memory or reduce the
concurrent activities for memory
or other types of errors.
1097
Warning
Replication for group %1
has set to delta mode -- %2.
{Affected members: %3}
Replication for the
virtual device switched
to delta mode due to an
operation triggered by
the user, such as a
configuration change or
replica rescan, or due to
a replication I/O error,
out of space, out of
memory condition, etc.
Check device status of the group
members for I/O error or disk
space usage error. Increase
memory or reduce the
concurrent activities for memory
or other types of errors.
1098
Error
Replication cannot
proceed -- Failed to get
virtual device delta
information for virtual
device %1.
Failed to get the delta of
the resource to replicate
possibly due to too
many pending
processes.
Retry later.
1099
Error
Replication cannot
proceed -- Failed to get
virtual device delta
information for virtual
device %1 due to device
offline.
Failed to get the delta of
the resource to replicate
due to the resource
being offline.
Check device status and bring
the device back online.
1100
Error
Replication cannot
proceed -- Failed to
communicate with replica
server to trigger replication
for virtual device %1.
Failed to connect to the
replica server or
exchange replication
information with the
replica server to start
replication for the virtual
device.
Check connectivity between the
primary and replica servers.
Check if the replica server is
busy. Readjust the schedule to
avoid too many operations from
occurring at the same time.
1101
Error
Replication cannot
proceed -- Failed to
communicate with replica
server to trigger replication
for group %1.
Failed to connect to the
replica server or
exchange replication
information with the
replica server to start
replication for the
group.
Check connectivity between the
primary and replica servers.
Check if the replica server is
busy. Readjust the schedule to
avoid too many operations from
occurring at the same time.
1102
Error
Replication cannot
proceed -- Failed to
update virtual device meta
data for virtual device %1.
Failed to update virtual
device metadata to start
replication possibly due
to a device access error
or the system being
busy.
Check virtual device status.
Check system status.
1103
Error
Replication cannot
proceed -- Failed to initiate
replication for virtual
device %1 due to server
busy.
Failed to start
replication for virtual
device because the
system was busy.
Check system status.
1104
Error
Replication cannot
proceed -- Failed to initiate
replication for group %1
due to server busy.
Failed to start
replication for group
because the system
was busy.
Check system status.
1201
Warning
Kernel memory is low. Add
more memory to the
system if possible. Restart
the host if possible.
Too many processes for
the current resources.
Add more memory to the system
if possible. Restart the host if
possible.
1203
Error
Path failed to trespass to
[Path].
All downstream storage
paths had failures.
Check storage status and path
connections.
1204
Error
Failed to add path group.
ACSL: [Path].
Downstream storage
path failure.
Check storage status and path
connections.
1206
Error
Failed to activate path:
[Path].
Downstream storage
path failure.
Check storage status and path
connections.
1207
Error
Critical path failure
detected. Path [Path] will
be removed.
Downstream storage
path failure.
Check storage status and path
connections.
1208
Warning
Path [Path] does not
belong to active path
group.
Tried to use a non-active path to access
storage.
Use only active paths.
1210
Warning
No valid path is available
for device [Device ID].
Downstream storage
path failure.
Check storage status and path
connections.
1211
Warning
No valid group is available.
Unexpected path
configuration.
Check path group configuration.
1212
Warning
No active path group
found. Storage
connectivity failure. Check
cables, switches and
storage system to
determine cause. GUID:
[GUID].
Storage connectivity
failure.
Check cables, switches and
storage system to determine
cause.
1214
Error
Failed to add path: [Path].
Downstream storage
path failure.
Check storage status and path
connections.
1215
Warning
CLARiiON storage path
trespassing.
Downstream storage
path failure or manual
trespass.
Check storage status and path
connections.
1216
Warning
T300 storage path
trespassing.
Downstream storage
path failure or manual
trespass.
Check storage status and path
connections.
1217
Warning
HSG80 storage path
trespassing.
Downstream storage
path failure or manual
trespass.
Check storage status and path
connections.
1218
Warning
MSA1000 storage path
trespassing.
Downstream storage
path failure or manual
trespass.
Check storage status and path
connections.
1230
Error
TimeMark [TimeMark]
cannot be created during
disk rollback. Time-stamp:
[Time].
Disk rollback is in
progress.
Wait until disk rollback is
complete.
3003
Error
Number of CCM
connections has reached
the maximum limit %1.
There are too many CCM
Consoles open.
Close the CCM GUI on
different machines.
3009
Error
CCM could not create a
session with client %1.
There may be a network issue
or not enough memory for
CCM module to create a
communication thread with the
client.
Check network
communication and client
access from the server; try
to restart the ccm module on
the server.
3010
Error
List of the clients cannot
be retrieved from the
server.
CCM module cannot get the
list of SAN clients by executing
internal CLI commands.
This is very unlikely to
happen; check the
executable iscli is present in
$ISHOME/bin.
3014
Error
CCM service cannot be
started.
CCM RPC service could not
be created.
Try to restart the ccm
module on the server.
3016
Error
User name or password
is longer than the
maximum limit %1.
The CCM user or password
string for connecting to the
server is too long.
Enter a string within the limit.
3017
Warning
The version information
of the message file
cannot be retrieved.
The Event Log message file is
missing.
This is very unlikely to
happen; check the existence
of $ISHOME/etc/msg/
english.msg.
3020
Error
CCM service cannot be
created on the server as
socket creation failed.
A TCP socket could not be
created or set for CCM
service.
This is very unlikely to
happen; try to restart the
ccm module on the server.
3021
Error
CCM service cannot be
created on the server as
socket settings failed.
A TCP socket option could not
be set for CCM service.
This is very unlikely to
happen; try to restart the
ccm module on the server.
3022
Error
CCM service cannot be
created on the server as
socket binding to port %1
failed.
Another process may be using
the same port number.
Identify the process using
the ccm port and stop it.
3023
Error
CCM service cannot be
created on the server as
TCP service creation
failed.
A TCP service could not be
created for the CCM service.
This is very unlikely to
happen; try to restart the
ccm module on the server.
3024
Error
CCM service cannot be
created on the server as
service registration
failed.
Binding CCM service to RPC
callback function failed when
CCM module started.
This is very unlikely to
happen; try to restart the
ccm module on the server.
7001
Error
Patch %1 failed -- environment profile is
missing in /etc.
Unexpected loss of
environment variables defined
in /etc/.is.sh on the server.
Check server package
installation.
7002
Error
Patch %1 failed -- it
applies only to build %2.
The server is running a
different build than the one for
which the patch is made.
Get the patch, if any, for your
build number or apply the
patch on another server that
has the expected build
number.
7003
Error
Patch %1 failed -- you
must be the root user to
apply the patch.
The user account running the
patch is not the root user.
Run the patch with the root
account.
7004
Warning
Patch %1 installation
failed -- it has already
been applied.
You tried to apply the same
patch again.
None.
7005
Error
Patch %1 installation
failed -- prerequisite
patch %2 has not been
applied.
A previous patch is required
but has not been applied.
Apply the required patch
before applying this one.
7006
Error
Patch %1 installation
failed -- cannot copy new
binaries.
Unexpected error on the
binary file name or path in the
patch.
Contact Tech Support.
7008
Warning
Patch %1 rollback failed -- there is no original file
to restore.
This patch has not been
applied or has already been
rolled back.
None.
7009
Error
Patch %1 rollback failed -- cannot copy back
previous binaries.
Unexpected error on the
binary file name or path in the
patch.
Contact Tech Support.
7010
Error
Patch %1 failed -- the file
%2 has the patch level
%3, higher than this
patch. You must rollback
first %4.
A patch with a higher level has
already been applied that
conflicts with this patch.
Roll back the higher-level
patch, apply this patch, and
then reapply the higher-level
patch.
7011
Error
Patch %1 failed -- it
applies only to kernel
%2.
You tried to apply the patch to
a server that is not running the
expected OS kernel.
Apply the patch on a server
that has the expected
kernel.
10001
Error
Insufficient privilege (uid:
[UID]).
Server modules are not
running with root privilege.
Log in to the server with the
root account before starting
server modules.
10002
Error
The server environment
is corrupt.
The configuration file in the
/etc directory, which provides
the server home directory and
other environmental
information, is either corrupted
or deleted.
Determine the cause of such
corruption and correct the
situation. Perform regular
backups of server
configuration data so it can
be restored.
10003
Error
Failed to initialize
configuration [File name].
During the initialization
process, one or more critical
processes experienced a
problem. This is mostly due to
system drive failure, storage
hardware failure, or system
configuration corruption.
Check the storage devices
connectivity; check the
system drive for an error
using OS-provided utilities
such as fsck; check for
existence of server
environment variable file in /etc.
10004
Error
Failed to get SCSI device
information.
An error occurred when
accessing the SCSI devices
during startup. Most likely due
to storage connectivity failure
or hardware failure.
Check the storage devices,
e.g., power status; controller
status, etc. Check the
connectivity, e.g., cable
connectors. With Fibre
Channel switches, even if
the connection status light
indicates that the connection
is good, it is still not a
guarantee. Push the
connector in to make sure.
Check the specific storage
device using OS-provided
utilities such as hdparm.
10005
Error
A physical device will not
be available because we
cannot create a Global
Unique Identifier for it.
Physical SCSI device is not
qualified because it does not
support proper SCSI Inquiry
pages.
Get supported storage
devices.
10006
Error
Failed to write
configuration [File name].
An error was encountered
when writing the server
configuration file to the system
drive. This can only happen if
the system drive runs out of
space, is corrupted, or has a
hardware failure
Check the system drive
using OS-provided utilities.
Free up space if necessary.
Replace the drive if it is not
reliable.
10054
Error
Server FSID update
encountered an error.
10059
Error
Server persistent binding
update encountered an
error.
There is a conflict on ACSL.
Use a different ACSL for
binding.
10100
Error
Failed to scan new SCSI
devices.
An error occurred when
adding newly discovered SCSI
devices to the system. This is
most likely due to unreliable
storage connectivity, hardware
failure, or system resources
running low.
See 10004 for information
about checking storage
devices. If system resources
are low, run 'top' to check
the process that is using the
most memory. If physical
memory is below the server
recommendation, install
more memory to the system.
If the OS is suspected to be
in a bad state due to
unexpected failure in either
hardware or software
components, restart the
server machine.
10101
Error
Failed to update
configuration [File name].
An error was encountered
when updating the server
configuration file to the system
drive. This can only happen if
the system drive is corrupted
or has a hardware failure.
Check the system drive
using OS-provided utilities.
10102
Error
Failed to add new SCSI
devices.
An error occurred when
adding newly discovered SCSI
devices to the system. This is
most likely due to unreliable
storage connectivity, hardware
failure, or system resources
running low.
See 10004.
10200
Warning
Configuration [File name]
exists.
A configuration file already
exists when installing the
software, possibly from a
previous installation. The
configuration file will be
reused.
If there is reason to believe
the existing configuration
should not be used, e.g., the
file is suspected to be
corrupted, remove the
$ISHOME directory before
reinstallation.
10210
Error
Marked virtualized PDev
[GUID] OFFLINE, guid
does not match SCSI
guid [GUID].
A physical device has a
different GUID written on the
device header than the record
in the configuration file. This is
most likely caused by old
drives being imported without
proper initialization or, much less
likely, due to corruption
of the configuration or the
device header.
Check the physical
connection of the storage,
and the storage system. If
problem persists, call tech
support.
10211
Warning
Marked Physical Device
[%1] OFFLINE because
its wwid %2 does not
match scsi wwid %3,
[GUID: %4].
The physical storage device is
not the one registered
previously.
Check if the storage device
has been replaced. If not,
rescan devices.
10212
Error
Marked PDev [GUID]
OFFLINE because scsi
status indicate OFFLINE.
The physical storage system
response indicates the specific
device is off-line. It may have
been removed, turned off, or
malfunctioning.
Check the storage system,
and all the cabling. After the
problem is corrected, rescan
on the adapter where the
drive is connected. Limit the
scope of the scan to that
SCSI address.
10213
Error
Marked PDev [GUID]
OFFLINE because it did
not respond correctly to
inquiry.
Physical device failure or
unqualified device for SCSI
commands.
Check physical device.
10214
Error
Marked PDev [GUID]
OFFLINE because its
GUID is an invalid FSID.
The GUID in the header of the
drive does not match the
unique ID, called the FSID,
which is based on the external
properties of the physical
drive. It may be caused by
drives changed while the
server is down.
Make sure all drives are not
changed without using the
console to eliminate them
from the virtual resource list
first. Also never allow other
applications to directly
access the physical drives
without going through the
server.
10215
Error
Marked PDev [GUID]
OFFLINE because its
storage capacity has
changed.
The physical drive geometry,
including the number of
sectors, is different from the
original record.
Rescan the drive to
establish its properties.
10240
Error
Missing SCSI Alias
[A,C,S,L].
One of the existing SCSI paths
for the device is not
accessible. This is most likely
caused by a storage cable
being disconnected, or Fibre
Channel switch re-zoned, or
failure of one of the storage
ports.
Check cabling and storage
system. After situation is
corrected, rescan the
adapter connected to the
drive, and limit the scope to
that path.
10241
Error
Physical Adapter
[Adapter number] could
not be located in /proc/scsi/.
The adapter driver could be
unloaded.
Check the loaded drivers.
10242
Critical
Duplicate Physical
Adapter number [Adapter
number] in /proc/scsi/.
In various Linux kernel versions
a defect existed which may
cause the same adapter
number to be assigned to two
different adapters.
very dangerous because it
may cause data to be
overwritten.
Do not repeatedly load and
unload the Fibre Channel
drivers and the server
modules individually. That
can confuse the system.
Load and unload all the
drivers together.
10244
Error
Invalid FSID, device
[Device ID] LUN in FSID
[FSID] does not match
actual LUN.
The FSID is generated with
the LUN of the device. Once a
device is used by the server, it
is not allowed to have the LUN
changed on the storage
configuration.
Do not change the LUN of a
virtualized drive. Revert
back to the original LUN in
the storage configuration.
10245
Error
Invalid FSID, Generate
FSID %1 does not match
device acsl:%2 GUID
%3.
The physical storage device is
not the one registered
previously.
Check if the storage device
has been replaced. If not,
rescan devices.
10246
Error
Failed to generate FSID
for device acsl:[A C S L],
can't validate FSID.
The physical drive does not
present valid data for unique
ID generation, even though the inquiry
pages exist.
Only use this type of drive
as a virtual drive, and not as a
SED.
10247
Error
Device (acsl:[A C S L])
GUID is blank, can't
validate FSID.
Some process may have
erased the disk header. This
can be due to accidental
erasure by fdisk or format.
Never access the virtual
drives by bypassing the server.
10250
Warning
Remove scsi alias %1
from %2 because their
categories are different.
There might have been a
hardware configuration
change.
Check if the device has
changed or has failed.
10251
Warning
Remove scsi alias %1
from %2 because their
GUIDs are different.
There might have been a
hardware configuration
change.
Check if the device has
changed or has failed.
10254
Error
Import logical resources
failed.
This might be caused by a disk
IO failure.
Check storage devices.
10257
Error
CDP Journal (GUID: %1)
of virtual device %2 (ID
%3) needs repair.
CDP Journal expansion failed
for the virtual device. Repair is
needed.
Call support to investigate
and repair the CDP Journal
in question.
11000
Error
Failed to create socket.
This kind of problem should
rarely happen. If it does, it may
indicate a network configuration
error, possibly due to system
environment corruption. It is
also possible that the network
adapter failed or is not
configured properly. It is also
possible that the network
adapter driver has a problem.
Restart the network. If the
problem persists, restart the
OS, or restart the machine
(turn it off and then on
again). If the problem still
persists, you may need to
reinstall the OS. If that is the
case, make sure you properly
save all the server
configuration information
before proceeding.
11001
Error
Failed to set socket to reuse address.
System network configuration
error, possibly due to system
environment corruption.
See 11000.
11002
Error
Failed to bind socket to
port [Port number].
System network configuration
error, possibly due to system
environment corruption.
See 11000.
11003
Error
Failed to create TCP
service.
System network configuration
error, possibly due to system
environment corruption.
See 11000.
11004
Error
Failed to register TCP
service (program:
[Program name], version:
[Version number]).
System network configuration
error, possibly due to system
environment corruption.
See 11000.
11006
Error
The server
communication module
failed to start.
Most likely the server
port is occupied, either
because of a previous unexpected
failure of the communication
module or because another application
is using the TCP port.
Restart the OS and try
again. If problem persists,
use the OS-provided utilities,
such as netstat to check the
port used.
11007
Warning
There is not enough disk
space available to
successfully complete
this operation and
maintain the integrity of
the configuration file.
There is currently %1 MB
of disk space available.
The server requires %2
MB of disk space to
continue.
The available space on the
disk holding the configuration
file is not enough.
Increase disk space.
11030
Error
Auto save configuration:
cannot set up crontab.
OS cron package
configuration error.
Check /etc/crontab for
configuration and cron
daemon.
11031
Error
Auto save configuration:
cannot create the running
script [Script name].
Access right issue or not
enough space.
Check system configuration.
11032
Error
Auto save configuration:
cannot connect to ftp
server [FTP site] port
[FTP port].
The connection to the FTP site
is not possible because the
address or port is not valid or
the FTP service is not running
on the server.
Check network connectivity
to the FTP site by manually
running an FTP session.
Also, check that FTP is
activated on the server.
11033
Error
Auto save configuration:
cannot login to the ftp
server with the user [User
name].
The indicated user name is not
valid.
Get a valid user name that
can connect to the FTP
server.
11034
Error
Auto save configuration:
directory [Path] doesn't
exist.
The directory to back up the
configuration files on the FTP
site is absent.
Check the FTP site and the
auto save configuration.
11035
Error
Auto save configuration:
failed to copy [File] to ftp
server.
The user may not have write
access to the directory.
Check access rights of the
user on the directory on the
FTP site.
11036
Error
Auto save configuration:
failed to delete old files
[Filename] from ftp
server.
The user may not have write
access to the directory.
Check access rights of the
user on the directory on the
FTP site.
11101
Error
SAN Client ([host name]):
Failed to add SAN Client.
This error is most likely due to
a system configuration error or
system resources running low.
Check OS resources using
provided utilities such as
top.
11103
Error
SAN Client (%1):
Authentication failed.
The user account to connect to
the server is not valid.
Check user account and
password.
11104
Error
There are too many SAN
Client connections.
The number of simultaneous
connections exceeded the
supported limit that the current
system resources can handle.
Stop some client
connections.
11106
Error
SAN Client ([host name]):
Failed to log in.
Access account might be
invalid.
Check user name and
password.
11107
Error
SAN Client ([host name]):
Illegal access.
The client host attempted to
perform an operation beyond
its granted privileges.
This almost never happens.
Record the message and
monitor the system. If this
happens repeatedly, the true
cause should be
investigated to prevent
security breaches.
11112
Error
SAN Client ([host name]):
Failed to parse
configuration file [File
name].
The configuration file is not
readable by the server.
If there is a valid
configuration file saved, it
can be restored to the
system. Make sure to use
reliable storage devices for
critical system information.
11114
Error
SAN Client ([host name]):
Failed to allocate
memory.
System resources are running
low. This may be due to too
little memory installed for the
system or some runaway
process that is consuming too
much of the memory.
Use top to check the
process that is using the
most memory. If physical
memory is below the server
recommendation, install
more memory to the system.
11115
Warning
SAN Client ([host name]):
License conflict -- Number of CPUs
approved: [Number of
CPU], number of CPUs
used: [Number of CPU].
The number of clients attached
to the server exceeded the
licensed number allowed.
Obtain additional license key
codes.
11222
Error
Console ([host name]):
Failed to remove SAN
Client (%2) from virtual
device %3.
Failed to unassign a virtual
device from the client possibly
due to a configuration update
failure.
Check system disk status,
system status. If
configuration repository is
configured, check
configuration repository
status.
11201
Error
There are too many
Console connections.
Too many GUI consoles are
connected to the particular
server. This is a highly unlikely
condition.
None.
11202
Error
Console ([host name]):
Illegal access.
The console host attempted to
perform an operation beyond
its granted privileges.
See 11107.
11203
Error
Console ([host name]):
SCSI device re-scanning
has failed.
An error occurred when
adding newly discovered SCSI
devices to the system. This is
most likely due to unreliable
storage connectivity, hardware
failure, or system resources
running low.
See 10100.
11204
Error
Console ([host name]):
SCSI device checking
has failed.
An error occurred when
accessing the SCSI devices
when console requests the
server to check the known
storage devices. Most likely
due to storage connectivity
failure or hardware failure.
Check the storage devices,
e.g., power status; controller
status, etc. Check the
connectivity, e.g., cable
connectors. For Fibre
Channel switches, even if
the connection status light
indicates the connection is
good, it is still not a
guarantee. Push the
connector in to make sure.
Check the specific storage
device using OS-provided
utilities such as hdparm.
11211
Error
Console ([host name]):
Failed to save file [file
name].
An error was encountered
when writing the server
configuration file to the system
drive. This can only happen if
the system drive ran out of
space, or it is corrupted, or
there is hardware failure in the
system drive.
See 10006.
11212
Error
Console ([host name]):
Failed to create index file
[file name] for Event Log.
Failed to create an index file
for the event log retrieval. Most
likely due to insufficient system
disk space.
Free up disk space or add
additional disk space to
system drive.
11216
Error
Console ([host name]):
Out of system resources.
Failed to fork process.
The server is low in memory
resources for normal
operation.
See 11114.
11219
Error
Console ([host name]):
Failed to add virtual
device [Device number].
Failed to create a virtual drive
due to either system
configuration error, or storage
hardware failure, or system
resource access failure.
Check system resource,
such as memory, system
disk space, and storage
device connectivity, i.e.,
cable connection.
11220
Error
Console ([host name]):
Failed to remove virtual
device [Device number].
When a virtual drive is deleted,
all of the associated resources
must also be handled,
including the replica resource,
if one exists. If the replica server is
not reachable at the moment,
the removal will not be
successful.
Check the log for specific
reason of the failure. If the
replica is not reachable, the
condition must be corrected
before trying again.
11221
Error
Console ([Host name]):
Failed to add SAN Client
([Client name]) to virtual
device [Device ID].
Failed to create a SAN Client
entity due to either system
configuration error, or storage
hardware failure, or system
resource access failure. This
should rarely happen.
Check system resource,
such as memory, system
disk space. Check the
syslog for specific reason of
the failure.
11233
Error
Console ([host name]):
Failed to map the SCSI
device name for [A C S
L].
The mapping of the SCSI
address, namely the adapter,
channel, SCSI ID, and LUN
(ACSL), is no longer valid.
This must be due to a sudden
failure, improper removal, or
change of storage devices in
the server.
See 11204. Check and
restore the physical
configuration to proper state
if changed improperly.
11234
Error
Console ([host name]):
Failed to execute
"hdparm" for [Device
number].
Failed to perform the device
throughput test for the given
device. This can be due to the
OS being in a bad state such
that the program cannot be run
or the storage device failed.
Run the hdparm program
from the server console
directly. Check storage
devices as described in
11204.
11237
Error
Console ([user name]):
Failed to get file
/usr/local/ipstor/etc/[host name]/ipstor.dat.cache
This message can be shown
when the server Console tries
to query the server status
(such as replication status).
The RPC server retrieves this
information in the
/usr/local/ipstor/etc/<host>/ipstor.dat.cache file. It will fail if
the file is in use by other
the file is in use by other
server processes. A
subsequent retry should be
able to open it successfully.
The Console automatically
retries the query 3 seconds
later until it succeeds. The
retry will stop when the
Console is closed.
11240
Error
Console ([host name]):
Failed to start the server
module.
When any server process
cannot be started, it is most
likely due to insufficient system
resources, an invalid state left
by a server process that may
not have been stopped
properly, or an unexpected OS
process failure that left the
system in a bad state. This
should happen very rarely. If
frequent occurrence is
encountered, there may be
external factors that contribute
to the behavior that must be
investigated and removed
before running the server.
If system resources are low,
use top to check the process
that is using the most
memory. If physical memory
is below the server
recommendation, install
more memory to the system.
If the OS is suspected to be in a bad
state due to unexpected
failure in either hardware or
software components,
restart the server machine to
make sure the OS is in a
healthy state before trying
again.
11242
Error
Console ([host name]):
Failed to stop the server
module.
When any server process
cannot be stopped, it is most
likely due to insufficient system
resources, an invalid state left
by a server process that may
not have been stopped
properly, or an unexpected OS
process failure that left the
system in a bad state. This
should happen very rarely. If
frequent occurrence is
encountered, there may be
external factors that contribute
to the behavior that must be
investigated and removed
before running the server.
See 11240.
11244
Error
Console ([host name]):
Failed to access the
server administrator list.
Failed to retrieve the list of
server administrators / users /
iSCSI users possibly due to
the system being busy or file
open error.
Check event log for actual
cause.
11245
Error
Console ([host name]):
Failed to add user %2.
The server administrator or
user ID, or password is not
valid.
Check system setting for
user and password policy to
see if the user ID and
password conform to the
policy.
11247
Error
Console ([host name]):
Failed to delete user %2.
User ID is not valid.
Check if user exists; look at
log message for possible
cause, and try again.
11249
Error
Console ([host name]):
Failed to reset password
for user %2.
Password is not valid.
Check to see if other
administrators already
deleted the user.
11251
Error
Console ([host name]):
Failed to update
password for user %2.
Password is not valid.
Check to see if other
administrators already
deleted the user.
11253
Error
Console ([host name]):
Failed to modify virtual
device %2.
Failed to expand virtual device
possibly due to a device error
or the system being busy.
Check device status and
system status.
11257
Error
Console ([host name]):
Failed to add SAN Client
([Host name]).
See 11101.
See 11101.
11259
Error
Console ([host name]):
Failed to delete SAN
Client (%2).
Specified client could not be
deleted possibly due to
configuration update failure.
Check system disk and
Configuration Repository if
configured.
11261
Error
Console ([Host name]):
Failed to get SAN Client
connection status for
virtual device [Device ID].
Failed to inquire about the
SAN Client connection status
due to either a system
configuration error, storage
hardware failure, or system
resource access failure. This
should rarely happen.
Check system resource,
such as memory, system
disk space. Check the
syslog for specific reason of
the failure.
11262
Error
Console ([host name]):
Failed to parse
configuration file [File
name].
See 11112.
See 11112.
11263
Error
Console ([host name]):
Failed to restore
configuration file [File
name].
See 10006.
See 10006.
11266
Error
Console ([host name]):
Failed to erase partition
of virtual device [Device
number].
Storage hardware failure.
See 10004.
11268
Error
Console ([host name]):
Failed to update meta
information of virtual
device %2.
This may be due to a disk
being offline or a disk error.
Check disk status.
11270
Error
Console ([host name]):
Failed to add mirror for
virtual device [Device
number].
This is most likely due to a
storage device hardware error.
See 10004.
11272
Error
Console ([host name]):
Failed to remove mirror
for virtual device %2.
This may be due to a mirror
disk error or the system being
busy.
Check disk status and try
again.
11274
Error
Console ([host name]):
Failed to stop mirroring
for virtual device %2.
This may be due to the system
being busy.
Retry later.
11276
Error
Console ([host name]):
Failed to start mirror
synchronization for
virtual device %2.
This may be due to a mirror
disk error or the system being
busy.
Check disk status and try
again.
11278
Error
Console ([host name]):
Failed to swap mirror for
virtual device [Device
number].
This is most likely due to
storage device hardware error.
See 10004.
11280
Error
Console ([host name]):
Failed to create shared
secret for IPStor Server
%2.
Secure communication
channel information for a
failover setup, a replication
setup, or a Near-line mirroring
setup could not be created.
Check if specified IP
address can be reached
from the failover secondary
server, replication primary
server, or Near-line server.
11282
Error
Console ([host name]):
Failed to change device
category for physical
device [Device number]
to [Device number].
Storage hardware failure.
See 10004.
11285
Error
Console ([host name]):
Failed to execute failover
command (%2).
Failed to execute the
command to start failover or
stop failover.
Check system log message
for actual cause.
11287
Error
Console ([host name]):
Failed to set failover
mode ([Mode]).
The system resources are low,
or the OS is in an unstable state,
possibly due to a previous
unexpected error condition.
See 11240.
11289
Error
Console ([host name]):
Failed to restart IPStor
Server module.
Failed to restart IPStor server
modules for failover setup or
NAS operations.
Check system log messages
for possible cause.
11291
Error
Console ([host name]):
Failed to update meta
information of physical
device [Device number].
Storage hardware failure.
See 10004.
11294
Error
Console ([host name]):
Failed to get host name.
See 11000.
See 11000.
11293
Error
Console ([host name]):
Failed to swap IP
address from [IP
address] to [IP address].
11295
Error
Console ([host name]):
Invalid configuration
format.
See 11112.
See 11112.
11296
Error
Console ([host name]):
Failed to resolve host
name -- %2.
Host name could not be
mapped to IP address on
replication primary server
during replication setup.
Check the accuracy of the
hostname entered for
replication target server and
the network configuration
between the replication
primary server and target
server.
11299
Critical
Failed to save the
configuration to
configuration repository.
Please check the storage
connectivity.
Configuration file on the
configuration repository could
not be updated, possibly due to
an offline device or disk failure.
Check system log messages
for possible cause.
11300
Error
Invalid [User name ([User
name]) used by client at
IP address [IP address].
An invalid user name is used
to log in to the server, either from
the client host or the IPStor
console.
Make sure the correct user
name is used. The correct
user names are root, or
the admin users created
using the "Administrator"
option. If there are many unexplained
occurrences of this message
in the log, someone may have been
deliberately trying to gain unauthorized
access by guessing the user
credential. In that case,
investigate, starting with the
source IP address.
11301
Error
Invalid password for user
([User name]) used by
client at IP address [IP
address].
The incorrect password was
used during authentication
from the IPStor Console, or
from the client host during
adding of the server.
Make sure the correct user
name and password pair is
used. If there are many unexplained
occurrences of this message
in the log, someone may have been
deliberately trying to gain unauthorized
access by guessing the
password. In that case,
investigate, starting with the
source IP address.
11302
Error
Invalid passcode for
machine ([Host name])
used by client at IP
address [IP address].
The incorrect shared secret
was used by the client host to
connect to the server. This is
most likely because the server
was reinstalled and the
credential file changed. It
may also be that someone
was deliberately trying to gain
data access by guessing the
shared secret, although this is
highly unlikely.
From the client host, delete
the server and add it back
again to resynchronize with
the shared secret.
11303
Error
Authentication failed in
stage [%1] for client at IP
address [IP address].
The incorrect login was used
by the client host to connect
to the server.
From the client host, delete
the server, add it back again,
and use the correct login.
11306
Error
The IPStor Administrator
group does not exist.
IPStor Administrator Group
does not exist in the system
possibly due to improper
installation or upgrade.
Contact Tech Support for
possible cause and fixes.
11307
Error
User %1 at IP address
%2 is not a member of
the IPStor Administrator's
group.
It might be a typo when user
typed in the ID and password
to log in.
Check user ID and
password and make sure
there is no possibility for
unauthorized login from that
IP address.
11308
Error
The IPStor Client group
does not exist.
IPStor Client group does not
exist in the system possibly
due to improper installation or
upgrade.
(OBSOLETE since IPStor 5.1)
Contact Tech Support for
possible cause and fixes.
11309
Error
User ID %1 at IP address
%2 is invalid.
It might be a typo when user
typed in the ID and password
to log in.
Check the user account and
retry.
11310
Error
IPStor Client User name
%1 does not match with
the client name %2.
User name does not match the
original user when resetting
the credential for the client.
Use the original user name
or ask IPStor Administrator
to reset the credential from
the client.
11408
Error
Synchronizing the
system time with [host
name]. A system reboot
is recommended.
The failover pair is detected to
have a substantial time
difference. It is recommended
to keep the failover pair closely
synchronized to avoid
potential problems and
confusion.
Set the correct time for both
machines in the failover pair.
11410
Warning
Enclosure Management:
%1
Physical enclosure might have
some failures.
Check enclosure
configuration.
11411
Error
Enclosure Management:
%1
Physical enclosure has some
failures.
Check enclosure
configuration.
11506
Error
Console ([host name]):
Failed to start replica
scanning for virtual
device %2.
This may be due to a
connection error or the system
being busy.
Check connectivity between
replication primary server
and target server. Check to
see if system is busy with
pending operations.
11508
Error
Console ([host name]):
Failed to set the
properties for the IPStor
Server.
Failed to update configuration
file for the new server
properties possibly due to disk
error or the system being busy.
Check system disk and
system status.
11510
Error
Console ([host name]):
Failed to save report -%2.
A report file could not be
saved, possibly due to not
enough space or an error on disk.
Check system disk status,
available space, and system
status.
11511
Error
Console ([host name]):
Failed to get the
information for the NIC.
Network interface information
could not be retrieved possibly
due to configuration error or
low system resources.
Check if network
configuration is configured
properly. Also check if
system memory is running
low for allocation.
11512
Error
Console ([host name]):
Failed to add a replica for
device %2 to IPStor
Server %3 (watermark:
%4 MB, time: %5,
interval: %6, watermark
retry: %7, suspended:
%8).
Failed to configure replication
on the primary server possibly
due to the system being busy.
Check system log messages
for actual cause.
11514
Error
Console ([host name]):
Failed to remove the
replica for device %2
from IPStor Server %3
(watermark: %4 MB,
time: %5, interval: %6,
watermark retry: %7,
suspended: %8).
Failed to remove replication
configuration on the primary
server when deleting
replication setup possibly due
to the system being busy.
Check system log messages
for actual cause.
11516
Error
Console ([host name]):
Failed to create the
replica device [Device
number].
Failed to create the replica for
the source virtual device. This
is most likely due to a problem in
the remote server.
Check the hardware and
software condition in the
remote replica server to
make sure it is running
properly before trying again.
11518
Error
Console ([host name]):
Failed to start replication
for virtual device [Device
number].
The most likely cause is that the
remote server is not reachable,
or is in a bad state.
Check the hardware and
software condition in the
remote replica server to
make sure it is running
properly before trying again.
11520
Error
Console ([host name]):
Failed to stop replication
for virtual device %2.
This may be due to the system
being busy.
Check system log messages
for actual cause and retry.
11522
Error
Console ([host name]):
Failed to promote replica
device %2 to a virtual
device.
This may be due to the system
being busy.
Check system log messages
for actual cause and retry.
11524
Error
Console ([host name]):
Failed to run IPStor
Server X-Ray.
See 11240.
See 11240.
11530
Error
Console ([host name]):
Failed to back up
configuration files.
Failed to retrieve configuration
on the server to return to the
console for "Save
Configuration" operation. File
might be updated or the
system is busy.
Check system status.
11532
Error
Console ([host name]):
Failed to restore
configuration files.
Failed to restore configuration
from previous saved
configuration file possibly due
to configuration conflict or the
system being busy.
Check if the saved
configuration is outdated.
Check system status.
11534
Error
Console ([host name]):
Failed to reset the umap
for virtual device [Device
number].
Storage hardware failure.
See 10004.
11535
Error
Console ([host name]):
Failed to update the
replication parameters for
virtual device %2 to
IPStor Server %3
(watermark: %4 MB,
time: %5, interval: %6,
watermark retry: %7,
suspended: %8).
Failed to update replication
properties possibly due to the
system being busy.
Check if system is busy and
retry.
11537
Error
Console ([host name]):
Failed to claim physical
device [Device number].
This may be due to the version of
the IPStor server limiting the
supported storage capacity.
Check the license agreement for
the version of the IPStor server.
11539
Error
Console ([host name]):
Failed to import physical
device [Device number].
Storage hardware failure.
See 10004.
11541
Error
Console ([host name]):
Failed to save event
message (ID: %2).
Failed to add Event message
from Console / CLI for
replication, snapshot
expansion etc. possibly due to
not enough space on the
system disk.
Check system disk status
and available space.
11542
Error
Console ([host name]):
Failed to remove replica
device %2.
Failed to delete replica disk
possibly due to the system
being busy.
Check system status and
retry
11544
Error
Console ([host name]):
Failed to modify replica
device %2.
Failed to expand replica disk
possibly due to the system
being busy.
Check system status and
retry.
11546
Error
Console ([host name]):
Failed to mark the
replication for virtual
device %2.
Failed to mark replication in
sync, possibly due to connectivity
issues between the primary
server and the target server, or
because the system was busy.
Check connectivity and
system status. Try again.
11548
Error
Console ([host name]):
Failed to determine if
data was written to virtual
device %2.
Failed to check if the virtual
device has been updated
possibly due to a device error
or the system being busy.
Check device status and
system status.
11553
Error
Console ([host name]):
Failed to get login user
list.
The list of users could not be
retrieved from the system.
Check system status.
11554
Error
Console ([host name]):
Failed to set failover
option
<selfCheckInterval: %d
sec>.
Failed to set failover options
on primary server possibly due
to failover module stopped or
disk error.
Check failover module
status.
Check system disk status.
11556
Error
Console ([host name]):
Failed to start snap copy
from virtual device
[Device number] to virtual
device [Device number].
This may happen if another
process that requires the
snapshot is performing I/O,
such as a backup operation. It
may also be due to a storage
hardware failure.
Check to see if another
process is using the
snapshot. See 10004 if
storage failure is suspected.
11560
Error
Console ([host name]):
Failed to get licenses.
License keycode information
could not be retrieved.
Check system disk and
system status.
11561
Error
Console ([host name]):
Failed to add license %2.
The license is not valid.
Check license key code
validity.
11563
Error
Console ([host name]):
Failed to remove license
%2.
The license is not valid.
Check license key code
validity.
11565
Error
Console ([host name]):
Failed to check licenses - option mask %2.
The license is not valid.
Check license key code
validity.
11567
Error
Console ([host name]):
Failed to clean up
failover server directory
%2.
This may be due to a disk error
or the system being busy
when the failover setup was to
be removed.
Check system disk and
system status.
11568
Error
Console ([host name]):
Failed to set (%2) I/O
Core for failover -- Failed
to create failover
configuration.
Failed to notify IOCore of
failover setup or removal
possibly due to the system
being busy.
Reconfigure failover if this
happens during failover
setup.
11569
Error
Console ([host name]):
Failed to set [Device
number] to Fibre Channel
mode [Mode].
This is possibly because the
Fibre Channel driver is not
loaded properly, or because the
wrong version of the driver is loaded.
IPStor FC target mode
requires the FalconStor
version of the driver to be
used. The driver name should
be qla2x00fs.o.
Use lsmod to check that the
qla2x00fs driver is loaded. If it
is, make sure it is the correct
revision. The correct revision
should be located in the
ipstor/lib directory (see the
example below).
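As a minimal sketch of the check suggested above, assuming a Linux server shell and the qla2x00fs module name given in this entry ($ISHOME denotes the server installation directory used elsewhere in this guide; adjust the path if your installation differs):
  # Confirm that the FalconStor qla2x00fs module is currently loaded
  lsmod | grep qla2x00fs
  # List the driver files shipped with the server to compare revisions
  ls -l $ISHOME/lib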
11571
Error
Console ([host name]):
Failed to assign Fibre
Channel device %2 to
%3; rolled back changes
Failed to assign virtual device
to Fibre Channel target. All
intermediary configuration
changes were rolled back and
the configuration remained
unchanged.
Check LUN conflict, disk
status, system status, Fibre
Channel Target module
status.
11572
Error
Console ([host name]):
Failed to assign Fibre
Channel device %2 to
%3; could not roll back
changes.
Failed to assign virtual device
to Fibre Channel target.
However, the configuration
was partially updated.
Check LUN conflict, disk
status, system status, Fibre
Channel Target module
status.
May need to restart Fibre
Channel Target Module to
resolve the configuration
conflict.
11574
Error
Console ([host name]):
Failed to unassign Fibre
Channel device %2 from
%3 and returned %4;
rolled back changes.
Failed to unassign virtual
device from Fibre Channel
target. All intermediary
configuration changes were
rolled back and the
configuration remained
unchanged.
Check Fibre Channel Target
module status.
11575
Error
Console ([host name]):
Failed to unassign Fibre
Channel device %2 from
%3 (not rolled back) and
returned %4; could not
roll back changes.
Failed to unassign virtual
device from Fibre Channel
target. However, the
configuration is partially
updated.
Check Fibre Channel Target
module status.
May need to restart Fibre
Channel Target Module to
resolve the configuration
conflict.
11577
Error
Console ([host name]):
Failed to get Fibre
Channel target
information.
This may be due to a problem
with the Fibre Channel target
module.
Check Fibre Channel Target
module status.
11578
Error
Console ([host name]):
Failed to get Fibre
Channel initiator
information.
See 11569.
See 11569.
11581
Error
Console ([host name]):
Failed to set NAS option
[Option].
Failed to start NAS processes.
Possibly due to lack of
resources. See 11240.
See 11240.
11583
Error
Console ([host name]):
Failed to update Fibre
Channel client ([Host
name]) WWPNs.
See 11569.
See 11569.
11585
Error
Console ([host name]):
Failed to set Fibre
Channel option %2.
Fibre Channel option could not
be enabled or disabled.
Check system status.
11587
Error
Console ([host name]):
Failed to demote virtual
device %2 to a replica.
Failed to convert a virtual
device (a promoted replica) back
to a replica.
Check if virtual device is
online or if system is busy.
11590
Error
Out of disk space to
expand snapshot storage
for virtual device [Device
ID].
There is no more storage left
for automatic expansion of the
snapshot resource, which just
reached the threshold usage.
Add additional storage to
IPStor. Physical storage
must be prepared for virtual
drive before it is qualified to
be allocated for snapshot
resources.
11591
Error
Failed to expand
snapshot storage for
virtual device [Device ID]:
maximum segment
exceeded (error code
[Return code]).
The virtual drive has an upper
limit on the number of physical
segments. The drive has been
expanded so many times that
it exceeded the limit.
Do not expand drives in too-small increments.
Consolidate the segments
by mirroring or creating a
snapshot copy to another
virtual drive with fewer
segments before expanding
again.
11594
Error
Console ([host name]):
Failed to set CallHome
option %2.
Email Alert option could not be
enabled or disabled.
Check system status.
11598
Error
Out of disk space to
expand CDP journal
storage for %1.
The CDP Journal could not be
expanded due to insufficient
disk space.
Add more storage.
11599
Error
Failed to expand CDP
journal storage for %1:
maximum segment
exceeded (error code
%2).
The CDP Journal resource
could not be expanded due to
the maximum supported
segments.
Currently up to 64 segments
are supported; in order to
prevent this from happening,
create a bigger CDP journal
to avoid frequent
expansions.
11605
Error
Failed to create character
device to map TimeMark
%1 for virtual device %2.
Failed to map a raw device
interface for virtual device to
perform backup, snapshot
copy, or TimeMark copy.
Check virtual device and
snapshot resource status.
11608
Error
Console ([host name]):
Failed to proceed copy/
rollback TimeMark
operation with client
attached on the target
virtual device [Device ID].
The device to be rolled back is
still assigned to client hosts.
IPStor requires the device to
be guaranteed to have no I/O
during roll back. Therefore the
device cannot be assigned to
any hosts.
Unassign the virtual device
before rolling back.
11609
Error
[Task name] Failed to
create TimeMark for
virtual device [Device ID]
while the last creation/
client notification is in
progress.
The last snapshot operation,
including the notification
process, is still in progress.
This may be caused by too
short an interval between
snapshots, or by the snapshot
notification being held up by
network or client applications.
Adjust the frequency of
TimeMark snapshots.
Determine the actual time it
takes for snapshot
notification to complete,
which is application and data
activity dependent.
11610
Error
Console ([host name]):
Failed to create
TimeView for virtual
device %2 TimeMark %3.
The TimeView resource could
not be created for the virtual
device possibly due to a
device error.
Check to see if the virtual
device and snapshot
resource are online.
11613
Error
Console ([host name]):
Failed to enable
TimeMark for device %2.
The TimeMark option could
not be enabled for the virtual
device possibly due to a
device error.
Check to see if the virtual
device and snapshot
resource are online.
11615
Error
Console ([host name]):
Failed to disable
TimeMark for device %2.
The TimeMark option for
virtual device could not be
disabled possibly due to the
system being busy.
Retry later.
11618
Error
Failed to select
TimeMark %1 for virtual
device %2: TimeMark %3
has already been
selected.
TimeMark is already selected
for another operation.
Wait for the completion of
the other operation.
11619
Error
Failed to select
TimeMark %1 character
device for virtual device
%2.
The TimeMark could not be
selected for raw device
backup possibly due to the
system being busy.
Check system status and
retry later.
11621
Error
Failed to create
TimeMark %1 for virtual
device %2.
The TimeMark for this virtual
device could not be created
possibly due to a device error
or the system being busy.
Check device status and
system status.
11623
Error
Failed to delete
TimeMark %1 for virtual
device %2.
The specified TimeMark for
virtual device could not be
removed possibly due to a
device error, pending
operation, or the system being
busy.
Check device status and
system status. Retry later.
11625
Error
Failed to copy TimeMark
%1 of virtual device %2
as virtual device %3.
TimeMark failed to copy from
source virtual device to target
virtual device possibly due to a
device error or the system
being busy.
Check device status and
system status. Retry later.
11627
Error
Failed to roll back to
TimeMark timestamp %1
for virtual device %2.
TimeMark rollback for virtual
device failed possibly due to a
device error or the system
being busy.
Check device status and
system status. Retry later.
11631
Error
Failed to expand
snapshot storage for
virtual device %1 (error
code %2).
Automatic snapshot resource
expansion failed possibly due
to a device error, quota
reached, out-of-space, or the
system being busy.
Check log messages for
possible cause.
11632
Error
Console ([host name]):
Failed to set failover
option on secondary
server
<heartbeatInterval: %2
sec,
autoRecoveryInterval:
%3 sec>.
Failed to update failover
options with auto-recovery
mode on secondary server.
Check failover module
status.
Check system disk status.
11633
Error
Console ([host name]):
Failed to set failover
option on secondary
server
<heartbeatInterval: %2
sec,
autoRecoveryInterval:
disabled>.
Failed to update failover
options without auto-recovery
mode on secondary server.
Check failover module
status.
Check system disk status.
11637
Error
Failed to expand CDP
journal storage for %1
(error code %2).
The CDP Journal could not be
expanded possibly due to not
enough space or an error on
the disk.
Check disk status, available
space and system status.
11638
Error
Failed to expand CDP
journal storage for %1.
The virtual device is
assigned to user %2. The
quota for this user is %3
MB and the total size
allocated to this user is
%4 MB, which exceeds
the limit.
The CDP Journal resource
could not be expanded due to
the storage quota limit being
exceeded.
Increase storage quota for
the specified user.
11639
Error
The virtual device %1 is
assigned to user %2. The
quota for this user is %3
MB and the total size
allocated to this user is
%4 MB. Only %5 MB will
be added to the CDP
Journal.
The CDP Journal resource
was expanded with a smaller
increment size than usual due
to user quota limit.
Increase storage quota for
the specified user.
11640
Error
Failed to expand
snapshot resource for
virtual device %1. The
virtual device is assigned
to user %2. The quota for
this user is %3 MB and
the total size allocated to
this user is %4 MB, which
exceeds the limit.
The snapshot resource was
not expanded because the
quota limit was exceeded.
Increase storage quota for
the specified user.
11641
Error
The virtual device %1 is
assigned to user %2. The
quota for this user is %3
MB and the total size
allocated to this user is
%4 MB. Only %5 MB will
be added to the snapshot
resource.
The snapshot resource was
expanded with a smaller
increment size than usual due
to user quota limit.
Increase storage quota for
the specified user.
11643
Error
[Task %1] Failed to
create TimeMark for
virtual device %2 while
notification to client %3
for other resource is in
still progress.
Snapshot notification to the
same client for other virtual
devices is still pending.
Retry later.
11644
Error
Take TimeView
[TimeView name] id
[Device ID] offline
because the source
TimeMark has been
deleted.
The TimeMark snapshot which
the TimeView is based on has
been deleted. The TimeView
image therefore is set to OFFLINE because it is no longer
accessible.
Remove the TimeView from
the resource.
11645
Error
Console ([host name]):
Failed to create
TimeView: virtual device
[Device ID] already have
a TimeView.
For each TimeMark snapshot,
only one TimeView interface
can be created.
None.
11649
Error
Failed to convert inquiry
string on SCSI device
%1.
The inquiry string contains
invalid information.
Check the device
configuration.
11655
Error
Bad capacity size for
SCSI device %1.
Failed to get capacity
information from the device.
Check the storage.
11656
Warning
Discarded scsi device
%1, unsupported Cabinet
ID.
The Cabinet ID of the device is
not supported.
Check storage device
definition.
11657
Warning
Discarded scsi device
%1, missing "%2" vendor
in inquiry string.
The disk is not from one of the
supported vendors.
Check storage device
definition.
11658
Warning
SCSI device %1 storage
settings are not optimal.
Please check the storage
settings.
Storage settings are not
optimal.
Check the storage.
11659
Warning
Discarded scsi device
%1, exceeded maximum
supported LSI LUN %2.
Number of LSI device LUNs
exceeds the maximum
supported value.
Check storage configuration.
11660
Error
Failed to allocate a %1
MB DiskSafe mirror disk
in storage pool %2.
There is only %3 MB free
space left in that storage
pool.
The DiskSafe mirror disk could
not be created due to
insufficient storage space.
Add more storage.
11661
Error
Failed to expand the
DiskSafe mirror disk by
%1 MB for user %2. The
total size allocated for
this user would be %3
MB and this exceeds the
user's quota of %4 MB.
The DiskSafe mirror disk could
not be expanded due to user
quota.
Increase storage quota for
the specified user.
11662
Error
Failed to create a %1 MB
DiskSafe snapshot
resource. There is not
any storage pool with
enough free space.
The Snapshot resource could
not be created for DiskSafe
mirror disk due to insufficient
storage in storage pool
assigned for DiskSafe.
Add more storage to the
DiskSafe storage pool.
11665
Error
Console ([host name]):
Failed to enable backup
for virtual device %2.
The backup option could not
be enabled for the virtual
device, possibly due to a
device error or because the
maximum number of virtual
devices that can be enabled for
backup has been reached.
Check that the maximum
limit of 256 virtual devices
that can be enabled for
backup has not been
reached. Also, check the
disk status and system
status.
11667
Error
Console ([host name]):
Failed to disable backup
for virtual device %2.
The backup option could not
be disabled for the virtual
device possibly due to a
device error.
Check disk status and
system status.
11668
Error
Console ([host name]):
Failed to stop backup
sessions for virtual
device %2.
The raw device backup
session for the virtual device
could not be stopped possibly
due to the system being busy.
Check system status.
11672
Error
Console ([host name]):
Virtual device %2 cannot
join snapshot group %3
group id %4.
The virtual device could not be
added to the group possibly
because a snapshot operation
was in progress.
Check if a snapshot
operation is pending for the
virtual device or the group.
Check disk status and
system status.
11673
Error
Console ([host name]):
Virtual device %2 cannot
leave snapshot group %3
group id %4.
Virtual device could not be
removed from the group
possibly because a snapshot
operation was in progress.
Check if a snapshot
operation is pending for the
virtual device or the group.
Check disk status and
system status.
11676
Error
Console ([host name]):
Failed to resize NAS file
system on virtual device
%2.
The NAS file system could not
be resized automatically after
expansion using system
commands.
Try offline resize.
11681
Error
Console ([host name]):
Failed to resume Cache
Resource %2 (ID: %3).
SafeCache usage for the
virtual device could not be
resumed possibly due to a
device error.
Check disk status and
system status.
11683
Error
Console ([host name]):
Failed to suspend cache
Resource %2 (ID: %3).
SafeCache usage for the
virtual device could not be
suspended possibly due to a
device error.
Check disk status and
system status.
11684
Error
Console ([host name]):
Failed to reset cache on
target device %2 (ID: %3)
for %4 copy.
SafeCache could not be reset
for the snapshot copy target
resource possibly due to the
system being busy.
Check if system is busy and
retry later.
11686
Error
Console ([host name]):
Failed to add %2
Resource %3 (ID: %4).
The specified resource could
not be created possibly due to
a device error.
Check disk status and
system status.
11688
Error
Console ([host name]):
Failed to delete %2
Resource %3 (ID: %4).
The specified resource could
not be deleted, possibly
because the system was busy.
Check if system is busy.
11690
Error
Console ([host name]):
Failed to resume
HotZone resource %2
(ID: %3).
HotZone usage could not be
resumed possibly due to disk
error.
Check system disk status
and system status.
11692
Error
Console ([host name]):
Failed to suspend
HotZone resource %2
(ID: %3).
HotZone usage could not be
suspended possibly due to
disk error.
Check system disk status
and system status.
11694
Error
Console ([host name]):
Failed to update policy
for HotZone resource %2
(ID: %3).
The HotZone policy could not
be updated possibly due to
disk error.
Check system disk and
system status.
11695
Error
Console ([host name]):
Failed to get HotZone
statistic information.
HotZone statistics information
could not be retrieved from log
file.
Check HotZone log, disk
status, system status.
11696
Error
Console ([host name]):
Failed to get HotZone
status.
The HotZone status could not
be retrieved possibly due to
disk error.
Check HotZone device
status.
11701
Error
Console ([host name]):
Failed to reinitialize
snapshot resource (ID:
%2) for virtual device (ID:
%3).
The snapshot resource could
not be reinitialized possibly
due to disk error.
Check if the snapshot
resource is online.
Check if the system is busy.
11706
Error
Console ([host name]):
Failed to shrink snapshot
resource for resource %2
(ID: %3).
Shrinking of the snapshot
resource failed possibly due to
the system being busy.
Check system status and
retry.
11707
Warning
Deleting TimeMark %1
on virtual device %2 to
maintain snapshot
resource threshold is
initiated.
A TimeMark was deleted after
a failed expansion in order to
maintain the snapshot
resource threshold.
Check disk status, available
space.
Check if system is busy.
Try manual expansion if it is
necessary.
11708
Error
Failed to get TimeMark
information to roll back to
TimeMark %1 for virtual
device %2.
TimeMark information could
not be retrieved for rollback
possibly due to a pending
TimeMark deletion operation.
Retry later.
11711
Error
Copying CDP journal
data to %1 %2 (ID: %3)
failed to start. Error: %4.
CDP Journal data could not be
copied possibly due to the
system being busy.
Check system status.
11713
Error
Copying CDP journal to
%1 %2 (ID: %3) failed to
complete. Error: %4.
Copying CDP Journal data
failed possibly due to the
system being busy.
Check system status.
11715
Error
Console ([host name]):
Failed to suspend CDP
Journal Resource %2
(ID: %3).
The CDP Journal for the
resource could not be
suspended possibly due to the
system being busy.
Check system status.
11716
Error
Console ([host name]):
Failed to get information
for license activation.
License registration
information could not be
retrieved.
Check license is registered
and the public key is not
missing.
11717
Error
Console ([host name]):
Failed to activate license
(%2).
License registration failed.
Check connectivity to
registration server; check file
system is not read-only for
creation of intermediary files.
11730
Error
Console ([host name]):
Failed to suspend mirror
for virtual device %2.
Mirror synchronization could
not be suspended possibly
due to the system being busy.
Check system status.
11738
Error
Console ([host name]):
Failed to update the
replication parameters for
virtual device %2 to
IPStor Server %3
(compression: %4,
encryption: %5,
MicroScan: %6)
Replication properties could
not be updated.
Check system disk status
and system status.
11740
Warning
[Task %1] Snapshot
creation for %2 %3 will
proceed even if the Near-line mirror is out-of-sync
on server %4.
A snapshot is going to be
created while the mirror is
out-of-sync in a Near-line setup.
Synchronize the mirror of
the primary disk on the
primary server.
11741
Warning
[Task %1] Snapshot
creation / notification for
%2 %3 will proceed even
if the Near-line mirroring
configuration cannot be
retrieved from server %4.
The system tries to connect to
the primary server to obtain
the client configuration
information when a snapshot
is taken. The snapshot will still
be created but the client will
not be notified.
Check connectivity between
the primary server and Near-line server.
Check if the primary server
is busy.
11742
Warning
[Task %1] Snapshot
creation / notification for
%2 %3 will proceed even
if the Near-line mirroring
configuration is invalid on
server %4.
The system attempts to check
the primary server's
configuration when a snapshot
is taken on the Near-line
server. If it cannot, the
snapshot will still be taken, but
the data might not be valid.
Check primary disk
configuration and status.
11761
Error
Console ([host name]):
Failed to updated mirror
policy.
Virtual device mirroring policy
could not be updated possibly
due to the system being busy.
Check system status and
retry later.
11770
Error
Console ([host name]):
Failed to get mutual chap
user list.
The list of iSCSI Mutual CHAP
Secret users could not be
retrieved.
Check system status and
retry later.
11771
Error
Console ([host name]):
Failed to reset mutual
chap secret for user %2.
The iSCSI Mutual CHAP
Secret for a user could not be
reset by root.
Check system status and
retry later.
11773
Error
Console ([host name]):
Failed to update mutual
chap secret for user %2.
The iSCSI Mutual CHAP
Secret for a user could not be
updated by root.
Check system status and
retry later.
11775
Error
Console ([host name]):
Failed to add mutual
chap secret user %2.
The iSCSI Mutual CHAP
Secret for a user could not be
added by root possibly due to
disk problem or the system
being busy.
Check system disk and
system status.
11777
Error
Console ([host name]):
Failed to delete mutual
chap secret user %2.
The iSCSI Mutual CHAP
Secret for a user could not be
deleted by root.
Check log message for
possible cause.
11295
Error
Console ([host name]):
Invalid configuration
format.
The configuration file is not
readable by the server.
If there is a valid
configuration file saved, it
can be restored to the
system. Make sure to use
reliable storage devices for
the critical system
information.
11296
Error
Console ([host name]):
Failed to resolve host
name.
Host name could not be
mapped to IP address on the
primary server during
replication setup.
Check the accuracy of the
hostname entered for
replication target server and
the network configuration
between the replication
primary server and target
server.
11299
Critical
Failed to save the server
configuration to the
Configuration Repository.
Check the storage
connectivity and if
necessary, reconfigure
the Configuration
Repository.
The configuration file in the
configuration repository could
not be updated, possibly due to
an offline device or a disk failure.
Check system log messages
for possible cause.
11300
Error
Invalid user name (%1)
used by client at IP
address %2.
An invalid user name is used
to log in to the server, either
from the client host or the
console.
Make sure the correct user
name is used. If there are many
unexplained occurrences of
this message in the log,
someone may have been
deliberately trying to gain
unauthorized access by
guessing the user credential.
In that case, investigate,
starting with the source IP
address.
11301
Error
Invalid password for user
(%1) used by client at IP
address %2.
The incorrect password was
used during authentication
from the Console, or from the
client host when adding the
server.
Make sure the correct user
name and password pair is
used. If there are many
unexplained occurrences of this
message in the log, someone
may have been deliberately
trying to gain unauthorized
access by guessing the
password. In that case,
investigate, starting with the
source IP address.
11302
Error
Invalid passcode for
machine (%1) used by
client at IP address %2.
The incorrect shared secret
was used by the client host to
connect to the server. This is
most likely because the server
was reinstalled and the
credential file changed. It may
also be because someone was
deliberately trying to gain data
access by guessing the shared
secret, although this is highly
unlikely.
From the client host, delete
the server and add it back
again to resynchronize with
the shared secret.
11303
Error
Authentication failed in
stage %1 for client at IP
address %2.
The incorrect login was used
by the client host to connect
the server.
From the client host, delete
the server, add it back again,
and use the correct login.
11306
Error
The server Administrator
group does not exist.
Server Administrator Group
does not exist in the system
possibly due to improper
installation or upgrade.
Contact Technical Support
for possible cause and fixes.
11307
Error
User %1 at IP address
%2 is not a member of
the server Administrator's
group.
It might be a typo when user
typed in the ID and password
to log in.
Check user ID and
password and make sure
there is no possibility for
unauthorized login from that
IP address.
11309
Error
User ID %1 at IP address
%2 is invalid.
It might be a typo when user
typed in the ID and password
to log in.
Check the user account and
retry.
11310
Error
The client User name %1
does not match with the
client name %2.
User name does not match the
original user when resetting
the credential for the client.
Use the original user name
or ask server Administrator
to reset the credential from
the client.
11506
Error
Console ([host name]):
Failed to start replica
scanning for virtual
device %2.
It is possibly due to a
connection error or the system
being busy.
Check connectivity between
replication primary server
and target server. Check to
see if system is busy with
pending operations.
11508
Error
Console ([host name]):
Failed to set the
properties for the server.
Failed to update configuration
file for the new server
properties possibly due to disk
error or the system being busy.
Check the system disk and
the system status.
11510
Error
Console ([host name]):
Failed to save report -%2.
A report file could not be
saved possibly due to not
enough space or error on disk.
Check the system disk
status, available space, and
the system status.
11511
Error
Console ([host name]):
Failed to get the
information for the NIC.
Network interface information
could not be retrieved possibly
due to a configuration error or
low system resources.
Check that the network
configuration is correct. Also
check whether the system
memory is running low.
11512
Error
Console ([host name]):
Failed to add a replica for
device %2 to the server
%3 (watermark: %4 MB,
time: %5, interval: %6,
watermark retry: %7,
suspended: %8).
Failed to configure replication
on the primary server possibly
because the system was busy.
Check the system log
messages for actual cause.
11514
Error
Console ([host name]):
Failed to remove the
replica for device %2
from the server %3
(watermark: %4 MB,
time: %5, interval: %6,
watermark retry: %7,
suspended: %8).
Failed to remove replication
configuration on the primary
server when deleting
replication setup possibly
because the system was busy.
Check the system log
messages for actual cause.
11516
Error
Console ([host name]):
Failed to create the
replica device %2.
Failed to create the replica for
the source virtual device. This
is most likely due to a problem in
the remote server.
Check the hardware and
software condition in the
remote replica server to
make sure it is running
properly before trying again.
11518
Error
Console ([host name]):
Failed to start replication
for virtual device %2.
The remote server is not
reachable, or is in a bad state.
Check the hardware and
software condition in the
remote replica server to
make sure it is running
properly before trying again.
11520
Error
Console ([host name]):
Failed to stop replication
for virtual device %2.
It is possibly because the
system was busy.
Check the system log
messages for actual cause
and retry.
11522
Error
Console ([host name]):
Failed to promote replica
device %2 to a virtual
device.
It is possibly because the
system was busy.
Check the system log
messages for actual cause
and retry.
11524
Error
Console ([host name]):
Failed to run the server
X-Ray.
When any server process
cannot be started, it is most
likely due to insufficient system
resources, invalid state left by
a server process that may not
have been stopped properly,
or due to an unexpected OS
process failure that left the
system in a bad state. This
should happen very rarely. If it
occurs frequently, there are
external factors contributing to
the behavior that must be
investigated and removed before
running the server.
If the system resources are
low, run 'top' to check which
process is using the most
memory (see the example
below). If the physical memory
is below the server
recommendation, install more
memory on the system. If the
OS is suspected to be in a bad
state due to an unexpected
failure in either hardware or
software components, restart
the server machine.
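As a quick, hedged illustration of the 'top' check suggested above (standard Linux commands only; nothing product-specific is assumed):
  # Non-interactive snapshot of system load and running processes
  top -b -n 1 | head -n 20
  # Physical memory and swap usage, in megabytes
  free -m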
11530
Error
Console ([host name]):
Failed to back up
configuration files.
Failed to retrieve configuration
on the server to return to the
console for 'Save
Configuration' operation. File
might be updated or the
system is busy.
Check the system status.
11532
Error
Console ([host name]):
Failed to restore
configuration files.
Failed to restore configuration
from previous saved
configuration file possibly due
to configuration conflict or the
system being busy.
Check if the saved
configuration is outdated.
Check the system status.
11534
Error
Console ([host name]):
Failed to reset the umap
for virtual device %2.
A storage hardware failure
happened.
Check the storage devices,
e.g., power status, controller
status, etc. Check the
connectivity, e.g., cable
connectors. With Fibre Channel
switches, even if the connection
status light indicates the
connection is good, it is still not
a guarantee; push the connector
in to make sure. Check the
specific storage device using
OS-provided utilities such as
'hdparm'.
11535
Error
Console ([host name]):
Failed to update the
replication parameters for
virtual device %2 to the
server %3 (watermark:
%4 MB, time: %5,
interval: %6, watermark
retry: %7, suspended:
%8).
Failed to update replication
properties possibly because
the system was busy.
Check if the system is busy
and retry.
11537
Error
Console ([host name]):
Failed to claim physical
device %2.
This server version limits the
storage capacity.
Check license agreement
and key codes.
11539
Error
Console ([host name]):
Failed to import physical
device %2.
A storage hardware failure
happened.
Check the storage devices,
e.g., power status, controller
status, etc. Check the
connectivity, e.g., cable
connectors. With Fibre Channel
switches, even if the connection
status light indicates the
connection is good, it is still not
a guarantee; push the connector
in to make sure. Check the
specific storage device using
OS-provided utilities such as
'hdparm' (see the example
below).
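A minimal sketch of the 'hdparm' check mentioned above, assuming the suspect disk appears to the OS as /dev/sdb (a placeholder device name; substitute the actual device):
  # Show the identification details reported by the drive itself
  hdparm -I /dev/sdb
  # Non-destructive read test of the first 256 MB of the device
  dd if=/dev/sdb of=/dev/null bs=1M count=256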
11541
Error
Console ([host name]):
Failed to save event
message (ID: %2).
Failed to create an Event
message from Console or CLI
for replication, snapshot
expansion etc. possibly due to
not enough space on the
system disk.
Check the system disk
status and available space.
11542
Error
Console ([host name]):
Failed to remove replica
device %2.
Failed to delete the replica
disk possibly because the
system was busy.
Check the system status
and retry.
11544
Error
Console ([host name]):
Failed to modify replica
device %2.
Failed to expand the replica
disk possibly because the
system was busy.
Check the system status
and retry.
11546
Error
Console ([host name]):
Failed to mark the
replication for virtual
device %2.
Failed to mark replication in
sync possibly due to
connectivity issues between
the primary and the target
servers or the system being
busy.
Check the connectivity and
the system status. Try again.
11548
Error
Console ([host name]):
Failed to determine if
data was written to virtual
device %2.
Failed to check if the virtual
device has been updated
possibly due to device error or
the system being busy.
Check the device and the
system status.
11553
Error
Console ([host name]):
Failed to get login user
list.
The list of users could not be
retrieved from the system.
Check the system status.
11554
Error
Console ([host name]):
Failed to set failover
option
<selfCheckInterval: %d
sec>.
Failed to set failover options
on the primary server, possibly
because the failover module
stopped or a disk error
occurred.
Check the failover module
status. Check the system
disk status.
11556
Error
Console ([host name]):
Failed to start initializing
snapshot copy from
virtual device %2 to
virtual device %3.
This may happen if another
process that requires the
snapshot is performing I/O,
such as a backup operation. It
may also be due to a storage
hardware failure.
Check to see if another
process is using the
snapshot. Check storage
devices.
11560
Error
Console ([host name]):
Failed to get licenses.
The license key code
information could not be
retrieved.
Check the system disk and
system status.
11561
Error
Console ([host name]):
Failed to add license %2.
The license is not valid.
Check the license key code
validity.
11563
Error
Console ([host name]):
Failed to remove license
%2.
The license is not valid.
Check the license key code
validity.
11565
Error
Console ([host name]):
Failed to check licenses - option mask %2.
The license is not valid.
Check the license key code
validity.
11567
Error
Console ([host name]):
Failed to clean up
failover server directory
%2.
It is possibly due to a disk error
or the system being busy
when failover setup was to be
removed.
Check the system disk and
the system status.
11568
Error
Console ([host name]):
Failed to set (%2) I/O
Core for failover -- Failed
to create failover
configuration.
Failed to notify IO Core of
failover setup or removal
possibly because the system
was busy.
Reconfigure failover if this
happens during failover
setup.
11569
Error
Console ([host name]):
Failed to set %2 to Fibre
Channel mode %3.
This is possibly because the
Fibre Channel driver is not
loaded properly, or because the
wrong version of the driver is loaded.
Run 'lsmod' to check the qla
driver to make sure it is
loaded and it is the correct
revision located in
$ISHOME/lib/modules/
<kernel>/scsi
11571
Error
Console ([host name]):
Failed to assign Fibre
Channel device %2 to
%3 (rolled back).
Failed to assign virtual device
to Fibre Channel target. All
intermediary configuration
changes were rolled back and
the configuration remained
unchanged.
Check LUN conflict, disk
status, system status, Fibre
Channel Target module
status.
11572
Error
Console ([host name]):
Failed to assign Fibre
Channel device %2 to
%3 (not rolled back).
Failed to assign virtual device
to Fibre Channel target.
However, the configuration
was partially updated.
Check LUN conflict, disk
status, system status, Fibre
Channel Target module
status. May need to restart
Fibre Channel Target
Module to resolve the
configuration conflict.
11574
Error
Console ([host name]):
Failed to unassign Fibre
Channel device %2 from
%3 (rolled back) and
returns %4.
Failed to unassign virtual
device from Fibre Channel
target. All intermediary
configuration changes were
rolled back and the
configuration remained
unchanged.
Check Fibre Channel Target
module status.
11575
Error
Console ([host name]):
Failed to unassign Fibre
Channel device %2 from
%3 (not rolled back) and
returns %4.
Failed to unassign the virtual
device from the Fibre Channel
target. However, the
configuration is partially
updated.
Check the Fibre Channel
Target module status; you
may need to restart the Fibre
Channel Target module to
resolve the configuration
conflict.
11577
Error
Console ([host name]):
Failed to get Fibre
Channel target
information.
It is possibly due to a problem
with the Fibre Channel target
module.
Check the Fibre Channel
Target module status.
11578
Error
Console ([host name]):
Failed to get Fibre
Channel initiator
information.
This is possibly because the
Fibre Channel driver is not
loaded properly, or because the
wrong version of the driver is loaded.
Run 'lsmod' to check the qla
driver to make sure it is
loaded and it is the correct
revision located in
$ISHOME/lib/modules/
<kernel>/scsi.
11581
Error
Console ([host name]):
Failed to set NAS option
%2.
Failed to start the NAS
processes. This is most likely
due to insufficient system
resources, an invalid state left
by server processes that were
not stopped properly, or an
unexpected OS process failure
that left the system in a bad
state. This should happen very
rarely. If it occurs frequently,
there are external factors
contributing to the behavior that
must be investigated and
removed before running the
server.
If system resources are low,
run 'top' to check which
process is using the most
memory. If the physical
memory is below the server
recommendation, install more
memory on the system. If the
OS is suspected to be in a bad
state due to an unexpected
failure in either hardware or
software components, restart
the server machine.
11583
Error
Console ([host name]):
Failed to update Fibre
Channel client (%2)
WWPNs.
This is possibly because the
Fibre Channel driver is not
loaded properly, or because the
wrong version of the driver is loaded.
Run 'lsmod' to check the qla
driver to make sure it is
loaded and it is the correct
revision located in
$ISHOME/lib/modules/
<kernel>/scsi.
11585
Error
Console ([host name]):
Failed to set Fibre
Channel option %2.
The Fibre Channel option
could not be enabled or
disabled.
Check the system status.
11590
Error
Out of disk space to
expand snapshot storage
for virtual device %1.
There is no more storage left
for automatic expansion of the
snapshot resource, which just
reached the threshold usage.
Add additional storage to the
server. Physical storage must
be prepared for virtual drives
before it can be allocated for
snapshot resources.
11591
Error
Failed to expand
snapshot storage for
virtual device %1:
maximum segment
exceeded (error code
%2).
The virtual drive has an upper
limit on the number of physical
segments. The drive has been
expanded so many times that it
exceeded the limit.
Do not expand drives in
too-small increments.
Consolidate the segments by
mirroring or making a snapshot
copy to another virtual drive
with fewer segments before
expanding again.
11594
Error
Console ([host name]):
Failed to set CallHome
option %2.
The Email Alert option could
not be enabled or disabled.
Check the system status.
11598
Error
Out of disk space to
expand CDP journal
storage for %1.
The CDP Journal could not be
expanded due to insufficient
disk space.
Add more storage.
11599
Error
Failed to expand CDP
journal storage for %1:
maximum segment
exceeded (error code
%2).
The CDP Journal resource
could not be expanded due to
the maximum supported
segments.
Currently up to 64 segments
are supported; in order to
prevent this from happening,
create a bigger CDP journal
to avoid frequent
expansions.
11605
Error
Failed to create character
device to map TimeMark
%1 for virtual device %2.
Failed to map a raw device
interface for virtual device to
perform backup, snapshot
copy, or TimeMark copy.
Check the virtual device and
snapshot resource status.
11608
Error
Console ([host name]):
Failed to proceed copy/
rollback TimeMark
operation with client
attached on the target
virtual device %2.
The device to be rolled back is
still assigned to client hosts. To
guarantee no I/O happens
during rollback, the device
should not be assigned to any
host.
Unassign the virtual device
before rollback.
11609
Error
[Task %1] Failed to
create TimeMark for
virtual device %2 while
the last creation/client
notification is in progress.
The last snapshot operation,
including the notification
process, is still in progress.
This can happen when the
time interval between
snapshots is too short, or the
snapshot notification is held up
by network or client
applications.
Adjust the frequency of
TimeMark snapshots.
Determine the actual time it
takes for snapshot
notification to complete,
which depends on
application and data activity.
11610
Error
Failed to create
TimeView for virtual
device %2 TimeMark %3.
The TimeView resource could
not be created for the virtual
device possibly due to device
error.
Check to see if the virtual
device and snapshot
resource are online.
11613
Error
Console ([host name]):
Failed to enable
TimeMark for device %2.
The TimeMark option could
not be enabled for the virtual
device possibly due to device
error.
Check to see if the virtual
device and snapshot
resource are online.
11615
Error
Console ([host name]):
Failed to disable
TimeMark for device %2.
The TimeMark option for
virtual device could not be
disabled possibly because the
system was busy.
Retry later.
11618
Error
Failed to select
TimeMark %1 for virtual
device %2: TimeMark %3
has already been
selected.
The TimeMark is already
selected for another operation.
Wait for the completion of
the other operation.
11619
Error
Failed to select
TimeMark %1 character
device for virtual device
%2.
The TimeMark could not be
selected for raw device
backup possibly because the
system was busy.
Check the system status
and retry later.
11621
Error
Failed to create
TimeMark %1 for virtual
device %2.
The TimeMark for this virtual
device could not be created,
possibly due to a device error
or the system being busy.
Check the device status and
the system status.
11623
Error
Failed to delete
TimeMark %1 for virtual
device %2.
The specified TimeMark for
virtual device could not be
removed possibly due to
device error, pending
operation, or the system being
busy.
Check the device status and
the system status. Retry
later.
11625
Error
Failed to copy TimeMark
%1 of virtual device %2
as virtual device %3.
The TimeMark failed to copy
from source virtual device to
target virtual device possibly
due to device error or the
system being busy.
Check device status and
system status. Retry later.
11627
Error
Failed to rollback
TimeMark to timestamp
%1 for virtual device %2.
The TimeMark rollback for
virtual device failed possibly
due to device error or the
system being busy.
Check device status and
system status. Retry later.
11631
Error
Failed to expand
snapshot storage for
virtual device %1 (error
code %2).
Automatic snapshot resource
expansion failed possibly due
to device error, quota reached,
out-of-space, or the system
being busy.
Check log messages for
possible cause.
11632
Error
Console ([host name]):
Failed to set failover
option on secondary
server
<heartbeatInterval: %2
sec,
autoRecoveryInterval:
%3 sec>.
Failed to update failover
options with auto-recovery
mode on secondary server.
Check failover module
status. Check the system
disk status.
11633
Error
Console ([host name]):
Failed to set failover
option on secondary
server
<heartbeatInterval: %2
sec,
autoRecoveryInterval:
disabled>.
Failed to update failover
options without auto-recovery
mode on secondary server.
Check failover module
status. Check the system
disk status.
11637
Error
Failed to expand CDP
journal storage for %1
(error code %2).
CDP Journal could not be
expanded possibly due to not
enough space or error on the
disk.
Check disk status, available
space and system status.
11638
Error
Failed to expand CDP
journal storage for %1.
The virtual device is
assigned to user %2. The
quota for this user is %3
MB and the total size
allocated to this user is
%4 MB, which exceeds
the limit.
The CDP Journal resource could
not be expanded because the
storage quota limit was exceeded.
Increase storage quota for
the specified user.
11639
Error
The virtual device %1 is
assigned to user %2. The
quota for this user is %3
MB and the total size
allocated to this user is
%4 MB. Only %5 MB will
be added to the CDP
Journal.
CDP Journal resource was
expanded with a smaller
increment size than usual due
to user quota limit.
Increase storage quota for
the specified user.
11640
Error
Failed to expand
snapshot resource for
virtual device %1. The
virtual device is assigned
to user %2. The quota for
this user is %3 MB and
the total size allocated to
this user is %4 MB, which
exceeds the limit.
The snapshot resource was not
expanded because the quota
limit was exceeded.
Increase storage quota for
the specified user.
11641
Error
The virtual device %1 is
assigned to user %2. The
quota for this user is %3
MB and the total size
allocated to this user is
%4 MB. Only %5 MB will
be added to the snapshot
resource.
Snapshot resource was
expanded with a smaller
increment size than usual due
to user quota limit.
Increase storage quota for
the specified user.
11642
Error
Failed to create
temporary TimeView
from TimeMark %1 to
copy TimeView data for
virtual device %2.
TimeMark might not be
available to create the
temporary TimeView or raw
device creation failed.
If TimeMark is still available,
try TimeMark copy again.
11643
Error
[Task %1] Failed to
create TimeMark for
virtual device %2 while
notification to client %3
for other resource is in
still progress.
Snapshot notification to the
same client for other virtual
devices is still pending.
Retry later.
11644
Error
Take TimeView %1 id %2
offline because the
source TimeMark has
been deleted.
The TimeMark snapshot on which
the TimeView is based has been
deleted. The TimeView image is
therefore set to OFFLINE because
it is no longer accessible.
Remove the TimeView from
the resource.
11645
Error
Console ([host name]):
Failed to create
TimeView: virtual device
%2 already have a
TimeView.
For each TimeMark snapshot,
only one TimeView interface
can be created.
Do not attempt to create
several TimeViews from the
same TimeMark.
11649
Error
Failed to convert inquiry
string on SCSI device
%1.
The inquiry string contains
invalid information.
Check the device
configuration.
11655
Error
Bad capacity size for
SCSI device %1.
Failed to get capacity
information from the device.
Check the storage.
11656
Error
Discarded scsi device
%1, unsupported Cabinet
ID.
The Cabinet ID of the device
is not supported.
Check the storage device
definition.
11657
Error
Discarded scsi device
%1, missing "%2" vendor
in inquiry string.
The disk is not from one of the
supported vendors.
Check the storage device
definition.
11658
Error
SCSI device %1 storage
settings are not optimal.
Check the storage
settings.
The storage settings are not
optimal.
Check the storage.
11659
Error
Discarded scsi device
%1, exceeded maximum
supported LSI LUN %2.
The number of LSI device
LUNs exceeds the maximum
supported value.
Check storage configuration.
11660
Error
Failed to allocate a %1
MB DiskSafe mirror disk
in storage pool %2.
There is only %3 MB free
space left in that storage
pool.
DiskSafe mirror disk could not
be created due to insufficient
storage space.
Add more storage.
11661
Error
Failed to expand the
DiskSafe mirror disk by
%1 MB for user %2. The
total size allocated for
this user would be %3
MB and this exceeds the
user's quota of %4 MB.
DiskSafe mirror disk could not
be expanded due to user
quota.
Increase storage quota for
the specified user.
11662
Error
Failed to create a %1 MB
DiskSafe snapshot
resource. There is not
any storage pool with
enough free space.
Snapshot resource could not
be created for DiskSafe mirror
disk due to insufficient storage
in storage pool assigned for
DiskSafe.
Add more storage to the
DiskSafe storage pool.
11665
Error
Console ([host name]):
Failed to enable backup
for virtual device %2.
The backup option could not be
enabled for the virtual device,
possibly due to a device error or
because the maximum number of
virtual devices that can be
enabled for backup has been
reached.
Check that the maximum limit of
256 virtual devices that can be
enabled for backup has not been
reached; also check the disk
status and system status.
11667
Error
Console ([host name]):
Failed to disable backup
for virtual device %2.
Backup option could not be
disabled for virtual device
possibly due to device error.
Check disk status and
system status.
11668
Error
Console ([host name]):
Failed to stop backup
sessions for virtual
device %2.
The raw device backup session
for the virtual device could not be
stopped, possibly because the
system was busy.
Check the system status.
11672
Error
Virtual device %2 cannot
join snapshot group %3
group id %4.
Virtual device could not be
added to the group possibly
because snapshot operation
was in progress.
Check if snapshot operation
is pending for the virtual
device or the group. Check
disk status and system
status.
11676
Error
Virtual device %2 cannot
leave snapshot group %3
group id %4.
Virtual device could not be
removed from the group
possibly because snapshot
operation was in progress.
Check if snapshot operation
is pending for the virtual
device or the group. Check
disk status and system
status.
11681
Error
Console ([host name]):
Failed to resume Cache
Resource %2 (ID: %3).
The SafeCache usage for
virtual device could not be
resumed possibly due to a
device error.
Check disk status and
system status.
11683
Error
Console ([host name]):
Failed to suspend cache
Resource %2 (ID: %3).
The SafeCache usage for
virtual device could not be
suspended possibly due to a
device error.
Check disk status and
system status.
11684
Error
Console ([host name]):
Failed to reset cache on
target device %2 (ID: %3)
for %4 copy.
The SafeCache could not be
reset for snapshot copy target
resource possibly because the
system was busy.
Check if system is busy and
retry later.
11686
Error
Console ([host name]):
Failed to add %2
Resource %3 (ID: %4).
The specified resource could
not be created possibly due to
a device error.
Check disk status and
system status.
11688
Error
Console ([host name]):
Failed to delete %2
Resource %3 (ID: %4).
The specified resource could not be deleted, possibly because the system was busy.
Check if system is busy.
11690
Error
Console ([host name]):
Failed to resume
HotZone resource %2
(ID: %3).
The HotZone usage could not be resumed, possibly due to a disk error.
Check the system disk
status and system status.
11692
Error
Console ([host name]):
Failed to suspend
HotZone resource %2
(ID: %3).
HotZone usage could not be
suspended possibly due to a
disk error.
Check the system disk
status and system status.
11694
Error
Console ([host name]):
Failed to update policy
for HotZone resource %2
(ID: %3).
HotZone policy could not be
updated possibly due to a disk
error.
Check the system disk and
system status.
11695
Error
Console ([host name]):
Failed to get HotZone
statistic information.
HotZone statistics information
could not be retrieved from log
file.
Check HotZone log, disk
status, system status.
11696
Error
Console ([host name]):
Failed to get HotZone
status.
HotZone status could not be
retrieved possibly due to a disk
error.
Check HotZone device
status.
11701
Error
Console ([host name]):
Failed to reinitialize
snapshot resource (ID:
%2) for virtual device (ID:
%3).
The snapshot resource could
not be reinitialized possibly
due to a disk error.
Check if the snapshot
resource is online. Check if
the system is busy.
11706
Error
Console ([host name]):
Failed to shrink snapshot
resource for resource %2
(ID: %3).
Shrinking snapshot resource
failed possibly because the
system was busy.
Check the system status
and retry.
11707
Error
Deleting TimeMark %1
on virtual device %2 to
maintain snapshot
resource threshold is
initiated.
TimeMark deletion to maintain
snapshot resource threshold
started after a failed
expansion.
Check disk status, available
space. Check if system is
busy. Try manual expansion
if it is necessary.
11708
Error
Failed to get TimeMark
information to rollback to
TimeMark %1 for virtual
device %2.
TimeMark information could
not be retrieved for rollback
possibly due to pending
TimeMark deletion operation.
Retry later.
11711
Error
Copying CDP journal
data to %1 %2 (ID: %3)
failed to start. Error: %4.
The CDP Journal data could
not be copied possibly
because the system was busy.
Check the system status.
11713
Error
Copying CDP journal to
%1 %2 (ID: %3) failed to
complete. Error: %4.
Copying CDP Journal data
failed possibly because the
system was busy.
Check the system status.
11715
Error
Console ([host name]):
Failed to suspend CDP
Journal Resource %2
(ID: %3).
The CDP Journal for resource
could not be suspended
possibly because the system
was busy.
Check the system status.
11716
Error
Console ([host name]):
Failed to get information
for license activation.
The license registration
information could not be
retrieved.
Check that the license is registered and that the public key is not missing.
11717
Error
Console ([host name]):
Failed to activate license
(%2).
The license registration failed.
Check connectivity to
registration server; check file
system is not read-only for
creation of intermediary files.
11722
Error
Console ([host name]):
Failed to flush TimeView
cache data for TimeView
resource %2.
The snapshot resource or the
cache resource may be offline.
Check the snapshot and
cache resources status.
11730
Error
Console ([host name]):
Failed to suspend mirror
for virtual device %2.
Mirror synchronization could
not be suspended possibly
because the system was busy.
Check the system status.
11738
Error
Console ([host name]):
Failed to update the
replication parameters for
virtual device %2 to the
server %3 (compression:
%4, encryption: %5,
MicroScan: %6).
Replication properties could
not be updated.
Check the system disk
status and system status.
11740
Error
[Task %1] Snapshot
creation / notification for
%2 %3 will proceed while
the Near-line Mirror on
server %4 is out-of-sync.
A snapshot is going to be created while the mirror is out-of-sync in a Near-line setup.
Synchronize the mirror of
the primary disk on primary
server.
11741
Error
[Task %1] Snapshot
creation / notification for
%2 %3 will proceed while
the Near-line Mirroring
configuration checking
on server %4 failed.
The Near-line server could
connect to the primary server
to obtain the client
configuration information but
the client could not be notified
for snapshot. A snapshot will
still be created.
Check connectivity between
the primary server and the
Near-line server. Check if
the primary server is busy.
11742
Error
[Task %1] Snapshot
creation / notification for
%2 %3 will proceed while
the Near-line Mirroring
configuration on server
%4 is invalid.
The primary server
configuration check failed
when a snapshot was to be
taken on the Near-line server.
The snapshot will still be taken, but the data might not be valid.
Check the primary disk
configuration and status.
11761
Error
Console ([host name]):
Failed to update mirror
policy.
The virtual device mirroring
policy could not be updated
possibly because the system
was busy.
Check the system status
and retry later.
11770
Error
Console ([host name]):
Failed to get mutual chap
user list.
The list of iSCSI Mutual CHAP
Secret users could not be
retrieved.
Check the system status
and retry later.
11771
Error
Console ([host name]):
Failed to reset mutual
chap secret for user %2.
The iSCSI Mutual CHAP
Secret for a user could not be
reset by root.
Check the system status
and retry later.
11773
Error
Console ([host name]):
Failed to update mutual
chap secret for user %2.
The iSCSI Mutual CHAP
Secret for a user could not be
updated by root.
Check the system status
and retry later.
11775
Error
Console ([host name]):
Failed to add mutual
chap secret user %2.
The iSCSI Mutual CHAP
Secret for a user could not be
added by root possibly due to
a disk problem or the system
being busy.
Check the system disk and
system status.
11777
Error
Console ([host name]):
Failed to delete mutual
chap secret user %2.
The iSCSI Mutual CHAP
Secret for a user could not be
deleted by root.
Check the log message for
possible cause.
11900
Error
Failed to import report
request.
There is an invalid parameter
specified in the report request.
Check parameters for report
generation.
11901
Error
Failed to parse report
request %1 %2.
Report request parsing failed.
Check parameters for report
generation.
11902
Error
Undefined report type
%1.
Report type is invalid.
Check parameters for report
generation.
11910
Error
Failed to create report file
%2 (type %1).
Specified report type could not
be created possibly because
the system was busy, out of space, or had a disk error.
Check system log message
for possible cause.
13300
Error
Failed to authenticate to
the primary server -- Failover Module stopped.
The security credentials for the
failover operation are
corrupted or deleted. This will
not happen under normal
operating conditions.
Reconfigure the failover set
after re-establishing user
credentials, e.g., reset the
root password of both hosts
and then reconfigure failover
using the new root
credentials.
13301
Error
Failed to authenticate to
the local server -- Failover Module stopped.
See 13300.
See 13300.
13302
Error
Failed to transfer primary
static configuration to
secondary.
Quorum disk failure.
Check failover quorum disk
status.
13303
Error
Failed to transfer primary
dynamic configuration to
secondary
Quorum disk failure.
Check failover quorum disk
status.
13307
Error
Failed to transfer primary
authentication
information to secondary.
Quorum disk failure.
Check failover quorum disk
status.
13308
Error
Invalid failover
configuration detected.
Failover will not occur.
The primary configuration file
is missing.
Check network to make sure
the config file could be
transferred.
13309
Error
Primary server failed to
respond to command
from secondary.
Quorum disk or
communication failure.
Check failover quorum disk
status and network.
13316
Error
Failed to add IP address
[IP address].
This kind of problem should
rarely happen. If it does, it may
indicate a network
configuration error, possibly
due to system environment
corruption. It is also possible
that the network adapter failed
or is not configured properly. It
is also possible that the
network adapter driver has a
problem.
Restart the network. If the
problem persists, restart the
OS or restart the machine
(turn off then turn on the
machine). If the problem still
persists, you may need to
reinstall the OS. If that is the
case, make sure you properly save all IPStor configuration information before proceeding.
13317
Error
Failed to release IP
address [IP address].
During failover the system
may be holding on to the IP
address of the failed server for
longer than the failover
module can wait. This is not a
problem and the message can
be ignored.
None.
13319
Error
Failed to stop IPStor
Failover Module. Host
may need to reboot.
When any IPStor process
cannot be stopped, it is most
likely due to insufficient system
resources, an invalid state left
by a server process that may
not have been stopped
properly, or an unexpected OS
process failure that left the
system in a bad state. This
should happen very rarely. If it occurs frequently, there may be external factors contributing to the behavior that must be investigated and removed before running the server.
See 11240.
13320
Error
Failed to update the
configuration files to the
primary server [Error].
See 13300.
See 13300.
13700
Error
Failed to allocate
memory -- Self-Monitor
Module stopped.
When any server process
cannot be started, it is most
likely due to insufficient system
resources, invalid state left by
a process that may not have
been stopped properly, or due
to an unexpected OS process
failure that left the system in a
bad state. This should happen
very rarely. If it occurs frequently, there may be external factors contributing to the behavior that must be investigated and removed before running the IPStor server.
If system resources are low,
use top to check the process
that is using the most
memory. If physical memory
is below the IPStor
recommendation, install
more memory to the system.
If the OS is suspected to be in a bad state due to an unexpected failure in either hardware or software components,
restart the server machine to
make sure the OS is in a
healthy state before trying
again.
13701
Error
Failed to release IP
address [IP address].
See 13317.
See 13317.
13702
Error
Failed to add virtual IP
address: %1. Retrying
the operation.
There may be a network issue
for the primary server to get
back its virtual IP during
failback.
Check network
configuration.
13703
Error
Failed to stop IPStor Self-Monitor Module.
See 13319.
See 13319.
13704
Error
IPStor module failure
detected. Condition: %1.
The secondary server has
detected that one module has
been stopped on the primary.
Check primary server status.
13710
Critical
The Live Trial period has
expired for IPStor Server
[Server name]. Please
contact FalconStor or its
representative to
purchase a license.
The live trial grace period has been exceeded.
Contact FalconStor or a
representative to obtain
proper license.
13711
Critical
The following options are
not licensed: [IPStor
option]. Please contact
FalconStor or its
representative to
purchase a license.
The specific option is not
licensed properly.
Contact FalconStor or a
representative to obtain
proper license.
13800
Error
Primary server failure
detected. Failure
condition: [Error].
The primary server detected
failure condition as described,
which is being reported to the
secondary server. Waiting for
the secondary server to decide whether it should take over.
None.
13804
Critical
Quorum disk failed to
release to secondary.
The virtual drive holding the
quorum is no longer available
due to the deletion of the first
virtual drive when the system
was in an inconsistent state.
This should rarely happen if
the server is not in an
experimental stage where
drives are created and
deleted randomly. Call
Technical Support if it
persists.
13817
Critical
Primary server failback
was unsuccessful. Failed
to update the primary
configuration.
The primary server failed to
restore from the failover
operation due to other
conditions.
Check the log for the specific
error conditions encountered
and correct the situation
accordingly.
13818
Critical
Quorum disk negotiation failed.
The primary server failed to
access the quorum disk.
The secondary will take over
anyway. Shut down the
primary in order to avoid any
conflict by using a power
control or an auto shutdown
script.
13820
Warning
Failed to retrieve primary
server health information.
The secondary server cannot
receive a heartbeat from the
primary server. The secondary
server is now trying to
determine if the primary server
is down, or the secondary
server itself is being isolated
from the network by trying to
contact other network entities.
Check other messages in
the log for more details and
more precise picture of the
situation.
13821
Error
Unable to contact other
entities in network.
Assume failure in
secondary side. Failover
not initiated.
The secondary server failed to
receive a heartbeat from the
primary and also failed to
contact any other network
entities in the subnet.
Considering this network
problem, the secondary
decides to not take over the
primary.
Check the secondary server
network status.
13822
Critical
Secondary will not take
over because storage
connectivity is not 100%.
When the primary reports a
storage connectivity problem,
the secondary will try to
determine if it has better
connectivity. If it is not 100%
healthy, e.g., it fails to connect
to all storage devices, it will
not take over.
Check the storage
connectivity for both the
primary and secondary to
correct the situation. See
11204 for checking storage.
13827
Error
Failed to stop quorum
updating process. PID.
Maybe due to storage
device or connection
failure.
There may be a storage
device or connection failure.
Check storage connectivity.
13828
Warning
Almost running out of file
handlers (current
[Number of handles],
max [Number of
handles]).
The operating system is
running out of resources for file
handles.
Determine the appropriate
amount of memory required
for the current configuration
and applications. Check for
any process that is leaking
memory.
13829
Warning
Almost running out of
memory (current
[Number of KB] K, max
[Number of KB]).
The operating system is
running out of memory.
See 13828.
13830
Error
Get configuration file
from storage failed.
There may be a storage
device or connection failure.
Check storage connectivity.
13832
Error
Primary server operation
is resumed either by user
initiated action, or
secondary server is
suspended.
The failed server was forced to
come back.
Check the primary and
secondary server status.
13833
Error
Failed to backup file from
[source] to [target
location].
There may be a storage
device or connection failure.
Check storage connectivity.
13834
Error
Failed to copy file out
from Quorum repository.
There may be a storage
device or connection failure on
the quorum disk.
Check storage connectivity.
13835
Error
Failed to take over
primary.
The secondary server is not
completely functional.
Check secondary server
status.
13836
Error
Failed to get
configuration files from
repository. Check and
correct the configuration
disk.
There may be a storage
device or connection failure on
the quorum disk.
Check storage connectivity.
13841
Error
Secondary server does
not match primary server
status.
Takeover is in progress but the
primary server is not in DOWN
or READY status.
Check primary server status.
It may have temporarily
been in an inconsistent
state; if its status is still not
DOWN or READY, check if
the sm module is running.
13842
Warning
Secondary server will
takeover. Primary is still
down.
The primary server failed.
None.
13843
Error
Secondary server failed
to get original conf file
from repository before
failback.
There may be a storage
device or connection failure on
the quorum disk.
Check storage connectivity.
13844
Error
Failed to write to
repository.
There may be a storage
device or connection failure on
the quorum disk.
Check storage connectivity.
13845
Warning
Quorum disk failure
detected. Secondary is
still in takeover mode.
There may be a storage
device or connection failure on
the quorum disk.
Check storage connectivity.
13848
Warning
Primary is already shut
down. Secondary will
take over immediately.
Failover occurred.
None.
13849
Warning
One of the heartbeat
channels is down: IP
address [IP]
Lost heartbeat IP information.
Check network connections.
13850
Error
Secondary server can
not locate quorum disk.
Either the configuration is
wrong, or the drive is
offline.
There may be a storage
device or connection failure on
the quorum disk.
Check storage connectivity.
13851
Error
Secondary server can't
take over due to
[Reason]
The secondary server cannot
take over due to the indicated
reason.
Take action based on the indicated reason.
13853
Error
Secondary notified
primary to go up because
secondary is unable to
take over.
The secondary server
detected a failure on the
primary but it also detected a
failure on itself so it did not
take over the primary. Perhaps
the primary was just booting
up.
Check the status of both
servers.
13856
Error
Secondary server failed
to communicate with
primary server through
IP.
There is a heartbeat
communication issue between
failover partners.
Check network connections.
13858
Critical
Secondary server failed
to communicate with
remote mirror.
There was a primary server
failure or network
communication is broken.
Check server and network
connections.
13860
Error
Failed to merge
configuration file.
This may be because of
inconsistent failover node
configuration files when
merging them after restore.
Check the server
configuration.
13861
Error
Failed to rename file from
%1 to %2.
The file name already exists or
the file system is inconsistent
or read-only.
Check the file system.
13862
Error
Failed to write file %1 to
repository.
There might be storage device
or connection failure on the
quorum disk.
Check storage connectivity.
13863
Critical
Primary server is
commanded to resume.
Forced primary recovery.
Check the status of failover
servers.
13864
Critical
Primary server operation
will terminate.
Forced primary down.
Check server status.
13877
Error
Secondary server failed
to take over.
Secondary server is not in a
good state.
Check secondary server.
13878
Error
Primary server has
invalid failover
configuration.
Server configuration is not
consistent.
Check failover setup
configuration.
13879
Critical
Secondary server
detected kernel module
failure; you may need to
reboot server %1.
Unexpected kernel module
error happened.
Reboot the secondary
server.
13880
Critical
Secondary server has
detected communication
module failure. Failover
is not initiated. error
[Error].
Unexpected error happened in
comm module so failover will
not occur.
Check server modules
status.
13881
Error
Secondary server will
terminate failover
module. error [Error].
Forced fm module to stop.
Check server status.
13882
Error
Primary server quorum
disk may have problem.
error [Error].
There might be a storage
device or connection failure on
the quorum disk.
Check storage connectivity.
13888
Warning
Secondary server is
temporarily busy.
The secondary server has a
heavy load, perhaps due to I/O.
Check server status.
13895
Critical
Partner server failure
detected: [Failure]
(timestamp [Date Time])
This server detected the
specified failure condition on
the partner that can result in a
failover.
Check failover condition to
fix it.
15000
Error
Snapshot copy failed to
start because of invalid
input arguments.
The source virtual device or
the destination virtual device
cannot be accessed when
creating a snapshot and
copying it.
Check source and target
virtual devices.
15002
Error
Snapshot copy from
virtual device id %1 to id
%2 failed because it
could not open file %3.
The source virtual device or
the destination virtual device
cannot be accessed when
creating a snapshot and
copying it.
Check source and target
virtual devices.
15003
Error
Snapshot copy from
virtual device id %1 to id
%2 failed because it
failed to allocate (%3)
memory.
Memory is low.
Check server memory
amount and usage.
15004
Error
Snapshot copy from
virtual device id %1 to id
%2 failed because an
error occurred when
writing to file %3, errno is
%4.
The source virtual device or
the destination virtual device
cannot be accessed when
creating a snapshot and
copying it.
Check source and target
virtual devices.
15005
Error
Snapshot copy from
virtual device id %1 to id
%2 failed because an
error occurred when
lseek in file %3, errno is
%4.
The source virtual device or
the destination virtual device
cannot be accessed when
creating a snapshot and
copying it.
Check source and target
virtual devices.
15006
Error
Snapshot copy from
virtual device id %1 to id
%2 failed because an
error occurred when
reading from file %3,
errno is %4.
The source virtual device or
the destination virtual device
cannot be accessed when
creating a snapshot and
copying it.
Check source and target
virtual devices.
15008
Error
Snapshot copy from
virtual device id [Device
ID] to id [Device ID] might
have run out of snapshot
reserved area. Please
expand the snapshot
reserved area.
The snapshot copy operation
failed and is most likely due to
insufficient snapshot resource
area that cannot maintain the
snapshot.
Increase the snapshot
resource or create the
snapshot copy while the
virtual drive is not being
actively written to.
15016
Error
TimeMark copy failed to
start because of invalid
input arguments.
The source virtual device or
the destination virtual device
cannot be accessed when
copying an existing TimeMark.
Check source and target
virtual devices.
15018
Error
TimeMark copy from
virtual device id %1
snapshot image %2 to id
%3 failed because it
failed to open file %4.
The source virtual device or
the destination virtual device
cannot be accessed when
copying an existing TimeMark.
Check source and target
virtual devices.
15019
Error
TimeMark copy from
virtual device id %1
snapshot image %2 to id
%3 failed because it
failed to allocate (%4)
memory.
Memory is low.
Check server memory
amount and usage.
15020
Error
TimeMark copy from
virtual device id %1
snapshot image %2 to id
%3 failed because an
error occurred when
writing to file %4, errno is
%5.
The source virtual device or
the destination virtual device
cannot be accessed when
copying an existing TimeMark.
Check source and target
virtual devices.
15021
Error
TimeMark copy from
virtual device id %1
snapshot image %2 to id
%3 failed because an
error occurred when
lseek in file %4, errno is
%5.
The source virtual device or
the destination virtual device
cannot be accessed when
copying an existing TimeMark.
Check source and target
virtual devices.
15022
Error
TimeMark copy from
virtual device id %1
snapshot image %2 to id
%3 failed because an
error occurred when
reading from file %4,
errno is %5.
The source virtual device or
the destination virtual device
cannot be accessed when
copying an existing TimeMark.
Check source and target
virtual devices.
15024
Warning
TimeMark copy from
virtual device id [Device
ID] snapshot image
[TimeMark name] to id
[Device ID] might have
run out of snapshot
reserved area. Please
expand the snapshot
reserved area.
The TimeMark copy operation
failed and is most likely due to
insufficient snapshot resource
area that cannot maintain the
snapshot.
Increase the snapshot
resource or create a
TimeMark copy while the
virtual drive is not being
actively written to.
15032
Error
TimeMark rollback failed
to start because of invalid
input arguments.
The source virtual device or
the destination virtual device
cannot be accessed.
Check source and target
virtual devices.
15034
Error
TimeMark rollback for
virtual device id %1 to
snapshot image %2
failed because it failed to
open file %3.
The source virtual device or
the destination virtual device
cannot be accessed.
Check source and target
virtual devices.
15035
Error
TimeMark rollback for
virtual device id [Device
ID] to snapshot image
[TimeMark name] failed
because it failed to
allocate ([Kilobytes])
memory.
The memory resource in the
system is running low. The
system cannot allocate
enough memory to perform
the rollback operation.
Stop unnecessary
processes or delete some
TimeMarks and try again. If
this happens frequently,
increase the amount of
physical memory to
adequate level.
15036
Error
TimeMark rollback for
virtual device id %1 to
snapshot image %2
failed because an error
occurred when writing to
file %3, errno is %4.
The source virtual device or
the destination virtual device
cannot be accessed.
Check source and target
virtual devices.
15307
Error
TimeMark rollback for
virtual device id %1 to
snapshot image %2
failed because an error
occurred when lseek in
file %3, errno is %4.
The source virtual device or
the destination virtual device
cannot be accessed.
Check source and target
virtual devices.
15308
Error
TimeMark rollback for
virtual device id %1 to
snapshot image %2
failed because an error
occurred when reading
from file %3, errno is %4.
The source virtual device or
the destination virtual device
cannot be accessed.
Check source and target
virtual devices.
15040
Error
TimeMark rollback for
virtual device id [Device
ID] to snapshot image
[TimeMark name] might
have run out of snapshot
reserved area. Please
expand the snapshot
reserved area.
The snapshot resource area is
used for the rollback process.
If the resource is too low, it will
affect the rollback operation.
Expand the snapshot
resource to an adequate
level.
15041
Error
TimeMark rollback for
virtual device id %1 to
snapshot image %2
failed because an error
occurred while getting
TimeMark extents.
This might be due to snapshot
resource device error.
Check snapshot resource
device.
15050
Error
Server IO cpl call
UPDATE_TimeMark
failed on vdev id [Device
ID]: Invalid Argument
A TimeMark-related function call returned an error. For example, if you get this error during TimeMark copy, it is most likely due to insufficient snapshot resource space.
Check the system log to take
an action based on the
related function call that has
failed. For TimeMark copy
failure, expand the snapshot
resource to an adequate
level. Check if the TimeMark or Replication completed successfully. If not, manually run the TimeMark or Replication after expanding the snapshot resource.
15051
Error
Server ioctl call %1 failed
on vdev id %2: I/O error
(EIO).
The virtual drive is not
responsive to IO requested by
the upper layer.
Try again after checking
devices.
15052
Error
Server ioctl call %1 failed
on vdev id %2: Not
enough memory space
(ENOMEM).
The virtual drive is not
responsive to the upper layer
calls because of low memory
condition.
Check system memory.
15053
Error
Server ioctl call %1 failed
on vdev id %2: No space
left on device (ENOSPC).
The virtual drive is not
responsive to the upper layer
calls because of not enough
free space.
Check free space on
physical and virtual devices.
15054
Error
Server ioctl call %1 failed
on vdev id %2: Already
existed (EEXIST).
The operation may have been
already executed or is
conflicting with an existing
item.
Check operation results.
15055
Error
Server ioctl call [Device
ID] failed on vdev id
[Device ID]: Device or
resource is busy
(EBUSY).
The virtual drive is busy with I/O and not responsive to the upper layer calls.
Try again when the system is less busy, or determine the cause of the high activity and correct the situation if necessary.
15056
Error
Server ioctl call %1 failed
on vdev id %2: Operation
still in progress
(EINPROGRESS).
The virtual drive is busy with I/O and not responsive to the upper layer calls.
Try again when the system is less busy, or determine the cause of the high activity and correct the situation if necessary.
16002
Error
Failed to create
TimeMark for group %1.
TimeMark cannot be created
on all group members.
Check group members.
16003
Error
Failed to delete
TimeMarks because they
are in rollback state.
TimeMarks are in rollback
state.
Try again.
16004
Error
Failed to delete
TimeMarks because
TimeMark operation is in
progress to get TimeMark
information.
TimeMark operation is in
progress.
Try again.
16010
Error
Group cache/CDP
journal is enabled for
virtual device %1, vdev
signature is not set for
vss.
Virtual device is not VSS
aware.
Select the right virtual device
for VSS operation.
16106
Error
Failed to update the
configuration of the
Primary Disk %1 for
Near-line Recovery.
Near-line storage device might
have a problem.
Check the server connection
of the near-line pair.
16107
Error
Failed to update the
configuration of the Near-line Disk %1 for Near-line Recovery.
Near-line storage device might
have a problem.
Check the server connection
of near-line pair.
16108
Error
Failed to start TimeMark
rollback on Near-line
Disk %1 for Near-line
Recovery.
Near-line storage device might
have a problem.
Check the TimeMark status
and server connection of the
near-line pair.
16109
Error
Failed to assign the
Primary Server to Near-line Disk %1 to resume
the Near-line Mirroring
configuration.
The ioctl call may fail due to
server busy or assignment
error from Fibre Channel or
iSCSI depending on the
protocol.
Check the server status and
retry.
16110
Error
Failed to update the
configuration of the
Primary Disk %1 to
resume Near-line
Mirroring configuration.
Near-line storage device might
have a problem.
Check the server connection
of the near-line pair.
16111
Error
Failed to update the
configuration of the Near-line Disk %1 to resume
Near-line Mirroring
configuration.
Near-line storage device might
have a problem.
Check the server connection
of the near-line pair.
16120
Error
Failed to update the
configuration of the
Primary Disk %1 for
Near-line Replica
Recovery.
Storage device might have a
problem.
Check the server connection
of the near-line pair and
replica server.
16121
Error
Failed to update the
configuration of the Near-line Disk %1 for Near-line
Replica Recovery.
Storage device might have a
problem.
Check the server connection
of the near-line pair and
replica server.
16122
Error
Failed to update the
configuration of the Near-line Replica %1 for Near-line Replica Recovery.
Storage device might have a
problem.
Check the server connection
of the near-line pair and
replica server.
16123
Error
Failed to start TimeMark
rollback on Near-line
Replica %1 for Near-line
Replica Recovery.
Storage device might have a
problem.
Check the server connection
of the near-line pair and
replica server.
16124
Error
Failed to update the
configuration of the
Primary Disk %1 to
resume the Near-line
Mirroring configuration.
Storage device might have a
problem.
Check the server connection
of the near-line pair and
replica server.
16125
Error
Failed to update the
configuration of the Near-line Disk %1 to resume
the Near-line Mirroring
configuration.
Storage device might have a
problem.
Check the server connection
of the near-line pair and
replica server.
16126
Error
Failed to update the
configuration of the Near-line Replica %1 to
resume the Near-line
Mirroring configuration.
Storage device might have a
problem.
Check the server connection
of the near-line pair and
replica server.
16200
Error
Console ([host name]):
Failed to modify Fibre
Channel client (%2)
WWPN from %3 to %4.
There may be duplicate
WWPNs.
Check FC WWPNs.
16211
Error
Failed to add storage to
the thin disk %1 (error
code %2).
There may not be enough
storage available.
Check storage capacity.
16212
Error
Failed to add storage to
the thin disk %1. The
virtual device is assigned
to user %2. The quota for
this user is %3 MB and
the total size allocated to
this user is %4 MB, which
exceeds the limit.
Quota limit is reached.
Check user quota.
16213
Error
The virtual device %1 is
assigned to user %2. The
quota for this user is %3
MB and the total size
allocated to this user is
%4 MB. Only %5 MB will
be added to the thin disk.
Quota limit is reached.
Check user quota.
16214
Error
Out of disk space to add
storage to the thin disk
%1.
There is not enough storage
available.
Check storage capacity.
16215
Error
Failed to add storage to
the thin disk %1:
maximum segment
exceeded (error code
%2).
There is not enough storage
available.
Check storage capacity.
16217
Error
Console ([host name]):
Failed to update the thin
disk properties for virtual
device %2 (threshold:
%3, increment: %4)
Parameter values might be
inconsistent.
Check parameters.
16219
Error
Console ([host name]):
Failed to modify the thin
disk size for virtual device
%2 to %3 MB (%4
sectors).
Thin disk expansion failed
possibly due to a device error
or the system being busy.
Check device status and
system status; then try
again.
16220
Error
Console ([host name]):
Failed to add storage to
the thin disk %2.
There is not enough storage
available.
Check storage capacity.
16232
Error
Failed to initialize report
scheduler configuration.
The system might be busy or
disk space is running low.
Check system resource
usage and disk usage.
16234
Error
Failed to start report
scheduler.
The system might be busy and
takes longer to start.
Check to see if the CLI proxy
server module is started.
Restart the comm module if
the proxy server module is
not started.
16236
Error
Failed to stop report
scheduler.
The system might be busy and
takes longer to stop.
Check to see if the CLI proxy
server module is stopped.
16238
Error
Failed to retrieve report
schedule(s).
The system might be busy.
Retry later.
16240
Error
Failed to add / update
report schedule(s).
The system might be busy or
disk space is running low.
Check system resource
usage and disk usage.
16242
Error
Failed to remove report
schedule(s).
The system might be busy.
Retry later.
16252
Error
Failed to initialize
statistics log scheduler
configuration.
The statistics scheduler thread
could not be started possibly
due to being configured
incorrectly or system status.
Check system status.
16254
Error
Failed to start statistics
log scheduler.
The statistics scheduler thread
could not start to collect
information.
Check system status.
16256
Error
Failed to stop statistics
log scheduler.
Statistics scheduler thread
could not stop possibly due to
the system being busy.
Check system status.
16258
Error
Failed to retrieve
statistics log schedules.
Statistics schedules could not
be retrieved possibly due to
the system being busy.
Check system status.
16260
Error
Failed to add / update
statistics log schedule(s).
Statistics schedules could not
be updated possibly due to the
system being busy.
Check system status.
16262
Error
Failed to remove
statistics log schedule(s).
Statistics schedules could not
be removed possibly due to
the system being busy.
Check system status.
17001
Error
Rescan replica cannot
proceed due to
replication already in
progress.
Rescan cannot be performed
when replication is in
progress.
Wait for the process to
complete before trying again
or change the replication
schedule.
17002
Error
Rescan replica cannot
proceed due to
replication control area
missing.
There may be a storage
problem.
Check the virtual device
layout and storage devices
for missing segments.
17003
Error
Rescan replica cannot
proceed due to
replication control area
failure.
There may be a storage
problem.
Check the virtual device
layout and storage devices
for missing segments.
17004
Error
Replication cannot
proceed due to
replication control area
failure.
There may be a storage
problem.
Check the virtual device
layout and storage devices
for missing segments.
17005
Error
Replication cannot
proceed due to
replication control area
failure.
There may be a storage
problem.
Check the virtual device
layout and storage devices
for missing segments.
17006
Error
Rescan replica cannot
proceed due to
replication control area
failure.
There may be a storage
problem.
Check the virtual device
layout and storage devices
for missing segments.
17011
Error
Rescan replica failed due
to network transport
error.
Rescan for differences
requires connecting to the
replica server. A network
problem will cause rescan to
fail.
Check network condition
between the IPStor servers.
17012
Error
Replicating replica failed
due to network transport
error.
Replication failed due to a
network condition.
Check network condition
between the IPStor servers.
17013
Error
Rescan replica failed due
to local disk error.
Rescan encountered a disk I/O error from the source disk.
Check the storage device or
system in the source server.
17014
Error
Replication failed due to
local disk error.
Replication encountered a disk
I/O error from the source disk.
Check the storage device or
system in the source server.
17015
Error
Replication failed
because local snapshot
used up all of the
reserved area.
Replication failed because the
snapshot from the source drive
could not be maintained due to
low snapshot resources.
Expand the snapshot
resource for the source
device.
17016
Error
Replication failed
because the replica
snapshot used up all of
the reserved area.
Replication failed because the
snapshot from the replica drive
could not be maintained due to
low snapshot resource space.
Expand the snapshot
resource for the replica
device.
31003
Error
Failed to open file %1.
The specified file does not
exist.
Check the file existence.
31004
Error
Failed to add user %1 to
the NAS server.
When adding username and
UID into the file /etc/passwd,
one of the following errors
occurred:
- nasgrp is not in /etc/group
- username already exists in /etc/passwd
- the file /etc/passwd cannot
be updated
Check that the nasgrp group exists with the command "getent group | grep nasgrp". If not, add it with the command "groupadd nasgrp".
If the username is new and the group nasgrp already exists, check that the file system does not have any issues by creating a test file under /etc. If the file cannot be created, reboot the server to trigger a file system check.
31005
Error
Failed to allocate
memory.
Memory is low.
Check system memory usage and make sure enough memory is reserved for user-mode operations, especially if you have NAS enabled. Run the command "cat /proc/meminfo" to check that ((MemFree+Buffers+Cached)/MemTotal) is not less than 10%. Further investigation is needed to determine the cause of high memory usage.
31011
Error
IPSTORUMOUNT: Failed
to unmount %1.
When unmounting a NAS file
system, one of the following
errors occurred:
- the mount path is not from /nas
- the umount process cannot be forked
- the NAS file system is busy and cannot be unmounted
- /etc/mtab cannot be locked temporarily
Run "lsof /nas/<resource>"
to check the process that
opens the device. If the
process exists, then
manually kill it. If no process
opens the device, then you
may need to reboot the
server.
31013
Error
IPSTORMOUNT: Failed
to mount %1.
When mounting a NAS file system, one of the following errors occurred:
- failed to get the vdev name by vid from ipstor.conf. This can happen if ipstor.conf cannot be read or does not contain VirtualDevConnection info.
- unmount failed (Ref. 31011). (An unmount will happen when the mount path is duplicated.)
- the NAS file system failed to be mounted
Check whether ipstor.conf can be opened.
Try to create a test file under $ISHOME/etc/$HOSTNAME; if the file cannot be created, written, or read, the file system might be corrupted. You need to reboot the server to trigger a file system check.
Check whether the vdev, vid, and VirtualDevConnection information is correct in ipstor.conf.
Try to manually mount the NAS device to a test folder, such as /mnt/test; an error displays on the screen if the mount fails.
31017
Error
Failed to write to file %1.
The file system may be
inconsistent.
Try to create a test file in the
indicated path. If the file
cannot be created, written,
or read, reboot the server to
trigger a file system check.
31020
Error
Failed to rename file [File
name] to file [File name].
The file system is full or
system resources are critically
low.
Try removing some
unnecessary files like logs or
cores.
31023
Error
IPSTORNASMGTD:
Failed to create file [File
name].
See 31020.
See 31020.
31024
Error
IPSTORNASMGTD:
Failed to lock file [File
name].
Some processes exited
without an unlock file.
Restart the server modules.
31025
Error
IPSTORNASMGTD:
Failed to open file [File
name].
One of the configuration files is
missing.
Make sure the package is
installed properly.
31028
Warning
Failed to lock file [File
name].
Some processes exited
without an unlock file.
Restart the server modules.
31029
Error
Failed to create file [File
name].
See 31020.
See 31020.
31030
Error
Failed to create directory
[Directory name].
See 31020.
See 31020.
31031
Error
Failed to remove
directory [Directory
name].
Some other process might be
accessing the directory.
Try stopping some running
process or exit out of
existing logins.
31032
Error
Failed to execute
program '[Program
name]'.
When any server process
cannot be started, it is most
likely due to insufficient system
resources, invalid state left by
a server process that may not
have been stopped properly,
or an unexpected OS process
failure that left the system in a
bad state. This should happen
very rarely. If it occurs frequently, there may be external factors contributing to the behavior that must be investigated and removed before running the server.
If system resources are low,
use top to check the process
that is using the most
memory. If physical memory
is below the IPStor
recommendation, install
more memory to the system.
If the OS is suspected to be in a bad state due to an unexpected failure in either hardware or software components,
restart the server machine to
make sure the OS is in a
healthy state before trying
again.
Check whether there is any
core file under $ISHOME/bin
that indicates process error.
31034
Warning
Local IPStor SAN Client
is not running.
The Client is not running
properly.
Restart the server modules.
31035
Error
Failed to add group
[Group name] to the NAS
server.
The number of reserved group IDs is used up.
Add additional ranges from Console -> NAS Clients -> Windows Clients -> UID/GID.
31036
Error
Failed to delete user
[User name] from the
NAS server.
User being deleted is currently
logged in.
Kill any running process that
belongs to an account that
you are deleting.
31037
Error
Error accessing NAS
Resource state file for
virtual device [Device
number].
System had an unclean
shutdown.
No action needed.
31039
Error
Failed to rename file [File
name] to file [File name].
File system is full.
Try removing some
unnecessary files like logs or
cores.
31040
Error
Failed to create the NAS
Resource. Failed to
allocate SCSI disk device
handle - operating
system limit reached.
OS limit reached.
Refer to doc on how to
rebuild kernel to support
more SCSI devices.
31041
Error
Exceed maximum
number of reserved NAS
users.
The number of reserved user IDs is used up.
Add additional ranges from Console -> NAS Clients -> Windows Clients -> UID/GID.
31042
Error
Exceed maximum
number of reserved NAS
groups.
The number of reserved group IDs is used up.
Add additional ranges from Console -> NAS Clients -> Windows Clients -> UID/GID.
31043
Error
Failed to setup password
database.
See 31020.
Try removing some
unnecessary files like logs or
cores.
31044
Error
Failed to make symlink
from [File name] to [File
name].
See 31020.
Try removing some
unnecessary files like logs or
cores.
31045
Error
Failed to update /etc/
passwd.
Some processes exited
without unlocking file.
Restart the server modules.
31046
Error
Failed to update /etc/
group.
Some processes exited
without unlocking file.
Restart the server modules.
31047
Error
Synchronization daemon
is not running.
Someone manually stopped
the process.
See 11240.
31048
Error
Device [Device number]
mount error.
Failed to attach to the SAN
device provided by the local
client module or the file system
is corrupted.
Make sure all of the physical
devices are connected and
powered on correctly and
restart the server modules. If
the Console shows that the
NAS resource is attached
but not mounted, you might
need to reformat this NAS
resource but this will mean
all data on the drive will be
removed.
31049
Error
Device [Device number]
umount error.
Some other process might be
accessing the mount point.
Kill any running processes
which might be accessing
the mount point.
31050
Error
Failed to detach device
vid [Device number].
The client module is not
running properly.
Restart the server modules.
31051
Error
Failed to attach device
vid [Device number].
Failed to attach the SAN
device provided by the local
client module or the file system
is corrupted.
Restart the server modules.
31054
Error
Failed to get my
hostname
Failed to get hostname with
the function gethostname.
Check that the host exists with that name and that the name is resolvable.
31055
Error
SAM: connection failure.
Samba authentication server
not accessible.
Check that the authentication server is up and running and that the server name is set up correctly in the Console.
31056
Warning
Delay mount due to
unclean file system on
vid [Device number].
During failover, the secondary
is waiting for a specific amount
of time until the primary
unmounts NAS resources
gracefully.
None.
31058
Warning
Not all disks unmount
complete.
A file system check is in
progress or the device is not
available during failover/
failback.
If the file system check is in
progress, you can try to stop
it by killing the file system
repair process.
Check physical device
status.
31060
Warning
Not all disks mount
complete.
A file system check is in
progress or the device is not
available during failover/
failback.
If the file system check is in
progress, you can try to stop
it by killing the file system
repair or checking
processes.
Check physical device
status.
31061
Error
Nas process ipstorsmbd
fail.
One of the following processes
is not running properly:
ipstorclntd, kvbdi,
ipstornasmgtd, smbd, nmbd,
winbindd, portmap,
rpc.mountd, mountd, nfsd
See 31032.
See 31032.
31062
Error
Failed to read from file
%1
See 31017.
See 31017.
31064
Error
Error file system type %1
The wrong file system type is set for the NAS resource in ipstor.conf.
Check the file system type in
ipstor.conf.
31066
Error
Invalid XML file
Failed to get NAS file system
block size from ipstor.conf.
Check the file system block
size in ipstor.conf.
31067
Error
cannot parse dynamic
configuration %1
Failed to read the file
$ISHOME/etc/$HOSTNAME/
ipstor.dat.cache.
See 31017.
31068
Error
dynamic configuration
does not match %1
Failed to get vdev by vid from
the file $ISHOME/etc/
$HOSTNAME/
ipstor.dat.cache.
Check the mapping of vdev
name and corresponding vid
in ipstor.dat.cache.
31069
Error
Do not destroy file
system's superblock of
%1
When formatting NAS
resources, the super block
could not be removed because
it failed to open the VBDI
device or write to the device.
Check whether the device /dev/vbdixx exists. If it exists, run the command "dd if=/dev/vbdixx of=/tmp/test.out bs=512 count=100" to test whether you can read the device. If not, check the physical device status.
31071
Error
Missing file %1
Failed to get status of the file
$ISHOME/bin/sfsck.
Run the command "stat
$ISHOME/bin/sfsck" to see
if any error displays.
31072
Error
Failed to update CIFS
native configuration
When updating the CIFS native configuration, one of the following errors happened:
- Failed to create the temporary file $ISHOME/etc/$HOSTNAME/.smb.conf.XXXXXX
- Failed to get the CIFS client from $ISHOME/etc/$HOSTNAME/nas.conf
- Failed to rename the file $ISHOME/etc/$HOSTNAME/.smb.conf.XXXXXX to $ISHOME/etc/$HOSTNAME/smb.conf
See 31017.
31073
Error
Failed to update NFS
native configuration
When updating the NFS native configuration, one of the following errors happened:
- Failed to open the file $ISHOME/etc/$HOSTNAME/nas.conf
- Failed to create the temporary file $ISHOME/etc/$HOSTNAME/.exports.XXXXXX
See 31017.
31074
Error
Failed to parse XML file
%1
Failed to open the file
$ISHOME/etc/$HOSTNAME/
nas.conf.
See 31017.
31075
Error
Disk %1 unmount failed
during failover
Failover or failback has
occurred so NAS resources
need to be unmounted.
Reboot the failed server.
31076
Critical
Due to storage failure,
NAS Secondary Server
has to reboot to resume
its tasks.
NAS resources cannot be
detached or unmounted during
failover/failback. The storage
failure prevented the file
system from flushing the
cache. Rebooting the failed
server will clean the cache.
Reboot the failed server.
31078
Error
Add NAS resource to
iocore failed during
failover processing %1
When adding a NAS resource to iocore, one of the following errors happened:
- Failed to open the file $IS_CONF
- NAS option is not enabled
- Failed to open /dev/isdev/kisconf
See 31017.
Run the command "stat /dev/isdev/kisconf" to check the file.
31079
Error
Missing file system
commands %1 of %2
When getting a file system command from $ISHOME/etc/$HOSTNAME/nas.conf, the InfoItem value is not right.
Check whether all the InfoItem names are correct in nas.conf, for example: InfoItem name="mount". You can compare it with the nas.conf file on a healthy NAS server.
50000
Error
iSCSI: Missing
targetName in login
normal session from
initiator %1
The iSCSI initiator may not be
compatible.
Check the iSCSI initiator on
the client side.
50002
Error
iSCSI: Login request to
nonexistent target %1
from initiator %2
The iSCSI target does not
exist any longer.
Check the iSCSI initiator on
the client side and the iSCSI
configuration on the server.
Remove targets from the
configuration if they do not
exist.
50003
Error
iSCSI: iSCSI CHAP
authentication method
rejected. Login request to
target %1 from initiator
%2
The CHAP settings are not
valid.
Check the iSCSI CHAP
secret settings on the server
and the client sides.
51001
Warning
RAID: %1
The physical RAID controller
might have some failures.
Check RAID controller
configuration.
51002
Error
RAID: %1
The physical RAID controller
has some failures.
Check RAID controller
configuration.
51003
Critical
RAID: %1
The physical RAID controller
has some failures.
Check RAID controller
configuration.
51004
Warning
Enclosure: %1
The physical enclosure might
have some failures.
Check enclosure
configuration.
For any error not listed in this table, please contact FalconStor Technical Support.
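Many of the suggested actions above direct you to check the system log for the reported error code. As a minimal sketch, assuming the server writes its event messages to the standard syslog file (/var/log/messages; the actual log location on your installation may differ), you could filter the log for a specific code from a shell on the server:

  # Show the most recent occurrences of a specific error code (15008 is only an example)
  grep "15008" /var/log/messages | tail -20

  # Count how often each of several codes has been logged
  for code in 13316 13319 15008; do
      echo -n "$code: "
      grep -c "$code" /var/log/messages
  done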
Port Usage
This appendix contains information about the ports used by CDP and NSS.
CDP/NSS uses the following ports for incoming requests. Network firewalls should
allow access through these ports for successful communications. In order to
maintain a high level of security, you should disable all unnecessary ports. The ports
are not used unless the associated option is enabled in CDP/NSS. For FalconStor
appliances, some of these ports are enabled by default.
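For example, if the FalconStor Management Console cannot connect, a quick way to confirm that a required port is reachable is to probe it from the client machine. This is only a sketch; it assumes a Linux host with the nc (netcat) utility installed and uses the console port 11576 listed in the table below, so substitute your own server name and whichever port you need to verify:

  # Probe the secure RPC console port (11576/tcp) on the storage server
  nc -z -w 5 cdp-server.example.com 11576 && echo "port reachable" || echo "port blocked or closed"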
Protocol
Port
Usage
TCP
20
Standard FTP data port
UDP
20
Standard FTP data port
TCP
21
Standard FTP port
UDP
21
Standard FTP port
TCP
22
Standard Secure Shell (SSH) port for remote connection to the server
TCP
23
Standard Telnet port for remote connection to the server
UDP
23
Standard Telnet port for remote connection to the server
TCP
25
Standard SMTP port for Email Alerts
UDP
25
Standard SMTP port for Email Alerts
UDP
67
DHCP port for iSCSI Boot (BootIP) option
UDP
68
DHCP port for iSCSI Boot (BootIP) option
UDP
69
TFTP (Trivial File Transfer Protocol) port for iSCSI Boot (BootIP) option
HTTP
80
Standard HTTP port to access FalconStor Web Setup and also used for online
registration of license key codes.
Note: Port 80 is used to send license material to the FalconStor license server for
registration. The registration reply is then sent back using HTTP protocol, where a
local random port number is used on the server in the same way as Web-based
pages. The firewall does not block the reply if the 'established bit' is set to let
established traffic in.
HTTP
81
Standard HTTP port to access FalconStor Management Console via Web Start
TCP
111
rpcbind RPC program number mapper (NFS)
UDP
111
rpcbind RPC program number mapper (NFS)
Note: NFS port usage is assigned through the SUNRPC protocol. The ports
vary, so it is not possible or convenient to keep checking them and
reprogramming a firewall. Most firewalls have a setting to "Enable NFS" upon
which they will change the settings if the ports themselves change.
UDP
123
Standard Network Time Protocol (NTP) transport layer to access external time
servers
UDP
137
ipstornmbd NETBIOS Name Service for CIFS protocol
UDP
138
ipstornmbd NETBIOS Datagram Service for CIFS protocol
TCP
139
ipstorsmbd NETBIOS Session Service for CIFS protocol
UDP
161
SNMP port for SNMP queries
HTTPS
443
Standard secure HTTP port to access FalconStor Web Setup
UDP
623
Failover IPMI power control port
HTTPS
1311
Management port for DELL servers for hardware configuration
TCP
2009
ENFSD core file system driver for FalconStor HyperFS
UDP
2009
ENFSD core file system driver for FalconStor HyperFS
TCP
2049
nfsd NFS server for FalconStor HyperFS
UDP
2049
nfsd NFS server for FalconStor HyperFS
TCP
3260
Communication port between iSCSI clients and the server. Also used for iSCSI Boot
(BootIP) option
UDP
4011
PXE port for iSCSI Boot (BootIP) option
TCP
5001
isttcp port to test network connection
TCP
8009
Standard Apache AJP port to access FalconStor Web Setup
TCP
8443
Apache Tomcat SSL communication port between FalconStor FileSafe clients
and FileSafe server for internal commands
TCP
11576
Secure RPC communication port between FalconStor Management Console and the
server
TCP
11577
Communication port between servers for data replication
UDP
11577
Communication port between servers for data replication
TCP
11578
Communication port between replication servers for 56-bit authentication
UDP
11578
Communication port between replication servers for 56-bit authentication
TCP
11579
Communication port between replication servers for 128-bit authentication
CDP/NSS Administration Guide
604
Port Usage
Protocol
Port
Usage
UDP
11579
Communication port between replication servers for 128-bit authentication
TCP
11580
Communication port between failover pair
TCP
11582
Communication port for Command Line Interface (CLI)
TCP
11588
Communication port between FalconStor CCM and the server
TCP
11762
ipstorclntd SecureRPC communication port between SAN Clients and the
server for management functions such as snapshot notification, configuration,
and retrieval of client information.
Note: If you have a DiskSafe client behind a firewall, you need to open this
port on that firewall in order to have secure communication between DiskSafe
and the server.
TCP
18651
Communication port between FalconStor FileSafe clients and FileSafe server
for data copy
Although you may temporarily open some ports during initial setup of the CDP/NSS
appliance, such as the telnet port (23) and FTP ports (20 and 21), you should shut
them down after your work is complete.
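If the appliance relies on a host-based packet filter such as iptables, the following commands are a minimal sketch of how a few of the most commonly used CDP/NSS ports from the table above could be opened. The choice of iptables, the rule order, and the "service iptables save" step are assumptions about a Red Hat-style environment; adapt them to your site's firewall policy.
iptables -A INPUT -p tcp --dport 11576 -j ACCEPT   # FalconStor Management Console
iptables -A INPUT -p tcp --dport 3260 -j ACCEPT    # iSCSI client traffic
iptables -A INPUT -p tcp --dport 11577 -j ACCEPT   # replication (TCP)
iptables -A INPUT -p udp --dport 11577 -j ACCEPT   # replication (UDP)
iptables -A INPUT -p tcp --dport 11762 -j ACCEPT   # SAN Client management (ipstorclntd)
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # replies, such as the port 80 license registration response
service iptables save                              # persist the rules on a Red Hat-style system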
SMI-S Integration
Large storage systems and Storage Area Networks (SANs) are emerging as a prominent and independent layer of IT infrastructure in enterprise-class and midrange computing environments. Examples of applications and functions driving the emergence of new storage technology include:
Sharing of vast storage resources between multiple systems via networks
LAN-free backup
Remote, disaster-tolerant, online mirroring of mission-critical data
Clustering of fault-tolerant applications and related systems around a single copy of data
Archiving requirements for sensitive business information
Distributed databases and file systems
The FalconStor SMI-S Provider for CDP and NSS storage offers CDP and NSS
users the ability to centrally manage multi-vendor storage networks for more efficient
utilization.
FalconStor CDP and NSS solutions use the SMI-S standard to expose the storage systems they manage to an SMI-S client. The storage systems supported by FalconStor include Fibre Channel disk arrays and SCSI disk arrays. A typical SMI-S client can discover FalconStor devices through this interface, which uses CIM-XML, a WBEM protocol that exchanges Common Information Model (CIM) information as XML over HTTP.
The SMI-S server is included in CDP and NSS versions 6.15 Release 2 and later.
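Because the interface is standard CIM-XML, any generic WBEM client can be used to confirm that the SMI-S provider is reachable before configuring a full management application. The sketch below uses the sblim wbemcli utility as an illustration only; the host name and credentials are placeholders, and port 5988 is the usual openPegasus HTTP default rather than a port documented for CDP/NSS, so verify these values for your environment. The falconstor/Default interop namespace is the one referenced later in this chapter.
# Enumerate CIM_ComputerSystem instance names from the FalconStor SMI-S provider
wbemcli ein 'http://root:password@cdp-server:5988/falconstor/Default:CIM_ComputerSystem'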
SMI-S Terms and concepts
Storage Management Initiative Specification (SMI-S) - A storage standard developed and maintained by the Storage Networking Industry Association (SNIA). SMI-S enables broad interoperability among heterogeneous storage vendor systems, allowing different classes of hardware and software products supplied by multiple vendors to reliably and seamlessly interoperate for the purpose of monitoring and controlling resources.
The FalconStor SMI-S interface overcomes the deficiencies associated with legacy
management systems that deter customers from using more advanced storage
management systems.
openPegasus - The FalconStor SMI-S Provider uses an existing open-source CIM Object Manager (CIMOM) called openPegasus for a portable and modular solution. openPegasus is an open-source implementation of the DMTF CIM and WBEM standards.
openPegasus is packaged as tog-pegasus-[version].rpm with Red Hat Linux and is automatically installed on CDP and NSS appliances with version 6.15 R2 and later. If it has not been installed on your appliance, you can install it using the following command:
rpm -ivh --nodeps tog-pegasus*.rpm
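As a quick sanity check after installation, you can confirm that the CIM object manager is present and running before pointing an SMI-S client at the server. The commands below are a sketch that assumes the Red Hat-style service tools of this release; the tog-pegasus service name follows the package name above, but verify it on your appliance.
rpm -q tog-pegasus           # confirm the openPegasus package is installed
service tog-pegasus status   # check whether the CIM object manager is running
service tog-pegasus start    # start it if it is not running
chkconfig tog-pegasus on     # start it automatically at boot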
CommandCentral Storage (CCS) - The FalconStor SMI-S Provider can be used with Veritas CommandCentral Storage (CCS), which offers a storage resource management solution by providing centralized visibility and control across physical and virtual heterogeneous storage environments. By enabling storage capacity management, centralized monitoring, and application-to-spindle mapping, CommandCentral Storage helps improve storage utilization, optimize resources, increase data availability, and reduce capital and operational costs.
Using the SMI-S Provider
Launch the Command Central Storage console
1. Use a web browser to open https://localhost:8443 to access the CCS console.
2. The first time you log in to the CommandCentral Storage console, use the default user name admin and password password.
The top two panels are the main menu bar of CCS and the bottom right is the main
control panel. The main menu bar and the storage section of the main control panel
are important to SMI-S usage.
Add FalconStor Storage
To add FalconStor managed storage devices:
1. Navigate to Tools in the main menu bar and then select Configure a New Device
in the main control panel.
2. Select Array from the drop-down menu for Device Category and select
FalconStor NSS for Device Type and click Next.
The Device Configuration screen displays.
3. Enter the IP address of the server, along with the user name and password,
which is the same as the server login account (i.e. root account or administration
account). For the Interop Namespace field, enter falconstor/Default and accept
the default for the other fields.
Once the server has been added successfully, a status screen confirming the addition displays.
View FalconStor Devices
To view FalconStor storage devices:
1. Select Managing --> Summary from the main menu bar of the Command Central Storage console.
Alternatively, you can select Managing --> Storage from the main menu bar.
2. In the main control panel select Arrays.
The Virtualization SAN Arrays Summary screen displays the FalconStor storage.
3. Select the corresponding device by clicking on the name.
A summary of the storage device displays.
View Storage Volumes
To view storage volumes:
1. Select the Storage Volumes tab in the sub-menu on the top of main control
panel.
A summary of storage volumes displays.
Assigned virtual disks display in Unknown Storage Volumes [Masked to
Unknown Host(s)] or (Un)Claimed Storage Volumes while unassigned virtual
disks display in Unallocated Storage Volumes [Unmasked].
2. Select an individual volume to view the storage pool it is in, and the physical
LUN it relies on.
View LUNs
To view logical unit numbers (LUNs):
1. Select the LUNs tab in the sub-menu on the top of the main control panel.
A summary of CDP/NSS virtual disks displays. Assigned virtual disks display as Unknown LUNs [Masked to Unknown Host(s)] or (Un)claimed LUNs, while unassigned virtual disks display as Unallocated LUNs [Unmasked].
2. Select an individual LUN to view the storage pool it is in, and the physical LUN it
relies upon.
View Disks
1. To view disks, select the Disks tab in the sub-menu to view LUN information.
A summary of physical storage displays, along with the individual disks.
2. Select an individual disk to view which storage pool it is in and which storage volume it was created from.
View Masking Information
1. To view masking information, select Connectivity in the sub-menu bar to view a
summary of all the FC adapters, ports and storage views.
2. Select individual adapters and ports to view their details.
3. Select an individual view to see the port it is seen from and the storage volume it sees.
Enable SMI-S
To enable SMI-S, right-click on the server in the FalconStor Management Console
and select Properties.
Then highlight the SMI-S tab and select the Enable SMI-S checkbox.
RAID Management for
VS-Series Appliances
(Updated 12/1/11)
The FalconStor RAID Management Console allows you to discover, configure, and
manage storage connected to VS-Series appliances.
A redundant array of independent disks (RAID) consists of a set of physical disks
configured according to a specific algorithm. The FalconStor RAID Management
Console enables centralized management of RAID controllers, disk drives, RAID
arrays, and mapped/unmapped Logical Units for the storage enclosure head and
any expansion enclosure(s) connected to it. The console can be accessed from a
VS-Series server after you connect to it in the FalconStor Management Console.
The management responsibilities of the RAID Management Console and the FalconStor Management Console are described below.
RAID management information is organized as follows:
Prepare to use the RAID Management Console - Prepare for RAID
management.
Launch the RAID Management Console and discover storage - Launch the
RAID Management Console.
Manage storage arrays in the RAID Management console:
Display a storage profile
View enclosures
Manage controller modules
Manage disk drives
Manage RAID arrays
Logical Unit Mapping
Monitor configured storage - Monitor storage from the FalconStor
Management console.
Prepare for RAID management
You must complete the following before attempting any RAID management
procedures:
1. Connect the FalconStor appliance and storage enclosures according to steps 1
through 4 in the FalconStor Virtual-Storage Appliances (VS/TVS) Hardware
QuickStart Guide (QSG) shipped with your appliance.
2. Perform initial system configuration using the FalconStor Web Setup application,
as described in the FalconStor CDP/NSS Software QuickStart Guide (also
shipped with your appliance).
3. Connect to the VS server in the FalconStor Management Console, logging in as
a user with Administrator status.
Preconfigured storage
Preconfigured storage enclosures are shipped with a default RAID 6 configuration that
consumes all available resources. In the FalconStor Management console, default
devices that have been mapped to the FalconStor host are visible under Physical
Resources --> Physical Devices --> SCSI Devices.
Note: Other devices displayed in this location are not related to storage. PERC 6/i
devices are internal devices on the CDP/NSS appliance; the Universal Xport
device is a system device housing a driver that provides access to storage.
In the RAID Management console, these devices are known as Logical Units (LUs) (refer
to Logical Unit Mapping). The FalconStor RAID Management console lets you
reconfigure these default devices as needed.
When mapped LUs are available in the FalconStor Management console, you can create SAN Resources. The last digit of the SCSI address (A:C:S:L) corresponds to the LUN number that you choose in the Mapping dialog; for example, a device at SCSI address 2:0:0:3 was mapped to LUN 3.
Refer to Logical Resources in the CDP/NSS Administration Guide and FalconStor
Management Console online help for details on configuring these physical devices
as virtual devices and assigning them to clients.
Unconfigured storage
If your storage array has not been preconfigured, you must prepare storage using
functions in the RAID Management console before you can create SAN resources in
the FalconStor Management console:
Create RAID arrays (refer to Create a RAID array).
Create Logical Units (LUs) on each array (refer to Create a Logical Unit).
Map each LU to a Logical Unit Number (LUN) (refer to Define LUN
mapping).
Launch the RAID Management Console
Right-click the server object and select RAID Management. The main screen, which
describes the management categories available in the console, is displayed.
Discover storage
This procedure locates in-band or out-of-band storage.
1. Click the Discover button (upper-right edge of the display).
2. In the Discover Storage dialog, select the discovery method.
Select Manual (the default) to discover out-of-band storage. Enter a
controller IP address and select Discover. The preconfigured controller IP
addresses for controller modules on the storage enclosure head
(Enclosure 0) are 192.168.0.101 (slot 0) and 192.168.0.102 (slot 1).
Note: Each controller module uses a different IP address to connect to
the server. You can use either IP address for the purpose of discovering
storage.
Select Automatic if you do not know the IP address. This option can detect
only in-band storage and will require additional time to search the subnet.
A confirmation message is displayed when storage is discovered. The example
below shows two storage items discovered during Automatic discovery. Each
discovered storage array includes a storage enclosure head and any expansion
enclosures that were preconfigured for your system.
After discovery, each storage array profile is listed in the Discover Storage drop-down list. Select a profile to display components in the RAID Management console.
You can use the keyboard to navigate through the Discover Storage list. Page Up/
Page Down jump between the first and last items in the list; Up and Down cursor
arrows scroll through all items in the list.
Action menu
You can also manage storage profiles by clicking Action --> Manage Storage.
To discover storage, click Add to display the Discover Storage dialog. Continue as
described above.
To remove a storage profile, click its checkbox and then click Remove. After you do
this, the profile you removed will still exist, but its storage will not be visible from the
host server.
Future storage discovery
To discover an additional storage enclosure head or expansion enclosure in the
future, select Discover Storage from the drop-down list, then click Discover.
Display a storage profile
After storage has been discovered, select a storage profile from the Discover
Storage drop-down list. The console loads the profile using its (valid) IP address and
displays the components of the array.
In the navigation pane, the Storage object is selected by default; information at this
level includes the storage name and IP address and summary information about all
components in the array. From this object, you can configure all controller
connection settings (refer to Configure controller connection settings).
Navigation pane
The navigation pane includes objects for all components in the storage array you
selected in the Discover Storage drop-down list. Double-click an object to expand
and display the objects below it; double-click again to collapse. When you select any
object, related information is displayed in the content pane to the right. Some items
include a right-click menu of management functions, while others are devoted to
displaying status information.
Status bar
The Status Bar at the bottom of the screen identifies, from left to right, the host machine, the storage array name and its WWID, and the date/time of the last update to the storage configuration.
Menu bar
Action menu - Click Manage Storage to display a dialog that lets you display a
storage profile and discover new storage (equivalent of Discover Storage).
Tools menu - Click Manage Event Log to view or clear the event log for the selected
storage profile.
Click Exit to close the RAID Management console and return to the FalconStor
Management console.
Tool bar
Click Exit to close the RAID Management console and return to the FalconStor
Management console.
Click About to display product version and copyright information.
Rename storage
You can change the storage name that is displayed for the Storage object in the
navigation pane. To do this:
1. Right-click the Storage object and click Rename.
2. Type a new display name. It can include up to 30 characters consisting of letters,
numbers, and certain special characters: _ (underscore); - (hyphen); or #
(pound sign).
3. Click OK when you are done.
Refresh the display
To refresh the current storage profile, right-click the Storage object and click
Refresh. Note that this is not an alternative method for discovering storage.
Configure controller connection settings
After storage has been discovered, you can change the port settings for controller
modules on the controller enclosure head as required by your network administrator.
To do this:
1. Right-click the Storage object and select Configure Controller Connection. You
can also do this from the Controller Modules object or from the object for an
individual controller.
2. Select the controller from the drop-down list. The dialog displayed from the
object for an individual controller provides settings for that controller only.
3. Set the IP address, subnet mask, and gateway as needed, then click Apply.
Caution: Improper network settings can prevent local or remote clients from
accessing storage.
View enclosures
A storage array includes one storage enclosure head (numbered Enclosure 0) and,
if connected, expansion enclosures (numbered Enclosure 1 to Enclosure x). Select
the Enclosures object to display summary information for components in all
enclosures in the selected storage profile.
Individual enclosures
Select a specific storage enclosure object to display quantity and status information
for its various components, including batteries, power supply/cooling fan modules,
power supplies, fans, and temperature sensors.
Manage controller modules
Each enclosure head (Enclosure 0) has two RAID controller modules. Select the
Controller Modules object to display summary information and status for both
controllers, as well as a controller image that provides at-a-glance controller status.
The controller icon in the navigation pane also indicates status:
Controller is online.
Controller needs attention.
Controller activity is suspended.
Controller has failed.
Controller is in service mode.
Controller slot is empty.
You can configure connection settings for both controllers from this object (refer to
Configure controller connection settings).
RAID controller firmware must be upgraded from time to time (refer to Upgrade
RAID controller firmware).
Individual controller modules
Select a controller object to display detailed information and configure its connection
settings. The selected controller is outlined in yellow and will also show controller
status.
You can configure connection settings for both controllers from this object (refer to
Configure controller connection settings).
Manage disk drives
The storage enclosure head has 12 or 24 drives; an expansion storage enclosure
will have either 12 or 24 drives. The display also includes an image for each
enclosure, showing at-a-glance drive status.
Interactive enclosure images
The enclosure image in the content pane provides information about any drive,
regardless of the disk object you have selected in the navigation pane. Enclosure 0
always represents the storage enclosure head. Enclosures 1 through x represent
expansion enclosures. (When an enclosure has 24 drives, drive images are oriented
vertically.) Hover your mouse over a single drive image to display enclosure/slot
information and determine whether the drive is assigned or unassigned. Hovering
adds a yellow outline to the drive. Slot statuses include:
Unassigned - available to be assigned to an array.
Assigned to an array.
Set as a hot spare and in use to replace a failed disk.
Set as a hot spare, on standby.
Unassigned disk removed - empty slot.
Disk replaced - assigned.
Disk replaced - unassigned.
The following disk images indicate a disk that is not healthy.
Previously assigned to an array but was removed.
Previously assigned to an array but failed.
Not previously assigned to an array but failed.
Hot spare failed while in use.
Hot spare standby failed.
Select the Disk Drives object to display summary and status information for all drives
in all enclosures in the selected profile, including layout, status, disk mode, total
capacity, and usable capacity, as well as interactive enclosure images (refer to
Interactive enclosure images).
Individual disk drives
In the navigation pane, the icon for an individual disk indicates drive mode and
status:
Assigned, status optimal
Assigned, status failed
Assigned, being replaced (rebuild action)
Unassigned, status optimal
Unassigned, status failed
Unassigned, replacing failed drive (rebuild action)
Hot spare in use, status optimal
Hot spare in use, status failed
Hot spare standby, status optimal
Hot spare standby, status failed
Select an individual disk drive object to display additional details about the drive.
The selected drive is outlined in green in the interactive enclosure image.
You can also configure the selected drive to be a global hot spare.
Configure a hot spare
Configuring a disk as a hot spare enables it to replace any failed disk automatically.
This option is available for the selected disk only if the disk is unassigned and its
status is optimal (normal).
To create a global spare, right-click an unassigned disk and select Hot Spare - Set.
The procedure will start automatically.
When the procedure is done, the disk icon is changed to standby mode in all
interactive enclosure displays (refer to Interactive enclosure images).
Remove a hot spare
If a hot spare is in standby mode (and not in use), you can remove the hot spare
designation. To do this:
Right-click the disk and select Hot Spare - Remove.
When the procedure is done, the disk icon image changes to unassigned in all
interactive enclosure displays.
Manage RAID arrays
A RAID array is a collection of disks chosen from all enclosures in the selected
storage profile. Select the RAID Arrays object to display summary information about
all arrays, including name, status, RAID level, total capacity, total free capacity, and
physical disk type. When you select this object, the disks associated with all arrays
are outlined in blue in the interactive enclosure image.
From this object, you can create a RAID array, then create Logical Units (LUs) on
any array and map them to FalconStor hosts (refer to Create a RAID array and
Create a Logical Unit).
Create a RAID array
You can create a RAID array using unassigned disks chosen from all enclosures in
the selected storage profile. To do this:
1. Right-click the RAID Arrays object and select Create Array.
2. Type a name for the RAID and select the RAID level.
3. Select physical disks in the interactive enclosure image. Drive status must be
Optimal, Unassigned (view hover text to determine status). For most effective
use of resources, all disks in a RAID array should have the same capacity. If you
select a disk with a different capacity than the others you have selected, a
warning (Warning: disks differ in capacity) will be displayed.
As you select disks, the Number of Disks in RAID and RAID Capacity values
increase; selected disks show a check mark.
4. Select Create when you are done.
Several messages will be displayed while the RAID is created; a confirmation
message will display when the process is complete. The storage profile is
updated to include the new array.
Create a Logical Unit
You must define a Logical Unit (LU) on an array in order to map a device to a
FalconStor host. To do this:
1. Right-click the RAID Arrays object or the object representing an individual array
and select Create Logical Unit.
2. Type the label for the LU; this is the name that will appear in the RAID
Management console.
3. If you began the procedure from the RAID Arrays object, select the RAID array
on which you want to create the LU from the RAID drop-down list, which shows
the current capacity of the selected array.
If you began the procedure from an individual array, the current capacity for that
array is already displayed.
4. Enter a capacity for the LU and select GB, TB, or MB from the drop-down list.
5. The Logical Unit Owner (the enclosure controller) is selected by default; do not
change this selection.
6. You can assign (map) the LU to the FalconStor host at this time. The Map LUN
option is selected by default. You can do this now, or uncheck the option and
map the LU later (refer to Unmapped Logical Units).
7. Select a host from the drop-down list.
8. Choose a LUN designation from the drop-down list of available LUNs.
9. Select Create when you are done.
Several messages will be displayed while the LU is created; a confirmation
message will display when the process is complete. The storage profile is
updated to include the new LU and you will see it appear in the display.
Individual RAID arrays
In the navigation pane, the icon for an individual array provides at-a-glance array
status:
RAID 0, status optimal
RAID 0, status degraded (one or more disks have failed)
RAID 0, status failed
RAID 1, status optimal
RAID 1, status degraded (one or more disks have failed)
RAID 1, status failed
RAID 5, status optimal
RAID 5, status degraded (one or more disks have failed)
RAID 5, status failed
RAID 6, status optimal
RAID 6, status degraded (one or more disks have failed)
RAID 6, status failed
Select a RAID array object to display summary details and status information about
physical disks assigned to the array, as well as the mapped Logical Units (LUs) that
have been created on the array. When you select an array, the associated disks are
outlined in green in the interactive enclosure image.
The following functions are available from the selected array:
Create a Logical Unit
Rename the array
Delete the array
Check RAID array actions
Replace a physical disk
Rename the array
You can change the name that is displayed for an array in the navigation pane at any
time. To do this:
1. Right-click the array object and click Rename.
2. Type a new display name. It can include up to 30 characters consisting of letters,
numbers, and certain special characters: _ (underscore); - (hyphen); or #
(pound sign).
3. Click OK when you are done.
Delete the array
To delete an array, expand the RAID Arrays object until you can see the individual
array objects. When you delete an array, all data will be lost and cannot be retrieved.
1. Right-click the array object and select Delete Array.
2. Type yes in the dialog to confirm that you want to delete the array, then select
OK.
When the array has been deleted, the storage profile is updated automatically.
Check RAID array actions
LU activities may take some time. Typical actions include:
Initialization - creating a Logical Unit
Rebuild - swapping in a hot spare to replace a failed disk
Copy-back - replacing a failed disk with an unassigned healthy disk,
removing the hot spare from the configuration
To check current actions, right-click the object for an individual array and select
Check Actions. A message reporting the progress of any pending action will be
displayed.
To check actions on another array, select it from the drop-down list.
Click OK to close the dialog.
Replace a physical disk
When a disk has failed in the array, the hot spare takes its place automatically. You
need to follow up and replace the failed disk with an unassigned healthy disk,
freeing up the hot spare. A failed disk is easily identified in the Disk Drive area of the
console.
In the RAID Array area of the console, the array icon shows that the array status is degraded and the disk status is displayed as failed.
Right-click the array object in the navigation pane and select Replace Physical Disk.
The Replace Physical Disk dialog shows the failed disk. In the array image in the
dialog, select an unassigned, healthy disk to replace the failed disk. The disk you
select will show a green check mark and the disk ACSL will be displayed in the
dialog.
Click Replace Disk. A rebuild action will start. While this action is in progress, the icons for the replacement disk and the disk being replaced will change to the replacement (rebuild) icon.
When the action is done, replacement disk status changes to assigned/optimal.
Logical Units
Double-click an Array object to display the objects for mapped Logical Units (LUs)
on the array. Select an LU object to display status, capacity, WWPN, RAID
information, ownership, cache, and other information.
The following functions are available from the selected LU:
Define LUN mapping
Remove LUN mapping
Rename LU
Delete Logical Unit
Define LUN mapping
If you did not enable LUN mapping when you created a Logical Unit, you can do this
at any time. To do this:
1. Right-click the Logical Unit object in the console and select Define LUN mapping. (You can also do this from LUs listed under the Unmapped Logical Units object; refer to Unmapped Logical Units.)
2. Choose a LUN from the drop-down list of available LUNs and select OK.
Several messages will be displayed while the LUN is assigned and a
confirmation message will display when the process is complete. The storage
profile is updated.
After you perform a rescan in the FalconStor Management console, you can prepare
the new device for assignment to clients. In the console, the last digit of the SCSI
address (A:C:S:L) corresponds to the LUN number you selected in the Mapping
dialog.
Remove LUN mapping
Removing LUN mapping removes a physical device from the FalconStor console
and prevents the server from accessing the device. To do this:
1. Right-click the LU object and select Remove LUN Mapping.
2. Type yes in the dialog to confirm that you want to remove LUN mapping, then
select OK.
Several messages will be displayed while the mapping is removed and a
confirmation message will display when the process is complete. The storage
profile is updated.
You can re-map the LU at a later time, then rescan in the FalconStor Management
console to discover the device.
Rename LU
You can change the name that is displayed for an LU in the navigation pane at any
time. To do this:
1. Right-click the LU object and click Rename.
2. Type a new display name. It can include up to 30 characters consisting of letters,
numbers, and certain special characters: _ (underscore); - (hyphen); or #
(pound sign).
3. Click OK when you are done.
Delete Logical Unit
To delete an LU, expand the object for an individual RAID array until you can see the
individual LU objects. When you delete an LU, all data will be lost and cannot be
retrieved.
1. Right-click the LU object and select Delete Logical Unit.
2. Type yes in the dialog to confirm that you want to delete the LU, then select OK.
Several messages will be displayed while the LU is deleted and a confirmation
message will display when the process is complete. When the LU has been
deleted, the storage profile is updated.
Logical Unit Mapping
Select this object to display current mapping information for all Logical Units created
on all RAID arrays, including mapped and unmapped LUs.
The display also includes summary information about the host machine, which
represents the controllers on all servers connected to the storage array, such as
host and interface type and port information.
You can expand this object to display unmapped and mapped LUs.
Unmapped Logical Units
Selecting Unmapped Logical Units displays LUs that have not been mapped to a
host machine and are therefore not visible in the FalconStor Management Console.
From this object, you can define LUN mapping for any LU with Optimal status (refer
to Define LUN mapping).
Select an individual unmapped LU to view configuration details.
From this object you can rename the LU (refer to Rename LU) or define LUN
mapping (refer to Define LUN mapping).
Mapped Logical Units
Display information for mapped LUs from the Host object. Host information includes
the host OS, the type of interface on the host controller, and the WWPN and alias for
each port.
This screen includes the mapped Logical Units that are visible in the FalconStor
console, where the last digit of the SCSI address (A:C:S:L) corresponds to the
number in the LUN column of this display - this is the LUN number you selected in
the Mapping dialog.
Upgrade RAID controller firmware
When an upgrade to RAID controller firmware is available, FalconStor will send a
notification to affected customers. Contact FalconStor Technical Support to
complete the following steps to upgrade firmware:
1. Download firmware files as directed by Technical Support.
2. Select Tools --> Upgrade Firmware in the menu bar.
3. To complete Stage 1, browse to the download location and select the firmware
file.
If you also want to upgrade non-volatile static random access memory
(NVSRAM), browse to the download location again and select the file.
Click Next when you are done.
4. To complete Stage 2, transfer the selected files to a server location specified by
Technical Support.
5. To complete Stage 3, download the firmware to controllers.
6. In Stage 4, activate the firmware.
Event log
To display an event log for the selected storage profile, select Tools --> Manage
Event Log --> View Event Log in the menu bar.
All events are shown by default; three event types are recorded.
- Informational events that normally occur.
- Warnings related to unusual component conditions.
- Critical errors such as device failure or loss of connectivity.
Filter the event log
Select an event type in the Events list to display only one event category.
Click a column heading to sort event types, components, locations, or
descriptions.
Select an item in the Check Component list to display events only for the
RAID array, RAID controller modules, physical disks, virtual disks, or
miscellaneous events.
Click Quit to close the Event Log.
Clear the event log
To remove events from the log for the currently displayed storage profile, click Tools
--> Manage Event Log --> Clear Event Log in the menu bar, then select OK in the
confirmation dialog.
Monitor storage from the FalconStor Management console
While all storage configuration must be performed in the RAID Management
console, you can monitor storage status information in the FalconStor Management
console from the Enclosures tab, which is available in the right-hand pane when you
select the server object. Storage component information includes status of
expansion enclosures and their components; you can also display information about
the host server, management controllers, and other devices.
Storage information
To display information about storage, make sure the Check Storage Components
option is checked.
Choose a storage profile from the drop-down list. Click Refresh to update the display
with changes to storage resources that may have been made by another user in the
RAID Management console.
If you uncheck this option, information about storage is removed from the display
immediately.
Server information
To include information about the host server and other devices, make sure the Host
IPMI option is checked. You can display information for as many or as few
categories as you like:
Chassis status
Management controller (MC) status
Sensor information
FRU device information
LAN Channel information
If you uncheck an option, related information is removed from the display
immediately.
Index
A
Access control
Groups 283
SAN Client 63
SAN Resources 95
Storage pools 70
Access rights
Groups 283
IPStor Admins 42
IPStor Users 42
Read Only 86
Read/Write 86
Read/Write Non-Exclusive 86
SAN Client 63
SAN Resources 95
Accounts
Manage 41
ACSL
Change 63
Activity Log 36
Adapters
Rescan 53
Administrator
Management 41
AIX Client 62, 177
Delete SAN Resource 95
Expand virtual device 94
SAN Resource re-assignment 86
Alias 56, 190
APC PDU 201, 205
Appliance
Check physical resources 100
Log into 98
Remove storage device 102
Start 96
Statistics 101
Stop 96
telnet access 98
Uninstall 104
Appliance-based protection 20
Asymmetric Logical Unit Access (ALUA) 505
Authentication 178
Authorization 179
Auto Recovery 219
Auto Save 34, 40
AWK 474
B
Backup
dd command 385
To tape drive 385
ZeroImpact 382
Block devices 53, 496
Troubleshooting 497
BMC Patrol
SNMP integration 442
Statistics 443
View traps 443
C
CA Unicenter TNG
Launch FalconStor Management Console 439
SNMP integration 438
Statistics 439
View traps 439
Cache resource 226
Create 226
Disable 231
Enlarge 231
Suspend 231
Write 60
capacity-on-demand 66
CCM error codes 534
CCS
Veritas Command Central Storage 607
CDP journal 300
Add tag 296
Mirror 296
Protect 296
Recover data 300
Status 295
Tag 296, 302
Visual slider 300
CDP/NSS
Licensing 34
CDP/NSS Server
Properties 35
Central Client Manager (CCM) 20
CHAP secret 45
Cisco switches 162
CLI
Troubleshooting 523
Client
Add 61, 176
Fibre Channel 169
iSCSI 108
AIX 62, 177
Delete SAN Resource 95
Expand virtual device 94
SAN Resource re-assignment 86
Assignment
Solaris 90
Windows 90
Definition 16
HBA failover settings 160
HP-UX 62, 177
Delete SAN Resource 95
iSCSI 106
Linux 62, 177
Expand virtual device 94
NetWare
Assigning resources 172
QLogic driver 156
Solaris 62, 90, 177
Expand virtual device 94
Troubleshooting 512
Windows 90
Expand virtual device 94
Client Throughput Report 125
Command Line Interface 20, 391
Commands 393
Common arguments 392
Event Log 407
Failover 407
Installation and configuration 391
Usage 391
Community name
Changing 444
Compression
Replication 327
Configuration repository 33, 195
Mirror 195
Configuration wizard 30
Connectivity 45
Console 28
Administrator Management 41
Change password 44
Connect to server after failover 198
Connectivity 45
Custom menu 65
Definition 16
Discover Storage Servers 29, 33
Import a disk 55
Log 64
Log Options 64
Logical Resources 58
Options 64
Physical Resources 50
Replication 60
Rescan adapters 53
SAN Clients 61
Save/restore configuration 33
Search 32
Server properties 35
Start 28
System maintenance 47
Troubleshooting 505
User interface 32
Continuous Data Protection (CDP) 287
Continuous replication 321, 330
Enable 324
Resource 331, 332
Create Primary TimeMark 324
Cross mirror
Check resources & swap 211
Configuration 193
Recover from disk failure 210
Requirements 187
Re-synchronize 211
Swap 183
Troubleshooting 521
Verify & repair 211
D
Data access 178
Data migration 75
Data protection 261
Data tab 139
dd command 385
Debugging 513
Delta Mode 324
Delta Replication Status Report 127, 333
Devices
Failover 190
Scan LUNs greater than zero 502
Disaster recovery
Import a disk 55
Replication 23, 320
Save/restore configuration 33
Disk
Foreign 55
IDE 53
Import 55
Replace a physical disk 253, 380
System 51
Disk Space Usage Report 128
Disk Usage History Report 129
DiskSafe 20, 41, 179
Linux 508
DynaPath 20, 88
DynaPath-FC
Fibre Channel Target Mode 174
E
Email Alerts
Configuration 465
Exclude system log entries 474
Include system log entries 473
Modifying properties 476
Signature 467
System log check 473
System log ignore 474
Triggers 467, 477
Custom email destination 477
New script 478
Output 478
Return codes 478
Sample script 478
X-ray 471
EnableNOPOut 114
Encryption
Replication 327
Event Log 32, 115
Command Line Interface 407
Export 117
Filter information 116
Print 117
Refresh 117
Sort information 116
Troubleshooting 510
Expand virtual device 92
Linux clients 94
Solaris clients 94
Troubleshooting 497
Windows 2000 Dynamic disks 94
Windows clients 94
Export data
From reports 123
F
Failover 181, 182, 520
And Mirroring 222, 260
Asymmetric 183
Auto Recovery 207, 218
Auto recovery 209
Check Consistency 218
Command Line Interface 407
Configuration 185
Connect to primary after failover 198
Consistency check 218
Convert to mutual failover 217
Cross mirror
Check resources & swap 211
Configuration 193
Recover from disk failure 210
Re-synchronize 211
Swap 183
Verify & repair 211
Exclude physical devices 217
Fibre Channel Target failure 189
Fix failed server after failover 209
Force a takeover 219
HBAs 168
Heartbeat monitor 191
Intervals 218
Mutual failover 182
Network connection failure 189
Network connectivity failure 182
Physical device change 216
Power control 203
APC PDU 201, 205
HP iLO 201, 204
IPMI 201, 204
RPC100 201, 204
SCSI Reserve/Release 204
Primary/Secondary Servers 182
Recovery 182, 207, 218
Remove configuration 221
Replication note 353
Requirements 185
Asymmetric mode 187
Clients 186
Cross mirror 187
General 185
Shared storage 186
Sample configuration 184
Self-monitor 191
Server changes 216
Server failure 191
Setup 192
Status 206
Storage device failure 190, 191
Storage device path failure 190
Subnet change 217
Suspend/resume 220
TimeViews 222
Troubleshooting 520
Verify physical devices match 218
FalconStor Management Console 16, 28
Fibre Channel Configuration Report 132
Fibre Channel Target Mode 149, 153, 157
2 Gig switches 153
Access new devices 174
Add clients 169
Assign resources to clients 171
Client HBA failover settings 160
AIX 163
HP-UX 162
Linux 163
NetWare 164
Solaris 164
Windows 160
DynaPath-FC 174
Enable 165
Fabric topology 155
Failover
Limitations 168
Multiple switches 168
NetWare clients 168
Failover configuration 168
Hardware configuration 151, 155
Initiator mode 166
Installation and configuration 150
Internal Fibre Channel drives 157
Multiple paths 171
NetWare clients
Assigning resources 172
QLogic driver 156
Persistent binding 156
Clients 155
Downstream 151
QLogic
configuration 153
QLogic ports 166
Solaris clients 156
Target mode 166
Target port binding 151
Troubleshooting clients 516
Zoning 152
FileSafe 21, 41, 179, 605
FileSafe Server 21
filesystem utility 75
Filtered Server Throughput Report 143
Foreign disk 55
format utility 91, 94
G
Global Cache 230
Global options 336
Groups 59, 281
Access control 283
Add resources 283
Create 281
Replication 282
GUID 21, 55, 59
H
Halt server 49
health monitoring 199
heartbeat 199
High availability 181
Host-based protection 21
Hostname
Change 31, 48
HotZone 21, 232
Configure 233
Disable 239
Prefetch 232
Read Cache 232
Status 237
Suspend 239
HP iLO 201, 204
HP OpenView
SNMP integration 436
HP-UX 26
HP-UX Client 62, 177
Delete SAN Resource 95
HyperTrac 21
I
IBM Tivoli NetView
SNMP integration 440
IDE drives 53
Import
Disk 55
In-Band Protection 20
Installation
SNMP
BMC Patrol 442
CA Unicenter TNG 438
HP OpenView 436
IBM Tivoli NetView 440
IP address
changing 496
IPBonding
mode options 318
IPMI 49, 201, 204, 472
Filter 50
Monitor 49
IPStor Admins
Access rights 42
IPStor Server
Checking processes 99
IPStor Users
Access rights 42
ipstorconsole.log 64
iSCSI Client 21
Failover 114, 186
Troubleshooting 515
iSCSI Target 22
iSCSI Target Mode 106
Initiators 106
Targets 106
Windows
Add iSCSI client 108
Disable 114
Enable 107
Stationary client 109
ismon
Statistics 101
J
Jumbo frames 48, 512
K
Keycodes 34
kisdev# 385
L
Label devices 90
Licensing 30, 34
Link Aggregation 318
Linux Client 62, 177
Expand virtual device 94
Troubleshooting 516
Local Replication 23, 320
Logical Resources 22, 58
Expand 92
Icons 59, 510
Status 59, 510
Logs 115
Activity log 36
Console 64
Event log refresh 64
ipstorconsole.log 64
LUN
Scan LUNs greater than zero 502
M
MaxRequestHoldTime 114
MCS 498
Menu
Customize Console 65
MIB 22
MIB file 429
loading 430, 497
MIB module 429
Microscan 22, 39, 327, 336
Microsoft iSCSI initiator 114
default retry period 114
Migrate
Drives 75
Mirroring 240
And Failover 222, 260
CDP journal 296
Configuration 242
Configuration repository 195
Expand primary disk 254
Fix minor disk failure 252
Global options 259
Monitor 247
Performance 38, 259
Promote the mirrored copy 250
Properties 259
Rebuild 258
Recover from failure 252
Remove configuration 260
Replace a physical disk 253
Replace disk in active configuration 252
Replace failed disk 252
Replication note 353
Requirements 242
Resume 258
Resynchronization 39, 248, 259
Setup 242
Snapshot resource 269
Status 250
Suspend 258
Swap 250
Synchronize 254
MPIO 498
MTU 48
Multipathing 56, 387
Aliasing 56
Load distribution 388
load distribution 388
Path management 389
Mutual CHAP 45, 46
change IP address 497
N
Near-line mirroring 354
After configuration 363
Configuration 355
Fix minor disk failure 379
Global options 377
Monitor 358
Overview 354
Performance 377
Properties 378
Rebuild 373
Recover data 365
Recover from failure 379
Remove configuration 378
Replace a physical disk 380
Replace disk in active mirror 380
Replace failed disk 379
Requirements 355
Resume 377
Re-synchronization 359
Rollback 366
Setup 355
Status 364
Suspend 377
Swap 373
Synchronize 373
NetView
SNMP integration 440
Statistics 441
NetWare Client
Assigning resources 172
QLogic
driver 156
Troubleshooting 517
Network configuration 30, 47
Network connectivity 510
Failure 182
NIC Port Bonding 22, 316
NNM
SNMP integration 436
Statistics 437
NPIV 22, 200
NSS
What is? 14
O
OID 22, 25
Out of kernel resources error 525
P
Passwords
Add/delete administrator password 41
Change administrator password 41, 44
Patch
Apply 46
Rollback 46
Path failure 190
Performance 225
Mirror 38
Mirroring 259
Near-line mirroring 377
Replication 38, 336
Persistent binding 51, 156, 160, 504
Clients 155
Downstream 151
Troubleshooting 503
Persistent reservation 112
Physical device
Prepare 52
Rename 53
Repair 56
Test throughput 56
Physical Resource Allocation Report 135
Physical resources 50, 71
Check 100
Icons 51
IDE drives 53
Prepare Disks 76
Troubleshooting 497
Physical Resources Allocation Report 134
Physical Resources Configuration Report 133
Ports 180
Power Control options 200, 203
Prefetch 22, 232
Prepare disks 52, 76
pure-ftpd package 48
PVLink 162
Q
Qlc driver 157
QLogic
Configuration 153
HBA 160, 200
iSCSI HBA 103
Ports 166
Target mode settings 153
Queue Depth 516
Quiescent 294
Quota
Group 43, 44
User 43, 44
R
RAID Management
Array 625
Automatic discovery 616
Check actions 635
Console 615
Navigation tree 618
Controller modules 623
Controller settings 620
Discover Storage 615, 617
Automatic 616
Expansion enclosures 617
Manual 615
Disk drive
Assigned 625
Available 625
Empty 625
Failed 625
Hot spare 625
Remove 628
Set 628
Removed 625
Standby 625
Disk drive images 625
Disk drives 625
Interactive images 625
Enclosures 621
Expansion enclosures 621
FalconStor Management console
Discover storage 647
Enclosures tab 647
IPMI information 648
Firmware upgrade 645
Hardware QuickStart Guide 612
Host information 642, 644
Hot spare
Remove 628
Set 628
In-band 615
Individual controller modules 624
Individual disk drives 627
Individual enclosure 621
Expansion enclosure 622
Storage enclosure head 622
Individual Raid arrays 632
Logical Unit Mapping 642
Logical Units 613, 629, 633, 638
Create Logical Unit 631
Define LUN mapping 638, 643
Delete Logical Unit 641
Remove LUN mapping 640
Rename Logical Unit 640
Unmapped Logical Units 642
LUs 613
Manual discovery 615
Mapped Logical Units 644
Monitor storage 647
Out-of-band 615
Preconfigured storage 613
RAID Arrays
Check actions 635
Create RAID Array 630
Delete RAID Array 634
Replace physical disk 635
RAID arrays 629
Logical Units 638
SAN Resources 613
Storage enclosure head 621, 625
Storage object 618
Storage profile 618
Read Cache 22
Reboot server 49
Recover data with TimeView 300
RecoverTrac 22
Relocate a replica 351
remote boot 500
Remote Replication 23, 320
Repair
Paths to a device 56
Replica resource
Protect 331
Replication 23, 320, 336
Assign clients to replica disk 337
Change configuration options 339
Compression 327
Configuration 322
Console 60
Continuous 321
Continuous replication resource 331
Delta 321
Delta mode 324
Encryption 327
Expand primary disk 352
Failover note 353
First replication 330
Force 341
How it works 321
Local 320
Microscan 39, 327, 336
Mirroring note 353
Performance 38, 336
Parameters 336
Policies 326
Primary disk 23, 320
Promote 337
Recover files 339
Recreate original configuration 338
Remote 320
Remove configuration 352
Replica disk 23, 320
Requirements 322
Resume schedule 341
Reversal 338, 349
Scan 23, 338
Setup 322
Start manually 341
Status 333
Stop in progress 341
Suspend schedule 341
Switch to replica disk 337
Synchronize 332, 341
Test 336
Throttle 39
TimeMark note 353
TimeMark/TimeView 339
Troubleshooting 522
Reports 118
Client Throughput 126
Creating 119
Global replication 148
Delta Replication Status 333
Disk Space Usage 128
Export data 123
Filtered Server Throughput 143
Physical Resource Allocation 135
Physical Resources Allocation 134
Physical Resources Configuration 133
SAN Client Usage Distribution 140
SAN Client/Resources Allocation 141
SAN Resource Usage Distribution 143
SAN Resources Allocation 142
SCSI Channel Throughput 137
SCSI Device Throughput 139
Server Throughput 126
Types 125
Global replication 148
Viewing 123
repositories 21
Rescan 196
Adapters 53
Resource IO Activity Report 135
Retention 23
RPC100 201, 204
S
SafeCache 23, 225, 326
Cache resource 226
Configure 226
Disable 231
Enlarge 231
Properties 231
Status 231
Suspend 231
Troubleshooting 523
SAN Client 61
Access control 63
Add 61, 176
Fibre Channel 169
iSCSI 108
AIX 62, 177
Assign SAN Resources 86
Definition 16
HP-UX 62, 177
iSCSI 106
Linux 62, 177
Solaris 62, 90, 177
Windows 90
SAN Client / Resources Allocation Report 141
SAN Client Usage Distribution Report 140
SAN Resource tab 139
SAN Resource Usage Distribution Report 143
SAN Resources 58, 71, 72
Access control 95
Assign to Clients 86
Create service enabled device 83
Create virtual device 76
Creating 76
Delete 95
Physical resources 72
Prepare Disk 76
Virtual devices 72
Virtualization examples 72
SAN Resources Allocation Report 142
SCSI
Aliasing 56, 190
Troubleshooting adapters/devices 501
SCSI Channel Throughput Report 137
SCSI Device Throughput Report 139
Security 178
Authentication 178
Authorization 179
Data access 178
Disable ports 180
Physical security of machines 180
Recommendations 179
Storage network topology 180
System management 178
Server
Authentication 178
Authorization 178
Check physical resources 100
Checking processes 99
Definition 16
Discover 29, 33
Import a disk 55
Log into 98
Network configuration 47
Properties 35
Remove storage device 102
Save/restore configuration 33
Scan LUNs greater than zero 502
Start 96
Statistics 101
Stop 96
telnet access 98
Uninstall 104
X-ray 518
Server Throughput Report 143
Service Enabled Devices 75
Creating 83
Troubleshooting 524
Service enabled devices
Creating 76
SMI-S 24
Snapshot 261
Agent 24
notification 267, 293
trigger 293
Resource
Check status 268
Delete 269
Expand 269
Mirror 269
offline 496
Options 269
Properties 269
Protect 269
Reinitialize 269
Shrink Policy 269
Troubleshooting 496
Setup 261
Snapshot Copy 276
Status 280
Snapshot Resource
expand 265
SNMP
Advanced topics 444
BMC Patrol 442
CA Unicenter TNG 438
Changing the community name 444
HP OpenView 436
IBM Tivoli NetView 440
Implementing 433
Integration 429
Limit to subnetwork 444
Manager on different network 444
Traps 37, 430
Troubleshooting 523
Using a configuration for multiple Storage Servers 444
snmpd.conf 444
Software updates
Add patch 46
Rollback patch 46
Solaris 157
Internal Fibre Channel drives 157
Solaris Client 62, 177
Expand virtual device 94
Persistent binding 156
Troubleshooting 518
Virtual devices 90
Statistics
ismon 101
Stop Takeover option 208
Storage 24
Remove device 102
Storage Cluster Interlink 183, 185
Port 24, 185, 197
Storage device path failure 190
Storage Pool Configuration Report 146
Storage pools 66
Access control 70
Administrators 66
Allocation Block Size 69
Create 67
Manage 66
Properties 68
Security 70
Set access rights 70
Tag 70
Type 68
Storage quota 43
Storage Server
Authentication 178
Authorization 178
Connect in Console 29
definition 16
Discover 29, 33
Import a disk 55
Network configuration 47
Save/restore configuration 33
Scan LUNs greater than zero 502
Troubleshooting 518
uninstall 496
X-ray 518
Swapping 211
Sync Standby Devices 183, 521
Synchronize Out-of-Sync Mirrors 39, 377
Synchronize Replica TimeMark 324
System
Disk 51
log 473
Management 178
tab 139
System maintenance 47
Halt 49
IPMI 49
Network configuration 47
Reboot 49
Restart network 49
Restart the server 49
Set hostname 48
T
Tachyon HBAs 162
Target mode settings
QLogic 153
Target port binding 151
target server 320
Thin Provisioning 24, 73, 78, 242, 322
Throttle 39
speed 347
tab 346
Throttle window
Add 346
Delete 346
Edit 346
Throughput
Test 56
TimeMark 24
Replication note 353
retention 23, 276
Troubleshooting 523
TimeMark/CDP 287
Add comment 296
Change priority 296
Copy 298
Create manually 296
Delete 315
Disable 315
Failover 222
Free up storage 315
Maximum reached 293
Policies 311, 314
Priority 293, 297
Replication 315
Resume CDP 314
Roll forward 310
Rollback 310
Scheduling 290
Setup 288
Status 294
Suspend CDP 314
TimeView 287, 300
TimeView 25, 287, 300
Recover data 300
Remap 307
Tivoli
SNMP integration 440
Trap 25
Traps 430
Trigger 25
Trigger Replication after TimeMark 353
Troubleshooting 495, 515
Block devices 497
CLI 523
Client
Connectivity 512
Windows 513
Console launch 505
Cross mirror 521
Debugging 513
Event log 510
Failover 520
Cross mirror 521
FC storage 503
Fibre Channel Client 516
iSCSI Client 515
Jumbo frame support 512
Linux Client 512, 516
NetWare SAN Client 517
Network connectivity 510
Physical resources 497
Replication 522
SafeCache 523
SCSI adapters and devices 501
Linux Client 502
Service Enabled Devices 524
Snapshot resources 496
SNMP 523
Solaris Client 518
TimeMark 523
Virtual device expansion 497
Windows client 513
U
UEFI 500
USEQUORUMHEALTH 185
User Quota Usage Report 147
V
VAAI 25
Virtual devices 72
Creating 76
Expand 92
expansion FAQ 497, 498
Virtualization 72
Examples 72
Volume set addressing 151, 169, 503
VSA 151, 169, 503
enable for client 503
W
watermark value 326
Windows 2000 Dynamic disks
Expand virtual device 94
Windows Client
Expand virtual device 94
Troubleshooting 513
Virtual devices 90
World Wide Port Names 170
Write caching 60
WWN Zoning 25
WWPN 88, 170
mapping 88
X
X-ray 518
CallHome 471
System Information file 472
Y
YaST 47
Z
ZeroImpact 25
backup 382
Zoning 152
Soft zoning 152