Issue 13
Date 2019-08-15
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://www.huawei.com
Email: support@huawei.com
Contents
2 Introduction.................................................................................................................................... 4
2.1 Basic Concepts............................................................................................................................................................... 4
2.1.1 Introduction to Windows Server..................................................................................................................................4
2.1.2 File Systems in Windows............................................................................................................................................ 4
2.2 Host-SAN Connectivity..................................................................................................................................................5
2.2.1 FC Connectivity...........................................................................................................................................................5
2.2.2 iSCSI Connectivity...................................................................................................................................................... 6
2.2.3 Multipath Connectivity................................................................................................................................................7
2.2.3.1 UltraPath................................................................................................................................................................... 7
2.2.3.2 MPIO........................................................................................................................................................................ 7
2.2.3.3 ALUA....................................................................................................................................................................... 8
2.2.4 SAN Boot.................................................................................................................................................................... 9
2.3 Interoperability Query.................................................................................................................................................. 10
2.4 Specifications................................................................................................................................................................11
2.5 Common Management Tools and Commands..............................................................................................................12
2.5.1 Remote Login............................................................................................................................................................ 12
2.5.2 Management Tool...................................................................................................................................................... 14
2.5.3 Disk Management Commands...................................................................................................................................16
3 Planning Connectivity................................................................................................................ 20
3.1 Non-HyperMetro Scenarios..........................................................................................................................................20
3.1.1 Direct-Attached FC Connections...............................................................................................................................20
3.1.2 Fabric-Attached FC Connections.............................................................................................................................. 22
3.1.3 Direct-Attached iSCSI Connections.......................................................................................................................... 25
3.1.4 Fabric-Attached iSCSI Connections..........................................................................................................................28
3.2 HyperMetro Scenarios.................................................................................................................................................. 31
4.1 Switch........................................................................................................................................................................... 32
4.2 Storage System............................................................................................................................................................. 32
4.3 Host...............................................................................................................................................................................32
4.3.1 Identifying HBAs...................................................................................................................................................... 33
4.3.2 Querying HBA Properties......................................................................................................................................... 33
5 Configuring Connectivity.......................................................................................................... 35
5.1 Establishing Fibre Channel Connections......................................................................................................................35
5.1.1 Host Configuration.................................................................................................................................................... 35
5.1.2 (Optional) Switch Configuration............................................................................................................................... 35
5.1.3 Storage System Configuration...................................................................................................................................42
5.2 Establishing iSCSI Connections................................................................................................................................... 46
5.2.1 Host Configuration.................................................................................................................................................... 46
5.2.2 (Optional) Switch Configuration............................................................................................................................... 53
5.2.3 Storage System Configuration...................................................................................................................................55
5.2.4 CHAP Authentication................................................................................................................................................59
5.3 Scanning LUNs on a Host............................................................................................................................................ 67
6 Configuring Multipathing.........................................................................................................68
6.1 Concepts....................................................................................................................................................................... 68
6.1.1 Initiator...................................................................................................................................................................... 68
6.1.2 HyperMetro Working Modes.....................................................................................................................................71
6.1.3 ALUA Working Principles........................................................................................................................................ 72
6.1.3.1 ALUA Working Principles and Failover in Non-HyperMetro Scenarios.............................................................. 72
6.1.3.2 ALUA Working Principles and Failover in HyperMetro Scenarios.......................................................................73
6.2 Configuring Multipathing in Non-HyperMetro Scenarios........................................................................................... 74
6.2.1 UltraPath.................................................................................................................................................................... 74
6.2.1.1 Storage System Configuration................................................................................................................................74
6.2.1.2 Host Configuration................................................................................................................................................. 74
6.2.2 OS Native Multipathing Software............................................................................................................................. 76
6.2.2.1 Storage System Configuration................................................................................................................................76
6.2.2.2 Host Configuration................................................................................................................................................. 78
6.2.2.3 Verification............................................................................................................................................................. 81
6.3 Configuring Multipathing in HyperMetro Scenarios................................................................................................... 82
6.3.1 UltraPath.................................................................................................................................................................... 82
6.3.1.1 Storage System Configuration................................................................................................................................82
6.3.1.2 Host Configuration................................................................................................................................................. 83
6.3.1.3 Verification............................................................................................................................................................. 85
6.3.2 OS Native Multipathing Software............................................................................................................................. 86
6.3.2.1 Storage System Configuration................................................................................................................................86
6.3.2.2 Host Configuration................................................................................................................................................. 87
6.3.2.3 Verification............................................................................................................................................................. 92
7 FAQs...............................................................................................................................................94
7.1 What Can I Do When Being Prompted to Uninstall MPIO During UltraPath Installation?........................................ 94
7.2 What Are Common Management Commands?............................................................................................................95
7.3 How Do I Change the Windows Disk Timeout Time?................................................................................................. 96
7.4 How Do I Modify the Timeout Time for the FC HBA Port Driver?............................................................................97
7.4.1 Modifying Emulex HBA Driver Parameters............................................................................................................. 97
7.4.2 Modifying QLogic HBA Driver Parameters............................................................................................................. 98
7.5 How Do I Modify the iSCSI Initiator's Driver Timeout Time?..................................................................................101
7.6 How Do I Change the Number of TCP Data Retransmission Times?........................................................................102
1.1 Purpose
1.2 Audience
1.3 Related Documents
1.4 Conventions
1.5 Where To Get Help
1.1 Purpose
This document details the configuration methods and precautions for connecting Huawei
SAN storage devices to Windows Server hosts.
1.2 Audience
This document is intended for:
• Huawei technical support engineers
• Technical engineers of Huawei's partners
• Personnel who are involved in interconnecting Huawei SAN and Windows servers or who are interested in the interconnection.
Readers of this guide are expected to be familiar with the following topics:
• Huawei OceanStor V3, OceanStor V5, and Dorado V3
• Windows Server
1.4 Conventions
Symbol Conventions
General Conventions
Boldface: Names of files, directories, folders, and users are in boldface. For example, log in as user root.
Command Conventions
Product Information
For documentation, release notes, software updates, and other information about Huawei
products and support, go to the Huawei Online Support site (registration required) at http://
support.huawei.com/enterprise/.
Technical Support
Huawei has a global technical support system that offers timely onsite and remote technical support.
For assistance, contact:
• Your local technical support
http://e.huawei.com/en/branch-office-query
• Huawei company headquarters.
Huawei Technologies Co., Ltd.
Address: Huawei Industrial Base Bantian, Longgang Shenzhen 518129 People's
Republic of China
Website: http://e.huawei.com/
Document Feedback
Huawei welcomes your suggestions for improving our documentation. If you have comments,
send your feedback to infoit@huawei.com.
2 Introduction
• NTFS
New Technology File System (NTFS) organizes files and directories using a B+ tree structure, which improves performance, reliability, and disk space utilization. NTFS also provides extended functions such as access control lists (ACLs) and file system logs.
• exFAT
Extended File Allocation Table File System (exFAT) is a Microsoft file system optimized for flash drives (NTFS is not suitable for managing flash drives). exFAT applies to Windows Embedded 5.0 and later (including Windows CE 5.0, Windows CE 6.0, Windows Mobile 5, Windows Mobile 6, and Windows Mobile 6.1). This file system supports files of 4 GB or larger, which FAT32 does not.
• ReFS
Resilient File System (ReFS) is a new file system introduced with Windows Server 2012. ReFS improves system availability and fault tolerance in the big data era. When working with Storage Spaces, ReFS provides a comprehensive, end-to-end, and flexible storage architecture.
2.2.1 FC Connectivity
A Fibre Channel (FC) SAN is a specialized high-speed network that connects host servers to
storage systems. The FC SAN components include HBAs in the host servers, switches that
help route storage traffic, cables, storage processors (SPs), and storage disk arrays.
To transfer traffic from host servers to shared storage, the FC SAN uses the Fibre Channel
protocol to package SCSI commands into Fibre Channel frames.
• Ports in FC SAN
Each node in the SAN, such as a host, a storage device, or a fabric component, has one or more ports that connect it to the SAN. Ports are identified in a number of ways, such as by:
– World Wide Port Name (WWPN)
A globally unique identifier for a port that allows certain applications to access the
port. The FC switches discover the WWPN of a device or host and assign a port
address to the device.
– Port_ID (or port address)
Within a SAN, each port has a unique port ID that serves as the FC address for the
port. This unique ID enables routing of data through the SAN to that port. The FC
switches assign the port ID when the device logs in to the fabric. The port ID is
valid only when the device is logged on.
• Zoning
Zoning provides access control in the SAN topology. Zoning defines which HBAs can
connect to which targets. When you configure a SAN by using zoning, the devices
outside a zone are not visible to the devices inside the zone.
Zoning has the following effects:
– Reduces the number of targets and LUNs presented to a host.
– Controls and isolates paths in a fabric.
– Separates different environments, for example, a test from a production
environment.
2.2.2 iSCSI Connectivity
By carrying SCSI commands over IP networks, iSCSI provides hosts with access to remote block devices in the SAN as if they were locally attached.
Each iSCSI node, such as an initiator or a target, is a single discoverable entity on the iSCSI SAN.
• IP address
Each iSCSI node can have an IP address associated with it so that routing and switching
equipment on your network can establish the connection between the server and storage.
This address is just like the IP address that you assign to your computer to get access to
your company's network or the Internet.
• iSCSI name
A worldwide unique name for identifying the node. iSCSI uses the iSCSI Qualified Name (IQN) and Extended Unique Identifier (EUI).
By default, Windows generates unique iSCSI names for your iSCSI initiators, for example, iqn.1991-05.com.microsoft:win-pjcqrusvvl9. The default value can be retained in most situations. If the value needs to be changed, ensure that the new iSCSI name entered is worldwide unique.
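As a quick check, you can query the initiator's IQN from PowerShell on Windows Server 2012 and later. The following is a minimal sketch; it assumes the Get-InitiatorPort cmdlet of the built-in Storage module is available on the host:
# List iSCSI initiator ports and display the IQN (NodeAddress).
Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "iSCSI" } | Format-List NodeAddress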
2.2.3.1 UltraPath
UltraPath is Huawei-developed multipathing software. It manages and processes disk creation and deletion and I/O delivery for operating systems.
UltraPath provides the following functions:
• Masking of redundant LUNs
In a redundant storage network, an application server with no multipathing software
detects a LUN on each path. Therefore, a LUN mapped through multiple paths is
mistaken for two or more different LUNs. UltraPath installed on the application server
masks redundant LUNs on the operating system driver layer to provide the application
server with only one available LUN, the virtual LUN. In this case, the application server
only needs to deliver data read and write operations to UltraPath. UltraPath then masks
the redundant LUNs, and writes data into LUNs without damaging other data.
• Optimum path selection
In a multipath environment, the owning controller of the LUN on the storage system
mapped to an application server is the prior controller. With UltraPath, an application
server accesses the LUN on the storage system through the prior controller, thereby
obtaining the highest I/O speed. The path to the prior controller is the optimum path.
• Failover and failback
– Failover
When a path fails, UltraPath fails over its services to another functional path.
– Failback
UltraPath automatically delivers I/Os to the first path again after the path recovers
from the fault.
• I/O load balancing
UltraPath provides load balancing within a controller and across controllers.
– For load balancing within a controller, I/Os poll among all the paths of the
controller.
– For load balancing across controllers, I/Os poll among the paths of all these
controllers.
• Path test
UltraPath tests faulty and idle paths:
– Faulty paths
UltraPath frequently tests faulty paths to detect the path recovery as soon as
possible.
– Idle paths
UltraPath tests idle paths to identify potentially faulty paths early on, preventing
unnecessary I/O retries. The test frequency is kept low to minimize impact on
service I/Os.
2.2.3.2 MPIO
Microsoft Multipath I/O (MPIO) allows storage vendors to develop multipathing solutions that contain the hardware-specific information needed to optimize connectivity with storage systems. MPIO can also be used on its own. This software helps balance loads among multiple paths and implements path selection and failover between storage systems and hosts. MPIO supports the following load balancing policies (a command example follows the list):
• Failover Only
This policy does not perform load balancing. This policy uses a single active path, and
the rest of the paths are standby paths. The active path is used for sending all I/Os. If the
active path fails, then one of the standby paths is used. When the failed path is
reactivated or reconnected, the standby path that was activated returns to standby.
• Round Robin
This load balancing policy allows the Device Specific Module (DSM) to use all available
paths for MPIO in a balanced way. This is the default policy that is chosen when the
storage controller follows the active-active model and the management application does
not specifically choose a load balancing policy.
• Round Robin with Subset
This load balancing policy allows the application to specify a set of paths to be used in a
round robin fashion, and with a set of standby paths. The DSM uses paths from a
primary path pool for processing requests as long as at least one of the paths is available.
The DSM uses a standby path only when all the primary paths fail. For example, given 4
paths: A, B, C, and D, paths A, B, and C are listed as primary paths and D is the standby
path. The DSM chooses a path from A, B, and C in round robin fashion as long as at
least one of them is available. If all three paths fail, the DSM uses D, the standby path. If
paths A, B, or C become available, the DSM stops using path D and switches to the
available primary paths.
• Least Queue Depth
This load balancing policy sends I/O down the path with the fewest currently outstanding I/O requests. For example, consider that one I/O is sent to LUN 1 on Path 1 and another I/O is sent to LUN 2 on Path 1. The cumulative outstanding I/O on Path 1 is 2, and on Path 2 it is 0. Therefore, the next I/O for either LUN will be processed on Path 2.
• Weighted Paths
This load balancing policy assigns a weight to each path. The weight indicates the relative priority of a given path: the larger the number, the lower the priority. The DSM chooses the least-weighted path from among the available paths.
• Least Blocks
This load balancing policy sends I/O down the path with the least number of data blocks currently being processed. For example, consider two I/Os: one is 10 bytes and the other is 20 bytes. Both are in process on Path 1, and there are no outstanding I/Os on Path 2. The cumulative outstanding amount of I/O on Path 1 is 30 bytes; on Path 2, it is 0. Therefore, the next I/O will be processed on Path 2.
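Load balancing policies can also be applied from the command line using the built-in mpclaim tool. The following sketch assumes an MPIO disk number of 0; the policy numbers correspond to those in Table 7-2:
rem View MPIO disks and their current load balancing policies.
mpclaim -s -d
rem Set the policy of MPIO disk 0 to Least Queue Depth (4); for example, 1 = Failover Only, 2 = Round Robin.
mpclaim -l -d 0 4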
2.2.3.3 ALUA
• ALUA definition
Asymmetric Logical Unit Access (ALUA) is a multi-target port access model. In a
multipathing state, the ALUA model provides a way of presenting active/passive LUNs
to a host and offers a port status switching interface to switch over the working
controller. For example, when a host multipathing program that supports ALUA detects a
port status change (the port becomes unavailable) on a faulty controller, the program will
automatically switch subsequent I/Os to the other controller.
• ALUA impacts
ALUA is applicable to a storage system that has only one prior LUN controller. All host
I/Os can be routed through different controllers to the working controller for execution.
ALUA will instruct the hosts to deliver I/Os preferentially from the LUN working
controller, thereby reducing the I/O routing-consumed resources on the non-working
controllers.
If all I/O paths of the LUN working controller are disconnected, the host I/Os will be
delivered only from a non-working controller and then routed to the working controller
for execution.
• Suggestions for using ALUA on Huawei storage
To prevent I/Os from being delivered to a non-working controller, you are advised to ensure that:
– LUN home/working controllers are evenly distributed on storage systems so that host service I/Os are delivered to multiple controllers for load balancing.
– Hosts always try their best to select the optimal path to deliver I/Os, even after an I/O path switchover.
2.2.4 SAN Boot
SAN Boot allows host operating systems to be installed on and booted from SAN storage devices. SAN Boot is also called Remote Boot or boot from SAN.
SAN Boot can help improve system integration, enable centralized management, and facilitate recovery.
• Server integration: Blade servers are used to integrate a large number of servers within a small space. There is no need to configure local disks.
• Centralized management: Boot disks of servers are centrally managed on a storage device. All advanced management functions of the storage device can be fully utilized. For example, the snapshot function can be used for backup. Devices of the same model can be quickly deployed using the snapshot function. In addition, the remote replication function can be used for disaster recovery.
• Quick recovery: Once a server that is booted from SAN fails, its boot volume can be quickly mapped to another server, achieving quick recovery.
2.3 Interoperability Query
Step 2 On the home page, choose Interoperability Center > Storage Interoperability.
----End
2.4 Specifications
Windows has different specifications for LUNs and file systems. Table 2-3 lists the
limitations on the number of LUNs. Table 2-4 lists specifications of NTFS with GPT disks.
Table 2-3 Limitations on the number of LUNs
• Windows Server 2008 and Windows Server 2008 R2: 8 buses per adapter, 128 target IDs per bus, and 255 LUNs per target ID; 4096 LUNs per target (Emulex) or 256 LUNs per target (QLogic).
• Windows Server 2012 and later versions: 255 buses per adapter, 128 target IDs per bus, and 255 LUNs per target ID; 4096 LUNs per target (Emulex) or 256 LUNs per target (QLogic).
NOTE
Table 2-4 lists only part of NTFS specifications. For more information about specifications of
each file system, see the corresponding Microsoft document, for example, NTFS Overview.
2.5.1 Remote Login
Step 1 Ensure that the network connectivity is normal between the client host and the managed host.
Step 2 On the managed host, enable the inbound firewall rules Remote Desktop (TCP-In) and File and Printer Sharing (Echo Request – ICMPv4-In).
Inbound rules are rules for receiving network information. After Remote Desktop (TCP-In) is enabled, other devices can access the managed host through the remote desktop. After File and Printer Sharing (Echo Request – ICMPv4-In) is enabled, other devices on the network can ping the managed host to check network connectivity.
To prevent security risks, you are advised to restore the firewall configuration to its initial state after completing host commissioning.
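On Windows Server 2012 and later, the two inbound rules can also be enabled from PowerShell. This is a sketch only; the display group names below are the English defaults and may differ on localized systems:
# Enable the inbound rule groups for remote desktop access and ping (ICMPv4) responses.
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"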
Step 3 Ping each other's IP address on the client host and the managed host respectively to verify
firewall configurations.
Step 4 Configure the remote login level on the managed host.
Right-click My Computer and choose Properties from the shortcut menu. In the dialog box
that is displayed, click Change Settings. In the dialog box that is displayed, click the Remote
tab. In Remote Desktop, select Allow Connections from computers running any version
of Remote Desktop (less secure).
Step 5 On the client host, enter mstsc in the Run window to start the remote desktop connection, as
shown in Figure 2-4.
Step 6 In the remote desktop connection dialog box that is displayed, enter the IP address of the
managed host.
Step 7 Enter the username and password of the managed host.
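The connection can also be started directly from the command line; the IP address below is a hypothetical example:
mstsc /v:192.168.100.10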
----End
2.5.2 Management Tool
Step 2 Right-click Computer and choose Manage from the shortcut menu.
Server Manager is started, as shown in Figure 2-5.
----End
Managing Disks
You can use Server Manager to manage storage resources, such as initializing disks,
partitioning disks, formatting disks, and managing volumes.
In Windows Server 2008 and later versions, you need to set the state of LUNs mapped to the
host for the first time to online in Disk Management. The operating system then marks the
LUNs for identification. This process is disk initialization. Only the initialized disks can be
used by the host in volume management. You need to specify disk partition format when
initializing disks. Available formats are:
• Master Boot Record (MBR)
• GUID Partition Table (GPT)
NOTE
In versions earlier than Windows Server 2008, you do not need to set a disk online or specify partition
format. The system uses the default format MBR.
The partition format refers to the method of organizing disk partitions in Windows XP Professional and
Windows Server 2003. For details, see:
https://www.microsoft.com/en-US/download/details.aspx?id=53314
In Windows, disks are categorized as basic and dynamic disks. Only simple volumes can be
created on basic disks. Spanned volumes, mirror volumes, striped volumes, and RAID-5
volumes are created on dynamic disks. In Windows Server 2008 and later versions, the
operating system converts basic disks to dynamic disks when spanned volumes (or other
volumes that can only be created on dynamic disks) are created on basic disks.
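The diskpart sketch below illustrates this process for three newly mapped LUNs. The disk numbers (2, 3, and 4) are assumptions; adjust them to your environment:
DISKPART> rem Bring each new disk online, clear the read-only attribute, and convert it to dynamic (repeat for disks 3 and 4).
DISKPART> select disk 2
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> convert dynamic
DISKPART> rem After all three disks are dynamic, create a RAID-5 volume across them.
DISKPART> create volume raid disk=2,3,4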
The preceding commands show how three LUNs are initialized and then used to create a RAID-5 volume.
Step 2 Set the state of the LUNs mapped to the host for the first time to online.
DISKPART> select disk 2
NOTE
Compared with GUI-based disk management, the CLI disk management commands in Windows are complex. However, these CLI commands are useful for automated management and testing.
----End
3 Planning Connectivity
Windows hosts and storage systems can be connected based on different criteria. Table 3-1
describes the typical connection modes.
Fibre Channel connections are the most widely used. To ensure service data security, both
direct-attached connections and fabric-attached connections require multiple paths.
The following details Fibre Channel and iSCSI connections in HyperMetro and non-
HyperMetro scenarios.
3.1 Non-HyperMetro Scenarios
3.2 HyperMetro Scenarios
Two-Controller Storage
The following uses Huawei OceanStor 5500 V3 as an example to explain how to directly
connect a Windows host to a two-controller storage system through FC multi-path
connections, as shown in Figure 3-1.
NOTE
In this connection diagram, each of the two controllers is connected to a host HBA port with an optical
fiber. The cable connections are detailed in Table 3-2.
Multi-Controller Storage
The following uses Huawei OceanStor 18800 V3 (four-controller) as an example to explain
how to directly connect a Windows host to a multi-controller storage system through FC
multi-path connections, as shown in Figure 3-2.
NOTE
In this connection diagram, each of the four controllers is connected to a host HBA port with an optical
fiber. The cable connections are detailed in Table 3-3.
Two-Controller Storage
The following uses Huawei OceanStor 5500 V3 as an example to explain how to connect a
Windows host to a two-controller storage system through FC multi-path connections using a
switch, as shown in Figure 3-3.
NOTE
In this connection diagram, two controllers of the storage system and two ports of the Windows host are
connected to the FC switch through optical fibers. On the FC switch, the ports connecting to the storage
controllers and to the Windows host are grouped in a zone, ensuring connectivity between the host ports
and the storage.
NOTE
Zone division in this table is for reference only. Plan zones based on site requirements.
Multi-Controller Storage
The following uses Huawei OceanStor 18800 V3 (four-controller) as an example to explain
how to connect a Windows host to a multi-controller storage system through FC multi-path
connections using a switch, as shown in Figure 3-4.
NOTE
In this connection diagram, four controllers of the storage system and two ports of the Windows host are
connected to the FC switch through optical fibers. On the FC switch, the ports connecting to the storage
controllers and to the Windows host are grouped in a zone, ensuring connectivity between the host ports
and the storage.
NOTE
Zone division in this table is for reference only. Plan zones based on site requirements.
Two-Controller Storage
The following uses Huawei OceanStor 5500 V3 as an example to explain how to directly
connect a Windows host to a two-controller storage system through iSCSI multi-path
connections, as shown in Figure 3-5.
NOTE
In this connection diagram, each of the two controllers is connected to a port on the host network adapter
with a network cable. The IP address plan is detailed in Table 3-6.
Table 3-6 IP address plan for direct-attached iSCSI multi-path connections (two-controller
storage)
Port Name Port Description IP Address Subnet Mask
NOTE
IP addresses in this table are for reference only. Plan IP addresses based on site requirements.
Multi-Controller Storage
The following uses Huawei OceanStor 18800 V3 (four-controller) as an example to explain
how to directly connect a Windows host to a multi-controller storage system through iSCSI
multi-path connections, as shown in Figure 3-6.
NOTE
In this connection diagram, each of the four controllers is connected to a port on host network adapters
with a network cable. The IP address plan is detailed in Table 3-7.
Table 3-7 IP address plan for direct-attached iSCSI multi-path connections (four-controller
storage)
NOTE
IP addresses in this table are for reference only. Plan IP addresses based on site requirements.
Two-Controller Storage
The following uses Huawei OceanStor 5500 V3 as an example to explain how to connect a
Windows host to a two-controller storage system through iSCSI multi-path connections using
an Ethernet switch, as shown in Figure 3-7.
NOTE
In this connection diagram, two controllers of the storage system and two ports of the Windows host
network adapter are connected to the Ethernet switch through network cables. IP addresses of the ports
on the storage and host are in the same subnet, ensuring connectivity between the host ports and the
storage.
Table 3-8 IP address plan for fabric-attached iSCSI multi-path connections (two-controller
storage)
Port Name Port Description IP Address Subnet Mask
NOTE
IP addresses in this table are for reference only. Plan IP addresses based on site requirements.
Multi-Controller Storage
The following uses Huawei OceanStor 18800 V3 (four-controller) as an example to explain
how to connect a Windows host to a multi-controller storage system through iSCSI multi-path
connections using an Ethernet switch, as shown in Figure 3-8.
NOTE
In this connection diagram, four controllers of the storage system and four ports of the Windows host
network adapters are connected to the Ethernet switch through network cables. IP addresses of the ports
on the storage and host are in the same subnet, ensuring connectivity between the host ports and the
storage.
Table 3-9 IP address plan for fabric-attached iSCSI multi-path connections (four-controller
storage)
Port Name Port Description IP Address Subnet Mask
NOTE
IP addresses in this table are for reference only. Plan IP addresses based on site requirements.
This chapter describes the preparations on the switches, storage systems, and hosts.
4.1 Switch
4.2 Storage System
4.3 Host
4.1 Switch
Ensure that the switches are functioning properly, and that their ports have the necessary licenses and can transmit data normally. Refer to the switch vendor's documentation for details on how to check functionality and license status for the switches in your environment. Figure 4-1 shows an example of a port failure due to the lack of a license.
4.3 Host
Before connecting a host to a storage system, make sure that the host HBAs have been
identified and are functioning properly. You also need to obtain the world wide names
(WWNs) of HBA ports for subsequent storage system configurations.
QLogic SANsurfer
Windows also provides a Fibre Channel Information Tool for discovery of SAN resources,
which is available at:
http://www.microsoft.com/en-us/download/details.aspx?id=17530
After the software is installed, run fcinfo in the Command Prompt to obtain the HBA
information. Figure 4-3 provides an example.
For Windows Server 2012 and later versions, Windows PowerShell provides the Get-InitiatorPort command to query the WWNs of FC HBAs and information about iSCSI initiators. Figure 4-4 shows an example.
Figure 4-4 Querying the FC HBA and iSCSI initiator information in Windows Server 2016
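A minimal sketch of such a query follows; NodeAddress, PortAddress, and ConnectionType are standard properties of Get-InitiatorPort:
# List all initiator ports; PortAddress holds the WWPN of an FC HBA and NodeAddress holds the IQN of the iSCSI initiator.
Get-InitiatorPort | Format-List NodeAddress, PortAddress, ConnectionType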
5 Configuring Connectivity
The following uses a Brocade switch as an example to explain how to configure switches.
2. In the Web Tools login dialog box that is displayed, enter the account and password.
The default account and password are admin and password.
Web Tools works properly only when Java is installed on the host. Java 1.6 or later is recommended.
Step 2 On the switch management page that is displayed, click Switch Information.
• Fabric OS version indicates the switch version. The interoperability between switches and storage systems varies with the switch version. Use only switches with verified interoperability.
• Type is a decimal consisting of an integer and a decimal fraction. The integer indicates the switch model and the decimal fraction indicates the switch template version. You only need to pay attention to the switch model. Table 5-1 describes the mapping between switch types and names.
• Ethernet IPv4 indicates the switch IP address.
• Effective configuration indicates the currently effective configurations. This parameter is critical to subsequent zone configurations. In this example, the currently effective configuration is ss.
----End
Configuring Zones
Zone configuration is important for Fibre Channel switches. The configurations differ with
the switch vendor, model, and version. For details, refer to the configuration guide specific to
the switch used in your layout. The following uses the Brocade 6510 switch as an example to
explain the zone configuration procedure.
In normal conditions, port indicators on the switch are steady green after the corresponding
ports have been connected to hosts and storage arrays using optical fibers. The example
illustrated in Figure 5-3 uses ports 0, 1, 4, and 5.
Choose Configure > Zone Admin from the main menu of Web Tools.
Step 4 Check whether the switch has identified hosts and storage systems.
On the Zone Admin page, click the Zone tab. In Member Selection List, check whether all
related ports have been identified, as shown in Figure 5-5.
In this example, the hosts use ports 0 and 1, while the storage systems use ports 4 and 5. The
display indicates that the switch has correctly identified the devices connected by the four
ports.
Step 5 Create zones.
On the Zone tab page, click New Zone and enter a name (Zone001 in this example). Add
port 0 (connecting to port P0 of a host) and port 4 (connecting to controller A of a storage
system) to this zone, as shown in Figure 5-6.
Use the same method to create Zone002 to Zone004. Add ports 1 and 5 to Zone002, ports 0 and 5 to Zone003, and ports 1 and 4 to Zone004.
Step 6 Add the new zones to the configuration file and activate them.
On the Switch View tab page, identify the effective configuration file, as shown in Figure
5-7.
On the Zone Admin page, click the Zone Config tab. In the Name drop-down list, choose
the effective configuration file New_config.
In Member Selection List, select Zone001 to Zone004 and add them to the configuration
file.
Click Save Config to save the configuration and then click Enable Config for the
configuration to take effect.
Figure 5-8 shows the configuration on the GUI.
On the Name Server tab page, verify that the ports have been added to the zones and the
zones have taken effect (marked * in the upper right corner), as shown in Figure 5-9.
----End
After you have configured the zones on the switch, log in to DeviceManager of the storage
system and choose Provisioning > Host > Initiator. On the page that is displayed, select FC
from the Initiator Type drop-down list. Check whether the host initiators have been
discovered.
As shown in Figure 5-10, the host initiators have been discovered and are online.
Step 2 Click the Host tab, select the host that was created on the storage system, and click Add
Initiator.
Step 3 Select FC from the Initiator Type drop-down list and find the host initiators' WWNs.
Step 4 Select the host initiators and add them to Selected Initiators.
Step 5 Verify that the initiators have been added to the host correctly.
As shown in Figure 5-14, the initiators have been added to the host successfully. The initiator
properties depend on the operating system and multipathing software used by the hosts. For
details, see storage configurations in 6 Configuring Multipathing.
----End
Step 1 Choose Control Panel > Network and Internet > Network Connections. Right-click the
desired network port and choose Properties from the shortcut menu.
Step 2 Double-click Internet Protocol Version 4. In the dialog box that is displayed, configure an
IPv4 address. Figure 5-16 shows an example.
Select Use the following IP address and configure the following parameters:
• IP address
• Subnet mask
• Default gateway
----End
To configure an IP address for a host running Windows Server 2012/2016, perform the
following operations:
Step 1 Choose Control Panel > Network and Internet > Network and Sharing Center > Change
adapter settings. Right-click the desired network port and choose Properties from the
shortcut menu.
Step 2 Double-click Internet Protocol Version 4. In the displayed Internet Protocol Version 4
(TCP/IPv4) Properties window, configure an IPv4 address. Figure 5-17 shows an example.
Select the Use the following IP address option and configure the following parameters:
• IP address
• Subnet mask
• Default gateway
----End
Step 1 Ping IP addresses of the host and storage system respectively to verify the network
connectivity between them.
Step 2 Enter a name for the initiator.
On the iSCSI Initiator Properties page, click the Configuration tab and specify an initiator
name, as shown in Figure 5-18.
NOTE
According to the iSCSI protocol, a host can send NOP Out heartbeat packets to check link
connectivity between an initiator and a target. By default, this function is disabled for
Windows Server. To enable this function, locate the parameter HKLM\SYSTEM
\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-
BFC1-08002BE10318}\<Instance Number>\Parameters\EnableNOPOut in the registry,
change its value to 1, and restart the host for the modification to take effect.
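The change can also be scripted as sketched below. Replace the <Instance Number> placeholder with the actual instance (for example, 0001) on your host:
rem Enable NOP Out heartbeat packets for the iSCSI initiator instance; restart the host afterward.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance Number>\Parameters" /v EnableNOPOut /t REG_DWORD /d 1 /f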
----End
Configuring VLANs
When many hosts are connected by an Ethernet network, a large number of broadcast packets
are generated during communication between the hosts. Broadcast packets sent from one host
will be received by all the other hosts on the network, consuming considerable bandwidth.
Moreover, all hosts on the network can access each other, leaving data vulnerable to security
risks.
Dividing hosts on an Ethernet network into multiple logical groups helps save bandwidth and prevent security risks. Each logical group is a VLAN. The following uses a Huawei Quidway 2700 Ethernet switch as an example to explain how to configure VLANs.
In the following example, two VLANs (VLAN 1000 and VLAN 2000) are created. VLAN
1000 contains ports GE 1/0/1 to 1/0/16. VLAN 2000 contains ports GE 1/0/20 to 1/0/24.
Step 1 Go to the system view.
<Quidway>system-view
System View: return to User View with Ctrl+Z.
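The VLAN creation commands might look as follows. This is a sketch of the Quidway CLI; the exact syntax can vary with the switch model and software version:
[Quidway] vlan 1000
[Quidway-vlan1000] port GigabitEthernet 1/0/1 to GigabitEthernet 1/0/16
[Quidway-vlan1000] quit
[Quidway] vlan 2000
[Quidway-vlan2000] port GigabitEthernet 1/0/20 to GigabitEthernet 1/0/24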
----End
Binding Ports
When storage systems and hosts are connected point-to-point, the existing bandwidth may be insufficient for storage data transmission. Moreover, point-to-point connections provide no link redundancy. To address these problems, ports are bound (link aggregation) to improve bandwidth and balance loads among multiple links.
Huawei OceanStor storage devices support 802.3ad link aggregation (dynamic aggregation).
In this link aggregation mode, multiple network ports are in an active aggregation group and
work in duplex mode at the same speed. After binding iSCSI front-end ports on a storage
device, enable aggregation for their peer ports on the switch. Otherwise, links are unavailable
between the storage device and the switch.
This section uses switch ports GE 1/0/1 and GE 1/0/2 and the storage system's ports P2 and
P3 as an example to explain how to bind ports.
The port binding method differs with the OceanStor system version. For details, refer to the
specific storage product documentation.
Step 1 Log in to DeviceManager and choose Provisioning > Port > Ethernet Ports.
Step 2 Bind ports.
1. Select the ports that you want to bind and choose More > Bond Port.
The Bond Port dialog box is displayed.
2. Specify Bond Name, select the target ports, and click OK.
3. In the security alert dialog box that is displayed, select I have read and understand the
consequences associated with performing this operation and click OK.
4. In the Success dialog box that is displayed, click OK.
After the storage system ports are bound, configure link aggregation on the switch using the
following command:
<Quidway>system-view
System View: return to User View with Ctrl+Z.
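The aggregation commands might look as follows. This is a sketch of the Quidway CLI; the exact syntax can vary with the switch model and software version:
[Quidway] interface GigabitEthernet 1/0/1
[Quidway-GigabitEthernet1/0/1] lacp enable
[Quidway-GigabitEthernet1/0/1] quit
[Quidway] interface GigabitEthernet 1/0/2
[Quidway-GigabitEthernet1/0/2] lacp enable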
After the command is executed, LACP is enabled for ports GE 1/0/1 and GE 1/0/2. Then the
ports can be automatically detected and added to an aggregation group.
----End
6. Click Modify.
----End
The initiator properties depend on the operating system and multipathing software used by the
hosts. For details, see storage configurations in 6 Configuring Multipathing.
Windows iSCSI Initiator automatically reconnects to favorite targets after failed links recover. However, when CHAP authentication is configured, iSCSI Initiator does not automatically update the CHAP authentication information of favorite targets before reconnecting. As a result, the automatic target connection may fail. To ensure correct connection to favorite targets, update the favorite target information after configuring CHAP authentication as follows:
Step 1 On the Favorite Target tab page of iSCSI Initiator, delete the previously configured target.
Step 2 Reconnect to the iSCSI target. Enter the user name and password of CHAP authentication and
select Add this connection to the list of Favorite Targets.
----End
Step 1 In the ISM navigation tree, choose SAN Services > Mappings > Initiators. In the function pane, select the initiator whose CHAP authentication you want to configure and choose CHAP > CHAP Configuration in the navigation bar, as shown in Figure 5-24.
Step 2 In the CHAP Configuration dialog box that is displayed, click Create in the lower right
corner, as shown in Figure 5-25.
In the Create CHAP dialog box that is displayed, enter the CHAP user name and password,
as shown in Figure 5-26.
The CHAP user name contains 4 to 25 characters and the password contains 12 to 16 characters.
The limitations on the CHAP user name and password vary with storage systems. For details, see the help documentation of the corresponding storage system.
Step 3 Assign the CHAP user name and password to the initiator, as shown in Figure 5-27.
Step 4 In the ISM navigation tree, choose SAN Services > Mappings > Initiators. In the function pane, select the initiator whose CHAP account is to be enabled and choose CHAP > Status Settings in the navigation bar, as shown in Figure 5-28.
Step 5 In the Status Settings dialog box that is displayed, choose Enabled from the CHAP Status
drop-down list, as shown in Figure 5-29.
----End
OceanStor 18000/T V2/V3 (V300R001) storage system
The iSCSI initiators' CHAP authentication methods are similar for OceanStor 18000/T V2/V3
systems. The following uses OceanStor V3 (V300R001) as an example to describe how to
configure CHAP authentication.
Step 1 On DeviceManager, click the icon on the right navigation tree. Then, click Host in the
displayed page.
Step 2 Select the host for which CHAP authentication needs to be enabled. In the initiator list, select
the target initiator and click Modify.
Step 3 In the displayed Modify Initiator dialog box, select Enable CHAP authentication, enter the
CHAP name and password, and then click OK.
----End
OceanStor V3 (V300R002 and later)/Dorado V3/OceanStor V5 storage system
The iSCSI CHAP authentication methods are similar for OceanStor V3 (V300R002 and
later), OceanStor V5, and Dorado V3. The following uses OceanStor V3 as an example to
describe how to configure CHAP authentication.
Step 2 Select the host for which CHAP authentication needs to be enabled. In the initiator list, select
the target initiator and click Properties.
Step 3 In the displayed Initiator Properties dialog box, select Enable CHAP authentication, enter
the CHAP name and password, and then click OK.
----End
6 Configuring Multipathing
6.1 Concepts
6.2 Configuring Multipathing in Non-HyperMetro Scenarios
6.3 Configuring Multipathing in HyperMetro Scenarios
6.1 Concepts
6.1.1 Initiator
Table 6-1 describes the key parameters of initiators.
Special mode type: Determines which special mode is used for path switchover. The default value is Mode 0. All special modes support ALUA. Detailed requirements are as follows:
• Mode 0:
– The storage system version must be V500R007C00 and later, V300R003C20 and later, V300R006C00SPC100 and later, or Dorado V300R001C01SPC100 and later.
– The host and storage system must be connected using a Fibre Channel network.
– The OS of the host that connects to the storage system must be Red Hat 7.X, Windows Server 2012 (using QLogic HBAs), or Windows Server 2008 (using QLogic HBAs).
• Mode 1:
– The storage system version must be V500R007C00 and later, V300R003C20 and later, V300R006C00SPC100 and later, or Dorado V300R001C01SPC100 and later.
– The OS of the host that connects to the storage system must be AIX or VMware.
– If HyperMetro is deployed, HyperMetro must work in load balancing mode.
• Mode 2:
– The storage system version must be V500R007C00 and later, V300R003C20 and later, V300R006C00SPC100 and later, or Dorado V300R001C01SPC100 and later.
– The OS of the host that connects to the storage system must be AIX or VMware.
– If HyperMetro is deployed, HyperMetro must work in local preferred mode.
• Mode 3:
– The storage system version must be V500R007C00 and later, V300R006C10SPC100 and later, or Dorado V300R001C01SPC100 and later.
– The OS of the host that connects to the storage system must be Linux or Solaris.
– HyperMetro must work in local preferred mode.
Path Type: In HyperMetro scenarios, the value can be either Optimal Path or Non-Optimal Path. The default value is Optimal Path.
• When HyperMetro works in load balancing mode, set the Path Type for the initiators of both the local and remote storage arrays to Optimal Path. Enable ALUA on both the host and storage arrays. If the host uses the round-robin multipathing policy, it delivers I/Os to both storage arrays in round-robin mode.
• When HyperMetro works in local preferred mode, set the Path Type for the initiator of the local storage array to Optimal Path, and that of the remote storage array to Non-Optimal Path. Enable ALUA on both the host and storage arrays. The host preferentially delivers I/Os to the local storage array.
In non-HyperMetro scenarios, the value is Optimal Path.
NOTE
• You must configure initiators according to the requirements of the specific OS that is installed on the host. All of the initiators added to a single host must be configured with the same switchover mode. Otherwise, host services may be interrupted.
• After configuring an initiator's switchover mode, you must restart the host for the configuration to take effect.
Load balancing mode: Applicable when the distance between both HyperMetro storage arrays is less than 1 km, such as when they are in the same equipment room or on the same floor.
1. Enable ALUA on the host and set the path selection policy to round-robin.
2. Configure an ALUA-supporting switchover mode for the initiators on both HyperMetro storage arrays. If multiple initiators are assigned to a host, this should be done for each of the initiators.
3. Set the path type of the initiators on both storage arrays to optimal.
Local preferred mode: Applicable when the distance between both HyperMetro storage arrays is greater than 1 km, such as when they are in different locations or data centers.
1. Enable ALUA on the host. It is advised to set the path selection policy to round-robin.
2. Configure an ALUA-supporting switchover mode for the initiators on both HyperMetro storage arrays. If multiple initiators are assigned to a host, this should be done for each of the initiators.
3. Set the path type of the initiators on the local array to optimal and those on the remote array to non-optimal.
(Figure: path states in HyperMetro scenarios. Paths from the host to controllers A and B are marked active-optimized (AO) or active-non-optimized (AN) on each storage array.)
When HyperMetro works in local preferred mode, the host multipathing software defines the
paths to the owning controller on the local storage array as AO paths. This ensures that the
host delivers I/Os only to the owning controller on the local storage array, reducing link
consumption. If all AO paths fail, the host will deliver I/Os to the AN paths on the non-
owning controller. If the owning controller of the local storage array fails, the system will
activate the other controller to maintain the local preferred mode.
6.2.1 UltraPath
After you install UltraPath, set the trespass policy for LUNs as follows:
NOTE
This configuration must be performed on all hosts separately. Retain the default settings for other parameters.
• For UltraPath earlier than 21.2.0, it is recommended that you run the set luntrespass command to disable the trespass function.
UltraPath CLI #3 >set luntrespass=off array_id=0
• For UltraPath 21.2.0 and later, it is recommended that you retain the default settings (the trespass function is disabled by default).
If a path switchover takes a long period of time, you can modify the timeout time for a driver
by following instructions in 7.4 How Do I Modify the Timeout Time for the FC HBA Port
Driver? or 7.5 How Do I Modify the iSCSI Initiator's Driver Timeout Time?, thereby
shortening I/O interruption.
For details about the old and new storage versions, see 2.2.3.3 ALUA.
The Switchover Mode and Path Type depend on the actual services. Different models of
Huawei storage systems support different ALUA policies. For details, refer to the specific
Huawei storage model's product documentation.
For details about the Windows versions, see the Huawei Storage Interoperability
Navigator.
If a LUN has been mapped to the host, you must restart the host for the configuration to take
effect after you modify the initiator parameters. If you configure the initiator for the first time,
restart is not needed.
Unless otherwise specified, the recommended configurations for Huawei storage that supports
ALUA are detailed as follows:
(Table: recommended ALUA configurations for OceanStor T V1 and OceanStor T V2.)
NOTE
You are advised to enable ALUA on OceanStor T V1 because mirror link interruption may interrupt
services if ALUA is disabled.
In the preceding figure, the VID is HUAWEI and the PID is XSG1.
NOTE
The VID must contain eight characters and the PID must contain 16 characters. If there are fewer characters, pad with spaces. You can copy the VID and PID from the output of the mpclaim -e command.
After the mpclaim -r -i -d "HUAWEI XSG1 " command is executed, the host restarts
automatically.
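For clarity, the fully padded form of the command is sketched below: "HUAWEI" is padded with spaces to eight characters and "XSG1" to sixteen:
rem Claim Huawei XSG1 devices for MPIO; with -r, the host restarts automatically.
mpclaim -r -i -d "HUAWEI  XSG1            "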
For Windows in non-HyperMetro networking, it is advisable to use the default MPIO policy.
If a path switchover takes a long period of time, you can modify the timeout time for a driver
by following instructions in 7.4 How Do I Modify the Timeout Time for the FC HBA Port
Driver?, 7.5 How Do I Modify the iSCSI Initiator's Driver Timeout Time?, and 7.6 How
Do I Change the Number of TCP Data Retransmission Times? to shorten I/O interruption.
----End
6.2.2.3 Verification
Run the mpclaim -s -d command to verify that the configuration has taken effect.
Run the mpclaim -s -d MPIO Disk No. command to verify path information about an MPIO
disk.
6.3.1 UltraPath
This section describes the operations on storage systems and hosts when UltraPath is used.
• For UltraPath 21.2.0 and later, it is recommended that you retain the default settings (the trespass policy is disabled by default).
After configuring the trespass policy, set the HyperMetro working mode to local preferred on
UltraPath. In this mode, the local storage array is preferred in processing host services. The
remote storage array is used only when the local array is faulty. This improves the service
response speed and reduces the access latency.
Table 6-8 lists the command for setting the HyperMetro working mode.
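As an illustration, the setting on the UltraPath CLI typically takes the following form (a sketch based on common UltraPath syntax; verify the exact command against Table 6-8 and your UltraPath version, and note that the array ID 0 is an example):
UltraPath CLI #0 >set hypermetro workingmode=priority primary_array_id=0
In this form, priority selects the local preferred mode and primary_array_id identifies the local (preferred) storage array.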
If a path switchover takes a long period of time, you can modify the timeout time for a driver
by following instructions in 7.4 How Do I Modify the Timeout Time for the FC HBA Port
Driver? or 7.5 How Do I Modify the iSCSI Initiator's Driver Timeout Time?, thereby
shortening I/O interruption.
6.3.1.3 Verification
Run the upadm show upconfig command. If the command output contains the following
information, the configuration is successful.
HyperMetro WorkingMode : read write within primary array
The Switchover Mode and Path Type depend on the actual services. For details, see
HyperMetro Configuration Guide for Huawei SAN Storage Using OS Native
Multipathing Software.
For details about the Windows versions, see the Huawei Storage Interoperability
Navigator.
If a LUN has already been mapped to the host, you must restart the host after modifying the initiator parameters for the configuration to take effect. If you are configuring the initiator for the first time, no restart is needed.
In the preceding figure, the VID is HUAWEI and the PID is XSG1.
NOTE
On the Windows server, open Command Prompt and run the mpclaim -r -i -d "HUAWEI
XSG1 " command, as shown in Figure 6-17.
NOTE
The VID must contain eight characters and the PID must contain 16 characters. If there are fewer characters, pad them with trailing spaces. You can copy the VID and PID from the output of the mpclaim -e command.
After the mpclaim -r -i -d "HUAWEI XSG1 " command is executed, the host restarts
automatically.
The default policy varies with operating system configurations. Table 6-10 lists the default
policies for commonly used operating systems.
For Windows in non-HyperMetro networking, it is advisable to use the default MPIO policy.
Step 5 Enable path verification.
On the MPIO tab, click Details. In the dialog box that is displayed, select Path Verify
Enabled and click OK. Then restart the host for the configuration to take effect.
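If you prefer to script this setting, the Microsoft DSM also exposes it in the registry (a sketch; the MSDSM parameter location follows Microsoft's MPIO registry documentation, and a host restart is still required):
rem Enable MPIO path verification for the Microsoft DSM (1 = enabled, 0 = disabled)
reg add HKLM\SYSTEM\CurrentControlSet\Services\MSDSM\Parameters /v PathVerifyEnabled /t REG_DWORD /d 1 /f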
If a path switchover takes a long period of time, you can modify the timeout time for a driver
by following instructions in 7.4 How Do I Modify the Timeout Time for the FC HBA Port
Driver?, 7.5 How Do I Modify the iSCSI Initiator's Driver Timeout Time?, and 7.6 How
Do I Change the Number of TCP Data Retransmission Times? to shorten I/O interruption.
----End
6.3.2.3 Verification
Run the mpclaim -s -d command to verify that the configuration has taken effect.
Run the mpclaim -s -d MPIO Disk No. command to verify path information about an MPIO
disk.
7 FAQs
7.1 What Can I Do When Being Prompted to Uninstall MPIO During UltraPath Installation?
7.2 What Are Common Management Commands?
7.3 How Do I Change the Windows Disk Timeout Time?
7.4 How Do I Modify the Timeout Time for the FC HBA Port Driver?
7.5 How Do I Modify the iSCSI Initiator's Driver Timeout Time?
7.6 How Do I Change the Number of TCP Data Retransmission Times?
NOTE
l VendorID must be eight bytes long and ProductID must be 16 bytes long. If either contains fewer bytes, use spaces as placeholders.
For details about the meaning of parameter num in the command for modifying the load
balancing policy, see Table 7-2.
Table 7-2 Meaning of parameter num in the command for modifying the load balancing
policy
Parameter Definition
1 Failover Only
2 Round Robin
5 Weighted Paths
6 Least Blocks
7 Vendor Specific
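For example (a sketch; MPIO disk number 0 is an illustrative value), setting a disk's policy to Round Robin uses num value 2:
rem Set the load balancing policy of MPIO disk 0 to Round Robin (num = 2)
mpclaim -l -d 0 2
rem Confirm the policy now reported for the disk
mpclaim -s -d 0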
If the storage system does not respond to a data request before the disk timeout time expires, the host operating system reports an error or abandons the data request.
By default, the disk timeout time in Windows is 60s. The disk timeout time may need to be changed in some special conditions, for example, when installing drivers. In such cases, change the disk timeout time back to 60s after the drivers are installed.
The following explains how to change the disk timeout in the registry.
Step 1 In Command Prompt, run the regedit command to open the Registry Editor.
Step 2 Choose HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Services > Disk.
Step 3 Double-click TimeoutValue. The Edit DWORD Value dialog box is displayed.
Step 4 In Value data, enter the desired value. You can select the value format. The available formats are Hexadecimal and Decimal, as shown in Figure 7-2.
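Alternatively, you can script the change in Command Prompt with the built-in reg utility (a sketch; 60, the default value in seconds, is used here):
rem Set the disk I/O timeout to 60 seconds (decimal)
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeoutValue /t REG_DWORD /d 60 /f
rem Verify the new value
reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeoutValue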
----End
Before modifying FC HBA port parameters, you need to download the HBA management
tool from the HBA vendor. The following describes how to modify Emulex and QLogic FC
HBAs' port parameters.
https://www.broadcom.com/products/storage/fibre-channel-host-bus-adapters/
onecommand-manager-centralized#downloads
Step 2 Open the Emulex HBA management tool. The tool automatically detects the Emulex HBA ports on the local host. Select the HBA port, click the Driver Parameters tab, select the parameter that you want to modify, enter the value, and click Apply, as shown in Figure 7-3.
----End
Select options 13 and 15 and set the values of both parameters to 10. Then select 20: Commit Changes. After the configuration, check the HBA parameters.
----End
If UltraPath is installed, run the iscsiconfig get timeout value command in Command Prompt to query the iSCSI initiator driver timeout, and then run iscsiconfig set timeout xxx to specify the timeout value.
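For example (a sketch; 30 is an illustrative timeout value in seconds):
rem Query the current iSCSI initiator driver timeout
iscsiconfig get timeout value
rem Set the timeout to 30 seconds
iscsiconfig set timeout 30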
----End
Step 1 In Command Prompt, run the regedit command to open the Registry Editor.
Step 2 Choose HKEY_LOCAL_MACHINE > System > CurrentControlSet > Services > Tcpip
> Parameters.
Step 3 Check whether the TcpMaxDataRetransmissions parameter exists.
l If yes, set its value to 3.
l If no, right-click the blank area, choose New > DWORD (32-bit) Value, and create TcpMaxDataRetransmissions with its value set to 3.
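The same change can be scripted in Command Prompt (a sketch; 3 is the value recommended above):
rem Set the TCP data retransmission count to 3
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpMaxDataRetransmissions /t REG_DWORD /d 3 /f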
----End
C
CDFS CD-ROM File System
CHAP Challenge Handshake Authentication Protocol
CLI Command Line Interface
D
DM-Multipath Device Mapper-Multipath
E
Ext2 The Second Extended File System
Ext3 The Third Extended File System
Ext4 The Fourth Extended File System
F
FC Fibre Channel
G
GE Gigabit Ethernet
H
HBA Host Bus Adapter
I
IP Internet Protocol
ISM Integrated Storage Manager
iSCSI Internet Small Computer Systems Interface
L
LACP Link Aggregation Control Protocol
LE Logical Extent
LUN Logical Unit Number
LV Logical Volume
LVM Logical Volume Manager
M
MB Megabyte
N
NFS Network File System
P
PE Physical Extent
PV Physical Volume
R
RAID Redundant Array of Independent Disks
S
SAN Storage Area Network
V
VLAN Virtual Local Area Network
VG Volume Group
W
WWN World Wide Name
In Windows, disks are categorized as basic and dynamic disks. Only simple volumes can be
created on basic disks. Spanned volumes, mirror volumes, striped volumes, and RAID-5
volumes are created on dynamic disks.
RAID-5 volumes are not supported in desktop operating systems such as Windows XP,
Windows 7, and Windows 8.
In Windows Server 2008 and later versions, the operating system converts basic disks to
dynamic disks when spanned volumes (or other volumes that can only be created on dynamic
disks) are created on basic disks.
The definitions of the volumes are as follows:
l Spanned volume
A spanned volume is created on one or more disks and combines their space into a single volume. Spanned volumes are used to expand volume capacity.
l Mirror volume
A mirror volume is created on two or more disks. The member disks in a mirror volume mirror each other. Mirror volumes improve data reliability.
l Striped volume
A striped volume is created on two or more disks. The member disks in a striped volume are of the same size, and data written to the volume is striped: it is divided into several parts, which are written to the member disks in parallel. Theoretically, striped volumes help improve write performance and expand volume capacity.
l RAID-5 volume
A RAID-5 volume adds parity to a striped volume. Therefore, a RAID-5 volume has all the advantages of a striped volume and also ensures data reliability, because the volume can survive the failure of one member disk.
Windows volume management is simple. You can manage Windows volumes on a graphical
user interface (GUI).
The following uses Windows Server 2012 as an example to explain how to create a RAID-5 volume:
Step 1 After LUNs are mapped to the host, start Computer Management. Right-click Disk
Management and choose Rescan Disks from the shortcut menu, as shown in Figure 9-1.
Step 3 Right-click a disk and choose Initialize Disk from the shortcut menu. In the Initialize Disk dialog box that is displayed, select the disks that you want to initialize and the partition style. In this example, the MBR partition style is selected. The states of the selected disks then change to Online, as shown in Figure 9-3.
Step 4 Right-click a disk and choose New RAID-5 Volume from the shortcut menu. The New
RAID-5 Volume dialog box is displayed, as shown in Figure 9-4.
Step 5 Select the disks that you want to add to the RAID-5 volume, specify the capacity to use on each selected disk, and click Next. Then select a drive letter for the new RAID-5 volume, and specify the file system type, the allocation unit size, and whether to perform a quick format.
NOTE
A RAID-5 volume requires at least three member disks. After a RAID-5 volume is created, it spends a certain period of time in synchronization; the synchronization time grows with the volume capacity. Quick formatting is recommended.
A newly added disk must be initialized and formatted before use. The operating system creates partitions on a disk only after writing the disk identifier (also called a signature) and the MBR or GPT partition structures.
----End
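As an alternative to the Disk Management GUI, a RAID-5 volume can also be created with the diskpart utility (a sketch; disk numbers 1 to 3 and drive letter E are illustrative values; save the commands to a file and run diskpart /s <script file>, or enter them interactively without the rem lines):
rem Confirm the target disk numbers first
list disk
rem Bring each target disk online and convert it to dynamic (repeat for disks 2 and 3)
select disk 1
online disk noerr
attributes disk clear readonly
convert dynamic
rem Create the RAID-5 volume across the three dynamic disks
create volume raid disk=1,2,3
rem Format the new volume (focus moves to it automatically) and assign a drive letter
format fs=ntfs quick
assign letter=E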
10.1 WSFC
10.2 Veritas VCS
10.1 WSFC
10.1.1 Overview
Earlier Windows versions (such as Windows Server 2003) use Microsoft Cluster Service
(MSCS) to provide clustering functions. Windows Server 2008 and later versions use
Windows Server Failover Cluster (WSFC).
An MSCS cluster is a server group consisting of independent computers. Nodes in the cluster
work together as a single system to ensure that key applications and resources are always
available to clients. The clustering function enables users and administrators to manage nodes
as a whole instead of independent computers.
WSFC builds on MSCS and adds new functions, including the cluster validation wizard and support for GPT disks.
MSCS
An MSCS server cluster contains a maximum of eight nodes, and can be configured as either
of the following clusters:
l Single-node cluster
l Single-quorum device cluster
l Multi-node cluster
Each cluster node is connected to one or multiple cluster storage devices. In most Windows
Server 2003 Enterprise Edition or Windows Server 2003 Datacenter Edition versions, cluster
storage devices can be iSCSI, SAS, parallel SCSI, and Fibre Channel devices.
Table 10-1 lists the maximum number of nodes supported by different operating systems.
WSFC
A WSFC cluster is a group of independent servers that work together to improve the
availability of applications and services. WSFC provides infrastructure features that support
high-availability and disaster recovery scenarios for hosted server applications. If a cluster
node or service fails, the services that were hosted on that node can be automatically or
manually transferred to another available node in a process known as failover.
The nodes in a WSFC cluster work together to collectively provide these capabilities for the hosted applications and services.
10.1.2 Configuration
For details, visit:
https://technet.microsoft.com/en-us/library/cc731844(v=ws.10).aspx
10.2.1 Overview
Veritas Cluster Server (VCS) connects multiple independent systems into a management framework to improve availability. Each system (or node) runs its own operating system, and the nodes cooperate at the software level to form a cluster. VCS combines commodity hardware with intelligent software to provide failover and control for applications. If a node or a monitored application fails, the other nodes perform predefined operations to take over the services and restart them elsewhere in the cluster.