Welcome to CLARiiON AX100 Fundamentals. This course will provide the learners with an understanding of the
CLARiiON AX100-series hardware and software components. It covers the steps involved in replacing different hardware
components, as well as the steps to configure a new CLARiiON AX100-series array. This course also covers the topics
surrounding connectivity between the AX100 array and various hosts.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, CLARiiON, Navisphere, and Symmetrix are registered trademarks, and SnapView is a trademark of EMC
Corporation.
All other trademarks used herein are the property of their respective owners.
The AUDIO portion of this course is supplemental to the material and is not a replacement for the student notes
accompanying this course.
EMC recommends downloading the Student Resource Guide from the Supporting Materials tab, and reading the notes in
their entirety.
Course Objectives
These are the learning objectives for this course. Please take a moment to review them.
The EMC CLARiiON AX series extends the benefits of network storage, including consolidation, automation, and
advanced data protection, to a broader range of customers by providing a cost-effective alternative to direct attached
storage. The AX100 combines the advanced functionality and data protection features of CLARiiON's industry-leading
RAID architecture with high-capacity Serial ATA (SATA) drives to deliver a cost-effective, highly functional network
storage solution.
As an entry-level array, the CLARiiON AX100 series differs from the CX series in several ways, from the exclusive use of SATA drives to the number of supported LUNs and the cache size. The AX100 also enforces a maximum fan-out rule (the number of host HBA ports per SP port): viewed from the SP's perspective, each SP port supports 4 initiators, on both the single- and dual-SP models. Therefore, the maximum fan-out for an AX100SC is 8 initiators per array, and for the AX100 it is 16 initiators per array; the maximum number of HA hosts that can be connected to an AX100 dual-SP array is therefore 8. A server can be connected to a maximum of four (4) storage systems.
Note that the AX100 and AX100SC support a maximum of 1 ISL, or hop; therefore, the maximum number of switches in a fabric is 2. The Brocade Silkworm 3250 VL2 E comes pre-configured (zoned), so you must connect port 0 and/or port 4 to the SPs, and ports 1, 2, 3, 5, 6, and/or 7 to server HBAs. User-installable configurations consist of any direct-connected systems.
Navisphere Express software is an array-based management tool that consists of the FLARE Operating Environment and a
Web-based user interface (UI). Both are loaded at the factory on the AX100 Storage Processor. The Navisphere Express UI
is displayed in a common browser, and provides the following basic functionality:
• Security
• Storage Configuration and allocation
• Data redundancy
• Status and configuration information display
• Event management
• Data replication
• Path management
User Authorization: 1 Admin account for array; Admin, Manager, & Monitor roles for array; Admin, Manager, & Monitor roles for domain
Event Monitoring: SNMP and email from array; SNMP, pager, email, phone home; SNMP, pager, email, phone home
• SATA drives
• Suitcase
• Power Supplies
• Memory DIMMs
• Battery Backed Cache Module
• Fans
• UPS
The AX100 and AX100SC are rack-mountable storage-system enclosures, 3.5 inches (2U) high, that contain 3 to 12 Serial
Advanced Technology Attachment (SATA) disk drives. They use a Fibre Channel Arbitrated Loop (FC-AL) or Fibre
Channel Switch (FC-SW) as an interconnect interface to host servers. Navisphere® Express software manages the storage
systems from any qualified workstation on a shared Ethernet LAN. Sophisticated RAID (redundant array of independent
disks) technology and data caching prevent data loss in case of component failure. Redundant hardware options provide
levels of high availability usually restricted to much larger (and more expensive) storage systems. Besides economical disks, the AX100-Series includes the following major components. Please take a moment to review them:
• One (AX100SC) or two (AX100) storage processors, each with one CPU fan
• One serial port (RJ45 connector) for service
• One 10/100 Ethernet LAN port (RJ45 connector) for management
• For the AX100 only, one serial port (RJ45 connector) for uninterruptible power supply (UPS)
• One power supply per processor
• One front-end (FE) card per processor, each with two 2-Gbps Fibre Channel host ports (small form factor, SFF, connectors)
• A battery-backed cache card (AX100SC systems only)
• One 512-megabyte memory card per processor
• Four (AX100SC) or five (AX100) system fans
• One uninterruptible power supply (UPS) (AX100 systems only)
You can install, upgrade, or replace all of the major AX100-Series components without professional assistance. Front-end
cards are not replaceable in the field, but are part of the field-replaceable storage processor assembly.
The CLARiiON AX100/AX100SC storage systems contain a maximum of 12 drives. The slots are numbered 0 through 3 on
the top row, 4 through 7 on the middle row, and 8 through 11 on the bottom row. The AX100/AX100SC storage systems
accommodate 160 GB and 250 GB Serial ATA (SATA) disk drives. There is a green enclosure power light situated
between disks 2 and 3, and an amber enclosure fault light situated between disks 0 and 1. Each disk slot has a Status LED,
illuminated green to indicate activity and amber to indicate a fault.
Note: The first four disks are required to boot the AX100 Storage Processors, and the first three disks are required to boot
the AX100SC Storage Processor. These disks must remain in their original slots.
Removal of a hard disk requires the user to press in on the hard-disk drive latch to unlock it from the system chassis.
Shown here is the single Storage Processor rear view. It contains a single power supply that can be replaced without tools
but not in a “hot” state. Removal of the power supply in a single SP configuration will cause an immediate power shutdown.
The Battery Backed Cache Module (BBCM) is used to protect customer data that has not yet been committed to a disk
surface in the event of a hardware failure.
When the array is initially plugged into AC power, the array will power up. To perform a graceful power down, depress the
power button. To restart the array, press the power button again. To return the array to an un-initialized state, press and hold
the power button from the powered down state for approximately 4 seconds until the SP power LED comes on.
The SP Fault LED can also be used to monitor the SP boot sequence. It will blink once every four seconds during BIOS
testing, once a second during POST testing, and four times a second during OS and driver load. When the OS and all drivers
have successfully loaded, the LED will extinguish.
The NMI button is used to reboot the array and create core dumps that can be used by Engineering to troubleshoot array
problems. This button should not be used for normal power down operations.
This shows the proper removal of a power supply. Do the following before removing the power supply on a single SP AX100:
• Power down the system.
• Wait for the AX100 to fully power down. A full power down may take several minutes.
• Disconnect AC power from the system.
• Turn the securing latch to the right of the power supply 1/4 turn until the loop aligns vertically with the chassis.
NOTE: The dual processor SP suitcase has two power supplies and each power supply is hot swappable, provided the
remaining power supply is fully functional. Removing a power supply in a dual storage processor unit (AX100) will cause
the associated storage processor to shut down.
(Slide: rear and front views of the power supply.)
The power supply is common to both the AX100SC and the AX100. Note that it does not include a power switch and provides a single AC power-in connector. DC power is distributed to the storage system components through a connector on the rear of the power supply. The supplies are located at the rear of the storage system above the front-end connectors and contain their own cooling fans. The power supply in an AX100SC is NOT hot-swappable, whereas the power supply in an AX100 is hot-swappable.
Each 300-W power supply is an auto-ranging, power-factor-corrected, multi-output, off-line converter with its own line
cord. Each supply supports 12 disk modules, all system fans, and its associated CPU fan and storage processor. Systems
with two power supplies share 12-volt load currents to the disk drives.
A failed power supply prevents operation of a single-processor AX100SC. A dual-processor AX100 system with a failed
power supply continues operating in a degraded mode until the power supply is replaced. Do not connect the power cord to
an AC power source until the storage system, UPS, and switches are installed in the cabinet. The storage system will power
up as soon as the AC power is applied.
Each Storage Processor is equipped with two (2) Fibre Channel front-end ports for host or SAN attach. Each port runs at 2 Gb/s and has a link LED that is illuminated if receiving light. Each port presents a unique World Wide Port Name (WWPN) to
the attached devices (switches or hosts). Future AX products will include host front-end ports with iSCSI capability.
Example:
WWN seed – 00:60:01:b2
Resultant WWNN – 50:06:01:60:80:a0:01:31
Resultant SPA port 0 WWPN – 50:06:01:60:00:60:01:b2
Resultant SPA port 1 WWPN – 50:06:01:61:00:60:01:b2
Resultant SPB port 0 WWPN – 50:06:01:68:00:60:01:b2
Resultant SPB port 1 WWPN – 50:06:01:69:00:60:01:b2
As mentioned in the previous slide, each port on an AX100 Storage Processor presents a unique WWPN to attached
devices. This slide shows the breakdown of how bits are allocated for the AX100 front end ports. Please take a moment to
review the following:
BITS 127 – 124
• This value represents the IEEE WWN Type. It must be set to 0x5.
BITS 123 – 100
• This value is the CLARiiON Company ID assigned by the IEEE Committee. It must be set to 0x006016.
BITS 99 – 96
• This value must be set to 0.
BITS 95 – 64
• This value is the bitwise “OR” of 0x80000000 and the 32-bit WWN seed read from the resume PROM.
BITS 63 – 60
• This value represents the IEEE WWN Type. It must be set to 0x5.
BITS 59 – 36
• This value is the CLARiiON Company ID assigned by the IEEE Committee. It must be set to 0x006016.
BITS 35 – 32
• This value will differentiate multiple ports within the same node name. It will range from 0 to 15. The settings for
SP ports are as follows:
SPA – port 0 = 0; SPA – port 1 = 1; SPB port 0 = 8; SPB port 1 = 9
BITS 31 – 0
• This value is the 32-bit WWN seed read from the resume PROM mounted on the midplane board.
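The port-name layout above lends itself to a short worked calculation. The following minimal sketch (illustrative only, not an EMC tool) rebuilds the four front-end WWPNs from the 32-bit WWN seed used in the earlier example, using the port-selector values listed for bits 35 through 32:

def ax100_wwpn(seed, port_selector):
    # 64-bit WWPN = IEEE type 0x5 | CLARiiON company ID 0x006016 | port selector | 32-bit WWN seed
    value = (0x5 << 60) | (0x006016 << 36) | (port_selector << 32) | (seed & 0xFFFFFFFF)
    return ":".join(f"{b:02x}" for b in value.to_bytes(8, "big"))

seed = 0x006001B2  # WWN seed 00:60:01:b2 from the example above
for name, selector in [("SPA port 0", 0), ("SPA port 1", 1), ("SPB port 0", 8), ("SPB port 1", 9)]:
    print(name, ax100_wwpn(seed, selector))
# Prints 50:06:01:60:00:60:01:b2, 50:06:01:61:00:60:01:b2, 50:06:01:68:00:60:01:b2, and 50:06:01:69:00:60:01:b2.

The node name (bits 127 through 64) follows the same pattern, with bits 95 through 64 set to the bitwise OR of 0x80000000 and the seed.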
The 10/100 LAN (RJ45) connection is the Ethernet port used to manage the array. The IP address for this port must be set using the Navisphere Storage System Initialization Tool. The Console port is an RS-232 serial service port (RJ45, up to 115K baud, with modem support capabilities) that can be used both to initialize the array and to monitor the boot process.
Shown here is the dual SP model rear view. In the event of a power supply failure, the corresponding Storage Processor
will be powered down and the remaining power supply will provide power to its storage processor and all the disk drives.
The power supplies can be hot swapped. Note each SP has its own LAN port, Console port, Status/Fault LEDs, and NMI
Buttons, but only one power switch. The dual SP model also has an additional RJ-45 connection used to monitor the UPS. It
uses a serial com port to monitor the UPS state and provide for a means to test the batteries.
Before removing the suitcase from the chassis, follow the steps below:
• Power down the system.
• Wait for the AX100 to fully power down. A full power down may take several minutes.
• Disconnect AC power from the system.
• Turn the securing latches on either side of the storage processor assembly 1/4 turn until the loops align vertically with
the chassis.
Pull the unit straight out of the chassis towards you.
For proper removal of the SP suitcase cover, disconnect the power supply connector from the main SP board.
NOTE: The dual processor SP suitcase has two power supply connectors.
Depress the side latches and slide the SP suitcase cover forward. Grasp the suitcase cover and lift straight up to remove.
The AX100SC houses 5 fans, 4 positioned for disk drive cooling and a fifth positioned for CPU cooling. A single 512 MB
DIMM is mounted on the Storage Processor card. A 256 MB Battery Backed Cache Module (BBCM) is socket attached to
the Storage Processor. A portion of the 512 MB DIMM and the 256 MB BBCM represent a mirrored write cache pair. The
512 MB DIMM is partitioned by the Storage Processor into 3 areas.
Approximately 347 MB is assigned to the Windows XP operating system, Base software, Navisphere components, and advanced features including the CLARiiON SnapView option. It is also used as a buffer for non-cached I/O. Approximately 30 MB is allocated for read cache usage. Read cache is not mirrored to the BBCM. Pre-fetch algorithms allow the Storage Processor to fetch data from disk before it is actually requested by the server. Approximately 128 MB is allocated for write cache usage. Write cache is mirrored to the BBCM when write caching is enabled and is protected in the event of a power or Storage Processor failure. It allows a write-complete acknowledgment to be returned to the server before the data is actually committed to the disk surface.
Follow the steps below for proper removal of the BBCM board:
• Power down the system.
• Wait for the AX100 to fully power down. A full power down may take several minutes.
• Disconnect AC power from the system.
• Remove the SP suitcase.
• Remove the SP suitcase cover.
NOTE: Only the single SP suitcase contains a BBCM board. The dual SP suitcase requires a separate standby power supply.
Lift up on the plunger to unlock the BBCM board from the SP suitcase chassis.
Pull out on the BBCM card to separate it from the BBCM card socket.
Reverse the previous steps to replace the BBCM board.
In the AX100SC (Single Controller), the 256-megabyte Battery Backed Cache Module (BBCM) is used to protect customer
data that has not yet been committed to a disk surface, in the event of a hardware failure. This represents a mirror image of
the write cache on the Storage Processor card. It has two lithium-ion batteries with a 72-hour time limit and a 3-to-5-hour
recharge time. It is equipped with three (3) LEDs, which are visible through the Storage Processor air dam. The Blue LED
(left-most) indicates that data is stored in cache when the LED is illuminated. The Amber LED (center) indicates a fault
condition. The Green LED (right-most) indicates fully charged batteries when illuminated solidly, and charging batteries
when blinking. A graceful power down de-stages the cache data to the vault drives, so the BBCM is not needed and the
batteries are not discharged.
The above chart represents the condition of the LEDs and their various states. Please take a moment to review.
The AX100 houses 7 fans, 5 positioned for disk drive cooling and a fan positioned at each CPU for cooling. Two 512 MB DIMMs are mounted on the card, one for use by each of the Storage Processors. These two DIMMs represent a mirrored write cache pair. Each 512 MB DIMM is partitioned by its Storage Processor into 3 areas. Approximately 347 MB is assigned to the Windows XP operating system, Base software, Navisphere components, and advanced features including the CLARiiON SnapView option. It is also used as a buffer for non-cached I/O. Approximately 30 MB is allocated for read cache usage. Read cache is not mirrored to the peer DIMM. Pre-fetch algorithms allow the Storage Processor to fetch data from disk before it is actually requested by the server. Approximately 128 MB is allocated for write cache usage. Write cache is mirrored to the peer DIMM when write caching is enabled and is protected in the event of a power or Storage Processor failure. It allows a write-complete acknowledgment to be returned to the server before the data is actually committed to the disk surface.
Before removing the DIMM, follow the proper procedures listed below:
• Power down the system.
• Wait for the AX100 to fully power down. A full power down may take several minutes.
• Disconnect AC power from the system.
• Remove the SP suitcase.
• Remove the SP suitcase cover.
• Push the ejector tabs outward to release the memory module.
• Lift the memory module straight up and out of the SP suitcase.
NOTE: The single SP suitcase contains one DIMM, and the dual SP suitcase contains two DIMMs.
Air flow on the AX100 array is from front to back. For proper cooling, all CRUs (Customer Replaceable Units) must be installed. Please read and follow the guidelines below for proper removal of the CPU fans.
• Use disk filler panels when fewer than 12 drives are installed.
• Removal of the power supply in a single SP configuration will cause an immediate power shutdown (the cache is battery-backed).
• Removal of one power supply in a dual SP configuration will power down the corresponding SP (the remaining power supply holds up the peer SP and all drives).
Before removing the CPU fan:
• Power down the system.
• Wait for the AX100 to fully power down. A full power down may take several minutes.
• Disconnect AC power from the system.
• Remove the SP suitcase.
• Remove the SP suitcase cover.
• Disconnect the fan power cable from the SP board.
Lift the fan straight up and out of the SP suitcase chassis.
Reverse the previous steps to replace the CPU fan.
AX100 storage systems require an uninterruptible power supply (UPS). If the AX100 loses power from both power supplies
(for example, during a site-wide power outage), the UPS provides battery power that keeps the system running on SP A.
The AX100 writes cached data from SP A to disk, and then continues operating as a single-SP system (with cache disabled)
for the life of the battery or until site power is restored. For a detailed description of the UPS supplied with your AX100
system, refer to the vendor documentation for the uninterruptible power supply. For other information, contact your AX100
sales and/or service representative.
• Host Software
– Navisphere Storage System Initialization
– Navisphere Server Utility
– PowerPath
• Storage System Software
– FLARE Operating Environment
– Navisphere (Agent)
– ManagementServer
– Base
– SharedStorageControl
– EMCRemote
– SnapView
– SnapshotManagement
On the host side, the AX100 uses two wizards: the Navisphere Storage System Initialization tool and the Navisphere Server Utility. These are used to initialize AX100/AX100SC storage systems, register hosts, and perform SnapView functions with the array so that virtual disks and Snapshots may be managed and assigned to the hosts. The Navisphere Storage System Initialization tool can be executed from a management station, and the Navisphere Server Utility must be executed from each host that will access virtual disks and Snapshots on the array.
PowerPath multipathing software is installed on the hosts in all AX configurations. Supported storage software functionality includes Navisphere Express (bundled with each AX100/AX100SC), PowerPath Base (bundled with AX100/AX100SC), Access Logix (bundled with AX100/AX100SC), and SnapView (bundled with AX100/AX100SC).
FLARE Operating Environment – The EMC CLARiiON-tailored Microsoft Windows Embedded operating system and EMC
CLARiiON array operating environment. The Navisphere software package receives commands from the Navisphere
Server Utility and responds with information to the Navisphere Server Utility. Take a moment to review the additional
software packages and their functions below:
ManagementServer – Provides the web-based functionality, supports Providers, and interfaces with the FLARE operating
environment.
Base – Key software package that provides much of the AX100/AX100SC functionality.
SharedStorageControl – Optional license which enables Logical Unit masking at the array.
EMCRemote – Application that permits storage processor operating system level access for service and support personnel.
SnapView – Provides the ability to create instantaneous point-in-time copies of virtual disks as an aid to backup operations.
SnapshotManagement – Optional license which enables the SnapView functionality.
Software Deployment
(Slide diagram: a management station runs a browser and the Initialization Utility and connects to the AX100 management server over Ethernet/HTTP; Server A and Server B run the Server Utility and connect to the AX100 over a Fibre Channel fabric.)
The above slide shows how the various software components are deployed on an AX100 array. The Navisphere Storage
System Initialization Utility is included on the CD that ships with the AX100-Series storage system. You can run this utility
from the CD, or you can install it on a server that is connected to the storage system. It communicates with the storage
system over the ethernet connection. We recommend that you install the utility on at least one server connected to the
storage system. The initialization utility uses a wizard as its user interface. The easy-to-use wizard guides you through the
steps required to initialize your AX100-Series storage system. Please read the note below:
Note: Before using the Navisphere Storage System Initialization Utility, make sure that you know the static IP addresses for
the storage system SPs (for example, 123.45.6.7 and 123.45.6.8), the subnet mask, and the default gateway address.
For storage system initialization, connect a server and the storage system to the same network subnet. Once you assign an IP
address to the storage system, the server and storage system can be on different subnets.
The Navisphere Server Utility is included on the CD that ships with the AX100-Series storage system. It, too, can be run
from the CD, or you can install it on the servers that are connected to the storage system. It communicates with the storage
system via the Fibre Channel connections. We recommend that you install the utility on each server connected to the storage
system. The Navisphere Server Utility provides two functions: server registration, and, for Microsoft Windows servers,
snapshot usage. One function provided by the management server is the Navisphere UI which is delivered to the
management station as http and is viewed via a browser. Use the below guidelines when running these utilities:
• For all servers, use the Update Server Information feature to initially send the server name and IP address to the
storage system and, if needed, to update this data later on.
• For attached Microsoft Windows servers, use the server utility to start and stop snapshots on the source server (server
assigned to the source virtual disk), or to allow or remove access to the snapshot by the secondary server (server
assigned to the snapshot).
(Slide diagram layers: HTML pages, server framework, RAID++.)
The Management Server framework is composed of several layers: a web server to provide HTTP access, an encoding layer to translate CIM/HTML, and a CIMOM object manager. The providers are designed to collect data and feed it into the CIMOM. The Management provider is responsible for managing all configuration aspects of the Navisphere infrastructure, including handling all RAID-specific get/set operations. The Security provider is responsible for authenticating users to the array and allowing a user, based on a security ID, to make requests on objects in the CIMOM. The Log provider is a centralized notification system designed to store, process, and forward critical events via the following mechanisms: email (via an SMTP server) and SNMP trap notification. Finally, the Directory provider is responsible for maintaining the directory structure of the operating environment and indexing the contents.
• Step-by-step process
• Eleven-step “Quick Start”
All AX100/AX100SC storage systems are shipped with a Quick Start guide to be used for installation. Most of the steps
outlined here will be addressed in greater detail later in this course. The following series of slides will take you through the
required steps to successfully configure an AX100 array.
Please take a moment to read through the next few slides covering the required steps to prepare the array.
Step 1
• The AX100/AX100SC storage systems require 120V AC using a standard 3 prong plug (NEMA 5-15).
• Each Storage Processor (SP) requires a LAN connection, and the management station that will initialize the storage
system(s) must be on the same physical subnet as the storage system(s).
• Optical cables with small form factor (SFF) connectors must be available when physically connecting the servers to
the storage systems.
• Each SP requires a unique static IP Address.
Step 2
• The customer must make arrangements for the installation of supported HBAs in the servers.
• The customer must make arrangements for the installation of a supported HBA driver in each server.
• PowerPath for AX100/AX100SC needs to be installed in each server.
Step 3
• The AX100/AX100SC storage systems are heavy and unwieldy; it is advised that assistance be sought when unpacking the units from the shipping cartons.
Step 4
• The rails must be installed in the rack and the AX100 installed on the rails. The rails and cabinet may be EMC
supplied or could be one of many that meet EMC standards.
• If a SAN environment is to be installed, the rails for the switch(es) must be installed and the switch(es) installed on
the rails.
• If the storage system being installed is an AX100 (dual processor), the UPS unit must be installed in the cabinet.
Step 5
• Always use the strain reliefs when connecting the power cords to the storage system power supplies. If the storage
system is an AX100 and the UPS is installed, connect only one storage system power supply to the UPS unit. The
second power supply should be connected to another source.
Step 6
• Power up the devices in the cabinet. No particular order is required. If this is the first power up for the storage
system, power up will commence automatically. If the storage system had been previously powered, and power down
was accomplished via the power switch, the power switch must be used to power up.
Step 7
• Connection of the servers may be in either a direct attach storage (DAS) configuration or a storage area network
(SAN) configuration. Optical cables will be used in all cases.
• A LAN connection must be provided between the storage processors and a management station for the purposes of
initialization, management and monitoring.
Step 8
• The initialization utility must be installed on a management station. The management station need not be dedicated
and may also be a production server attached to the storage system.
• The initialization utility must be executed and each AX100/AX100SC storage system must have its unique network
parameters configured. Initialization is performed via the ethernet connections to the storage processor(s) and does
not require fibre channel connections.
Step 9
• Each server planned to have access to the AX100/AX100SC storage systems must have the server registration utility
installed.
• Each server planned to have access to the AX100/AX100SC storage systems must have the server registration utility
executed. This utility will register the host HBAs with the storage system database via the fibre channel cables.
Step 10
• The storage system is now ready for configuration. Disk Pools must be created before Virtual Disks can be created.
• The optional single hot spare can be created at any time.
• Storage can be assigned to servers either at the time of the Virtual Disk creation or at a later time.
• Event notification can be configured to send e-mail messages to whomever the customer deems appropriate.
Step 11
• The final step in the installation process is enabling the servers to view the assigned storage. The procedure varies by operating system.
• Pre-installation
• Installation
• Configuration
• Uninstalling
PowerPath is host-based multipathing software used to manage host connections to the virtual disks in the storage subsystem. The next few slides will take you through the steps needed, from pre-installation through uninstalling PowerPath.
PowerPath software is installed on the server and provides path management for the connection paths between a server and the virtual disks in the storage system. It transfers I/O to a working path if one path fails, and provides load balancing to distribute the I/O load equally among paths. For more detailed information on path management software, go to the AX100 support website, or the AX100-Series Documentation CD, and under Technical Descriptions, select PowerPath. The admsnap utility is a server-based command line interface. Complete the items listed here to install PowerPath.
Take a moment to read the following notes for proper configuration.
First, set up the host, which includes installing the HBA and drivers. Check the PowerPath documentation, HBA vendor
documentation, and EMC Support Matrix to make sure that the correct driver and firmware revisions are installed on the
HBA and that the current HBA settings are appropriate for use with PowerPath.
Next, install PowerPath on the host. Prior to installing PowerPath, review the PowerPath Product Guides and Release Notes to understand the installation procedure and the latest problems and fixes. PowerPath product guides and Release Notes are shipped with the equipment. The documentation can also be downloaded from the CLARiiON AX100 website at http://staging.emc.com/products/systems/CLARiiON/ax100/support/instructions.jsp#troubleshoot. Release notes can be obtained from Powerlink; choose the Powerlink link from the www.emc.com website. Release notes are updated periodically; they are cumulative and include information about every PowerPath point release, new system and environment requirements, and all known and fixed bugs.
For PowerPath installation, refer to the Product Guide section entitled Installing PowerPath. If prompted to, reboot the host
when the PowerPath installation completes. Be aware that PowerPath will not have AX series devices in its configuration
because the array is not attached.
Next, set up the AX series array. Follow the procedures described in the AX series array manuals to configure the array and
attach host systems to the array. As part of the AX array setup, use Navisphere Express to assign devices to the host.
To boot from the AX series array, use the instructions in the Microsoft Administrative documentation library describing
how to set up boot devices on volumes. Follow the Microsoft instructions before installing PowerPath, unless otherwise
noted in the PowerPath documentation.
The last step is to configure the newly presented devices into PowerPath. This can be accomplished by a host reboot or by using the powermt config CLI command. Administrators should verify that all the devices assigned to the host using Navisphere Express are included in the PowerPath configuration. This can be accomplished by running powermt display dev=all. If devices are not in the PowerPath configuration, review the array configuration section of the product guide to ensure the setup tasks have been followed.
The instructions for installing PowerPath on hosts running Windows 2000 and 2003 can be found in the Installing
PowerPath section of the product guide titled AX100 Series Installing a Storage System in a Switch Connection to a
Windows Server. This screen lists some considerations to be aware of when installing PowerPath on Windows systems.
Read the notes below to install PowerPath on a Windows host.
First, locate the installation CD and registration key that was shipped with the AX series array. To install PowerPath on a
Windows host, choose the installation setup.exe file and accept all the defaults. If the EMC Licensing Tool is displayed,
click Add and OK. When the Setup Wizard prompts to reboot the host, click No. PowerPath should now be installed as
shown by the PowerPath Administrator icon in the Windows taskbar. Next, attach the AX series array to the host and use
Navisphere Express to configure a number of disks to the host. Last, follow the instructions in the Testing PowerPath
section of the guide to start the PowerPath Administrator GUI and verify that the devices have been correctly configured
into PowerPath. On Windows hosts PowerPath automatically places devices under its control. If necessary, apply any
patches that are available for the release of PowerPath that is installed. Review the patch release notes before installing the
patch.
To uninstall PowerPath, first close all applications and client files to avoid warning messages when rebooting after the
uninstall. Next, open the Windows Control Panel, double-click Add/Remove Programs. Then select EMC PowerPath and
choose Remove. Follow the instructions in the PowerPath for Windows Installation Guide for removing PowerPath. When
the uninstall completes and you are asked to reboot the host, choose No. Manually shut down the host by choosing Start >
Shut Down. Choose Shut Down from the list box in the Shutdown Windows window and click OK. For Windows 2003, a
Comment must be entered for the Shutdown. When the server is completely powered down, disconnect the redundant
cables. Use Navisphere Express to deregister all HBAs from the host, then power up the server.
The slide lists some considerations to be aware of when installing PowerPath on systems running Red Hat and SUSE Linux.
To install PowerPath on hosts running Red Hat or SUSE distributions of Linux, follow the instructions in the AX100-Series
product guide Installing a Storage System in a Direct Connection to a Linux Server. First, locate the installation CD and
registration key that was shipped with the AX series array.
The “Before You Install” section contains information on PowerPath pre-installation tasks. The server should also have the
HBAs installed with the correct driver and firmware. All previous versions of PowerPath must be removed before installing
PowerPath with AX series array support.
Next, install PowerPath following the instructions in the AX Series Installation Guide. When the installation is complete,
follow the instructions in the Testing PowerPath section of the installation manual. These instructions include how to
register PowerPath and how to configure PowerPath.
Next, attach the AX series array to the host and use Navisphere Express to configure a number of disks to the host. Finally,
follow the instructions in the Testing PowerPath section to verify PowerPath is successfully installed.
Please follow the guidelines listed in the slide for the proper steps necessary to remove PowerPath from Linux Hosts.
First, stop all executing PowerPath CLI commands. Some commands, such as the powermt display command, can be killed; others, such as powermt config, need to complete and therefore should not be killed.
Next, verify that no PowerPath devices are in use. This may require stopping an application and un-mounting the
filesystems.
Next, manually unload the Navisphere agent. This is only necessary if the host has the Navisphere Agent installed.
Next, stop all I/O and close all applications.
Manually remove references to PowerPath pseudo devices from system configuration files. This may require editing /etc/fstab and removing any references to mounting filesystems that are built on PowerPath devices.
Uninstall PowerPath by following the procedure listed in the section “Removing PowerPath” in the PowerPath for Linux
Installation Guide. With the host down, disconnect all paths to the array. Use Navisphere Express to deregister all HBAs
from the host and then power up the host.
The instructions for installing PowerPath on hosts running Netware can be found in the AX100 Series product guide
Installing a Storage System in a Direct Connection to a NetWare Server. Also check EMC’s Powerlink Web site (powerlink.emc.com) for PowerPath patches and notices. Review the patch readme files to determine which patches (if any) you want to install after PowerPath, and whether those patches have any additional prerequisites that must be met before installing PowerPath.
The slide lists some considerations to be aware of when installing PowerPath on Netware systems. Prior to installing
PowerPath, locate the PowerPath for Netware installation CD and registration key.
Install the HBAs and drivers in the Netware host.
Next, install PowerPath following the instructions in the Installing PowerPath section of the AX100 Series Installation
guide. When the installation is complete, you are prompted to reboot the server. Select Shut down to complete the
installation. Power up the server and proceed to the instructions on how to attach the AX series array to the host. Use
Navisphere Express to configure a number of disks to the host.
Finally, follow the instructions in the Testing PowerPath section to verify the installation.
After a successful PowerPath install, use the emcpreg CLI command to install a license key. The CLI command is also
useful for updating expired license keys. The emcpreg CLI command is available on all versions of PowerPath, except
Windows 2000 and 2003.
Note: on some platforms, PowerPath installs will ask for a license key during the installation process.
The example shows the emcpreg CLI command usage output, which is displayed when emcpreg is run with invalid syntax. In the example, the usage syntax is displayed after running emcpreg.
Take a moment to look below at some of the more useful options (opts) accepted by the emcpreg command:
• Check key – reports whether the current key is valid or invalid.
• Add key – adds a new license key
• Install – installs a new key. This option is used when adding a key for the first time.
At the bottom of the example, emcpreg -install is run to install a new key. If this were a real key, the xxxx-xxxx-xxxx-xxxx would be replaced by the real key.
The example shows the EMC PowerPath Licensing Tool for Windows 2000 and 2003. The tool is launched during
PowerPath installation. The tool can also be manually launched by running:
C:\Program Files\EMC\PowerPath\EmcLicTool.exe .
Enter the key in the box as shown and click on Add. If the key is valid, it will be moved to the top pane.
Before installing PowerPath, check the EMC CLARiiON AX100 support web site to verify whether there are patches for
the version of PowerPath you are installing.
Patch releases are identified by a three-digit version number, Major.Minor.Patch. In a full product release, the value of Patch is zero (0). An example of a full release is 3.0.0.
Patch filenames are formatted as EMCPP.Platform.Major.Minor.Patch[.Build], where:
Platform - One of the following: W2000, W2003_32, or W2003_64.
Major - A one-digit major version number.
Minor - A one-digit minor version number.
Patch - A one-digit patch number.
Build - An optional build number.
Therefore, the patch EMCPP.W2000.3.0.2.b100 is the second patch for PowerPath 3.0 for Windows 2000 hosts.
PowerPath patches typically are distributed as WinZip files with a filename suffix of .zip. An example of a patch file is
EMCPP.W2000.3.0.2.b100.zip. Be sure to download the patch readme along with the patch. The readme contains details
about the patch along with instructions on how to install the patch.
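Because the patch naming convention above is regular, it can be parsed mechanically. The short sketch below is illustrative only (the regular expression and function name are not part of any EMC tool); it splits a patch filename into its platform and version fields:

import re

PATCH_NAME = re.compile(
    r"^EMCPP\.(?P<platform>W2000|W2003_32|W2003_64)"
    r"\.(?P<major>\d)\.(?P<minor>\d)\.(?P<patch>\d)"
    r"(?:\.(?P<build>b\d+))?(?:\.zip)?$"
)

def parse_patch_filename(filename):
    # Returns the Platform, Major, Minor, Patch, and optional Build fields.
    match = PATCH_NAME.match(filename)
    if match is None:
        raise ValueError("not a recognized patch filename: " + filename)
    return match.groupdict()

print(parse_patch_filename("EMCPP.W2000.3.0.2.b100.zip"))
# {'platform': 'W2000', 'major': '3', 'minor': '0', 'patch': '2', 'build': 'b100'}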
Occasionally, a patch may be distributed as a full install of the product. In this case, the filename for the patch will be
EMCpower.Platform.Major.Minor. An example of a patch file name is EMCpower.LINUX.3.06.
A PowerPath patch can be applied only to a full release or an earlier (lower numbered) patch release with the same Major
and Minor number. The Readme file describes the contents of the patch and provides installation instructions. Always
follow the directions in the readme to install a patch. Some patches are cumulative, meaning that they include all previous
patches, and others are standalone. The patch readme will include these details, along with bug fixes included in the patch.
Before applying a patch, stop all applications and back up the PowerPath configuration file. Perform this task via CLI by
running the powermt save command or via the GUI by right clicking on the EMC PowerPathAdmin root node, choosing
All Tasks > Save Config for Reboot.
To restore the configuration file, use the powermt load command or choose Load Config File after right clicking on the
EMC PowerPathAdmin node. The Administrator may want to restore a configuration in the unlikely event of uninstalling
the patch.
Both the powermt save and powermt load CLI commands accept a filename, for example, powermt save ./ppconfig_010104. If the file name is not included, then the default filename is powermt custom. After installing a patch
and verifying that the configuration is correct, use the PowerPath CLI or PowerPath Administrator GUI to save your
configuration file. Specify a new filename to prevent overwriting the original pre-patch configuration backup.
The AX100 series array supports many types of topologies and configurations. As always, you should consult the latest
EMC Support Matrix for the AX100. In the following slides, we will look at the configuration rules for installing an AX100
array, as well as some typical installation topologies.
AX Series Configurations
Max Virtual Disks: 256 (AX100), 256 (AX100SC)
This slide shows the various configurations supported by the AX100 and AX100SC arrays. Please take a few moments to read the configuration rules (a short validation sketch follows the list):
Note: In this document, array means storage system.
• A total of 8 servers are supported per storage system.
• A total of 8 Host Bus Adapters (HBAs) are supported on single-SP storage systems.
• A total of 16 HBAs are supported on dual-SP storage systems (8 per SP).
• A total of 4 storage systems per server are supported.
• A total of 256 virtual disks per storage system are supported.
• User installable HBAs are limited to the Emulex LP101-E and the Qlogic QLA200-E.
• HBAs from different manufacturers cannot be mixed in the same server.
• User installable switches are limited to the Brocade Silkworm 3250 VL2 E. Pre-configured zoning on the VL2 E
must be used to be user installable. All other supported switches are not user installable.
• Switches from different manufacturers cannot be mixed in the same SAN.
• If an AX100 storage system is connected to a server, no other type of storage system can be connected to that server.
• A server must be connected to a storage system either directly or through a switch. A server CANNOT be connected
to the SAME storage system both directly and through a switch.
• All direct connect configurations are user installable.
• Any switch configuration utilizing an Inter Switch Link (ISL) is not user installable. No more than 2 switches can be
connected with an ISL.
• In any supported multiple-server configuration, the servers can run the same operating system or any combination of supported operating systems.
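As a quick illustration of these limits (not an EMC tool; the function and threshold names here are ours), the sketch below checks a proposed configuration against the server, HBA, and virtual-disk limits listed above:

def check_ax100_limits(servers, hbas, virtual_disks, dual_sp):
    # Returns a list of violated rules; an empty list means these basic limits pass.
    problems = []
    if servers > 8:
        problems.append("more than 8 servers per storage system")
    max_hbas = 16 if dual_sp else 8  # fan-out: 16 initiators on a dual-SP AX100, 8 on an AX100SC
    if hbas > max_hbas:
        problems.append("more than %d HBAs (initiators) on this array" % max_hbas)
    if virtual_disks > 256:
        problems.append("more than 256 virtual disks per storage system")
    return problems

# Example: a dual-SP AX100 with 6 servers, 12 HBAs, and 40 virtual disks passes these checks.
print(check_ax100_limits(servers=6, hbas=12, virtual_disks=40, dual_sp=True))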
SnapView Configurations
This slide lists the SnapView configuration limits for an AX100. Below is a list of rules to follow when using SnapView Snapshots with various topologies. Please take a few minutes to review these.
• The snapshot feature requires that the source and secondary servers are connected to the same SP. PowerPath must be installed on all attached servers for all configurations.
• A clustered configuration requires one or more HBAs per server and a single-SP or dual-SP storage system. 2 to 8 nodes are supported in a clustered configuration.
• All nodes in a cluster configuration must run the same operating system. Diagrams may show other operating systems, but dissimilar operating systems cannot be clustered. All nodes in a cluster configuration must be connected to the same SPs.
• The installation of an operating system on a storage system, for the purpose of booting the server from the storage system, is not a user installable option.
• Boot from a storage system on clustered servers running Windows 2000 requires a dedicated boot HBA.
• Boot from a storage system on clustered servers running Windows 2003 DOES NOT require a dedicated boot HBA.
Backup Support
EMC's testing and qualification responsibilities only apply to SAN (switch) attached backup devices. SAN attached backup
configurations include a backup device connected to a port on the switch. These configurations are not currently customer
installable. Please refer to the EMC ESM for SAN attached backup supported configurations.
For tape devices connected directly to a backup host, EMC strongly recommends that the component vendor’s supported
configuration matrices are reviewed (host configuration matrix for compatibility with SCSI or Fibre Channel interface used
to connect to tape device, tape device vendor for compatibility with selected SCSI or Fibre Channel interface and backup
software vendor for compatibility with the tape device).
PATH Definition
A path is a connection between an HBA port and a target SP port in a storage system. Each HBA that is directly connected to, or zoned through a switch to, an SP port is considered an initiator of that SP port. An HBA port can be zoned through
different switch ports to the same SP port or to different SP ports, resulting in multiple paths between the HBA port and an
SP.
Netware: Netware 5.10 – NCS 1.6 for Netware 5.10 (max 8 HA nodes) – QLA2340; Netware 6.5 – NCS 1.7 for Netware 6.5 (max 8 HA nodes) – QLA200-E.
Please see EMC AX 100 Support Configuration Guide (P/N 300-00-166 rev x) for complete supported configurations.
Supported Switches
• Brocade Silkworm 3200 (DS-8B2-ODL) – 8 ports (1 e-port), 2 Gbit switch
• Brocade Silkworm 3250VL2E – 8 ports (1 e-port), 2 Gbit switch
• McDATA Sphereon 4300 (T12) – 12 ports, 2 Gbit switch
The above list shows currently supported switches. User installable switches are limited to the Brocade Silkworm 3250
VL2 E. Pre−configured zoning on the VL2 E must be used. All other supported switches are not user installable. Switches
from different manufacturers cannot be mixed in the same SAN. If an AX100 storage system is connected to a server, no
other type of storage system can be connected to that server. A server must be connected to a storage system, either directly
or through a switch. A server CANNOT be connected to the SAME storage system, both directly and through a switch. All
direct connect configurations are user installable. Any switch configuration utilizing an Inter Switch Link (ISL) is not user
installable. No more than 2 switches can be connected with an ISL.
Figure 1:
Here is a server with a single HBA connected to one port on the storage processor. This is not highly available. There are
two single points of failure: the HBA and the SP. PowerPath in a Windows environment will tell you when one path has
failed.
Figure 2: Two servers, each with a single HBA connected to the same SP through different ports. Two single points of
failure for each server: HBA and SP. However, two servers have access to the array. PowerPath on each Windows server
only tells you if that one path has failed.
Servers can be clustered if they are running the same operating system.
Figure 3: One server with two HBAs connected to a port on each SP. Highly available configuration with one path to each
SP. With PowerPath on a server, it can reach any virtual disk if one SP fails or one HBA fails.
Figure 4: One server with four HBAs connected to an SP port on the two SPs. This is a highly available configuration with
two paths to each SP (multipath). With PowerPath on a server, it can reach any virtual disk if one SP fails or up to three
HBAs fail.
Figure 5: Two servers with one HBA on each, connected to an SP port on the two SPs. Not highly available. PowerPath on
a server tells you if the path has failed.
Figure 6: Four servers with one HBA on each connected to an SP port on the two SPs. Not highly available. Each server has
only one path to a storage processor. PowerPath on a server tells you if one path has failed.
Figure 7: Two servers with two HBAs, each connected to an SP port on the two SPs. Highly available configuration where
each server has one path to each storage processor. With PowerPath on a server, the server can reach any virtual disk if one
SP fails or one HBA fails.
If the servers are running the same operating system, they can be clustered.
Figure 8: SAN with a Brocade 3250VL2E and four servers (up to six supported), each with one HBA. Each server can
reach a port on either SP (see switch zoning explanation below). The server cannot access the storage system if its HBA
fails or if the switch port it connects to fails. With PowerPath on a server, the server can reach any virtual disk if one SP
fails.
If the servers are running the same operating system, they can be clustered.
Preconfigured zoning allows each HBA to reach both storage processors. This illustration shows the Brocade 3250VL2E,
where ports 0 and 4 are for storage system ports only. All other switch ports are for HBAs only. Storage system ports
connected to port 0 and port 4 can reach all HBAs connected to ports 1, 2, 3, 5, 6, and 7.
Figure 9: Three servers with two HBAs, each can reach one SP port on both SPs. Highly available configuration where each
server has one path to each storage processor SP (see switch zoning explanation below). With PowerPath on a server, the
server can reach any virtual disk if one SP fails or one HBA fails.
If the servers are running the same operating system, they can be clustered.
Preconfigured zoning allows each HBA to reach both storage processors. This illustration shows the Brocade VL2 E, where
ports 0 and 4 are for the storage system ports only. All other switch ports are for HBAs only. Storage system ports
connected to port 0 and port 4 can reach all HBAs connected to ports 1, 2, 3, 5, 6, and 7.
Figure 10: SAN with two Brocade 3250VL2E switches and three servers (up to six supported). With two HBAs per
server, each can reach a port on each SP. Highly available configuration where each server has one path to each storage
processor. With PowerPath on a server, the server can reach any virtual disk if one SP fails, or one HBA fails.
You also can connect the remaining SP ports A1 and B1 to port 4 on each switch to create more high availability (each
server will still be able to reach virtual disks if a switch fails, because both SPs are connected to alternate switches). Also, if
a server with a single HBA is connected to one of the switches, it will have access to both SPs.
If servers are running the same operating system, they can be clustered.
Figure 11: Three servers each with one HBA. Each server can reach an SP port on each SP (see switch zoning explanation
below). With PowerPath on a server, the server can reach any virtual disk if one SP fails. While each server has one path to
each storage processor, the server cannot access the storage system if its HBA fails or if the switch port to which it connects
fails. This configuration allows you to connect the maximum 8 servers to an array.
If the servers are running the same operating system, they can be clustered.
Preconfigured zoning allows each HBA to reach both storage processors. This illustration shows the Brocade VL2 E, where
ports 0 and 4 are for the storage system ports only. All other switch ports are for HBAs only. Storage system ports
connected to port 0 and port 4 can reach all HBAs connected to ports 1, 2, 3, 5, 6, and 7.
Figure 12: Three servers with two HBAs, each can reach both SP ports on each SP. Highly available configuration where
each server has two paths to each storage processor. With PowerPath on a server, the server can reach any virtual disk if
one SP fails, a port on the SP fails, or one HBA fails. In addition, with redundant switches, each server will still be able to
reach virtual disks if a switch fails as both SPs are connected to alternate switches.
If any servers are running the same operating system, they can be clustered.
Figure 13: Three servers with two HBAs, each can reach one SP port on both SPs. Highly available configuration where
each server has one path to each storage system SP (see switch zoning explanation below). With PowerPath on a server, the
server can reach any virtual disk if one SP fails or one HBA fails. An additional fourth server with two HBAs is directly connected to each SP on the storage system. Because all available ports on the switch are occupied, this configuration
allows maximum server utilization.
If the servers are running the same operating system, they can be clustered.
Preconfigured zoning allows each HBA to reach both storage processors. This illustration shows the Brocade VL2 E, where
ports 0 and 4 are for the storage system ports only. All other switch ports are for HBAs only. Storage system ports
connected to port 0 and port 4 can reach all HBAs connected to ports 1, 2, 3, 5, 6, and 7.
(Slide diagram: multiple server HBAs connected through dual fabrics to SP A and SP B.)
Dual fabrics provide a high availability environment. Please read the following guidelines when considering a dual fabric
approach.
• Number of Servers and Storage Systems - As many as the available switch ports, provided each server follows the
fan-out rule below and each storage system follows the fan-in rule below, using WWPN switch zoning with a
maximum of 1 ISL (e-port) hop.
• Fan-In Rule - A server can be zoned to a maximum of 4 storage systems.
• AX100SC Fan-Out Rule – Single SP system supports a maximum of 8 HBAs (Initiators).
• AX100 Fan-Out Rule – Dual SP system supports a maximum of 16 HBAs (Initiators).
• Number of Switches – 1 to 4. Fabrics are limited to 2 switches per fabric; only 1 ISL or e-port (one “hop” between
switches).
• Failover – PowerPath required for Win2k, Win2k3, Linux, and NetWare. SP supports 2 Active FC Links.
Note: PowerPath Base does not support multi-path load balancing when connecting to both FE ports on an SP, but standard/full PowerPath does.
Zone 1: Host A to CX
Zone 2: Host B to AX100
Zone 3: CX SAN Copy to/from AX100 client
(Slide diagram: the SP A and SP B ports of both the AX100 and the CX-series array connect to the fabric.)
The above diagram shows zoning between an AX100 series array and a CX series array. Host A in zone 1 is zoned to the CX array, host B in zone 2 is zoned to the AX array, and zone 3 runs SAN Copy between the arrays.
Please refer to the same guidelines listed with the previous dual-fabric configuration; the number-of-servers, fan-in, fan-out, switch-count, and failover rules are unchanged.
Course Summary
These are the key points covered in this training. Please take a moment to review them.
Appendix A
CLARiiON AX100i Hardware Differences
Welcome to the AX100i Hardware differences overview. This section will introduce the AX100i series array with native
iSCSI connectivity.
AX100i Introduction
Flare Release 16 introduces the AX100i with native iSCSI support, as well as RAID 1/0. The CLARiiON AX100i provides
lower-cost storage connectivity options with hallmark CLARiiON attributes, including proven architecture, high
availability, flexible local replication, easy-to-use management, and affordability. New iSCSI options enable customers to
benefit from consolidating more of their storage environment. iSCSI (Internet Small Computer Systems Interface) allows IP
(Internet Protocol) networked SANs to be built with low-cost Ethernet-networking components, leading to the following
benefits:
• Greater usage of storage assets
• Improved application availability
• Higher service levels
The AX100i uses the iSCSI (Internet Small Computer Systems Interface) protocol for server input/output. The AX100i
connects to a 1-gigabit Ethernet environment using standard RJ45 LAN connectors and category 6 Ethernet cables. Hosts
can connect to an iSCSI array using a Layer 2 (switched) or Layer 3 (routed) network. Also, the presence of VLANs, VPNs
or NAT devices on the network should be transparent to the array and are supported as long as the network quality is
adequate. The major customer benefit of iSCSI on CLARiiON storage systems is that iSCSI based SANs will provide the
same benefits as Fibre Channel SANs using TCP/IP networking, which is more familiar to IT managers. Block storage
networking will benefit from the maturity of IP technology.
The AX100i front end offers one iSCSI port per SP, for a total of 2 front end ports. Up to 16 initiators (hosts or host HBAs)
can be connected to the storage system. The AX100SCi front end offers 1 iSCSI port with up to 8 initiators (hosts or host
HBAs) allowed per storage system. Hosts connect via a 10/100/1000 NIC operating at 1 Gb or via a QLA4010C HBA only.
The connection to the array must be 1 Gb copper. A 10/100 NIC connection to the array is not supported except to the
management ports. The only currently supported method for booting a Windows system using an iSCSI disk volume is via a
supported HBA. Currently it is not possible to boot a Windows system using an iSCSI disk volume provided by the
Microsoft iSCSI Initiator.
Direct Attach
Supported configurations include direct attach with a single NIC/HBA to one subnet. A 1 Gb HBA or NIC must be used for direct attach.
Direct Attach
Another supported configuration is multiple NIC/HBA to one array. NICs or HBAs must be on different subnets for proper
routing. A server cannot connect to the same array using a mix of NICs and HBAs.
This slide illustrates a supported configuration of multiple servers to a single storage array. Operating systems do not have to
be the same. 1Gb HBA or NIC must be used for direct attach. NICs and HBAs, even if in different servers, must be on
different subnets for proper network routing.
When a switch or router is present, either a 10/100/1000 NIC or a 1Gb HBA can be used in the server. Operating systems
do not have to be the same.
Switched Networks
Dual HBAs or NICs are supported. If multiple NICs are on the same subnet, the MS “Advanced” configuration setup must
be utilized in order to take advantage of the additional NICs.
Switched Networks
Two NICs or HBAs to separate routers/switches can be supported. However, if two NICs or HBAs are connected to
separate switches or routers, they must be on different subnets. Switch/Router(s) must be capable of 1Gb connection.
Supported Configurations
Switched Networks
Single or multiple NICs/HBAs can be supported. A single NIC or HBA can be connected to up to 8 SP ports. The switch/router must be capable of a 1 Gb connection.
Supported Configurations
Switched Networks
Single or multiple HBAs can be supported. Multiple HBAs from one server can be logged into all ports simultaneously.
Routers must be capable of 1Gb connection.
Supported Configurations
Switched Networks
Single or multiple NICs can be supported. Multiple NICs from one server can log into all ports, but only 1 NIC per server
can be logged into a port at any given time. Router must be capable of 1Gb connection.
With release 16, RAID 1/0 is now supported on the AX-Series storage systems. RAID 1/0 uses disk striping for
performance improvement and disk mirroring for redundancy. The popularity of RAID 1/0 stems from the fact that it is
relatively simple to implement, while providing high performance and good data redundancy. The disadvantage is that there
is a 50% waste in storage space. Enterprise applications and servers are often willing to sacrifice storage for increased
performance and fault tolerance.
Thank you for your attention. This ends our training on CLARiiON AX100 Fundamentals.