SAN Fundamentals
SNIA (Storage Networking Industry Association)
The Storage Networking Industry Association (SNIA) is a non-profit organization formed to develop and manage standards for SAN and storage products; its goal is a common information model for all further storage networking technologies and applications. SNIA was incorporated in December 1997 and is a registered 501(c)(6) non-profit trade association. Its members are dedicated to "ensuring that storage networks become complete and trusted solutions across the IT community".
SNIA's mission is to lead the storage industry worldwide in developing and promoting standards for managing SAN and storage products, information lifecycle management, reporting, monitoring, and provisioning, by providing a Common Information Model.
SMI-S, or the Storage Management Initiative Specification, is a storage standard developed and maintained by the Storage Networking Industry Association (SNIA). It has also been ratified as ANSI standard INCITS 388-2004. SMI-S is based upon the Common Information Model (CIM) and the Web-Based Enterprise Management (WBEM) standards defined by the Distributed Management Task Force (DMTF), with management carried over TCP/IP.
What is a SAN
A SAN is a high-speed network (1 to 2 Gb/s data transfer rates today, with 10 Gb/s to come) in which heterogeneous (mixed-vendor or mixed-platform) servers access a common or shared pool of heterogeneous storage devices.
SAN environments provide any-to-any communication between servers and storage resources, including
multiple paths.
The benefits of a SAN are:
High return on investment (ROI) and reduced total cost of ownership (TCO): achieved by increasing performance, manageability, and scalability.
Disaster recovery capabilities: SAN devices can mirror the data on the disk to another location.
Increased I/O performance: SANs operate faster than internal drives or devices attached to a LAN.
Connectivity: any-to-any connections.
Cost effectiveness: serverless backups and tape library sharing.
Modular scalability: dynamic capacity.
Consolidated storage: sharing of centralized storage.
Page 1 of 27
For companies to continue being successful, data storage has become a business-critical consideration. A storage solution is a fully integrated and tested storage-product configuration consisting of a combination of hardware, software, and services, all of which focus on solving specific customer business problems.
Data storage solutions are:
DAS
DAS is storage connected to a server. The storage itself can be external to the server connected by a cable
to a controller with an external port, or the storage can be internal to the server. Some internal storage
devices use high-availability features such as adding redundant component capabilities.
DAS is accessed by servers attaching directly to a storage system. In a DAS configuration, clients access
storage through the server and the same server resources are used. If the server becomes unavailable, it
disrupts access to any storage directly connected to the server. If certain resources connected to the server
are busy, network traffic increases.
However, DAS is fast and reliable for small-sized networks.
NAS
NAS is storage that resides on the LAN behind the servers.
NAS storage devices require special storage cabinets that provide specialized file access, security, and network connectivity. On the server side, the SCSI HBA is no longer needed for storage access; servers access the NAS the same way that clients do.
SAN
A SAN is a network composed of many servers, connections,
and storage devices, including disk, tape, and optical storage.
The storage can be located far from the servers that use it.
One server or many heterogeneous servers can share a
common storage device, or many different storage devices.
A SAN has these key characteristics:
A SAN is different from traditional networks because it is created from storage interfaces.
SAN solutions use a dedicated network behind the servers and are based on primarily Fibre
Channel architecture.
Fibre Channel provides a highly scalable bandwidth over long distances. Fibre Channel has the
ability to provide full redundancy, including switched parallel data paths to deliver high availability
and high performance.
Clients with business-critical data and applications are concerned about high availability. Fibre Channel SANs help provide the no-single-point-of-failure configurations that business-critical customers require by making it possible to mirror data or cluster servers over a SAN.
Therefore, a SAN can avoid network bottlenecks. It supports direct, high-speed transfers between servers and storage devices using the following methods:
Server to storage: This is the traditional method of interaction with storage devices. The SAN advantage is that the same storage device can be accessed serially or concurrently by multiple servers.
Server to server: This provides high-speed, high-volume communications between servers.
Storage to storage: In this configuration of a SAN, a disk array could back up its data directly to tape across the SAN, without server processor intervention. A device could be mirrored remotely across the SAN for high-availability configurations.
RAID
RAID stands for Redundant Array of Independent Disks, and it involves combining two or more drives to improve performance and fault tolerance. Combining two or more drives also offers improved reliability and larger data volume sizes. A RAID distributes the data across several disks, and the operating system sees the array as a single disk.
RAID Levels
Several different arrangements are possible, and different standard schemes have evolved, each representing a set of trade-offs between capacity, speed, and protection against data loss.
Some of the common RAID levels are:
RAID 0
RAID 0 uses data striping: the data is broken into fragments while writing it to the drive, and the fragments are written to their disks simultaneously. While reading, the data is read off the drives in parallel, so this type of arrangement offers high bandwidth.
The trade-off associated with RAID 0 is that a single disk failure destroys the entire array, as it offers no fault tolerance, and RAID 0 does not implement error checking.
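The striping idea can be sketched in Python. This is an illustration only; the stripe-unit size, disk count, and data below are invented, and real RAID 0 works at the block-device level rather than on byte strings:

```python
def stripe(data: bytes, num_disks: int, stripe_unit: int = 4):
    """Distribute data round-robin across disks in fixed-size stripe units (RAID 0)."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), stripe_unit):
        disks[(i // stripe_unit) % num_disks] += data[i:i + stripe_unit]
    return disks

def unstripe(disks, stripe_unit: int = 4) -> bytes:
    """Reassemble the original byte stream by reading one unit from each disk in turn."""
    out = bytearray()
    offsets = [0] * len(disks)
    disk = 0
    while any(offsets[d] < len(disks[d]) for d in range(len(disks))):
        out += disks[disk][offsets[disk]:offsets[disk] + stripe_unit]
        offsets[disk] += stripe_unit
        disk = (disk + 1) % len(disks)
    return bytes(out)

data = b"ABCDEFGHIJKLMNOP"
disks = stripe(data, 2)        # disk 0 holds units 0, 2, ...; disk 1 holds units 1, 3, ...
assert unstripe(disks) == data
```

Because consecutive units land on different disks, reads and writes proceed on all drives in parallel, which is exactly the bandwidth benefit described above.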
RAID 1
RAID 1 uses mirroring to write the data to the drives. It offers fault tolerance against disk errors, and the array continues to operate as long as at least one drive is functioning properly.
The trade-off associated with RAID 1 is the cost of purchasing the additional disks needed to store the mirrored data.
RAID 2
It uses Hamming Codes for error correction. In RAID 2, the disks are synchronized and they're striped in
very small stripes. It requires multiple parity disks.
RAID 3
This level uses a dedicated parity disk instead of rotated parity stripes and offers improved performance and
fault tolerance. The benefit of the dedicated parity disk is that the operation continues without parity if the
parity drive stops working during the operation.
RAID 4
It is similar to RAID 3 but does block-level striping instead of byte-level striping; as a result, a single file can be stored in blocks. RAID 4 allows multiple I/O requests in parallel, but the data transfer speed will be lower. Block-level parity is used to perform the error detection.
RAID 5
RAID 5 uses block-level striping with distributed parity, and it requires all drives but one to be present to operate correctly. Upon a drive failure, reads are calculated from the distributed parity, so the entire array is not destroyed by a single drive failure. However, the array will lose some data in the event of a second drive failure.
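The parity arithmetic behind that recovery is plain XOR. The sketch below is simplified for illustration (tiny blocks, parity on a single dedicated block rather than rotated across drives as real RAID 5 does):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length blocks byte by byte; this is how parity is computed."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three drives
parity = xor_blocks(data_blocks)            # parity block on a fourth drive

# Simulate losing the drive holding b"BBBB": XOR of the survivors
# and the parity block regenerates the missing data.
survivors = [data_blocks[0], data_blocks[2], parity]
recovered = xor_blocks(survivors)
assert recovered == b"BBBB"
```

XOR of all blocks including parity is zero, so any single missing block can be recomputed from the rest; a second simultaneous failure leaves the equation unsolvable, which is why a second drive failure loses data.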
PROTOCOLS:
What is the difference between SCSI and FC PROTOCOLS?
Two major protocols are used in Fibre Channel SANs: the Fibre Channel protocol (used by the hardware to communicate) and the SCSI protocol (used by software applications to talk to disks).
SCSI Protocol
The SCSI protocol (Small Computer System Interface) is used by operating systems for input/output operations to disk drives. Data is sent from the host operating system to the disk drives in large chunks called "blocks", normally in parallel over a physical interconnect of high-density 68-wire copper cables. Because SCSI is transmitted in parallel, each bit must arrive at the end of the cable at the same time; due to signal strength and "jitter", this limited the maximum distance a disk drive could be from the host to under 20 meters. In a SAN, this protocol lies on top of the Fibre Channel protocol, enabling SAN-attached server applications to talk to their disks.
FC Protocol
FC (Fibre Channel) is the underlying transport layer that SANs use to transmit data. It is the language used by the HBAs, hubs, switches, and storage controllers in a SAN to talk to each other. The Fibre Channel protocol is a low-level language, meaning that it is used only between the actual hardware, not by the applications running on it.
Actually, two protocols make up the Fibre Channel protocol. Fibre Channel Arbitrated Loop or FC-AL works
with hubs and Fibre Channel Switched or FC-SW works with switches. Fibre Channel is the building block of
the SAN highway. It's like the road of the highway where other protocols can run on top of it just as different
cars and trucks run on top of an actual highway. In other words, if Fibre Channel is the road, then SCSI is
the truck that moves the data cargo down the road.
The operating systems still use SCSI to communicate with the disk drives in a SAN, as Fibre Channel SANs layer the SCSI protocol on top of the FC protocol. FC can run on copper cables or optical cables. Over optical cables, the SCSI protocol is serialized (the bits are converted from parallel to serial, one bit at a time) and transmitted as light pulses across the optical cable. Your data can now travel at the speed of light, and you are no longer limited to the shorter distances of SCSI cables. (Disks in an FC fabric can be located up to 100 kilometers from the host.)
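The parallel-to-serial conversion mentioned above can be illustrated with a toy sketch. Note the bit ordering here is an arbitrary illustration choice, and real Fibre Channel additionally applies 8b/10b transmission encoding, which this sketch omits:

```python
def serialize(data: bytes):
    """Flatten bytes into a stream of single bits (most significant bit first)."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def deserialize(bits):
    """Regroup the serial bit stream back into bytes."""
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )

bits = serialize(b"SCSI")      # one bit at a time, as on an optical link
assert deserialize(bits) == b"SCSI"
```

Because only one bit is in flight per direction, there is no inter-wire skew to manage, which is why serial links escape parallel SCSI's 20-meter limit.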
The iFCP protocol enables the implementation of fibre channel fabric functionality on an IP network in which
IP components and technology replace the fibre channel switching and routing infrastructure.
The main function of the iFCP protocol layer is to transport fibre channel frame images between locally and
remotely attached N_PORTs. When transporting frames to a remote N_PORT, the iFCP layer encapsulates
and routes the fibre channel frames comprising each fibre channel Information Unit via a predetermined
TCP connection for transport across the IP network.
When receiving fibre channel frame images from the IP network, the iFCP layer de-encapsulates and
delivers each frame to the appropriate N_PORT. The iFCP layer processes several types of traffic.
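The encapsulation step described above can be sketched in Python. The length-prefix header below is an invented simplification for illustration; the actual iFCP encapsulation header is defined by the FC Frame Encapsulation format in the iFCP specification:

```python
import struct

def encapsulate(fc_frame: bytes) -> bytes:
    """Prefix an FC frame image with a 4-byte length header for TCP transport.
    (Illustrative only; real iFCP uses the FC Frame Encapsulation format.)"""
    return struct.pack("!I", len(fc_frame)) + fc_frame

def de_encapsulate(stream: bytes):
    """Recover the individual frame images from a byte stream received over TCP."""
    frames, pos = [], 0
    while pos < len(stream):
        (length,) = struct.unpack_from("!I", stream, pos)
        frames.append(stream[pos + 4:pos + 4 + length])
        pos += 4 + length
    return frames

wire = encapsulate(b"frame-1") + encapsulate(b"frame-2")
assert de_encapsulate(wire) == [b"frame-1", b"frame-2"]
```

The framing header is what lets the receiver find frame boundaries again, since TCP itself delivers an undifferentiated byte stream rather than discrete frames.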
JBOD (Just a Bunch Of Disks) offers little advantage over using separate disks independently and doesn't provide any of the fault tolerance or performance benefits of RAID.
RAID: Redundant Array of Independent Disks
A Redundant Array of Independent Disks (RAID) combines two or more disk drives to increase data integrity, fault tolerance, throughput, capacity, or performance. RAID provides several methods of writing data across or to multiple disks at once. RAID is one of many ways to combine multiple hard drives into one single logical unit: instead of seeing several different hard drives, the operating system sees only one. RAID is typically used on server computers and is usually implemented with identically sized disk drives. With decreases in hard drive prices and wider availability of RAID options built into motherboard chipsets, RAID is also being offered as an option in higher-end end-user computers, especially computers dedicated to storage-intensive tasks such as video and audio editing.
There are at least nine types of RAID plus a non-redundant array (RAID-0):
RAID-0: This technique has striping but no redundancy of data. It offers the best performance but
no fault-tolerance.
RAID-1: This type is also known as disk mirroring and consists of at least two drives that duplicate
the storage of data. There is no striping. Read performance is improved since either disk can be
read at the same time. Write performance is the same as for single disk storage. RAID-1 provides
the best performance and the best fault-tolerance in a multi-user system.
RAID-2: This type uses striping across disks with some disks storing error checking and correcting
(ECC) information. It has no advantage over RAID-3.
RAID-3: This type uses striping and dedicates one drive to storing parity information. The
embedded error checking (ECC) information is used to detect errors. Data recovery is
accomplished by calculating the exclusive OR (XOR) of the information recorded on the other
drives. Since an I/O operation addresses all drives at the same time, RAID-3 cannot overlap I/O.
For this reason, RAID-3 is best for single-user systems with long record applications.
RAID-4: This type uses large stripes, which means you can read records from any single drive.
This allows you to take advantage of overlapped I/O for read operations. Since all write operations
have to update the parity drive, no I/O overlapping is possible. RAID-4 offers no advantage over
RAID-5.
RAID-5: This type includes a rotating parity array, thus addressing the write limitation in RAID-4.
Thus, all read and write operations can be overlapped. RAID-5 stores parity information but not
redundant data (but parity information can be used to reconstruct data). RAID-5 requires at least
three and usually five disks for the array. It's best for multi-user systems in which performance is
not critical or which do few write operations.
RAID-6: This type is similar to RAID-5 but includes a second parity scheme that is distributed
across different drives and thus offers extremely high fault- and drive-failure tolerance.
RAID controller
Modern mass storage systems are growing to provide increasing storage capacities to fulfill increasing user
demands from host computer system applications. Redundant array of independent disks (RAID) storage
technology allows for the storing of the same data on multiple hard disks. Within a RAID system, varying
levels of data storage redundancy are utilized to enable reconstruction of stored data in the event of data
corruption or disk failure. RAID subsystems are configured such that each drive serves as the primary
storage device for a first portion of the data stored on the subsystem and serves as the backup storage
device for a second portion of the data. RAID storage subsystems typically utilize a control module that
shields the user or host system from the details of managing the redundant array. The controller makes the
subsystem appear to the host computer as a single, highly reliable, high capacity disk drive. In RAID
subsystems, the storage controller device performs significant management functions to improve reliability
and performance of the storage subsystem. The RAID controller provides an interface between the RAID
subsystem and the computer system. The RAID controller includes the hardware that interfaces between the
computer system and the disks. The RAID storage management techniques improve reliability of a storage subsystem by providing redundancy information stored on the disk drives along with the host system data, ensuring access to stored data despite partial failures within the storage subsystem.
What is RPO & RTO
Recovery Point Objective (RPO) and Recovery Time Objective (RTO) are some of the most important
parameters of a disaster recovery or data protection plan. These objectives guide the enterprises in
choosing an optimal data backup (or rather restore) plan.
RPO Recovery Point Objective
Recovery Point Objective (RPO) describes the amount of data lost measured in time. Example: After an
outage, if the last available good copy of data was from 18 hours ago, then the RPO would be 18 hours.
In other words, it is the answer to the question: "Up to what point in time can the data be recovered?"
RTO Recovery Time Objective
The Recovery Time Objective (RTO) is the duration of time and a service level within which a business
process must be restored after a disaster in order to avoid unacceptable consequences associated with a
break in continuity.
It should be noted that the RTO attaches to the business process and not the resources required to support
the process.
In other words, it is the answer to the question: "How much time did it take to recover after notification of a business process disruption?"
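The two objectives can be made concrete with a small calculation that reuses the 18-hour example above (the timestamps themselves are invented for illustration):

```python
from datetime import datetime

last_good_copy   = datetime(2024, 1, 10, 6, 0)   # most recent restorable backup
outage_start     = datetime(2024, 1, 11, 0, 0)   # business process goes down
process_restored = datetime(2024, 1, 11, 4, 0)   # business process running again

# Achieved RPO: how much data, measured in time, was lost.
rpo_hours = (outage_start - last_good_copy).total_seconds() / 3600
# Achieved RTO: how long the business process was unavailable.
rto_hours = (process_restored - outage_start).total_seconds() / 3600

assert rpo_hours == 18.0   # matches the 18-hour example above
assert rto_hours == 4.0
```

Note the two values are independent: more frequent backups shrink RPO, while faster restore procedures shrink RTO.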
The RTO/RPO and the results of the Business Impact Analysis (BIA) in its entirety provide the basis for
identifying and analyzing viable strategies for inclusion in the business continuity plan. Viable strategy
options would include any which would enable resumption of a business process in a time frame at or near
the RTO/RPO. This would include alternate or manual workaround procedures and would not necessarily
require computer systems to meet the objectives.
Storage subsystems
SAN File System conforms to small computer system interface (SCSI) standards and is designed to work
with any SCSI-compliant storage devices, including Just a Bunch Of Disks (JBOD), redundant array of
independent disks (RAID) with mirroring, and hierarchically-managed storage devices. You can attach tape
devices to SAN File System for backups and long-term storage, although tape devices cannot be part of a
storage pool.
All storage subsystems attached to SAN File System can be accessed by all clients (unless you use zoning
to allow only specific clients to access specific devices). This enables data sharing among heterogeneous
clients.
SAN File System supports heterogeneous, simultaneously-connected storage and host-bus adapter (HBA)
sharing, subject to client platform, driver, and storage-vendor limitations.
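The zoning behavior mentioned above, restricting which clients can reach which devices, can be sketched as a simple membership check. The zone names and port names below are invented for illustration; real zoning works on WWNs or switch ports and is enforced by the fabric:

```python
# A zone is a named set of ports (names invented for illustration).
zones = {
    "backup_zone": {"client-A", "tape-1"},
    "db_zone": {"client-B", "array-1"},
}

def can_access(initiator: str, target: str) -> bool:
    """Two ports may communicate only if at least one zone contains both."""
    return any(initiator in members and target in members
               for members in zones.values())

assert can_access("client-A", "tape-1")       # same zone: allowed
assert not can_access("client-A", "array-1")  # no shared zone: blocked
```

Without any zones, every client sees every device, which is the default data-sharing behavior the text describes.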
Why Fibre Channel:
Industry requires an efficient and high-performance transfer of information between devices such as
computers, storage devices, and other peripherals.
Fibre Channel is a multilayered network based on a series of standards from the American National
Standards Institute (ANSI). These standards define characteristics and functions for moving data across the
network. They include definitions of physical interfaces such as cabling, distances, and signaling; data
encoding and link controls; data delivery in terms of frames, flow control, and classes of service; common
services; and protocol interfaces.
With Fibre Channel:
Hosts and applications see storage devices attached to the SAN as if they are locally attached
storage.
Multiple protocols and a broad range of devices can be supported.
Connections can be either optical fiber (for distance) or copper cable links (for short distance at low
cost).
Protocols
Fibre Channel uses three protocols:
Point-to-point Devices are directly connected to other devices without the use of hubs,
switches, or routers.
Fibre Channel Arbitrated Loop (FC-AL) FC-AL has a shared bandwidth, distributed topology,
connects with hubs, and is the simplest form of a fabric topology.
Fibre Channel Switched Fabric (FC-SW) FC-SW provides the highest performance and
connectivity of the three topologies. It has nondisruptive scalability and switch connection.
Fibre Channel supports 126 nodes on an FC-AL, and 16 million nodes on an FC-SW and provides
connectivity over several kilometers (up to 10km) when using optical fiber.
Framing protocol
The framing protocol is the communication procedure that governs how data is packaged and moved across the link. A frame is a string of data bytes, prefixed by a start-of-frame (SOF) delimiter and followed by an end-of-frame (EOF) delimiter.
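The frame structure just described can be sketched in Python. The delimiter values and the 3-byte header here are invented simplifications; a real Fibre Channel frame uses ordered-set delimiters, a 24-byte header, and a 32-bit CRC over the header and payload:

```python
import zlib

SOF, EOF = b"\x01", b"\x02"   # stand-ins for the real ordered-set delimiters

def build_frame(src: int, dst: int, payload: bytes) -> bytes:
    """SOF + header + payload + CRC + EOF, per the structure described above."""
    header = bytes([src, dst, len(payload)])
    crc = zlib.crc32(header + payload).to_bytes(4, "big")
    return SOF + header + payload + crc + EOF

def check_frame(frame: bytes) -> bytes:
    """Validate delimiters and CRC, returning the payload."""
    assert frame[:1] == SOF and frame[-1:] == EOF, "bad delimiters"
    body, crc = frame[1:-5], frame[-5:-1]
    assert zlib.crc32(body).to_bytes(4, "big") == crc, "corrupt frame"
    return body[3:]   # strip the 3-byte illustrative header

frame = build_frame(src=1, dst=2, payload=b"SCSI command")
assert check_frame(frame) == b"SCSI command"
```

The CRC is what lets the receiver decide whether a frame delimited by SOF and EOF is valid, which is exactly the role the EOF delimiter's validity indication supports.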
Frame specifications
N_port
All node (server or storage) ports are called N_Ports. An N_Port attaches to an F_Port in a point-to-point protocol. N_Port-to-N_Port connection is uncommon, so when two nodes are direct-attached, it is through an arbitrated loop (NL_Port to NL_Port).
L_port
All loop-hub ports are called L_Ports, which stands for loop ports.
NL_port
An N_Port that contains arbitrated loop functions associated with arbitrated loop topology is
called an NL_Port.
F_port
The F_Port, or fabric port, is the Link_Control_Facility within the fabric (switch) that attaches to
an N_Port.
FL_port
An F_Port, that contains arbitrated loop functions associated with arbitrated loop topology is
called an FL_Port, which stands for fabric loop port.
E_port
An E_Port is used for connecting fabrics (switches). The link is called the inter-switch link (ISL).
G_port
A G_Port (generic port) can auto-discover its type. It automatically configures itself as an E, N, or
NL port.
Switch ports
Switch ports become a port type depending on what
gets plugged into them.
Although the term fibre channel implies some form of fibre optic technology, the fibre channel specification allows for both fibre optic interconnects and copper coaxial cables.
Point-to-Point
Point-to-point fibre channel is a simple way to connect two (and only two) devices directly together, as
shown in Figure 1 below. It is the fibre channel equivalent of direct attached storage (DAS).
[Figure 1: a host and a storage device connected directly, point-to-point]
From a cluster and storage infrastructure perspective, point-to-point is not a scalable enterprise
configuration and we will not consider it again in this document.
Arbitrated Loops
A fibre channel arbitrated loop is exactly what it says; it is a set of hosts and devices that are connected into
a single loop, as shown in Figure 2 below. It is a cost-effective way to connect up to 126 devices and hosts
into a single network.
[Figure 2: hosts A and B and devices C, D, and E connected in a single arbitrated loop]
Note: Not all devices and host bus adapters support loop configurations, since loop support is an optional part of the fibre channel standard. However, for a loop to operate correctly, all devices on the loop MUST have arbitrated loop support. Figure 3 below shows a schematic of the wiring for a simple arbitrated loop configuration.
Most devices today, except for some McData switches, support FC-AL.
Switched Fabric
In a switched fibre channel fabric, devices are connected in a many-to-many topology using fibre channel
switches, as shown in Figure 4 below. When a host or device communicates with another host or device, the
source and target setup a point-to-point connection (just like a virtual circuit) between them and
communicate directly with each other. The fabric itself routes data from the source to the target. In a fibre
channel switched fabric, the media is not shared. Any device can communicate with any other device
(assuming it is not busy) and communication occurs at full bus speed (1Gbit/Sec or 2Gbit/sec today
depending on technology) irrespective of other devices and hosts communicating.
[Figure 4: hosts A through D and devices E through I connected in a many-to-many topology through a switched fabric]
Hubs
Hubs are the simplest form of fibre channel devices and are used to connect devices and hosts into
arbitrated loop configurations. Hubs typically have 4, 8, 12 or 16 ports allowing up to 16 devices and hosts to
be attached, however, the bandwidth on a hub is shared by all devices on the hub. In addition, hubs are
typically half-duplex (newer full duplex hubs are becoming available). In other words, communication
between devices or hosts on a hub can only occur in one direction at a time. Because of these performance
constraints, hubs are typically used in small and/or low bandwidth configurations.
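The performance difference between a shared hub loop and a switched fabric is easy to quantify with back-of-the-envelope arithmetic (the link speed and device count below are invented for illustration):

```python
link_gbps = 1.0     # nominal speed of one FC link
devices = 8         # hosts and storage devices attached

# On a hub the loop is shared and half-duplex: all devices divide one link,
# and only one conversation proceeds at a time.
per_device_on_hub = link_gbps / devices

# On a switch each port gets a dedicated, full-speed connection,
# independent of what the other ports are doing.
per_device_on_switch = link_gbps

assert per_device_on_hub == 0.125     # 1 Gb/s shared eight ways
assert per_device_on_switch == 1.0
```

This is the arithmetic behind recommending hubs only for small or low-bandwidth configurations.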
Figure 5 below shows two hosts and two storage devices connected to the hub with the dark arrows
showing the physical loop provided by the hub.
[Figure 5: hosts A and B and devices C and D attached to a hub, with the physical loop running through the hub]
Switches
Switches typically support 16, 32, 64 or even 128 ports today. This allows for complex fabric configurations.
In addition, switches can be connected together in a variety of ways to provide larger configurations that
consist of multiple switches. Several manufacturers such as Brocade and McData provide a range of
switches for different deployment configurations, from very high performance switches that can be
connected together to provide a core fabric to edge switches that connect servers and devices with less
intensive requirements.
Figure 7 below shows how switches can be interconnected to provide a scalable storage fabric supporting many hundreds of devices and hosts (these configurations are almost always deployed in highly available topologies; the section Highly Available Solutions deals with high availability).
[Figure 7: a core backbone switch fabric provides a very high performance storage area interconnect; edge switches connect departmental storage, servers, and datacenter servers into a common fabric]
Bridges allow legacy SCSI devices to connect into a fibre channel network, as shown in Figure 8 below. In the future, bridges will allow iSCSI devices (iSCSI is a device interconnect using IP as the communications mechanism and layering the SCSI protocol on top of IP) to connect into a switched SAN fabric.
[Figure 8: a bridge connecting a SCSI bus into the storage fabric. Figure 9: a storage controller presenting logical disks, including a mirror set, to the storage fabric]
In the example in Figure 9, although there are five physical disk drives in the storage cabinet, only two
logical devices are visible to the hosts and can be addressed through the storage fabric. The controller does
not expose the physical disks themselves.
Many controllers today are capable of connecting directly to a switched fabric; however, the disk drives themselves are typically either SCSI or, more commonly now, disks that have a built-in FC-AL interface.
As you can see in Figure 10 below, the storage infrastructure that the disks connect to is totally independent
from the infrastructure presented to the storage fabric.
A controller typically has a small number of ports for connection to the fibre channel fabric (at least two are
required for highly available storage controllers). The logical devices themselves are exposed through the
controller ports as logical units (LUNs).
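The logical-to-physical mapping a controller performs can be sketched as follows, reusing the Figure 9 arrangement of five physical drives behind two logical devices (the disk names and LUN layout are invented for illustration):

```python
# Five physical drives in the cabinet, as in the Figure 9 example.
physical_disks = ["disk0", "disk1", "disk2", "disk3", "disk4"]

# The controller exposes only logical units; hosts never see the members.
luns = {
    0: {"type": "mirror", "members": ["disk0", "disk1"]},
    1: {"type": "stripe", "members": ["disk2", "disk3", "disk4"]},
}

def visible_to_host():
    """Hosts address LUN numbers only; the physical disks stay hidden."""
    return sorted(luns.keys())

assert visible_to_host() == [0, 1]   # two logical devices, five physical disks
```

Because hosts address only LUN numbers, the controller is free to rebuild, replace, or rearrange the physical members without the storage fabric ever noticing.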
[Figure 10: inside a storage controller. A storage fabric adapter, CPU, memory, and data cache sit on the controller backplane; logical devices are exposed by the storage controller to the fibre channel switch, while arbitrated loops (typically two redundant loops per controller shelf) connect the physical disk drives in the storage cabinet to the backplane]
Topologies include:
Federated fabrics
[Figure: a core backbone of switches joined by inter-switch links, federating them into a single highly available fabric]
Pros
Management is simplified: the configuration is a highly available, single fabric, so there is only one set of zoning information and one set of security information to manage.
The fabric itself can route around failures such as link failures and switch failures.
Cons
Hosts with multiple adapters must run additional multi-pathing software such as Compaq SecurePath or
EMC PowerPath to ensure that the host gets a single view of the devices where there are multiple paths
from the HBAs to the devices.
Management errors are propagated to the entire fabric.
Core Backbone
A core backbone configuration is really a way to scale out a federated fabric environment. Figure 7 shows a backbone configuration. The core of the fabric is built using highly scalable, high-performance switches whose inter-switch connections provide high-performance communication (e.g., 8-10 Gbit/sec using today's technology). Redundant edge switches can be cascaded from the core infrastructure to provide high numbers of ports for storage devices and hosts.
Pros
Highly scalable and available storage area network configuration.
Management is simplified: the configuration is a highly available, single fabric, so there is only one set of zoning information and one set of security information to manage.
The fabric itself can route around failures such as link failures and switch failures.
Cons
Hosts with multiple adapters must run additional multi-pathing software such as Compaq SecurePath or
EMC PowerPath to ensure that the host gets a single view of the devices where there are multiple paths
from the HBAs to the devices.
Management errors are propagated to the entire fabric.
FC Layers
Fibre Channel consists of multiple layers similar to the Open Systems Interconnect (OSI) layers in network
protocols. These layers communicate instructions for transmitting data.
The functions of each layer are:
Node level
o Upper-level protocol (ULP): Provides the communication path for the operating system, drivers, and software applications over Fibre Channel
o FC-4: Defines the mapping of the ULP to Fibre Channel
o FC-3: Provides common services across the ports of a node
Port level
o FC-2: Defines the framing protocol, flow control, and classes of service
o FC-1: Defines the transmission encoding and decoding of data
o FC-0: Defines the physical interface, including media, transmitters, and receivers
Classes of service:
Class 1: Dedicated connection; in-order delivery; acknowledge first frame only; no flow control after first frame of connection.
Class 2: Connectionless; frame switched; out-of-order delivery possible; acknowledge each frame; buffer-to-buffer and end-to-end flow control for all frames.
Class 3: Frame switched; out-of-order delivery possible; no acknowledgments; buffer-to-buffer flow control for all frames.
Class 4: Connection oriented; virtual circuit; in-order delivery.
Class 5: Reserved.
Class 6: Connection oriented; multicast service.
802.2 The IEEE logical link control layer of the OSI model.
Acknowledgement frame (ACK) Used for end-to-end flow control. An ACK is sent to verify
receipt of one or more frames in Class 1 and Class 2 services.
Arbitrated Loop Physical Address (AL_PA) A 1-byte value used in the Arbitrated Loop
topology used to identify L_Ports. This value will then also become the last byte of the address
identifier for each public L_Port on the loop.
Arbitrated loop One of the three Fibre Channel topologies. Up to 126 NL_Ports and one
FL_Port are configured in a unidirectional loop. Ports arbitrate for access to the loop based on their
AL_PA. Ports with lower AL_PAs have higher priority than those with higher AL_PAs.
Buffer-to-buffer credit (BB_Credit) Used for buffer-to-buffer flow control which determines the
number of frame buffers available in the port it is attached to.
Close primitive signal (CLS) Applies only to the Arbitrated Loop topology. It is sent by an L_Port that is currently communicating on the loop to close communication with the other L_Port.
End of frame (EOF) delimiter An ordered set that is always the last transmission word of a
frame. It is used to indicate that a frame has ended and indicates whether the frame is valid.
ESCON Enterprise Systems Connection, an IBM fiber-optic channel interface used to connect mainframes to storage and other devices.
Fabric A set of one or more connected Fibre Channel switches acting as a Fibre Channel
network.
Fiber optic (or optical fiber) The medium and technology associated with the transmission of
information as light impulses along a glass or plastic wire or fiber.
Frame The basic unit of communication between two N_Ports. Frames are composed of a
starting delimiter (SOF), a header, the payload, the cyclic redundancy check (CRC), and an ending
delimiter (EOF).
HIPPI High-Performance Parallel Interface standards.
IEEE Institute of Electrical and Electronics Engineers standards.
IP Internet Protocol.
Link Two adjacent unidirectional fibers (signal lines) transmitting in opposite directions, using
their associated transmitters and receivers. The pair of fibers can be copper electrical wires
(differential pairs) or optical strands. One fiber sends data out of the port and the other fiber
receives data into the port.
Link service A facility used between an N_Port and a fabric or between two N_Ports. Link
services are used for such purposes as login, sequence and exchange management, and
maintaining connections.
Node A server, storage system, tape backup device, or video display terminal. Any source or
destination of transmitted data is a node. Each node must hold at least one port for providing
access to other devices.
Nonparticipating mode An L_Port enters the nonparticipating mode if more than 127 devices are on a loop and it cannot acquire an AL_PA. An L_Port can also voluntarily enter the nonparticipating mode if it is still physically connected to the loop but does not participate. An L_Port in the nonparticipating mode cannot generate transmission words on the loop and can only retransmit words received on its inbound fiber.
Ordered set A 4-byte transmission word, which has a special character as its first transmission
character. An ordered set can be a frame delimiter, primitive signal, or primitive sequence. Ordered
sets are used to distinguish Fibre Channel control information from data.
Originator An N_Port that originates an exchange.
Participating mode A normal operating mode for an L_Port on a loop. An L_Port in this mode
has acquired an AL_PA and is capable of communicating on the loop.
Port The connector and supporting logic for one end of a Fibre Channel link.
Primitive sequence An ordered set transmitted repeatedly and used to establish and maintain a
link.
Primitive signal An ordered set used to indicate an event.
Private loop An arbitrated loop that stands on its own. It is not connected to a fabric.
Protocol In a Fibre Channel SAN, a data transmission convention encompassing timing,
control, formatting, and data representation.
Public loop An arbitrated loop connected to a fabric.
Responder The N_Port with which the originator of an exchange communicates.
SAN One or more Fibre Channel fabrics used to connect storage systems, servers, and
management appliances. Typical SANs have one fabric (for environments where moderate data
availability is required), two fabrics (when redundancy is required in the storage networks), or even
three or more fabrics (when an extremely large number of ports is required). Use caution with the
terms fabric and SAN because many SANs have two redundant fabrics.
SCSI Small Computer System Interface.
Sequence A group of related frames transmitted unidirectionally from one N_Port to another.
Sequence initiator The N_Port that begins a new sequence and transmits frames to another
N_Port.
Sequence recipient The N_Port that receives a particular sequence of data frames.
Start of frame (SOF) delimiter The ordered set that is always the first transmission word of a
frame. It is used to indicate that a frame will immediately follow and indicates which class of service
the frame will use.
Special character A special 10-bit transmission character, which does not have a
corresponding 8-bit value, but is still considered valid. The special character is used to indicate that
a particular transmission word is an ordered set.
Switch A device that connects the fabric using a virtual circuit or a virtual packet circuit. The
switch can make an electric connection between ports, or it can reroute packets through the switch.
Transmission character A 10-bit character transmitted serially over the fiber.
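The AL_PA priority rule from the glossary (ports with lower AL_PAs have higher priority) can be sketched as a one-line arbitration decision. The AL_PA values below are invented for illustration:

```python
def arbitrate(requesting_alpas):
    """Among ports arbitrating for the loop, the lowest AL_PA wins."""
    return min(requesting_alpas)

# Three ports arbitrate at once; the port with AL_PA 0x01 wins access.
assert arbitrate([0xE8, 0x01, 0x23]) == 0x01
```

This is why loop position and assigned AL_PA matter on a busy arbitrated loop: a port with a high AL_PA can be repeatedly outranked by lower-valued ports.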
SAN topologies
SAN devices connected by Fibre Channel can be arranged using one of three topologies:
[Table: a Yes/No compatibility matrix showing which port types (F switch, FL switch, QL switch, FC-AL hub) support attached public-loop (NLpub) and private-loop (NLpri) devices]
Note: NLpub and NLpri are loop ports for private and public loops, which are explained later in the course.