
The difference between SCSI and FC protocols

Two major protocols are used in Fibre Channel SANs: the Fibre Channel protocol (used by the hardware to communicate) and the SCSI protocol (used by software applications to talk to disks).

The SCSI (Small Computer System Interface) protocol is used by operating systems for input/output operations to disk drives. Data is sent from the host operating system to the disk drives in large chunks called "blocks," normally in parallel over a physical interconnect of high-density 68-pin copper cables. Because SCSI is transmitted in parallel, each bit must arrive at the end of the cable at the same time. Due to signal strength and "jitter," this limited the maximum distance a disk drive could be from the host to under 20 meters. In a SAN, this protocol rides on top of the Fibre Channel protocol, enabling SAN-attached server applications to talk to their disks.

FC (Fibre Channel) is just the underlying transport layer that SANs use to transmit data. This is the language used by the HBAs, hubs, switches and storage controllers in a SAN to talk to each other. The Fibre Channel protocol is a low-level language, meaning it is spoken between the actual hardware devices, not by the applications running on them.

Actually, two protocols make up the Fibre Channel protocol: Fibre Channel Arbitrated Loop (FC-AL) works with hubs, and Fibre Channel Switched Fabric (FC-SW) works with switches. Fibre Channel is the building block of the SAN highway. It's like the road of the highway, where other protocols can run on top of it just as different cars and trucks run on top of an actual highway. In other words, if Fibre Channel is the road, then SCSI is the truck that moves the data cargo down the road.

Operating systems still use SCSI to communicate with the disk drives in a SAN, as Fibre Channel SANs layer the SCSI protocol on top of the FC protocol. FC can run over copper or optical cables. With optical cables, the SCSI traffic is serialized (the bits are converted from parallel to serial, one bit at a time) and transmitted as light pulses across the cable. Your data can now travel at the speed of light, and you are no longer limited to the shorter distances of SCSI cables. (Disks in an FC fabric can be located up to 100 kilometers from the host!)
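As a rough illustration of that serialization step, the Python sketch below flattens a "parallel" block of bytes into a one-bit-at-a-time stream and reassembles it. This is purely illustrative; real FC hardware also applies 8b/10b line encoding, which is not shown here.

```python
def serialize_block(block: bytes) -> list[int]:
    """Flatten a block of bytes into a serial bit stream, MSB first."""
    return [(byte >> i) & 1 for byte in block for i in range(7, -1, -1)]

def deserialize_bits(bits: list[int]) -> bytes:
    """Reassemble a serial bit stream back into parallel bytes."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

A round trip (`deserialize_bits(serialize_block(data)) == data`) shows that nothing is lost in the conversion; only the shape of the transmission changes.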

=========================

Wavelength vs. dark fiber for SAN


Which do you consider the best option for connection of a SAN network:
wavelength products from the local LEC or dark fiber? Please explain the pros
and cons of each.
The answer is "it depends" on what you are trying to do.

The local cable plant (all the devices connected to a fabric at one site) should be using 50-micron multimode cabling (850 nanometer wavelength), which allows a distance of up to 300 meters with 2 gigabit (Gb) lasers (GBICs) and 500 meters with 1 Gb lasers. So, if all you are trying to do is connect servers to SAN switches and SAN switches to storage, then your best option is to use standard multimode SAN cables as described above.

Dark fiber is a term used to describe 9-micron single-mode cabling. Single-mode cables have a narrower core, which limits light diffraction and absorption during transmission, meaning greater distances can be traveled before dB loss becomes an issue. Single-mode transmission also uses a 1300 nanometer (nm) wavelength source, which likewise allows for greater distance. Single-mode cables can be run up to 10 kilometers before repeaters are required. So, if you are trying to connect two SAN islands together over greater distances, then dark fiber (9-micron single-mode, 1300 nm) is the better choice.
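Those reach figures can be boiled down to a small lookup. The Python sketch below simply encodes the numbers quoted in this answer (they are illustrative figures from this discussion, not an authoritative spec table) and lists which cabling covers a required run length:

```python
# Reach figures quoted in the answer above (illustrative, not a spec table).
CABLE_REACH_M = {
    "50-micron multimode, 850 nm, 2 Gb": 300,
    "50-micron multimode, 850 nm, 1 Gb": 500,
    "9-micron single-mode, 1300 nm": 10_000,
}

def cabling_options(distance_m: int) -> list[str]:
    """Return every cable type whose quoted reach covers the run length."""
    return [name for name, reach in CABLE_REACH_M.items() if reach >= distance_m]
```

For a 200-meter run inside one site, everything qualifies; at 5 kilometers, only single-mode remains, which is the "SAN islands" case above.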

If there is currently no dark fiber owned by your company between buildings, though, installing or leasing dark fiber can be VERY expensive. It can cost the same as leasing an OC-48 connection from a telco. If you own the fiber, though, you can connect dense wave division multiplexing (DWDM) equipment at either end and actually slice your dark fiber into multiple wavelengths (usually 32 or more), allowing for massive bi-directional bandwidth between sites.

If you don't need all that bandwidth and just want to connect two SAN
networks together for, say, data replication between sites for disaster
recovery, then using a standard leased IP connection could also work for you.
All you would require is an FC-to-IP bridge at each location. The bridge can run either the FCIP or iFCP protocol to tunnel your FC frames across the IP network to the other location. I would recommend a leased T3 or above for the IP network, although a T1 may do if the amount of data is low and the bridge also does compression.
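To gauge whether a T1 is enough, a back-of-the-envelope calculation helps. The Python sketch below uses the nominal line rates (T1 at 1.544 Mbps, T3 at 44.736 Mbps) and an assumed protocol-efficiency factor; the 0.8 efficiency figure is my assumption for illustration, not a number from this answer.

```python
T1_MBPS = 1.544    # nominal T1 line rate, Mbps
T3_MBPS = 44.736   # nominal T3 line rate, Mbps

def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Hours to move data_gb gigabytes over a link at an assumed efficiency."""
    bits = data_gb * 8 * 1e9
    return bits / (link_mbps * 1e6 * efficiency) / 3600
```

Replicating 10 GB of changed data takes roughly 18 hours over a T1 but well under an hour over a T3, which is why a T1 only works when the daily change rate is low or the bridge compresses well.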

SATA and SCSI compatibility.

I have been informed by Dell tech support that SCSI and parallel ATA hard disk drives are not compatible and cannot be combined on the same bus. Does this also apply for SATA and SCSI? I currently have two SCSI hard disk drives and want to add a SATA card and hard disk. Will I encounter problems?

The folks at Dell are giving you the correct answer. SCSI drives use a different logical and physical interface than parallel ATA, SATA and IDE drives, and cannot be used on the same "bus."

That does not preclude you from adding a SATA controller to your server and attaching your SATA disks to that controller. You should be able to have a number of different types of controllers and disks within a single server, such as internal SCSI boot disks and either an internal or external SATA shelf.

As long as the correct controller type is used with the matching drive type, you should be fine: SCSI to SCSI, SATA to SATA, parallel ATA to parallel ATA.

--------------------

Why do HBAs in a SAN have the same base address?


In a SAN, a world wide name (WWN) is a 64-bit identifier for each device in the fabric. All WWNs are registered with and assigned by the Institute of Electrical and Electronics Engineers (IEEE). This is done to assure each and every WWN is unique. A WWN is similar in concept to a network card's MAC address in an IP network, but is formatted differently.

WWNs are formatted as follows:

WWN= XYYYYYYZ ZZZZZZZZ (Example: 500060E8064183F3)

X = the first 4-bit field, called the NAA (Network Address Authority), specifies the format the WWN follows. This number is a "5" for storage target ports, and either a "1" or a "2" for an HBA port.

YYYYYY = the organizationally unique identifier (OUI), which is assigned by the IEEE to specific vendors (such as QLogic, Emulex, Hitachi, EMC, IBM, etc.). This is why the address in this area always seems to be similar if you are buying from a specific vendor.

ZZZZZZZZZ = the vendor specified identifier (VSI), which is defined by the vendor owning the OUI. These numbers can be changed by the vendor to derive an offset of the node WWN to get a port WWN. As an example, an HBA may have dual ports. The HBA is assigned a node WWN, and an offset is applied to the VSI to get the port WWN for each port on the HBA.
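The field layout above is easy to check in code. This Python sketch splits a WWN string per the X/YYYYYY/ZZZZZZZZZ pattern described here (1 hex digit NAA, 6 hex digits OUI, 9 hex digits VSI) and derives a port WWN by offsetting the VSI. Note the offset-by-port-index scheme is one plausible vendor convention, not a universal rule:

```python
def parse_wwn(wwn: str) -> dict[str, str]:
    """Split a 16-hex-digit WWN into NAA, OUI and VSI fields."""
    if len(wwn) != 16:
        raise ValueError("a WWN is 64 bits = 16 hex digits")
    return {"naa": wwn[0], "oui": wwn[1:7], "vsi": wwn[7:]}

def port_wwn(node_wwn: str, port_index: int) -> str:
    """Derive a port WWN by adding an offset to the node WWN's VSI field."""
    fields = parse_wwn(node_wwn)
    vsi = int(fields["vsi"], 16) + port_index
    return fields["naa"] + fields["oui"] + format(vsi, "09X")
```

Running `parse_wwn("500060E8064183F3")` on the example above yields NAA "5", OUI "00060E" and VSI "8064183F3"; offsetting the VSI by 1 gives the next port's WWN on the same HBA.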
