
SAN Networks, Part II (Sieci SAN, cz. II)

Urządzenia w systemach teleinformatycznych (Devices in ICT Systems)

prof. dr hab. inż. Andrzej R. Pach


Lecture outline

 SCSI
 iSCSI
 FCIP
 Storage arrays (RAID)



Basic protocols: a recap
SCSI:  Applications | SCSI | Physical media
FC:    Applications | SCSI | FCP | FC | Physical media
FCoE:  Applications | SCSI | FCP | FCoE | Ethernet | Physical media
FCIP:  Applications | SCSI | FCP | FCIP | TCP | IP | Ethernet | Physical media
iSCSI: Applications | SCSI | iSCSI | TCP | IP | Ethernet | Physical media

SCSI = Small Computer System Interface
iSCSI = Internet Small Computer System Interface
FC = Fibre Channel
FCP = Fibre Channel Protocol
FCoE = Fibre Channel over Ethernet
FCIP = Fibre Channel over IP



Small Computer System Interface (SCSI) (1)
• SCSI (pronounced "scuzzy") is the conventional, server-centric method
of connecting peripheral devices (disks, tapes, and printers) in the open
client/server environment.
• SCSI was designed for the personal computer and small computer
environment.
• SCSI is a bus architecture, with dedicated, parallel cabling between
the host and storage devices, such as disk arrays.
• The SCSI standards define commands, protocols, electrical, optical
and logical interfaces.



Small Computer System Interface (SCSI) (2)
• SCSI specifies commands and controls for sending blocks of data
between the host and the attached devices. The SCSI commands are
issued by the host operating system in response to user requests
for data.
• The physical transport was originally a parallel cable that consisted
of eight data lines to transmit 8 bits in parallel, plus control lines.
Later implementations widened the parallel data transfers to 16-bit
paths (wide SCSI) to achieve higher bandwidths.
• SCSI is an American National Standards Institute (ANSI) standard.



The SCSI standards family
SCSI Standard        | Max. Cable Length (m) | Max. Throughput (MB/s) | Max. No. of Devices
SCSI-1               | 6   | 5        | 8
SCSI-2               | 6   | 5 to 10  | 8 or 16
Fast SCSI-2          | 3   | 10 to 20 | 8
Wide SCSI-2          | 3   | 20       | 16
Fast Wide SCSI-2     | 3   | 20       | 16
Ultra SCSI-3, 8-bit  | 1.5 | 20       | 8
Ultra SCSI-3, 16-bit | 1.5 | 40       | 16
Ultra-2 SCSI         | 12  | 40       | 8
Wide Ultra-2 SCSI    | 12  | 80       | 16
Ultra-3 SCSI         | 12  | 160      | 16
Ultra-4 SCSI         | 12  | 320      | 16
Note: 8-bit bus vs. 16-bit bus (Wide SCSI); 1 MB/s = 8 Mbit/s.



External SCSI Connectors



Device sharing

Multi-Drop Data/Control Bus


SCSI evolution: parallel → serial
• Recent physical versions of SCSI — Serial Attached SCSI (SAS), SCSI-over-
Fibre Channel Protocol (FCP), and USB Attached SCSI (UAS) — break
from the traditional parallel SCSI bus and perform data transfer via serial
communications using point-to-point links.
• Although much of the SCSI documentation talks about the parallel
interface, all modern development efforts use serial interfaces.
• Serial interfaces have a number of advantages over parallel SCSI, including
higher data rates, simplified cabling, longer reach, improved fault isolation
and full-duplex capability.
• The primary reason for the shift to serial interfaces is the clock skew issue
of high speed parallel interfaces, which makes the faster variants of parallel
SCSI susceptible to problems caused by cabling and termination.
Examples of SCSI Interfaces
Interface                         | Width  | Clock        | Max. Throughput | Max. Length           | Max. No. of Devices
Ultra-4 SCSI                      | 16-bit | 80 MHz DDR   | 320 MB/s  | 12 m                  | 16
Serial Storage Architecture (SSA) | Serial | 200 Mbit/s   | 20 MB/s   | 25 m                  | 96
SSA 40                            | Serial | 400 Mbit/s   | 40 MB/s   | 25 m                  | 96
Fibre Channel 1GFC                | Serial | 1.06 Gbit/s  | 98.4 MB/s | 500 m MM / 10 km SM   | 127 (FC-AL) / 2^24 (FC-SW)
Fibre Channel 16GFC               | Serial | 14.02 Gbit/s | 1.58 GB/s | 500 m MM / 10 km SM   | 127 (FC-AL) / 2^24 (FC-SW)
Serial Attached SCSI (SAS) 1.1    | Serial | 3 Gbit/s     | 300 MB/s  | 6 m                   | 16,256
SAS 4.0 (draft)                   | Serial | 22.5 Gbit/s  | 2.4 GB/s  | tbd                   | 16,256
IEEE 1394 (FireWire)              | Serial | 3.15 Gbit/s  | 315 MB/s  | 4.5 m                 | 63
SCSI Express (PCIe 3.0)           | Serial | 8 GT/s       | 985 MB/s  | Short, backplane only | 258
USB Attached SCSI 2 (USB 3.1)     | Serial | 10 Gbit/s    | 1.2 GB/s  | 3 m                   | 127
ATAPI over Parallel ATA           | 16-bit | 33 MHz DDR   | 133 MB/s  | 457 mm                | 2
ATAPI over Serial ATA             | Serial | 6 Gbit/s     | 600 MB/s  | 1 m                   | 1 (15 with port multiplier)



SCSI Command Protocol (1)
Source: en.wikipedia.org

• In addition to many different hardware implementations, the SCSI standards also include an extensive set of command definitions.
• The SCSI command architecture was originally defined for parallel SCSI buses but has been carried forward with minimal change for use with iSCSI and serial SCSI.
• Other technologies which use the SCSI command set include the Advanced Technology Attachment (ATA) Packet Interface (ATAPI), the USB Mass Storage class and FireWire SBP-2.
SCSI Command Protocol (2)
Source: en.wikipedia.org

• In SCSI terminology, communication takes place between an initiator and a target. The initiator sends a command to the target, which then responds.
• SCSI commands are sent in a Command Descriptor Block (CDB). The CDB consists of a one-byte operation code followed by five or more bytes containing command-specific parameters (see the sketch after this list).
• At the end of the command sequence, the target returns a status code byte, such as 00h for success, 02h for an error (called a Check Condition), or 08h for busy. When the target returns a Check Condition in response to a command, the initiator usually then issues a SCSI Request Sense command in order to obtain a key code qualifier (KCQ) from the target. The Check Condition and Request Sense sequence involves a special SCSI protocol called a Contingent Allegiance Condition.
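To make the CDB layout concrete, here is a minimal Python sketch (my own addition, not part of the original slides) that packs a 10-byte READ(10) CDB; the field layout (operation code 0x28, 32-bit logical block address, 16-bit transfer length) follows the standard SCSI block command set.

import struct

def build_read10_cdb(lba: int, num_blocks: int) -> bytes:
    """Pack a 10-byte READ(10) Command Descriptor Block."""
    return struct.pack(">BBIBHB",
                       0x28,         # operation code: READ(10)
                       0x00,         # flags (RDPROTECT/DPO/FUA all zero)
                       lba,          # logical block address, big-endian
                       0x00,         # group number
                       num_blocks,   # transfer length, in blocks
                       0x00)         # control byte

cdb = build_read10_cdb(lba=2048, num_blocks=8)
assert len(cdb) == 10 and cdb[0] == 0x28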



SCSI Command Protocol (3)
Source: en.wikipedia.org

There are four categories of SCSI commands: N (non-data), W (writing data from
initiator to target), R (reading data), and B (bidirectional). There are about 60
different SCSI commands in total, with the most commonly used being:
• Test unit ready: Queries device to see if it is ready for data transfers (disk spin up,
media loaded, etc.).
• Inquiry: Returns basic device information.
• Request sense: Returns any error codes from the previous command
that returned an error status.
• Send diagnostic and Receive diagnostic results: runs a simple self-test,
or a specialised test defined in a diagnostic page.
• Start/Stop unit: Spins disks up and down, or loads/unloads media (CD, tape,
etc.).



SCSI Command Protocol (4)
Source: en.wikipedia.org

• Read capacity: Returns storage capacity.
• Format unit: Prepares a storage medium for use. On a disk, a low-level format will occur. Some tape drives will erase the tape in response to this command.
• Read (four variants): Reads data from a device.
• Write (four variants): Writes data to a device.
• Log sense: Returns current information from log pages.
• Mode sense: Returns current device parameters from mode pages.
• Mode select: Sets device parameters in a mode page.
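For reference, a small Python table (my own addition, not from the slides) with the standard operation-code byte for the commands listed above; this value is what goes into the first byte of the CDB.

# Standard SPC/SBC operation codes for the commands discussed on these slides.
SCSI_OPCODES = {
    "TEST UNIT READY":            0x00,
    "REQUEST SENSE":              0x03,
    "FORMAT UNIT":                0x04,
    "INQUIRY":                    0x12,
    "MODE SELECT(6)":             0x15,
    "MODE SENSE(6)":              0x1A,
    "START STOP UNIT":            0x1B,
    "RECEIVE DIAGNOSTIC RESULTS": 0x1C,
    "SEND DIAGNOSTIC":            0x1D,
    "READ CAPACITY(10)":          0x25,
    "READ(10)":                   0x28,
    "WRITE(10)":                  0x2A,
    "LOG SENSE":                  0x4D,
}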



SCSI Command Protocol (5)
Source: en.wikipedia.org

• Each device on the SCSI bus is assigned a unique SCSI identification number, or ID.
• Devices may encompass multiple logical units, which are addressed by logical unit number (LUN).
• Simple devices have just one LUN; more complex devices may have multiple LUNs.
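As an illustration (my own example, not from the slides), operating systems commonly address a SCSI device by a four-part tuple of host adapter, channel/bus, target ID and LUN; the numbers below are hypothetical.

# Hypothetical addresses in the (host adapter, channel, target ID, LUN) form
# used, for example, by Linux tools such as lsscsi.
tape_drive = (2, 0, 1, 0)                            # target ID 1, a single LUN
array_luns = [(2, 0, 3, lun) for lun in range(4)]    # one target exposing 4 LUNs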



Internet Small Computer System Interface (iSCSI)
• The iSCSI protocol is a means of transporting SCSI packets over TCP/IP to take advantage of the existing Internet
infrastructure.
• The architecture of iSCSI is outlined in IETF RFC 3720: Internet Small Computer Systems Interface (iSCSI).

(Figure: iSCSI packet format)
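As a rough illustration of that packet format, here is a simplified Python sketch (my own addition) of the 48-byte Basic Header Segment of an iSCSI SCSI Command PDU as laid out in RFC 3720; additional header segments, digests and the data segment are omitted, so this is a sketch rather than a complete implementation.

import struct

def iscsi_scsi_command_bhs(cdb: bytes, lun: int, itt: int,
                           expected_len: int, cmd_sn: int, exp_stat_sn: int) -> bytes:
    """Sketch of the 48-byte Basic Header Segment carrying a SCSI CDB."""
    bhs = bytearray(48)
    bhs[0] = 0x01                                 # opcode: SCSI Command (initiator)
    bhs[1] = 0x80 | 0x40                          # flags: Final + Read (data-in command)
    bhs[5:8] = (0).to_bytes(3, "big")             # DataSegmentLength (no immediate data)
    bhs[8:16] = struct.pack(">Q", lun)            # logical unit number
    bhs[16:20] = struct.pack(">I", itt)           # Initiator Task Tag
    bhs[20:24] = struct.pack(">I", expected_len)  # Expected Data Transfer Length
    bhs[24:28] = struct.pack(">I", cmd_sn)        # CmdSN
    bhs[28:32] = struct.pack(">I", exp_stat_sn)   # ExpStatSN
    bhs[32:32 + len(cdb)] = cdb                   # SCSI CDB, zero-padded to 16 bytes
    return bytes(bhs)

read10 = bytes.fromhex("28000000080000000800")    # READ(10) CDB from the earlier sketch
pdu = iscsi_scsi_command_bhs(read10, lun=0, itt=1,
                             expected_len=8 * 512, cmd_sn=1, exp_stat_sn=1)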



Fibre Channel over Internet Protocol (FCIP)
• FCIP is a method for tunneling Fibre Channel packets through an IP network.
• FCIP encapsulates Fibre Channel block data and transports it over a TCP socket, or tunnel.
• TCP/IP services are used to establish connectivity between remote devices.
• The Fibre Channel packets are not altered in any way. They are simply encapsulated in IP and transmitted.

(Figure: FC frame structure)

The major advantage of FCIP is that it overcomes the distance limitations of basic Fibre Channel. It also enables
geographically distributed devices to be linked by using the existing IP infrastructure, while it keeps the fabric
services intact.
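A conceptual Python sketch of the tunnelling idea follows (my own illustration; real FCIP per RFC 3821 adds its own encapsulation header and runs between dedicated FCIP gateway entities). The FC frames themselves are sent unmodified through a TCP connection, here with a simple length prefix; the peer host name is a placeholder, while 3225 is the registered FCIP port.

import socket
import struct

def tunnel_fc_frames(fc_frames, peer_host="fcip-gateway.example", peer_port=3225):
    """Conceptual only: carry unmodified FC frames over a TCP socket."""
    with socket.create_connection((peer_host, peer_port)) as sock:
        for frame in fc_frames:                            # raw frame bytes, untouched
            sock.sendall(struct.pack(">I", len(frame)) + frame)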
HDD (ATA) vs. HDD (SATA) vs. SSD
Criterion                  | HDD (ATA)                                        | HDD (SATA)                                              | SSD
Full name                  | Hard Disk Drive (Advanced Technology Attachment) | Hard Disk Drive (Serial Advanced Technology Attachment) | Solid-State Drive
Moving parts (reliability) | YES       | YES      | NO
Size                       | LARGE     | LARGE    | SMALL
Memory utilisation         | MODERATE  | MODERATE | VERY GOOD
Access speed               | LOW       | MODERATE | HIGH
Storage capacity           | LARGE     | LARGE    | MODERATE
Lifetime                   | LONG      | LONG     | MODERATE
Cost                       | LOW       | MODERATE | HIGH



Architecture of Intelligent Disk Subsystems



Cache Memory
• A cache is a bunch of very fast memory, just like (but faster than) the memory in your server.
• The cache memory is used by the storage array to store your data before it gets sent to the disk drives. This is good, because storing data in memory is much faster than storing it on disk. Memory runs much faster than spinning physical disk drives.
• As soon as your data hits the cache, the array tells the server that sent your credit-card number that the array has safely received the number, and the server can now move on to something else.
• The more cache memory the storage array has, the more it can store in cache, and the faster it goes. This makes your servers run faster, too. If the server needs the same data again, it's already in the cache; thus, the server doesn't have to wait for the disk array to move the data up from the disks before it can perform another operation on the data.
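The behaviour described above can be summarised in a toy Python model (my own sketch, not from the slides): writes are acknowledged as soon as they land in cache, repeated reads are served from cache, and destaging to the slow disks happens later.

class WriteBackCache:
    def __init__(self, disk):
        self.disk = disk                 # slow backing store, e.g. a dict of block -> data
        self.cache = {}                  # fast memory: block -> data

    def write(self, block, data):
        self.cache[block] = data         # land the data in fast memory...
        return "ACK"                     # ...and acknowledge the server immediately

    def read(self, block):
        if block in self.cache:          # cache hit: no waiting for the disks
            return self.cache[block]
        data = self.disk[block]          # cache miss: slow path to the spinning disks
        self.cache[block] = data
        return data

    def flush(self):
        self.disk.update(self.cache)     # destage cached writes to disk later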



Storage Arrays and Flash Storage
• Initially, storage arrays were built with HDDs, for SANs (block-based storage) or NAS (file-based storage).
• There are now systems built for flash or solid-state drive (SSD) storage. Flash arrays contain flash memory drives designed to overcome the performance and capacity limitations of mechanical, spinning disk drives. A flash array can read data from SSDs much faster than from disk drives, and such arrays are increasingly used to boost application performance.
• Storage arrays can be all-flash, all-spinning disk, or hybrids combining both types of media.
RAID arrays
RAID (Redundant Array of Independent Disks) is a redundant array of independent disks in which two or more hard drives cooperate in such a way as to provide additional capabilities, unattainable with a single disk or with several disks attached individually.
RAID makes it possible to:
• increase reliability (resilience to failures),
• increase data-transfer performance,
• enlarge the storage space available as a single volume.
Benefits of using RAID arrays
Using RAID has two advantages:
1) better performance and
2) higher availability.
Thus, it goes faster and breaks down less often.
Performance is increased because the server has more disks to read from when data is accessed
from a drive.
Availability is increased because the RAID controller can recreate lost data from a failed drive by using
the parity information (parity information is created while the data is written to the disks). The server
accessing the data on a RAID set never knows that one of the drives in the RAID set went bad. The controller
recreates the data that was lost when the drive went bad by using the parity information stored
on the surviving disks in the RAID set.
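A minimal Python sketch of that parity mechanism (my own illustration, using simple bytewise XOR as in RAID 4/5): parity is computed while the data is written, and the XOR of the surviving blocks with the parity recreates the lost block.

from functools import reduce

def parity_block(blocks):
    """Bytewise XOR of all blocks in a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild_lost_block(surviving_blocks, parity):
    """XOR the surviving blocks with the parity to recreate the lost one."""
    return parity_block(surviving_blocks + [parity])

stripe = [b"AAAA", b"BBBB", b"CCCC"]          # one stripe spread over three data drives
p = parity_block(stripe)                      # written along with the data
assert rebuild_lost_block([stripe[0], stripe[2]], p) == stripe[1]   # drive 1 failed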



RAID types

 Drives can be grouped to form RAID sets in a number of different ways — RAID
types — which are numbered from 0 to 6.
 The numbers represent the level of RAID being used. RAID levels 0, 1, and 5 are
the most common methods of grouping drives into RAID sets because they give you
the best variation of redundancy and performance. Since RAID 6 uses two parity
drives, it’s a bit slower than the other RAID types, but is normally used when data
loss is out of the question.
 Combinations of RAID types can be used together. For example, you can create
two RAID 0 sets and then combine the RAID 0 sets into a RAID 1 set. This will
essentially give you the performance benefits of RAID 0 with the availability benefits
of RAID 1.



RAID 0: block-by-block striping

RAID 0 is known as disk striping.
In this configuration the array controller splits the data into small chunks and writes each chunk to a different disk in the array: e.g., the first block to the first disk, the second to the second, and so on.
This yields a significant speed-up of both read and write operations (see the mapping sketch below).
The drawback of this scheme is the lack of fault tolerance. If any disk fails, we usually lose all the data, because each disk holds only a part of it.
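A minimal sketch of that block-to-disk mapping (my own illustration, assuming a stripe unit of one block):

def raid0_location(lba: int, n_disks: int):
    """Map logical block `lba` to (disk index, stripe row) in a RAID 0 set."""
    return lba % n_disks, lba // n_disks

# With 4 disks, blocks 0..3 land on disks 0..3 and block 4 wraps back to disk 0.
assert [raid0_location(b, 4)[0] for b in range(5)] == [0, 1, 2, 3, 0]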



RAID 1: block-by-block mirroring

RAID 1 is known as mirroring.
Half of the disks hold the data and the other half is an exact copy of them. In this case we are protected against the failure of any one disk, but we lose 50% of the total capacity.
The resulting data redundancy, the lack of striping across the disks and the high degree of data duplication translate into a large amount of required space and a high cost. For this reason the RAID 1 configuration is used in applications that require highly reliable access to data.



RAID 0+1 / RAID 10: striping and mirroring combined

This is a combination of RAID 0 and RAID 1, also referred to as RAID 1+0, which combines the performance of RAID 0 with the reliability of RAID 1.
In this case the data is written to half of the disks as stripes (RAID 0), and the other half is their mirror copy (RAID 1).
The system survives the failure of one disk, or even two, but not any arbitrary two.
Note, however, that if a disk and its mirror copy both fail, all the data is lost.



RAID 4: parity instead of mirroring

In RAID 4 the data is written across several disks, and in addition checksums (parity) are stored on a separate disk; if one of the disks fails, they allow its contents to be reconstructed by performing the appropriate algebraic operations.
RAID 4 was previously used by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called RAID-DP.

Source: https://macierze-netapp.pl/technologie/rodzaje-raid.html



RAID DP: dual parity
RAID-DP is a RAID scheme patented by NetApp that is resilient to the failure of any two disks (DP = Dual Parity).
It is a modification of RAID 4, with the difference that instead of one parity disk there are two, each computing its checksums in a different way.
A, B and C are the data disks, while P and Q are the disks holding the checksums.

Source: https://macierze-netapp.pl/technologie/rodzaje-raid.html



RAID 5: parity instead of mirroring
In RAID 5 the data is likewise written across all available disks, just as in RAID 4, with the difference that the partial checksums are distributed over all the disks.
This solution has a fundamental drawback, however. After one of the disks fails, the array must rebuild the checksums, which puts a heavy load on the system. The second aspect is expansion: if we want to add an extra disk to an existing system, the whole array has to be rebuilt.
RAID 5 is resilient to the failure of only one disk.
Source: https://macierze-netapp.pl/technologie/rodzaje-raid.html



RAID 6: double parity
RAID 6 extends RAID 5 by adding another parity block; thus, it uses block-level striping with
two parity blocks distributed across all member disks.
As in RAID 5, there are many layouts of RAID 6 disk arrays depending upon the direction the data
blocks are written, the location of the parity blocks with respect to the data blocks and whether
or not the first data block of a subsequent stripe is written to the same drive as the last parity
block of the prior stripe. The figure to the left is just one of many such layouts.
According to the Storage Networking Industry Association (SNIA), the definition of RAID 6 is:
"Any form of RAID that can continue to execute read and write requests to all of a RAID array's
virtual disks in the presence of any two concurrent disk failures. Several methods, including
dual check data computations (parity and Reed–Solomon), orthogonal dual parity check data
and diagonal parity, have been used to implement RAID Level 6."
Performance. RAID 6 does not have a performance penalty for read operations, but it does have
a performance penalty on write operations because of the overhead associated with parity
calculations. Performance varies greatly depending on how RAID 6 is implemented
in the manufacturer's storage architecture—in software, firmware, or by using firmware
and specialized ASICs for intensive parity calculations. RAID 6 can read up to the same speed
as RAID 5 with the same number of physical drives.
When either diagonal or orthogonal dual parity is used, a second parity calculation is necessary
for write operations. This doubles CPU overhead for RAID-6 writes, versus single-parity RAID
levels. When a Reed Solomon code is used, the second parity calculation is unnecessary.
Reed–Solomon has the advantage of allowing all redundancy information to be contained
within a given stripe.
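For the Reed–Solomon variant mentioned in the SNIA definition, one common construction of the two check blocks for a stripe of k data blocks D_0 ... D_{k-1} is (my own summary, in LaTeX notation; other RAID 6 schemes such as diagonal or orthogonal parity differ):

$$P = D_0 \oplus D_1 \oplus \dots \oplus D_{k-1}, \qquad Q = g^{0} D_0 \oplus g^{1} D_1 \oplus \dots \oplus g^{k-1} D_{k-1},$$

where \oplus is bytewise XOR, multiplication by powers of the generator g is carried out in the Galois field GF(2^8), and any two missing blocks (data or check) can be recovered by solving these two equations.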
Comparison of RAIDs
The following table provides an overview of some considerations for standard RAID levels.
Array space efficiency is given as an expression in terms of the number of drives, n; this expression designates a fractional value between zero and one, representing the fraction of the sum of the drives' capacities that is available for use.
For example, if three drives are arranged in RAID 3, this gives an array space efficiency of 1 − 1/n = 1 − 1/3 = 2/3 ≈ 67%; thus, if each drive in this example has a capacity of 250 GB, then the array has a total capacity of 750 GB but the capacity that is usable for data storage is only 500 GB.
RAID 1 and RAID 5 are the most common RAID levels. However, many other levels are not covered in this presentation. Levels that are not mentioned include RAID 2 and 3, or nested (hybrid) types, such as RAID 5+1. These hybrid types are used in environments where reliability and performance are key points to be covered from the storage perspective.

Level  | Description                                                | Min. number of drives | Space efficiency       | Fault tolerance      | Read performance (factor of a single disk) | Write performance (factor of a single disk)
RAID 0 | Block-level striping without parity or mirroring           | 2 | 1                     | None                 | n       | n
RAID 1 | Mirroring without parity or striping                       | 2 | 1/n                   | n − 1 drive failures | n       | 1
RAID 2 | Bit-level striping with Hamming code for error correction  | 3 | 1 − (1/n) log2(n + 1) | One drive failure    | Depends | Depends
RAID 3 | Byte-level striping with dedicated parity                  | 3 | 1 − 1/n               | One drive failure    | n − 1   | n − 1
RAID 4 | Block-level striping with dedicated parity                 | 3 | 1 − 1/n               | One drive failure    | n − 1   | n − 1
RAID 5 | Block-level striping with distributed parity               | 3 | 1 − 1/n               | One drive failure    | n       | single sector: 1/4; full stripe: n − 1
RAID 6 | Block-level striping with double distributed parity        | 4 | 1 − 2/n               | Two drive failures   | n       | single sector: 1/6; full stripe: n − 2
Source: en.wikipedia.org
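The worked example above can be reproduced with a short Python helper (my own sketch, following the space-efficiency column; RAID 2 omitted for brevity):

def usable_capacity_gb(level: str, n: int, drive_gb: float) -> float:
    """Usable array capacity for n identical drives of drive_gb each."""
    efficiency = {
        "RAID 0": 1.0,
        "RAID 1": 1.0 / n,
        "RAID 3": 1.0 - 1.0 / n,
        "RAID 4": 1.0 - 1.0 / n,
        "RAID 5": 1.0 - 1.0 / n,
        "RAID 6": 1.0 - 2.0 / n,
    }[level]
    return n * drive_gb * efficiency

# Three 250 GB drives in RAID 3: 750 GB raw, about 500 GB usable.
assert abs(usable_capacity_gb("RAID 3", n=3, drive_gb=250) - 500) < 1e-6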
Types of large storage arrays: Modular Storage Arrays
 The other type of storage array consists of smaller boxes, called modular arrays.
 Modular arrays cannot connect to mainframe computers. The ability to connect to and store mainframe data is usually what sets modular and monolithic storage arrays apart.
 Some modular arrays have many of the same redundancy features that their big monolithic brothers have, but modular arrays have limited cache memory and port connectivity, so they can't connect to as many servers as a monolithic array without degraded performance.
 Modular arrays are typically based on two controllers, which are kept apart from the unit's disks. In essence, this ensures that, if one controller experiences a failure, the second will take over from the first automatically. The controllers are held on a shelf which runs on a separate power source from the disks.
 Modular arrays can still be used in large data centers, but they also work just as well in smaller departments or remote offices.
 One of the key advantages of modular systems is that they are cheaper than monolithic options and can therefore be expanded over time. You may want to start with a single controller and one disk shelf. You can then add more as your needs dictate, until you reach optimum capacity.



Types of large storage arrays: Monolithic Storage Arrays
 They’re the big, expensive boxes with all the redundant features
in them that make them good candidates for use with expensive
mainframe computers.
 Monolithic arrays can have hundreds of storage ports, and have all
the internal goodies like massive amounts of cache memory to
accommodate a lot of servers accessing data at the same time without
performance issues occurring.
 The internal features are also duplicated; if one part fails, another
takes over.
 Monolithic arrays are also used with mainframe computers, so they’re
usually found in the large data centers of big corporations.
 Monolithic arrays require a raised floor, conditioned power and air,
and multiple large-amperage 3-phase electrical connectors.
 Monolithic arrays cannot be used in general office environments, only in data centers.
Comparison of modular & monolithic arrays (1)
Source: https://searchstorage.techtarget.com/magazineContent/Modular-vs-Monolithic

Availability and reliability
  Modular: Implementations should have path failover/redundancy in the host I/O path, switch and storage components (dual drive bus, dual controllers, fans, power supplies, and hot-spare drive(s) for an immediate RAID rebuild process in the event of any disk failure). Remote mirroring and snapshot/backup techniques are available. Validates cluster server testing.
  Monolithic: Robust failover and availability. Initially led and still perceived to provide the premier remote mirroring solutions, although this gap is rapidly closing.

Connectivity
  Modular: SCSI, FC and iSCSI attach; however, lacking mainframe attach. Variations between vendors in the number of logical volumes, channels and operating systems.
  Monolithic: ESCON, FICON, SCSI and FC connections for mainframe and open-systems attach. Large server connectivity, but being matched by some modular arrays.


Comparison of modular & monolithic arrays (2)
Source: https://searchstorage.techtarget.com/magazineContent/Modular-vs-Monolithic

Interoperability
  Modular: Getting much better, but not ubiquitous. Look for interoperability processes/certifications and standards (i.e., FCIA SANmark certification). Complete solutions available from various vendors. Some modular vendors lead the market in this area.
  Monolithic: Since these systems have the largest base, in general more interoperability testing is available for their systems. As volumes shift towards modular systems, this advantage will go away.

Manageability
  Modular: Vendor-specific usability and capability. Check for dynamic, online scaling capabilities (i.e., adding disks, expanding volumes, changing RAID or configurations). Heterogeneous management products are coming onto the market. Some vendors provide all management from within a single application.
  Monolithic: Probably requires a service call to expand storage and change configurations. May have an advantage in staying with a homogenous brand of storage. Each vendor has a future strategy to manage heterogeneous storage. Often requires multiple applications to manage in open systems.


Comparison of modular & monolithic arrays (3)
Source: https://searchstorage.techtarget.com/magazineContent/Modular-vs-Monolithic

Performance
  Modular: For open systems environments, the Storage Performance Council has established the first industry-standard benchmark for measuring I/O transactions per second, and modular storage came out a leader in actual performance; scalable back-end channels and effective read-ahead algorithms assist in anticipatory reads in the random nature of open systems.
  Monolithic: Cache-centric architectures were developed to counteract the architectural limitations of the mainframe environments. The cache architecture - beneficial for mainframe performance - can't keep pace with the modular open-systems design, which can scale controllers and back-end channels to the disks.

Scalability/flexibility
  Modular: Scales controller modules and drive modules independently. Pay-as-you-grow approach. Designed for simple expansion. Industry-standard rack-mount cabinets allow flexible appliance integration. Vendor-unique capacity per storage system, with some approaching the same capacity as monolithic.
  Monolithic: Can attach storage to mainframe and open systems, but not recommended to have both on the same subsystem. Typically requires a service call to add or partition storage. May accommodate more storage with fewer subsystems.
Comparison of modular & monolithic arrays (4)
Source: https://searchstorage.techtarget.com/magazineContent/Modular-vs-Monolithic

Service
  Modular: Vendor and channel-partner dependent. Professional services not required. Ease of use can be built into the software interface.
  Monolithic: Complete service and professional services offered and typically required. This can be valuable, but costly.

Total Cost of Ownership
  Modular: Architectures, innovation and competition allow for much better pricing and scalability. Gartner estimates 25% savings on storage costs. Additional service and maintenance savings.
  Monolithic: Higher costs for open-systems attach (Windows, Unix) and less performance; however, management of homogenous storage may be of value.


Disk arrays
NetApp, Dot Hill



NetApp FAS2240

Features:
• Capacity: 576 TB
• 24 disks
• 12 GB of memory
• 4 × FC ports (8 Gb/4 Gb)
• 8 × 1GbE ports, 4 × 10GbE ports
• Supported protocols: FC, iSCSI, NFS, CIFS/SMB, HTTP
• Up to 144 disks (external and internal)
• Uses RAID 4 / RAID-DP
• Price: ~ PLN 20,000


NetApp FAS3250

Features:
• Capacity: 2,880 TB
• 720 disks
• 40 GB of memory
• 24 × FC ports (16 Gb/8 Gb/4 Gb)
• 56 × 1GbE ports, 24 × 10GbE ports
• Supported protocols: FC, FCoE, iSCSI, NFS, CIFS/SMB, HTTP
• Up to 127,000 Snapshot copies
• Up to 512 supported hosts per Host Adapter pair
• Price: ~ PLN 65,000


NetApp FAS6290

Features:
• Capacity: 5,760 TB
• 1,440 disks
• 192 GB of memory
• 16 TB of flash memory
• 64 × FC ports (16 Gb/8 Gb/4 Gb)
• 64 × 10GbE ports, up to 68 ports in total
• Supported protocols: FC, FCoE, iSCSI, NFS, CIFS/SMB, HTTP
• Uses RAID 6 (RAID-DP) / RAID 4
• Price: ~ PLN 240,000


Dot Hill AssuredSAN 2333

Features:
• Dual RAID controllers
• Data rates up to 1 Gbit/s iSCSI
• 4 × iSCSI ports
• SimulCache™: instantly copies the cache between controllers
• Up to 7 expansion modules per system
• Up to 96 disks, 288 TB
• Supported protocols: iSCSI, SNMP, SSL, SSH, SMTP, SMI-S Provider, HTTP(S)


Dot Hill AssuredSAN 3920 and 3930 Data Protection Solutions

Features:
• Dual RAID controllers
• Data rates up to 8 Gbit/s FC, 1 Gbit/s iSCSI
• SimulCache™
• Support for SAS, SATA and SSD drives
• Up to 144 disks (3920) or 96 disks (3930), 384 TB
• Supported protocols and standards: iSCSI, SCSI-2 and SCSI-3, FC, IP, TCP, ICMP, SNMP, SSL, SSH, SMTP, SMI-S Provider, HTTP(S)


Dot Hill AssuredSAN™ 4004 series

Features:
• 6,400 MB/s of throughput and 100,000 IOPS per disk
• SimulCache™
• Data rates up to 16 Gbit/s FC, 10 Gbit/s iSCSI or 12 Gbit/s SAS
• Support for SAS, Nearline SAS and SSD drives
• Up to 192 disks (4824/4524) or 96 (4534), 576 TB
• Supported protocols and standards: iSCSI, SCSI-2 and SCSI-3, FC, IP, TCP, ICMP, SNMP, SSL, SSH, SMTP, SMI-S Provider, HTTP(S)

