
PRIMERGY Windows

Install and Manage

RAID Basics and Controllers

Copyright FUJITSU TECHNOLOGY SOLUTIONS Nov 2009


RAID Levels

RAID level 0
• “Striping”
• 2 (1) or more disks are coupled; data is interleaved across the disks (see the placement sketch below)
• No capacity loss
  • All disks in the array are used for user data, no “overhead”
• No fault tolerance

[Diagram: RAID 0 / Striping – two 40 GB disks; data1, data3, data5 on one disk and data2, data4, data6 on the other; total usable capacity 80 GB]

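As an illustration of the interleaving described above, here is a minimal Python sketch of round-robin block placement (the disk count, stripe granularity and function name are illustrative, not a specific controller's layout):

# Minimal sketch of RAID 0 block placement with a round-robin stripe layout.
def raid0_location(block: int, disks: int) -> tuple[int, int]:
    """Map a logical block number to (disk index, block offset on that disk)."""
    return block % disks, block // disks

# Six logical blocks striped across two disks, as in the diagram above:
for blk in range(6):
    disk, offset = raid0_location(blk, disks=2)
    print(f"data{blk + 1} -> disk {disk}, stripe row {offset}")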

RAID level 1
• “Mirroring”
• 2 disks
• The array is divided into two halves; data stored on one half is mirrored (copied) onto the other half
• 50% of total capacity “lost”
  • Two identical copies of the data provide fault tolerance, but during normal operation one copy is just “overhead”
• Fault tolerant
  • One disk per array may fail

[Diagram: RAID 1 / Mirroring – two 40 GB disks holding identical copies of data1, data2, data3; total usable capacity 40 GB]

RAID level 1E
• “Mirrored Stripe Set”
• LSI-specific RAID level
• 3 .. 6 disks
• Two copies of the data are maintained by cyclic mirroring between the disks (see the layout sketch below)
• 50% of total capacity “lost”
• Fault tolerant
  • One disk per array may fail

[Diagram: RAID 1E / Striping + Mirroring – three 40 GB disks; each block Data1 .. Data6 is stored twice, rotated across the disks; total usable capacity 60 GB]

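The cyclic mirroring can be pictured with a short, hedged sketch (the exact placement is controller specific; this only shows one common RAID 1E layout in which the two copies of each block occupy consecutive positions rotating across the disks):

# Hedged sketch of one possible RAID 1E layout (illustrative, not LSI's exact scheme).
def raid1e_copies(block: int, disks: int) -> tuple[tuple[int, int], tuple[int, int]]:
    """Return the (disk, row) positions of the two copies of a logical block."""
    first = 2 * block        # linear position of the primary copy
    second = 2 * block + 1   # mirror copy on the next position
    return ((first % disks, first // disks),
            (second % disks, second // disks))

# Three 40 GB disks as in the diagram: usable capacity is 3 * 40 / 2 = 60 GB.
for blk in range(6):
    print(f"Data{blk + 1}:", raid1e_copies(blk, disks=3))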

RAID level 5
• “Striping with Parity”
• 3 disks minimum
• Data and additionally generated parity information are “striped” (interleaved) across all disks
• One disk per array “lost”
  • The parity information (ECC to provide fault tolerance) is additional “overhead” and consumes the equivalent of one disk’s capacity (see the XOR sketch below)
• Fault tolerant
  • One disk per array may fail

[Diagram: RAID 5 / Striping + parity – three 40 GB disks; parity blocks (parity1+2, parity3+4, parity5+6) rotate across the disks while the data blocks are striped; total usable capacity 80 GB]

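The XOR idea behind the parity block can be shown in a few lines (a minimal sketch; the block contents are made-up example values): the parity block is the XOR of the data blocks of a stripe, so any single missing block can be recomputed from the surviving blocks.

# Minimal sketch of RAID 5 style XOR parity and reconstruction.
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equally sized blocks byte by byte."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data1 = b"\x11\x22\x33\x44"            # example contents of two data blocks
data2 = b"\xAA\xBB\xCC\xDD"
parity = xor_blocks(data1, data2)      # written to the rotating parity block

# If the disk holding data2 fails, its content is rebuilt from the rest:
rebuilt = xor_blocks(data1, parity)
assert rebuilt == data2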

RAID level 6
• “Striping with 2 Parity drives”
• 4 disks minimum
• Data and 2 additionally generated parity blocks per stripe are “striped” (interleaved) across all disks
• Two disks per array “lost”
  • The parity information (ECC to provide fault tolerance) is additional “overhead” and consumes the equivalent of two disks’ capacity
• Fault tolerant
  • Two disks per array may fail
• Additional level of degradedness: Partially-Degraded

[Diagram: RAID 6 / Striping + 2 parity – four 40 GB disks; data blocks D0 – D7 and rotating parity blocks P0 – P3 and Q0 – Q3 are distributed across all disks; total usable capacity 80 GB]

Modular RAID 5/6 solution - RAID 6
• RAID 6 is like a RAID 5 with a 2nd independent distributed parity scheme
• 2 drives can fail without data loss
• P and Q blocks are built based on Galois Field equations (see the sketch below)

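A minimal sketch of one common way P and Q can be computed, assuming the usual Reed-Solomon code over GF(2^8) with generator 2 and field polynomial 0x11D (the data bytes are made-up examples; real firmware works on whole blocks, not single bytes):

# Sketch of RAID 6 P/Q parity over GF(2^8); function names are illustrative.
def gf_mul2(x: int) -> int:
    """Multiply a GF(2^8) element by the generator 2."""
    x <<= 1
    if x & 0x100:
        x ^= 0x11D
    return x & 0xFF

def pq_parity(data: list[int]) -> tuple[int, int]:
    """Return (P, Q) for one byte position across the data disks."""
    p = 0
    q = 0
    for d in reversed(data):   # Horner's rule: Q = d0 + d1*g + d2*g^2 + ...
        p ^= d
        q = gf_mul2(q) ^ d
    return p, q

p, q = pq_parity([0x1A, 0x2B, 0x3C])   # one byte from each of three data disks
# P alone recovers one failed data disk (as in RAID 5);
# P and Q together recover any two failed disks.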

Modular RAID 5/6 solution - RAID 6

• Optimal
  • All drives belonging to the VD are online
  • Virtual Disk is fully redundant
• Partially-Degraded (RAID 6)
  • Virtual Disk can survive the loss of one drive
• Degraded
  • Virtual Disk cannot survive the loss of any other drive
  • Virtual Disk is not redundant
• Offline
  • Virtual Disk is not available
  • Disks may have lost data and the array may be inconsistent

[Diagrams: four 4 x 40 GB RAID 6 layouts (data D0 – D7, parity P0 – P3 and Q0 – Q3) with zero, one, two and three missing drives, illustrating the Optimal, Partially-Degraded, Degraded and Offline states]

Modular RAID 5/6 solution - RAID 6
• Rebuild
  • Simultaneous loss of 2 HDDs in a RAID 6
  • 2 rebuilds required (serialized rebuild for each replaced HDD)
  • Available hot spare HDDs will be used one after the other
    • No simultaneous rebuild of 2 lost HDDs
• Performance
  • RAID 6 is about 10% slower than RAID 5 (see the write-penalty sketch below)

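A rough way to picture the extra write cost of the second parity block is the classic small-write penalty (a hedged back-of-the-envelope model, not the source of the ~10% figure above, which also reflects caching and full-stripe writes):

# Back-end I/Os for one small random write (read-modify-write of data + parities).
def small_write_ios(parity_blocks: int) -> int:
    return (1 + parity_blocks) * 2     # read old + write new, for data and each parity

raid5_ios = small_write_ios(1)         # 4 I/Os per small write
raid6_ios = small_write_ios(2)         # 6 I/Os per small write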

RAID level 0/1
• “Mirrored Stripe Sets”
• Naming convention is adapter manufacturer specific
• 4 disks minimum
• Two copies of the data are maintained (“mirroring”); data are “striped” (interleaved) across each copy
• 50% of total capacity “lost”
  • Two identical copies of the data provide fault tolerance, but during normal operation one copy is just “overhead”
• Fault tolerant
  • One disk per array may fail

[Diagram: two RAID 0 stripe sets of 2 x 40 GB each (data1 .. data6 striped across each set), mirrored against each other; total usable capacity 80 GB]

RAID level 10
 "Spanning RAID 1"
 4 disks minimum
 Two copies of the data are
maintained (“mirroring”), RAID 1 / Mirroring
both copies’ data are spanned
between the two mirror sets 40 GB 40 GB
data1 data1
 50% of total capacity “lost” 40 GB
data3 data3
 Two identical copies of the data5 data5
data provide fault tolerance
but during normal operation Striping 80 GB
one copy is just “overhead”
 Fault tolerant 40 GB 40 GB
 1 disk may fail, with FW >=7 data2
data2 data2
data2
40 GB
2 disks (1 per mirror set) data4
data4 data4
data4
may fail data6
data6 data6
data6
RAID 1 / Mirroring
RAID level 50
• “Spanning RAID 5”
• Two or more equal-sized RAID 5 arrays combined as RAID 0 (capacity arithmetic sketched below)
• 6 disks minimum
• Data and additionally generated parity information are “striped” (interleaved) across all disks
• Better performance than a single RAID 5
• One disk per array “lost”
• Fault tolerant
  • One disk per array may fail

[Diagram: two RAID 5 arrays of 3 x 40 GB each, striped together; parity rotates inside each array (parity1+2 .. parity11+12); total usable capacity 160 GB]

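The capacity figures quoted on the preceding RAID-level slides follow a simple pattern, sketched below (the function name and per-level overhead table are illustrative; "spans" models the RAID 10/50 style combination of identical arrays):

# Usable capacity of `spans` identical arrays of `disks` x `size_gb` drives.
def usable_gb(level: str, disks: int, size_gb: int, spans: int = 1) -> int:
    per_span = {
        "0":  disks * size_gb,          # no overhead
        "1":  disks * size_gb // 2,     # mirrored
        "1E": disks * size_gb // 2,     # mirrored stripe set
        "5":  (disks - 1) * size_gb,    # one disk of parity
        "6":  (disks - 2) * size_gb,    # two disks of parity
    }[level]
    return per_span * spans

print(usable_gb("5", disks=3, size_gb=40))             # 80 GB  (RAID 5 slide)
print(usable_gb("5", disks=3, size_gb=40, spans=2))    # 160 GB (RAID 50 slide)
print(usable_gb("1", disks=2, size_gb=40, spans=2))    # 80 GB  (RAID 10 slide)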

JBOD
• Not a "real" RAID level – stands for “Just a Bunch Of Drives”
• Consists of a single disk drive
• Some manufacturers call it a single-drive RAID 0 instead of a JBOD
• No fault tolerance, no performance advantage

COD

Configuration on Disk
• COD
  • The RAID adapter uses the COD (Configuration On Disk) to store RAID-specific configuration information (RAID / controller configurations, ...) directly on the physical disks inside the array(s)

[Diagram: e.g. RAID 1 – two 60 GB disks, each carrying a COD area plus the mirrored operating system, presented as 1x 60 GB]

Helpful Hints

HDD LEDs (1)
• HD dead
  • remove/change the drive
• HD rebuild
  • wait until the rebuild is finished
• HD normal access
  • DO NOT remove/replace the HD
  • For RAID testing purposes – set the drive offline first

HDD LEDs (2)
• HD Rebuild
  • After HD replacement, wait at least 2 minutes for the rebuild process to start automatically.
  • If the rebuild still does not start, start a “manual rebuild” from WebBIOS or ServerView RAID.

SAS/SATA Comparison

Disk Interface Comparison

Spec.: Mixing of SAS and SATA Disks

Fujitsu: Mixing of SAS and SATA Disks

• ok
• not supported
• mix on expander not possible

[Diagram: examples of supported (“ok”) and unsupported combinations of SATA and SAS disks, including mixing behind an expander]

RAID 2.5” and 3.5”

PRIMERGY 2.5” and 3.5” HDD
[Diagram: PRIMERGY 2.5” and 3.5” drive bay options for RX200 S5, RX300 S5, TX120, TX200 S5 and TX300 S4 – e.g. 6 x 2.5”, 8 x 2.5”, 9 x 2.5”, 6 x 3.5”, 2 x 2.5” + 2 x 2.5”, (2x) 8 x 2.5”, 4 x 3.5” + 2 x 3.5”, (2x) 6 x 2.5” + 8 x 2.5”, 6 x 3.5” + 2 x 3.5”]

SAS TX/RX and SAS SX
• PRIMERGY RX or TX vs. FibreCAT SX40
  • Exchange between PRIMERGY and FibreCAT SX is not possible (different cases) and not supported.

SATA / SAS HDD 06/2009

Size   Technology   Rotation   Capacity
3.5”   SAS          15k        73, 146, 300 & 450 GB
2.5”   SAS          15k        36 & 73 GB
2.5”   SAS          10k        73 & 146 GB
2.5”   SATA         5.4k       120 GB
3.5”   SATA         7.2k       160, 250, 500, 750 & 1000 GB

SATA / SAS 2.5” vs. 3.5”

                                       2.5“                          3.5“
Rotation (RPM)                         SAS 10k - 15k, SATA 7.2k      SAS 10k - 15k, SATA 7.2k
MTBF                                   SAS: 1.2 M - 1.6 M h          SAS: 1.2 M h
                                       SATA: 300k - 500k h           SATA: > 1 M h
Performance                            2.5“ 15k: < 10% better performance compared to 3.5“ 15k
Power consumption / heat dissipation   2.5“: < 50% of 3.5“ HDDs

Helpful hints

HDD LEDs (1)
• HD dead
  • remove/change the drive
• HD rebuild
  • wait until the rebuild is finished
• HD normal access
  • DO NOT remove/replace the HD
  • For RAID testing purposes – set the drive offline first

HDD LEDs (2)
• HD Rebuild
  • After HD replacement, wait at least 2 minutes for the rebuild process to start automatically.
  • If the rebuild does not start, pull the disk, wait another minute and insert it again.
  • If the rebuild still does not start, start a “manual rebuild” from WebBIOS or ServerView RAID.

WebBIOS – VDisk / Controller settings

Virtual Drive – Access Policy
• Access Policy:
  • RW: Allow read/write access. This is the default.
  • Read Only: Allow read-only access.
  • Blocked: Do not allow access.

Virtual Drive – Read Cache
• Normal:
  • This disables the read-ahead capability. This is the default.
• Ahead:
  • Read sequentially ahead of the requested data. This speeds up reads for sequential data, but there is little improvement when accessing random data.
• Adaptive:
  • The controller begins using read-ahead if the two most recent drive accesses occurred in sequential sectors. If the read requests are random, the controller returns to Normal (no read-ahead). See the sketch below.

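The Adaptive rule can be pictured with a small sketch (assuming the simplified test stated above – prefetch only while the two most recent accesses were to sequential sectors; the class and method names are hypothetical, not the controller firmware's interface):

# Hedged sketch of an adaptive read-ahead decision.
class AdaptiveReadAhead:
    def __init__(self) -> None:
        self.next_expected_lba = None   # LBA that would continue the last access

    def on_read(self, lba: int, blocks: int) -> bool:
        """Return True if this access continues the previous one sequentially."""
        sequential = self.next_expected_lba is not None and lba == self.next_expected_lba
        self.next_expected_lba = lba + blocks
        return sequential               # True -> controller may read ahead
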
Virtual Drive – Write Cache
• Write back
  • Only enable with BBU support
• Write through
  • Does not use the write cache
  • Big impact on parity RAID performance
• Wrthru for Bad BBU:
  • If you choose this option, the controller firmware automatically switches to “Write Through” mode if it detects a bad or missing BBU (see the sketch below).
• BBU and Cache
  • The BBU must be able to support the cache over a weekend
  • A larger cache boosts parity RAID performance if using the write-back policy

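The “Wrthru for Bad BBU” behaviour amounts to a simple policy fallback, sketched below (the enum and function names are illustrative, not the controller's actual firmware interface):

# Hedged sketch: write back is only honoured while a healthy BBU protects the cache.
from enum import Enum

class WritePolicy(Enum):
    WRITE_BACK = "write back"
    WRITE_THROUGH = "write through"

def effective_policy(configured: WritePolicy,
                     wrthru_for_bad_bbu: bool,
                     bbu_healthy: bool) -> WritePolicy:
    if configured is WritePolicy.WRITE_BACK and wrthru_for_bad_bbu and not bbu_healthy:
        return WritePolicy.WRITE_THROUGH
    return configured

print(effective_policy(WritePolicy.WRITE_BACK, True, False))   # falls back to write through
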
Virtual Drive – I/O Policy
• The I/O Policy applies to reads on a specific virtual drive. It does not affect the read-ahead cache.
• Direct: In direct I/O mode, reads are not buffered in cache memory.
• Cached: In cached I/O mode, all reads are buffered in cache memory.

Virtual Drive – Disk Cache
• Disk Cache Policy:
  • Enable
  • Disable
  • Unchanged: Leave the current drive cache policy unchanged.
• The disk cache is not BBU protected

Virtual Drive - properties
• Disable BGI: Specify the background initialization status:
  • No: Leave background initialization enabled. This is the default.
  • Yes: Select Yes if you do not want to allow background initializations for configurations.

Virtual Drive - BGI

[Diagram: effect of the Disable BGI setting – Disable BGI = NO: a background initialization (BGI) runs and data is kept; Disable BGI = YES: nothing runs (or a Fast Init) and data is kept; a Slow Init remains available as a manual VDisk operation]
• Best practice (1-3): Disable BGI = no and select YES for a complete init – this overwrites every block
• If you have problems with the file system on your RAID, select Disable BGI = YES and then NO, and select Slow Init manually. This really deletes all data on the HDD.

Size
• Multiple virtual disks on the same physical disks are possible.
• Auto-sizing of the virtual disk for the suggested RAID level.
• Always check the size of the virtual disk before proceeding.

LSI Modular RAID

Overview
• Current RAID solutions
  • Embedded SATA RAID
  • Modular RAID RAID01
  • Modular RAID 5/6

[Pictures: Embedded SATA RAID solution (optional RAID 5 iButton), Modular RAID IME solution, Modular RAID 5/6 solution]

Embedded SATA RAID solution

Codename: Embedded Southbridge (ESB2) / ICHxR, SATA II
Modular RAID: Embedded SATA RAID solution
Supported RAID levels + facts: 0, 1, 10 (RAID 5 with iButton only); max. 4/6 HDDs

Modular RAID IME - 5/6
Code name: Lynx/4 – RAID01 (LSI 1064e based), D2507-A1
  • Supported RAID levels: 0, 1, 1E
  • max. 4 HDs + 2 hot-spare, max. 2 RAID arrays

Code name: Lynx/8 – RAID01 (LSI 1068e based), D2507-B1
  • Internal connectors: Tape (SAS)
  • Supported RAID levels: 0, 1, 1E
  • max. 8 HDs + 2 hot-spare, max. 2 RAID arrays

Code name: Cougar – Modular RAID 5/6 (LSI 1078 based), D2516-A1 (256 MB) / D2516-B1 (512 MB)
  • Supported RAID levels: 0, 1, 10, 5, 6, 50, 60
  • 8 HDs w/o expander mode, max. 128 arrays, expander support
  • 256 MB or 512 MB cache, BBU optional

Overview used RAID controllers

PRIMERGY   SATA RAID (embedded MegaRAID)                   Entry RAID (Modular 0/1)   High-end RAID (Modular 5/6)
RX100S5    X onboard (add. PCI card no longer an option)   X (1064e)                  ---
RX200S5    X                                               X (1064e)                  X (1078)
RX300S5    --- (SATA onboard only for DVD)                 X (1068e)                  X (1078)
TX150S6    X                                               X (1068e)                  X (1078)
TX200S5    X                                               X (1068e)                  X (1078)
TX300S5    --- (SATA onboard only for DVD)                 X (1068e)                  X (1078)

Battery Backup Unit
iBBU option
• iBBU is mounted with an adapter plate in the PRIMERGY
• Recalibration every 30 days by default (message log)
• Supports the cache for 72 hours

• 2nd BBU for a 2nd controller

SAS cabling 3.5” - Tower

[Diagram: SAS cabling for the 3.5” tower servers TX150 S6, TX200 S5 and TX300 S5 – basic HD configurations using Lynx/8 or Cougar, tape configurations using Lynx/4 or SCSI or USB, connected to the standard system backplane or the optional HDD expansion box backplane]

Lynx/4 = Modular 0/1, Lynx/8 = Modular 0/1, Cougar = Modular RAID 5/6

SAS cabling 2.5”
[Diagram: SAS cabling for the 2.5” servers TX150 S6, TX200 S5 and TX300 S5 – configurations using Lynx/8 or Cougar behind an expander, tape via Lynx/4 or SCSI or USB, and an expansion box configuration with Cougar / Lynx 8 (no tape possible), connected to the standard system backplane or the optional expansion box backplane]

Lynx/4 = Modular 0/1, Lynx/8 = Modular 0/1, Cougar = Modular RAID 5/6

LSI Controllers for external devices

LSI Controllers for external devices

LSI MegaRAID 8344ELP (Intel® IOP333 I/O based) – RAID
  • RAID levels 0, 1, 10, 5, 50
  • FibreCAT SX40 only
  • 256 MB cache + optional BBU
  • Connector: SFF8470

LSI MegaRAID 8880EM2 (LSI 1078 based) – RAID
  • RAID levels 0, 1, 10, 5, 6, 50, 60
  • FibreCAT SX40 and SX650 only
  • 512 MB cache + optional BBU
  • Connectors: SFF8088 on the controller, cabled to SFF8470 on the SX40 / SX650

LSI SAS 3442E-R (LSI 1068e based) – Tape
  • No RAID
  • External tape only: TX48 S2 with LTO3 and LTO4, TX24 S2 with LTO3 and LTO4
  • Use of the internal SAS connector is not released
  • Connector: SFF8470

Overview - Usage

PRIMERGY       LSI MegaRAID 8344ELP   LSI MegaRAID 8880EM2   LSI SAS 3442E-R
Econel230 S1   -                      -                      x
RX100S5        -                      -                      x
RX200S4        X                      X                      x
RX330S1        X                      -                      -
RX300S4        X                      X                      x
TX150S6        X                      X                      -
TX200S4        X                      X                      -
TX300S4        X                      X                      x

Tape Configurations
Parallel SCSI
  • Tape devices: DDS Gen6, Entry LTO, LTO3HH (half height), LTO4FH (full height)
  • Controller: Adaptec ASC-29320LPE
    • PCI Express x1
    • Released for all PRIMERGYs
    • Single-channel Ultra 320 SCSI HBA
    • Internal or external drives allowed

USB
  • Tape devices: DDS Gen5, DDS Gen6, RDX
  • Controller: drives connected to the USB connector on the motherboard

SAS
  • Tape devices: LTO4HH, LTO4FH
  • Controller: Modular IME - D2507-A1 (Lynx 4) or LSI SAS3442E-R (SAS HBA)

FibreCAT SX40

Quality and Availability
• The FibreCAT SX40 offers:
  • Hot-plug SAS hard disks in the SX40 HDD carrier
  • An I/O module with 2 SAS x4 wide ports (12 GBit, SAS IN and SAS OUT)
  • Integration in PRIMERGY server management and RAID management software

FibreCAT SX40 - Facts
• PRIMERGY SX40 storage subsystem
• “Just a Bunch of Disks” for easy hard disk expansion
• 2 SAS x4 wide ports (SFF 8470 connector)
• Max. 12 x 3.5-inch hot-plug SAS hard disks (up to 36 disks with 3 SX40)
• Measures just 2 height units in the rack
• LED status display for monitoring

FibreCAT® SX40 Mechanical Layout

[Diagram: FibreCAT SX40 mechanical layout – power supply, I/O module, midplane]

Mounting and Cabling Rules
• SAS cabling
  • Mount server and JBOD close together in the rack
  • Max. cable length
    • CTRL – JBOD: 2 m
    • JBOD – JBOD: 0.5 m; 2 m
• Power cabling
  • Connect each server power supply and JBOD to the same phase

Exercises

• Ex01: BIOS Configuration
• Ex02: Configure LSI Embedded SATA (dependent on classroom)
• Ex03: Configure LSI SAS IME
• Ex04: Configure LSI WebBIOS

Exercise 01/02/03/04
• BIOS Configuration
  • Spend some time in the BIOS familiarizing yourself with the additional features offered by Fujitsu server BIOSes
• Configure LSI Embedded SATA
  • Use the SATA RAID BIOS to configure a RAID 1 and initialize it.
• Configure LSI SAS IME
  • Configure a RAID array using the SAS IME BIOS of an LSI 1068e Modular RAID controller.
• Configure LSI WebBIOS
  • Configure multiple RAID arrays using the WebBIOS of an LSI 1078e Modular RAID controller.

