Scale-out Storage with EMC* Atmos*

AUDIENCE AND PURPOSE

For companies who are looking to build their own cloud computing infrastructure, including both enterprise IT organizations and Cloud Service Providers or Cloud Hosting Providers, the knowledge and experience gained from previous work will help facilitate the decision to use a cloud for delivery of IT services. This reference architecture gathers into one place the essentials of a scale-out storage cloud architecture based on EMC* Atmos* cloud-optimized storage. This reference architecture, based on Intel® Xeon® servers, creates a multi-site, capacity-optimized cloud storage deployment. This paper contains details on: the cloud topology, hardware and software deployed, installation and configuration steps, and tests for real-world use cases that should significantly reduce the learning curve as you build and operate your first cloud infrastructure.

The creation and operation of cloud storage requires significant integration and customization based on existing IT infrastructure and business requirements. As a result, we do not expect that the configurations described in this paper can be used as-is. For example, adaptation to an existing network and identity management requirements are out of scope for this paper. Therefore, we anticipate that the users of this paper will make significant adjustments to the design we present in order to meet specific requirements.

This paper also assumes that the reader has basic knowledge of cloud storage infrastructure components and services.
Intel® Cloud Builders Guide
Intel® Xeon® Processor-based Servers
Scale-out Storage with EMC* Atmos*
Intel® Xeon® Processor 5500 Series
Intel® Xeon® Processor 5600 Series
Intel® Cloud Builders Guide: Cloud Design
and Deployment on Intel® Platforms
Table of Contents
Executive Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Scale-out Storage Usage Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Backup/Archive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Large Object Store . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Large-Scale Data Warehousing and Analytics Store . . . . . . . . . . . . . . . . . . . . . . . . . 4
Usage Model Overview in this Paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
EMC Atmos Scale-out Storage Cloud Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
EMC Atmos Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Test Bed Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
EMC Atmos Rack Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
EMC Atmos Installation and Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
EMC Atmos Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
EMC Atmos Scale-out Storage Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Linux RSync Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
EMC File Management Appliance (FMA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Configuration of Virtual Celerra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Configuration of Atmos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Configuration of FMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Running Archive Job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Symantec NetBackup Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Symantec NetBackup Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
NetBackup Installation Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
NetBackup Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Microsoft* SQL Server* Backup Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
EMC Atmos CIFS Subtenant Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
MS SQL Server Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Verify MS SQL Server Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Things to Consider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Networking Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Performance of the Metadata Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Application Processing on the Data Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Caching of Data at the Compute Server Access . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Power Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Next Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Intel® Cloud Builders Guide for Scale-out Storage with EMC* Atmos*
search indexes and to preview content, social networking information, videos, photos, thumbnails, and standard office documents (for example, pdf, Word*, PowerPoint*). Optimization of storage to achieve the best efficiency is especially significant for portal applications like YouTube*, Flickr*, and Facebook* that offer free storage of consumer video and photo content. Enterprises strive to achieve a similar efficiency as they deploy scale-out storage as repositories for data and documents stored in applications like Microsoft SharePoint* and EMC* Documentum*.²
This paper defines a scale-out storage cloud reference architecture based on EMC Atmos. The reference architecture highlights EMC Atmos³ deployed as a single private cloud storage across the Intel® Folsom and Portland cloud labs. In addition, a number of real-world use cases are validated to highlight the features of the EMC Atmos implementation.
Scale-out Storage Usage Models
Backup/Archive
As consumer and enterprise data storage needs have risen, a low-cost and high-throughput means to back up and retrieve computer files has become critical. Many cloud storage Software as a Service (SaaS) solutions exist to support consumer PC, smart phone, and tablet backup. These SaaS solutions also support PC and server backup for small and medium business. Enterprises typically deploy backups as a privately managed service which directly connects to private storage based SAN or NAS, or directly to tape. To achieve better efficiency, enterprises now move to replace these solutions with scale-out storage cloud architecture.
IT typically builds backup systems around a client software program that runs on a schedule, typically once a day. For higher reliability, you can also implement backups as continuous data protection. The backup program collects, compresses, encrypts,
Executive Summary
This paper describes the architecture and implementation details of a small scale-out storage (SOS) cloud solution built jointly by Intel and EMC to demonstrate a private cloud storage deployment across two geographically separate data centers. We built the scale-out storage cloud solution with the EMC Atmos cloud-optimized storage solution with Dell* servers based on Intel® Xeon® technology. In addition to the standard functionality expected from a cloud storage solution, such as local and remote storage access interfaces, the dynamic provisioning of storage and storage users, and low power / high capacity storage scaling to petabytes (PBs) and beyond, the solution offers a complete set of tools to automate management of the cloud storage. These tools include a high availability metadata service that supports user-defined policies and self-management of storage placement, reliability, compression, and deduplication. This paper illustrates the cloud storage functionality built and tested over a two-month period with several real-world use cases, such as server backup and automated file archive.
Introduction
The emergence of cloud computing and the explosion of digital content has driven the development of capacity-based, scale-out storage cloud architecture. Figure 1 shows one estimate of the projected growth of digital content that all devices will generate over the next 10 years. The expectation is that digital content growth will approximately double every year. One element that enables this rapid growth is a reduction in the investment per gigabyte to store the data. As much as 15% of the information in the Digital Universe in 2020 could be part of a cloud service.¹ As a result, cloud storage investment needs to decrease at the same rate to make the storage efficiently support the massive projected growth rate.
To achieve the best efficiency, internet portals that optimize for both cloud computing and digital content have created cloud storage architectures based on industry standard x86 servers with directly attached disks. This implementation of storage is commonly referred to as scale-out storage. Portal applications utilize scale-out storage for a number of purposes such as for
Figure 1: Digital Content Creation Estimates
and transfers the data to the cloud storage: a private cloud, a SaaS cloud storage, or first to a private cloud and then to the SaaS cloud.
Large Object Store
The biggest growth of data in the last ten years has been semi-structured large objects. Photos, videos, thumbnails, and documents (for example, pdf, Word, PowerPoint) are the most prevalent examples. The challenge is especially significant for portal applications like YouTube, Flickr, and Facebook that store consumer video and photo content. Enterprises have a similar requirement to store documents with applications like Microsoft SharePoint and EMC Documentum.
Large-Scale Data Warehousing and Analytics Store
Enterprises can analyze data collected as part of their business operations through online databases (business to business and business to cloud), log files, sensors, and general documents. We typically refer to this collection of data as a data warehouse. Scale-out storage is an optimal architecture to both cost effectively store the data, and at the same time make it available for data analytics. The Apache* Hadoop⁴ application framework is commonly used on a scale-out storage cloud to perform the data transformation necessary to create the analytics database.
Usage Model Overview in this Paper
This paper only focuses on configuration and testing of backup and archive usage models. Four applications were selected based on discussions with Intel IT on use of a scale-out storage cloud for backup and archiving. The four applications tested are described in the EMC Atmos scale-out storage tests section.
[Figure 2 diagram: an Atmos*-powered cloud service hosting multiple tenants on a hardware/software/access stack of storage services, metadata/RM services, and client object services, exposed through web services (REST, SOAP) and file system (NFS, CIFS, IFS) interfaces; usage models shown are backup/archive, object store (e.g., video, picture, document), analytics/data warehouse, and compute cloud capacity tier.]
[Figure 3 diagram: the test bed. Each site has an EMC* Atmos* rack of four Dell* R610 servers (24 GB each) with four 15 x 2 TB SATA enclosures, connected by GbE through a Cisco Nexus* 5020 extender and a Cisco* 2921 router. The Intel® Folsom Cloud Lab and Intel® Oregon Cloud Lab racks replicate over the Intel® WAN (L3 P2P IPSec VPN) to form a single scale-out storage cloud with a federated copy between the Folsom and Oregon data centers; clients (web application file share, EMC* FMA, Symantec* NetBackup, application backup) access the cloud via REST HTTP, IFS, NFS, and CIFS over TCP/IP.]
EMC Atmos Scale-out Storage Cloud Architecture
EMC Atmos Overview
EMC designed EMC Atmos with features that support massive scalability (see Figure 2). EMC Atmos provides a platform that is designed to support multiple "tenants," and integrates an extremely powerful policy engine and a variety of software features to deliver a multi-use storage platform that can meet the demands of efficient, wide scale storage sought in cloud-based architectures. Atmos features include GeoProtect,⁵ which is an intelligent object level protection scheme that utilizes object replication and erasure coding. With both
Figure 2: EMC Atmos Scale-out Storage Cloud
Figure 3: EMC Atmos Deployment in Intel Cloud Labs
eight servers and hard disk enclosures in a WS2-120 configuration).
• Each of the four servers has one GbE external NIC which is connected to a Cisco Nexus* 5020 switch. Server NICs can be doubled or upgraded to 10 GbE if needed.
• Each of the four servers has an internal private GbE NIC connected to a top-of-rack switch that is used for PXE boot during the initial Atmos configuration.
• Each server has a x4 SAS 6 Gb/s connection to one EMC disk enclosure.
• Each EMC disk enclosure contains 15 EMC* SATA II 5.4K, 2 TB hard disk drives for a total of 30 TB of raw storage per hard disk enclosure.
Each server is a Dell* PowerEdge* R610 Rack Server with the following configuration:
• Dual socket Intel® 5200 Chipset platform
• Two Intel® Xeon® Processor E5504
• 24 GB 1066 DDR3 Memory
• x86-64 Red Hat* Linux (Kernel: 2.6.29.6-4.2.smp.gcc4.1.x86_64)
• EMC* Atmos* Appliance 1.3.2.52930
EMC Atmos Installation and Validation
EMC Atmos Software Installation
We first installed the EMC Atmos software on the master server in the rack with the EMC Atmos Appliance CD through the keyboard, visual-display unit, mouse (KVM) interface. We then powered on each of the other three servers one at a time. Each server PXE booted from the master server over the private network and then copied the master server Atmos installation to their local boot disk.
and de-duplication. With this set of robust functionality, EMC packages the efficiency achieved by large Internet companies into a supported and validated product.
Atmos achieves scaling efficiency because it separates storage metadata from storage objects/data. A scale-out storage architecture has three major components: storage client, storage data node, and metadata store. This architecture enables optimal deployment of Atmos on standard server and storage hardware components (e.g. x86 processors, gigabit Ethernet (GbE) and 10 GbE network interface cards (NICs), hard disk drives, solid state drives, host bus adapters, etc.) that use the standard Linux* operating system. Use of standard server and software components are key tenets to meeting the stated cloud storage requirements.
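The metadata/data split described above can be pictured with a toy lookup: a client hashes nothing itself in Atmos, but conceptually the metadata service answers "which data node holds this object" without touching the data path. The sketch below is purely illustrative (real Atmos placement is policy-driven, not a fixed hash), and the object ID and node count are invented:

```shell
#!/bin/sh
# Toy metadata lookup: map an object ID to one of four data nodes.
# Illustrative only -- Atmos placement is driven by policies, not a
# fixed hash; "photo-0001.jpg" and the node count are invented here.
oid="photo-0001.jpg"
nodes=4
# cksum gives a deterministic CRC we can use as a stand-in hash.
sum=$(printf '%s' "$oid" | cksum | awk '{print $1}')
node=$(( sum % nodes + 1 ))
echo "object $oid -> data node $node"
```

The point of the split is that this lookup is cheap and separate from bulk data transfer, which is what lets the data nodes scale out independently of the metadata store.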
Test Bed Architecture
For the purpose of this reference architecture, EMC and Intel deployed Atmos as two four-node racks: one in the Intel Folsom, California cloud lab and one in the Intel Hillsboro, Oregon cloud lab, as shown in Figure 3. We configured the Atmos systems to be a single scale-out storage cloud connected by the Intel® WAN. We configured the Atmos system with a policy that automatically replicated data stored at either location to the other Atmos rack over the Intel WAN.
EMC Atmos Rack Configuration
We based the racks on the EMC Atmos WS2-120 configuration (see Figure 4). For each deployed rack:
• Each rack has four servers and four hard disk enclosures (half the normal
protection schemes available, the user can integrate the correct EMC Atmos GeoProtect scheme through the policy engine, which enables a business-level policy approach to run on any standard x86 server platform. EMC has integrated Atmos into racks using Dell* R610,⁶ Intel® Xeon® servers and EMC's low-cost, high-density disk enclosures and SATA hard disk drives. Atmos addresses the management complexity issues presented by a massively scalable, geographic footprint with its globally distributed architecture and unified namespace. Atmos is typically deployed as a multi-petabyte, multi-site, scale-out storage cloud platform for information storage and distribution across multiple physical storage nodes. As a scale-out storage cloud, Atmos alleviates limitations of traditional network attached storage (NAS) or storage area network (SAN) storage platforms by managing content as objects in a virtually unlimited namespace that can span geographies. Atmos software allows this distributed system to be managed as a single entity. In addition, management is simplified and automated with integrated policy-based data services that handle data protection, placement, drive spin-down, compression
Figure 4: EMC WS2-120    Figure 5: Metadata to Data Storage Config
EMC Atmos Hardware and Software Validation
We executed a test plan to validate hardware and software. The test plan includes validation of:
• metadata database
• services as running and healthy
• installation logs
• Atmos cloud through read and write sample objects
• policies as we viewed the location of object creation
EMC Atmos Configuration
EMC Atmos configuration occurs primarily through a Microsoft* Internet Explorer* version 7 browser. We configured both Atmos racks (i.e. the Atmos scale-out storage cloud) through a single connection and session.
Resource Management Group Configuration
The EMC Atmos resource management group (RMG) defines the racks in a single data center location. We defined a resource management group for the EMC Atmos racks in the Intel Folsom and Hillsboro labs.
1. Launched Microsoft IE with the URL: https://192.168.155.201/mgmt_login. The MasterServerIP address in our cloud lab was 192.168.155.201. We installed the EMC Atmos cloud on a dedicated VLAN.
2. Logged in as "SecurityAdmin." Selected "Initiate a new system" and changed the default "SecurityAdmin" password.
3. Created a resource management group (RMG) for the Intel Folsom cloud lab.
   Atmos RMG Name: FolsomCloudLab
   Atmos Location Name: Folsom
   Host Name Prefix: folsom1
4. Configured the installation segments.
   Segment name: FolsomCloudLab1-IS-1
   IP range: 192.168.155.201-192.168.155.204
5. Created a resource management group (RMG) for the Intel Oregon cloud lab.
   Atmos RMG Name: OregonCloudLab
   Atmos Location Name: Oregon
   Host Name Prefix: oregon1
6. Clicked "Next." Application configuration setup for metadata versus data. Set the metadata to data ratio to 1:14 (see Figure 5).
7. Configured the installation segments.
   Segment name: OregonCloudLab1-IS-1
   IP range: 192.168.32.200-192.168.32.203
The total storage amount used for the metadata is 8 TB. The rest of the storage (112 TB) is used to store data. If the data is stored through the use of a policy with one local object and one remote object (2 objects in total, similar to a mirrored copy in a traditional storage system), then the total available storage is 56 TB.
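The capacity arithmetic above can be checked directly; the figures below come straight from the text (8 TB metadata, 112 TB data pool, two object copies per the replication policy):

```shell
#!/bin/sh
# Check the usable-capacity math for the test bed described above.
raw=120       # TB raw (the 8 TB metadata + 112 TB data pool from the text)
metadata=8    # TB reserved for the metadata store
data=$(( raw - metadata ))      # 112 TB left for object data
copies=2      # policy: one local object + one remote object
usable=$(( data / copies ))     # 56 TB of unique data capacity
echo "data: ${data} TB, usable: ${usable} TB"
```

Changing the policy to a single copy, or to an erasure-coded GeoProtect layout, changes only the divisor in this calculation.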
Tenant Configuration
EMC Atmos provides the tenant and subtenant features to enable policy and administrative partitioning of the cloud. We defined the two Atmos racks with a single tenant, "IntelCloudLab." We allocated the four data nodes (Dell R610 server and EMC hard disk enclosure with hard disk drives (HDDs)) in each location to the tenants. We configured the tenants through the use of policies. We executed the following commands to create the tenants.
1. Launched Microsoft IE with the URL https://192.168.155.201/mgmt_login.
2. Clicked "Create Tenant" in the navigation pane.
3. Configured the tenant.
   Authentication source: local
   Tenant name: IntelCloudLab
4. Clicked "Submit" button to create the tenant.
5. Defined the tenant admin as "SysAdmin."
6. Clicked "Add Access Nodes" to add access nodes. Added the following:
   folsom1-001 through folsom1-003, file system none
   oregon1-001 through oregon1-003, file system none
EMC Atmos Replication Configuration
We performed the following steps to configure the EMC Atmos replication.
1. Clicked "MDS Remote Replica" on the system dashboard to configure the metadata store service (MDS) to enable remote replication.
2. Selected "FolsomCloudLab-IS-1" as "installation segment 1" and "OregonCloudLab-IS-1" as "installation segment 2" to enable replication of data across the EMC Atmos nodes in the Folsom and Oregon cloud labs.
3. Added EMC Atmos "NetBackup" policy. Set Replica1 and Replica2 to the following values:
   Type: Sync
   Location: SameAs, $client
   Server Attributes: Optimal, Compression
EMC Atmos Linux Installable File System Installation and Configuration
The EMC Atmos installable file system (IFS), which uses the Linux file system in user space (FUSE⁷), enables high performance and reliable direct file system access on a Linux server to the EMC Atmos cloud storage. We used the following sequence of commands to set up EMC Atmos IFS on two cloud lab Intel Xeon servers running Red Hat Linux 5.5, one server in the Intel Folsom cloud lab and one in the Intel Oregon cloud lab.
1. Installed⁸ the FUSE Red Hat Package Manager (RPM) package retrieved from SourceForge: fuse-2.7.4.tar.gz.⁹
2. Installed "Atmos-1.3.2.52930.x86_64.rpm" by executing "rpm -hiv Atmos-1.3.2.52930.x86_64.rpm" on the Red Hat servers.
3. Configured the EMC Atmos file system by executing "service mauifs configure" on the Red Hat servers.
4. Entered "Folsom" and "Oregon" as respective locations and "IntelCloudLab" as tenant id and subtenant id.
5. Entered the four EMC Atmos node IP addresses for Folsom and Oregon respectively.
6. Disabled NFS exporting. Only used the EMC Atmos IFS on the servers.
7. Started EMC Atmos IFS by executing "service mauifs start."
8. Created four directory structures to enable Atmos to distribute files uniformly across the four Atmos nodes.
   cd /mnt/mauifs
   mkdir NetBackup1
   mkdir NetBackup2
   mkdir NetBackup3
   mkdir NetBackup4
9. Tested the EMC Atmos IFS by executing the following:
   cd NetBackup1
   touch testFile
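Step 8's four directories let Atmos spread objects across its four nodes, so a backup client needs to spread its files across those directories. A minimal round-robin sketch follows; the /tmp paths and .bak file names are local stand-ins for the real /mnt/mauifs mount and real backup files:

```shell
#!/bin/sh
# Round-robin copy across the four NetBackup directories from step 8.
# /tmp/mauifs-demo is a local stand-in for the real /mnt/mauifs mount,
# and the .bak files are invented placeholders for backup data.
DEST=/tmp/mauifs-demo
i=1
for f in a.bak b.bak c.bak d.bak e.bak; do
    : > "/tmp/$f"                    # create a stand-in backup file
    mkdir -p "$DEST/NetBackup$i"
    cp "/tmp/$f" "$DEST/NetBackup$i/"
    i=$(( i % 4 + 1 ))               # cycle 1, 2, 3, 4, 1, ...
done
ls "$DEST"/NetBackup*/
```

With five files, the fifth (e.bak) wraps around into NetBackup1, keeping the directories (and hence the Atmos nodes behind them) evenly loaded.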
EMC Atmos Scale-out Storage Tests
Linux RSync Test
Linux* Rsync¹⁰ is an open source incremental file transfer utility that synchronizes two directory trees across different file systems that can be located on different computers and locations. We used Rsync as a simple test of the EMC Atmos scale-out storage cloud. The test was to back up and restore a Linux user's home directory contents with the following sequence of commands.
1. Executed "mkdir /mnt/mauifs/NetBackup1/ncoskun" on the Linux server where the Atmos FUSE installation was done. This created a backup directory in the Atmos cloud storage.
2. Executed "rsync -r ncoskun@pdx123.intel.com:/nfs/pdx/home/ncoskun/NetData /mnt/mauifs/NetBackup1/ncoskun" to back up the home directory. The command replicated the user directory to the Atmos cloud storage.
3. Executed "rsync -r /mnt/mauifs/NetBackup1/ncoskun ncoskun@pdx123.intel.com:/nfs/pdx/home/ncoskun/NetData" to validate that the backup could be retrieved (see Figure 6).
[Figure 7: EMC FMA Test Setup (diagram): a virtual EMC* Celerra* (AtmosVSA, exporting the CIFS shares /AtmosShare and /OregonVSA) and the EMC* File Management Appliance VE (FMAFILESERVER) run on VMware* vSphere* on an Intel® Xeon® processor-based server; the FMA connects to the EMC* Atmos* scale-out storage cloud over GbE through the REST/HTTP interface.]
Figure 6: Rsync Results
EMC File Management Appliance (FMA)
We used the EMC File Management Appliance (FMA)¹¹ to archive inactive data from a NAS source to the Atmos scale-out storage cloud. A small (8KB) stub file is left behind on the NAS source, which the user sees as the actual data file, but which actually points to the archived data. When an end user or application attempts to access an archived file in its original NAS location, EMC FMA transparently recalls and presents the requested file. FMA data archival is fully automated through user-defined policies and scheduling.
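The selection such an archive policy automates can be pictured with a simple access-time scan ("archive what has not been read for N days"). This is only an illustration of the idea, not FMA's implementation; /tmp/nas-demo stands in for the Celerra share and the file names are invented:

```shell
#!/bin/sh
# Sketch of what an atime-based archive policy selects.
# /tmp/nas-demo stands in for the NAS source; file names are invented.
NAS=/tmp/nas-demo
mkdir -p "$NAS"
: > "$NAS/cold.doc"; : > "$NAS/fresh.doc"
# Pretend cold.doc was last read in January 2020.
touch -a -t 202001010000 "$NAS/cold.doc"
# Select files not accessed for more than 180 days.
find "$NAS" -type f -atime +180
```

FMA layers scheduling, stubbing, and transparent recall on top of this kind of selection.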
File tiering or archiving lowers the total cost of ownership of NAS storage because it moves inactive or infrequently accessed data to the Atmos scale-out storage cloud, which thus reclaims capacity on more costly primary storage. We tested the file tiering usage model with the EMC File Management Appliance/VE (FMA/VE), a VMware virtual appliance. FMA/VE was set up to migrate data from a virtual EMC* Celerra* as shown in Figure 7 with a CIFS export. FMA/VE connects directly to the EMC Atmos cloud storage through the REST/HTTP interface.
Configuration of Virtual Celerra
We performed the following steps to configure the EMC virtual Celerra.
1. Imported the virtual Celerra .OVA file into VMware vSphere*. Installed the virtual Celerra with pre-configured hardware settings.
2. Logged into the virtual Celerra console and launched the configuration wizard.
3. Configured the following IP/Subnet/Gateway details:
   Hostname: AtmosVSA
   IP address: 192.168.32.212
   DNS: 192.168.32.211
   Domain name: oregon.atmos.com
4. a. Launched the virtual Celerra graphical user interface (GUI).
   b. Expanded Filesystems.
   c. Selected Datamover.
   d. Selected Server_2.
5. Clicked New.
   Filesystem name: OregonVSA
   Radio button: Storage pool
   Storage space: 5 GB
   Datamover: Server_2
6. a. Clicked Network.
   b. Selected New Network.
   Ethernet adapter: eth0
   IP address
7. Clicked CIFS folder.
8. Clicked CIFS Servers tab.
9. a. Clicked New.
   b. Selected Server_2.
   c. Selected Windows 2008.
   Hostname: FMAFILESERVER
   Domain name: oregon.atmos.com
10. a. Clicked the Configuration tab.
    b. Selected Unicode enabled.
    c. Started the CIFS service.
    d. Provided a domain admin username.
    e. Provided a password.
11. Selected the CIFS server checkbox for the AtmosVSA IP address.
12. Selected the CIFS shares tab.
13. Clicked New share. See Figure 8.
    Datamover: Server_2
    Name: AtmosShare
    File system: OregonVSA
    Path: /OregonVSA
    CIFS server: FMAFILESERVER
Figure 8: New Celerra Share
14. Logged in to the Celerra appliance using secure shell (SSH).
15. Started the dhsm api service with the following commands:
    #server_http server_2 -service dhsm -start
    #server_http server_2 -append dhsm -hosts 192.168.32.214
    #fs_dhsm -c OregonVSA -d 0 -recall_policy no
Configuration of Atmos
Configuration of Nodes
1. Logged into the Atmos management console as SysAdmin: https://oregon1-001/mgmt_login.
2. Clicked Edit on IntelCloudLab. Clicked "Add."
3. Selected "oregon1-001."
4. Clicked the Webservice radio button.
Configuration of Credentials
1. Logged into the Atmos tenant page: https://oregon1-001/.
   Tenant name: IntelCloudLab
   Password: SysAdmin
2. Clicked Edit on the IntelCloudLab user.
3. Clicked Add under UID.
   UID: FMA
4. Noted the shared secret string and subtenant ID generated by the system. See Figure 9.
Ccnf|guraI|cn cf FHA
1. lmpcrIed Ihe FNA virIual ediIicn .0VA !ile inIc VNware vSphere. lnsIalled Ihe FNA VE wiIh Ihe pre·ccn!igured hardware seIIings.
Z. Lcgged inIc Ihe FNA VE ccnscle and launched Ihe ccn!iguraIicn wizard.
0aIe. currenI daIe
Time. currenI Iime
HcsIname. FNAFlLESERVER
lP address. 19Z.16B.3Z.Z13
0NS seIIings. 19Z.16B.3Z.Z11
0cmain name. 0regcn.aImcs.ccm
Time ccnñguraIicn. 0cmain ccnIrcller NTP server.
3. Typed ¨l¨ reIurn Ic exiI Ihe seIup wizard.
10
InIe|' C|cud ßu||ders ûu|de fcr Sca|e·cuI SIcrage w|Ih EHC* AImcs*
F|gure 9: CcnñguraI|cn cf AImcs CredenI|a|s
9. Clicked Ccmm|I and exiIed Ihe ccn!iguraIicn windcw.
10. Clicked F||emcver seII|ngs.
Username. dhsm
Passwcrd. nasadmin
11. Clicked Ccmm|I and Ex|I.
4. Typed Ihe !cllcwing ccmmands Ic sIarI Ihe AImcs service and exiIed cuI c! ccmmand line.
#aImcscallback sIarI
#exiI
5. Lcgged inIc FNA CUl. hIIp.//·!maApplaincelP`/lcgin.
6. Clicked Ihe Ccnf|guraI|cn Iab.
7. Clicked New f||eservers under Server ccn!iguraIicn.
B. SelecIed New server and Ce|erra !rcm Ihe drcp dcwn lisI. See Figure 10.
lP address. 19Z.16B.3Z.Z13
neIbics name. FNAFlLESERVER
0ART versicn. 5.6
CcnIrcl sIaIicn. Celerra 0S lP
0cmain name. 0regcn.aImcs.ccm
Admin passwcrd. nasadmin
Scurce Iick bcx. Celerra
AImcs call back agenI. !m.cregcn.aImcs.ccm
11
InIe|' C|cud ßu||ders ûu|de fcr Sca|e·cuI SIcrage w|Ih EHC* AImcs*
F|gure 10: CcnñguraI|cn cf V|rIua| Ce|erra fcr FHA
13. Clicked Ver|fy Ic check IhaI Ihe end·Ic·end ccnnecIicn cn Ihe !ileserver is wcrking and validaIed CcnnecI|cn successfu|
message.
14. Clicked Ihe Pc||cy Iab and click CreaIe new µc||cy.
Name. IesI
Pclicy Iype. archive
ReIenIicn pericd. 0
0elayed pericd. 0
1Z. Clicked New f||eserver and selecIed AImcs under Ihe drcp dcwn menu. See Figure 11.
Name. cregcn1·001
0NA Name. cregcn1·001.cregcn.aImcs.ccm
PcrI. B0 HTTP
Username. ·aImcs_sIring\uid` sIring ccpied !rcm AImcs subIenanI su!ñxed by Ul0
Passwcrd. shared secreI sIring creaIed ccpied !rcm AImcs CUl
1Z
InIe|' C|cud ßu||ders ûu|de fcr Sca|e·cuI SIcrage w|Ih EHC* AImcs*
F|gure 11: CcnñguraI|cn cf AImcs F||e Server
15. Clicked Ihe Add ru|e Ic buIIcn. See Figure 1Z.
CreaIed a rule Ic mcve ñles greaIer Ihan BKB Ic Ihe archive.
Archive desIinaIicn. AImcs
Server. 0regcn1·001
16. Clicked Save and exiIed rules.
17. Clicked Save µc||cy and Schedu|e Ic exiIed pclicy Iab.
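The subtenant/UID string and shared secret configured for the fileserver are the same credentials Atmos REST clients use to sign requests. A minimal sketch of that signing step, modeled on the Atmos-style HMAC-SHA1 over a canonical string (the header layout is simplified and all values below are placeholders, not real credentials):

```python
# Sketch: sign an Atmos-style REST request with the shared secret. The
# signature is an HMAC-SHA1 over a canonical string and is sent in the
# x-emc-signature header; x-emc-uid carries <subtenantID>/<uid>.
import base64, hashlib, hmac

def sign_request(shared_secret_b64: str, string_to_sign: str) -> str:
    key = base64.b64decode(shared_secret_b64)          # Atmos secrets are base64
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")    # -> x-emc-signature

# Simplified canonical string: method, empty headers, date, resource, then
# the x-emc-* headers in sorted order.
string_to_sign = "\n".join([
    "GET", "", "", "Thu, 16 Dec 2010 10:00:00 GMT",
    "/rest/objects",
    "x-emc-date:Thu, 16 Dec 2010 10:00:00 GMT",
    "x-emc-uid:subtenantid/FMA",
])
print(sign_request(base64.b64encode(b"demo-secret").decode(), string_to_sign))
```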
Running Archive Job
1. Clicked the Schedule tab.
2. Clicked the orange icon next to policy "test."
3. Selected Run now. The file migration job starts.
4. Observed the green line showing the job was in progress.
5. Observed that the line turned to black, which indicated that the job had completed.
Figure 12: FMA File Matching Criteria to Rule
6. Clicked View summary and viewed the details. See Figure 13.
7. Checked that files were archived by opening a folder on the virtual Celerra. Archived files are replaced by an 8 KB FMA stub regardless of original size. See Figure 14.
Figure 13: FMA Task Summary
Figure 14: Archived Files on Virtual Celerra
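The matching rule behind the "test" policy is simple: any file larger than 8 KB is an archive candidate, and what remains on the Celerra afterward is an 8 KB stub. A stand-alone sketch of that selection logic (FMA applies it server-side when the job runs; the directory walk here is only an illustration):

```python
# Select files larger than 8 KB as archive candidates, mirroring the
# "greater than 8 KB" rule configured in the FMA policy above.
import os, tempfile
from pathlib import Path

STUB_SIZE = 8 * 1024  # archived files are left behind as 8 KB stubs

def archive_candidates(root, min_size=STUB_SIZE):
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > min_size:
                yield path

with tempfile.TemporaryDirectory() as root:
    Path(root, "small.txt").write_bytes(b"x" * 1024)       # stays on Celerra
    Path(root, "big.bin").write_bytes(b"x" * 64 * 1024)    # gets archived
    print([os.path.basename(p) for p in archive_candidates(root)])  # → ['big.bin']
```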
Symantec NetBackup Test
Symantec NetBackup* is a data protection solution for a variety of platforms including Microsoft Windows*, UNIX*, and Linux. You can use NetBackup to set up periodic or calendar-based schedules to perform automatic, unattended backups for clients across a network.
Symantec NetBackup Overview
NetBackup12 includes both the server and client software. Server software resides on the computer that manages the storage devices. Client software resides on the computer(s) that contain data to back up. Figure 15 shows NetBackup as we deployed and tested it with EMC Atmos in the Intel labs.
NetBackup accommodates multiple servers that work together under the administrative control of one NetBackup master server in the following ways:
• The master server manages backups, archives, and restores. The master server is responsible for media and device selection for NetBackup. The master server contains information about tested backups and configuration.
• The NetBackup media servers provide the connection to the EMC Atmos scale-out storage cloud through the EMC IFS interface (EMC Atmos Linux FUSE). During a backup or archive, the client sends backup data across the network to a NetBackup media server. During a restore, users can browse and select the files and directories to recover. NetBackup finds the selected files and directories and restores them to the disk on the client.
The NetBackup Administration Console provides a graphical user interface through which the administrator can manage NetBackup. The administrator uses the Administration Console to configure, manage, and monitor the storage devices, storage servers, disk pools, storage volumes, catalogs, policies, host properties, backups, and archives, as well as restore jobs, daemons, processes, and reports.
NetBackup was integrated with EMC Atmos through the Installable File System (IFS)13 interface. Red Hat 5.4 was installed on an Intel Xeon server. The server was configured with two NICs. One NIC was used to connect to the Intel network. The other NIC was connected to the Atmos servers.
NetBackup Installation Notes
NetBackup media server software manages the storage devices within the NetBackup environment. We installed the Linux NetBackup media server on the Red Hat Linux server that contained the Atmos IFS (/mnt/mauifs) installation.
Figure 15: Symantec NetBackup Deployment in Intel Cloud Labs (diagram; elements include the NetBackup master server, media agent, and backup agent; client SW and server SW on EMC* Celerra* with 120 GB of data and backup selection /fs_ebar_test on fm2ns001; the IFS mount /mnt/mauifs on fmsepiex01; GbE links and NDMP; and the EMC* Atmos* scale-out storage cloud)
NetBackup Test
We used the following steps to test NetBackup against Atmos in the lab:
1. Created a NetBackup storage unit for the Atmos cloud storage. See Figure 16.
Figure 16
2. Created a NetBackup policy for the storage unit.
Policy name: Atmos_Backup_test
Policy type: NDMP
Policy storage: Atmos-STU
Client server hardware and operating system: NDMP, NDMP
Backup selection: /fs_ebar_test
3. Right-clicked "Atmos_fm2ns001_ndmp" and selected Manual Backup.
4. Selected Activity Monitor to view details. See Figure 17.
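Because the media server reaches Atmos through the FUSE mount (/mnt/mauifs), a useful pre-flight check before running a backup job is confirming that the mount is actually present and writable. A small sketch; the path and this helper are ours, not part of NetBackup:

```python
# Pre-flight check for the IFS/FUSE mount the media server writes through.
import os, tempfile

def mount_ready(path: str) -> bool:
    if not os.path.ismount(path):        # is the FUSE mount present?
        return False
    try:
        with tempfile.NamedTemporaryFile(dir=path):  # can we create a file?
            return True
    except OSError:
        return False

print(mount_ready("/mnt/mauifs"))  # True on the media server when IFS is up
```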
Microsoft SQL Server* Backup Test
The EMC Atmos scale-out storage cloud is ideal for storing SQL Server database (DB) backup snapshots. We ran a simple test to back up a DVD Store SQL Server DB with the use of the Atmos CIFS client interface.
EMC Atmos CIFS Subtenant Configuration
Configured Atmos through the use of Internet Explorer (see Figure 18).
1. Logged in to the system management console https://<AtmosNodeIP>/mgmt_login/ and used SysAdmin as the user account.
Figure 17
2. Navigated to Tenant Management > Tenant List > Enable CIFS on the desired nodes through the selection of the appropriate radio button. Also selected Multi Subtenant Access. Saved and exited the menu.
3. Created a new subtenant.
4. Logged into the Tenant Management console https://<AtmosNodeIP>/ and used "IntelCloudLab" as the tenant ID.
5. Clicked Add under the subtenant header to create a new subtenant under "IntelCloudLab." This new subtenant hosted the CIFS share to store our backup.
6. Provided the name of the subtenant as "MSSQLCIFS" and clicked Create. This step created the MSSQLCIFS subtenant, which appeared under the subtenant list.
7. Clicked Edit next to the subtenant to continue with configuration. This step configured the subtenant by default as CIFS.
8. Clicked Change on the default policy specification and selected "Netbackup" as the preferred policy. This policy ensured the replication of data written to the node on the local site and also to the remote site in Folsom.
9. Clicked the hyperlink named "CIFS" next to the node name. Clicked Add to add a new share.
10. Configured the share name as "SqlBackup." We did not make changes to the other parameters.
11. Clicked Submit to create the new share.
MS SQL Server Configuration
We implemented the MS SQL server on the following hardware configuration:
• Super Micro X8DTU chassis, Intel 5200 chipset
• Intel® Xeon® X5667 @ 3.07 GHz, quad core, two sockets
• 12 GB DDR3 memory
• Intel® X25 160 GB SSD boot drive
• LSI MegaRAID* SAS
• 12 Hitachi* 2 TB drives
Figure 18: Atmos MS SQL Server Subtenant
• Microsoft Windows Server* 2008 R2 - Enterprise Edition 64-bit
• Microsoft SQL 2005 with SP3*
• Dell DVD Store (DDS) database workload14
We set up the MS SQL server with the following steps:
1. Installed Windows 2008 R2 Enterprise edition with the default settings.
2. Installed Microsoft SQL 2005 64-bit edition with Service Pack 3.
3. Unzipped the DDS to c:\.
4. Created the three partitions required by the DDS: "E:", "F:", and "L:".
5. Opened Createdb.sql in a query analyzer and executed it to create the database and log files.
6. Configured the Windows 2008 path to the new EMC Atmos share: \\oregon1-001\SqlBackup\.
We performed the following steps to back up the DDS database:
1. Opened SQL Management Studio, navigated to Server Objects > Backup Devices, and right-clicked New Backup Device.
2. Provided the device name as AtmosBackup.
3. Provided the destination file path as \\Oregon1-001\SqlBackup\ds1.bak.
4. Opened Server Objects > Backup Devices and right-clicked Back Up a Database.
5. Selected "DS2" as the database and backup type as "Full."
6. Clicked OK to start the backup.
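The Management Studio sequence above is equivalent to a single T-SQL BACKUP DATABASE statement aimed at the Atmos CIFS share. A sketch that assembles that statement from this setup's names (the backup file name and the WITH options are illustrative choices of ours, not taken from the test):

```python
# Assemble the T-SQL equivalent of the GUI-driven backup above. Building the
# statement as a string keeps this sketch self-contained; in practice it
# would be run via sqlcmd or SSMS against the DS2 database.

def backup_statement(database: str, unc_path: str, name: str) -> str:
    return (
        f"BACKUP DATABASE [{database}] "
        f"TO DISK = N'{unc_path}' "
        f"WITH NAME = N'{name}', INIT;"
    )

stmt = backup_statement("DS2", r"\\Oregon1-001\SqlBackup\ds1.bak", "AtmosBackup")
print(stmt)
```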
Verify MS SQL Server Backup
We validated the SQL Server backup files stored in Atmos cloud storage with the following sequence of commands:
1. Logged in to the system management console https://<AtmosNodeIP>/mgmt_login with SysAdmin as the user account.
2. Copied the TenantID string for IntelCloudLab and the SubTenant ID string for MSSQLCIFS.
3. Opened an SSH session to the Atmos node.
4. Executed "mauiobjbrowser -t <tenant_id> -s <subtenant_id> -p <file_path>".
5. Validated that the backup file was present on the Oregon Atmos rack (Figure 19) as well as on the Folsom Atmos rack (Figure 20).
Figure 19: MS SQL Server Backup Replica on Oregon Atmos Rack
Figure 20: MS SQL Server Backup File on Folsom Atmos Rack
Things to Consider
Networking Architecture
The EMC Atmos scale-out storage cloud solution documented in this paper utilized Gigabit Ethernet (GbE) connectivity from each node through a GbE switch as the interface between the application and the EMC nodes. You can achieve higher throughput to the cloud storage if you equip the Dell R610 servers with an Intel 10 GbE network interface card and engineer 10 GbE switch access for the applications. As an example, each Atmos storage node supports fifteen 2 TB hard disk drives. The SATA drives are capable of achieving close to 100 MB/s sustained transfer rate. Assuming this, the sustained sequential transfer from 15 drives would exceed the throughput provided by a 10 GbE interface.
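The claim above is a quick arithmetic check worth making explicit: 15 drives at ~100 MB/s aggregate to more than one 10 GbE link can carry even before protocol overhead:

```python
# Back-of-the-envelope check: aggregate disk bandwidth vs. one 10 GbE link.
drives = 15
per_drive_mb_s = 100                 # sustained sequential, per the text
aggregate = drives * per_drive_mb_s  # 1500 MB/s from the disks

link_gbit = 10
link_mb_s = link_gbit * 1000 / 8     # 1250 MB/s raw, before protocol overhead

print(aggregate, link_mb_s, aggregate > link_mb_s)  # → 1500 1250.0 True
```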
Performance of the Metadata Access
The Atmos metadata service performance is currently scaled by adding more SATA hard disk drives. The metadata service performance could likely be more efficiently scaled by using Intel® X25-E or X25-M SATA solid state drives15 to store the metadata. Solid state drives provide well over 10 times the random read and write throughput of a SATA hard drive.
Application Processing on the Data Nodes
One of the usage models we mentioned in this paper but did not test was the use of scale-out storage for data warehousing/analytics. Portals commonly use the Hadoop application framework with scale-out storage to transform the data. The Hadoop framework runs the transform application (typically called the "map app") on the same node on which the data resides. In this case, you would need to size the processor performance of the node appropriately to support both the data access and the application processing. To gain the additional performance needed by the application, you might want to use a higher performance Intel Xeon processor.
Caching of Data at the Compute Server
Highest application performance is typically achieved when the storage resides on the same server as the application. This allows the application to take full advantage of the high throughput SATA and PCI Express interfaces provided by the Intel 5400 chipset.16 One architecture that could support this is equipping the application server with an Intel X25-E or X25-M SATA solid state drive. This drive would be used as a cache for data being read from the Atmos storage nodes. Once the drive is populated, subsequent accesses to the data would be served from the SSD. The Linux open source community has created two solutions for enabling this usage model: Linux Flashcache17 and NFS CacheFS.18
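The read-through caching pattern described above can be sketched in a few lines: the first read of an object goes to the (slow) Atmos store and populates the local SSD cache; later reads are served locally. The in-memory dict stands in for the SSD, and fetch_from_atmos is a stand-in, not a real Atmos call:

```python
# Read-through cache sketch: count how many reads actually reach Atmos.
calls = {"atmos_reads": 0}

def fetch_from_atmos(object_id: str) -> bytes:
    calls["atmos_reads"] += 1            # expensive network read
    return f"data-for-{object_id}".encode()

ssd_cache: dict[str, bytes] = {}

def read(object_id: str) -> bytes:
    if object_id not in ssd_cache:       # miss: populate the cache
        ssd_cache[object_id] = fetch_from_atmos(object_id)
    return ssd_cache[object_id]          # hit: served from the local SSD

read("obj1"); read("obj1"); read("obj2")
print(calls["atmos_reads"])  # → 2 (second read of obj1 was a cache hit)
```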
Power Optimization
Power consumption for large amounts of storage (i.e., petabytes (10^15 bytes) or even zettabytes (10^21 bytes)) can become a significant part of the total cost of ownership for a storage cloud. The scale-out storage cloud test could be augmented by profiling the power consumption of the racks when idle or being tested by the usage models. The power utilization could be optimized through technology like the Intel Intelligent Power Node Manager19 and Intel lower power processors.
Next Steps
The next step is to profile the performance of the usage models in addition to the basic functionality. In addition, we continue to plan tests for the items mentioned in Things to Consider.
Conclusions
This paper defined and tested a scale-out storage cloud reference architecture based on EMC Atmos. The Atmos scale-out storage architecture demonstrated how to maximize a cloud deployment strategy by enabling a multi-use, shared infrastructure. Important customer considerations when designing cloud reference architectures are eliminating single-use, silo- or project-oriented infrastructure that is purpose built. One benefit of Atmos-enabled shared infrastructure deployments is allowing customers to improve application and resource delivery cycles, driving business innovation. Atmos demonstrated support for multi-use, shared infrastructure, which was utilized through multi-tenancy and multiple access methods.
Atmos provides flexible access mechanisms. The configuration and tests validated the REST/HTTP, IFS, and CIFS interfaces supported by Atmos. Atmos, by supporting multiple access methods, provides an agile scale-out storage strategy that enables end users to accommodate a wider variety of use cases.
Adaptive storage capabilities via granular policy control were also demonstrated. An important aspect of scale-out storage is effective, efficient utilization. One method to maximize the ROI of scale-out storage, as described above, is to deploy multiple use cases across a shared infrastructure. An equally important aspect of providing effective, efficient utilization is an intelligent mechanism and an array of feature sets that allow each use case to uniquely handle objects. Atmos presents a powerful policy engine combined with several object storage capabilities, such as replication, compression, and erasure coding, to provide efficient storage of objects across many use cases or multiple tenants.
In conclusion, this reference architecture demonstrates EMC Atmos as an agile implementation available today that meets enterprise end-user requirements for efficiency and elasticity in the scale-out storage cloud usage model. It also demonstrates the versatility of Atmos with regard to multiple use cases that are relevant to a cloud solution. This paper further demonstrated the value proposition of scale-out storage with Atmos through the availability of multiple access methods, various EMC Atmos GeoProtect schemes, support for multi-tenancy, and multiple data services (such as compression and de-duplication) integrated through a powerful and intelligent metadata-driven policy engine.
Glossary
B2B: Business-to-business (B2B) describes commerce transactions between businesses, such as between a manufacturer and a wholesaler, or between a wholesaler and a retailer. From: http://en.wikipedia.org/wiki/Business-to-business.
B2C: Business-to-consumer (B2C, sometimes also called business-to-customer) describes activities of businesses that serve end consumers with products and/or services. An example of a B2C transaction is a person who buys a pair of shoes from a retailer. The transactions that led to the shoes being available for purchase, that is, the purchase of the leather, laces, rubber, etc., as well as the sale of the shoe from the shoemaker to the retailer, would be considered (B2B) transactions. From: http://en.wikipedia.org/wiki/Business-to-consumer.
CIFS: Common Internet File System: Also known as Server Message Block (SMB), a network protocol used to provide shared access to files, printers, serial ports, and miscellaneous communications between nodes on a network - typically a set of Microsoft Windows servers and PC clients. See: http://en.wikipedia.org/wiki/Server_Message_Block.
Compression: Data compression is the process of encoding information with fewer bits than the unencoded representation would use, through use of specific encoding schemes. From: http://en.wikipedia.org/wiki/Data_compression.
Deduplication: Data deduplication (or dedup) is a specialized data compression technique to eliminate coarse-grained redundant data, typically to improve storage utilization. In the deduplication process, duplicate data is deleted, which leaves only one copy of the data to be stored, along with references to the unique copy of data. Deduplication reduces the required storage capacity since only the unique data is stored. From: http://en.wikipedia.org/wiki/Data_deduplication.
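The definition above can be made concrete with a toy content-addressed store: each unique chunk is stored once, keyed by its hash, and files keep only lists of references. The chunk size and layout here are illustrative, not how any particular product does it:

```python
# Toy deduplication: store unique chunks once, return per-file references.
import hashlib

store: dict[str, bytes] = {}          # hash -> unique chunk (stored once)

def dedup_write(data: bytes, chunk_size: int = 4) -> list[str]:
    refs = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        key = hashlib.sha256(chunk).hexdigest()
        store.setdefault(key, chunk)  # duplicate chunks are not stored again
        refs.append(key)              # the "file" keeps only references
    return refs

refs = dedup_write(b"AAAABBBBAAAA")   # first and last chunks are identical
print(len(refs), len(store))          # → 3 2  (3 references, 2 unique chunks)
```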
Disk enclosure: A disk enclosure is a chassis or shelf designed to hold and power disk drives while providing a mechanism to allow them to communicate to one or more servers. JBOD (just a bunch of disks) is another term for a disk enclosure. From: http://en.wikipedia.org/wiki/Disk_enclosure.
Erasure coding: Erasure coding is a forward error correction (FEC) algorithm which uses data striping to transform k data elements (across scale-out storage data nodes) into a longer message stripe coded with n data elements. This enables the original data to be recovered from a subset of the n data elements on failure. See: http://en.wikipedia.org/wiki/Erasure_code.
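A minimal instance of the k-to-n idea in that definition is XOR parity: k = 2 data chunks plus one parity chunk gives n = 3, and any single lost chunk can be rebuilt from the other two. Production codes such as Reed-Solomon (used by schemes like Atmos GeoProtect) generalize this to many data and parity elements:

```python
# k = 2 data chunks + 1 XOR parity chunk; recover a lost chunk from the rest.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d0, d1 = b"hello!", b"world!"         # k = 2 data elements
parity = xor_bytes(d0, d1)            # coded element -> n = 3 total

# Simulate losing d0: rebuild it from the surviving elements.
recovered = xor_bytes(parity, d1)
print(recovered)  # → b'hello!'
```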
Host bus adapter (HBA): Connects a host system (a server) to other network and storage devices, including hard disk drives and solid state storage. See: http://en.wikipedia.org/wiki/Host_adapter.
Installable File System (IFS): IFS enables users to create their own file systems without editing kernel code through the use of FUSE. See: http://en.wikipedia.org/wiki/Filesystem_in_Userspace.
KVM: A KVM switch (with KVM being an abbreviation for keyboard, video or visual display unit, mouse) is a hardware device that allows a user to control multiple servers from a single keyboard, video monitor, and mouse.
Metadata: Metadata is loosely defined as data about data. Metadata is a concept that applies mainly to electronically archived or presented data and is used to describe the a) definition, b) structure, and c) administration of data files with all contents in context to ease the use of the captured and archived data for further use. From: http://en.wikipedia.org/wiki/Metadata.
Multi-tenant: The ability to serve multiple tenants or customers with a single scale-out storage platform. For public cloud storage, a tenant is typically one of the customers of the cloud storage (e.g., a small business using the cloud for backup). For private cloud storage, a tenant is typically one application using the storage (e.g., the EMC File Management Appliance archiving data). With a multi-tenant architecture, the scale-out storage is designed to securely partition its data and configuration, enabling each tenant to customize its instance as if it were the only user of the scale-out storage cloud.
NAS: Network Attached Storage is a storage server or appliance that uses file-based protocols such as NFS (network file system) or CIFS to enable clients (typically servers and PCs) to access files over a TCP/IP network. See: http://en.wikipedia.org/wiki/Network-attached_storage.
NDMP: Network Data Management Protocol is a protocol invented by NetApp and Legato that transports data between NAS devices and backup devices. This removes the need to transport the data through the backup server itself, thus enhancing speed and removing load from the backup server.
NIC: A network interface card is hardware that enables a server to interface to an Ethernet or TCP/IP local area network (LAN). A NIC is not necessarily a card in the server; it could be integrated as LOM (LAN on motherboard).
Portal: Web portals offer services such as a Web search engine, e-mail, news, stock prices, information, databases, and entertainment. Portals provide a way for enterprises to provide a consistent look and feel with access control and procedures for multiple applications and databases. See: http://en.wikipedia.org/wiki/Web_portal.
PXE boot: The Preboot eXecution Environment (PXE, also known as Pre-Execution Environment) is a process to boot a server by remotely accessing the boot image through the use of the NIC and a LAN. See: http://en.wikipedia.org/wiki/Preboot_Execution_Environment.
Replication: Data replication is the process of sharing data so as to improve reliability between redundant storage devices. The replication is transparent to an application or end user. In a failure scenario, failover of replicas is hidden as much as possible.
REST/HTTP: Representational State Transfer: An architecture that communicates between clients and servers over a TCP/IP network (e.g., the Internet). Clients initiate requests to servers; servers process requests and return appropriate responses. At any particular time, a client can either be in transition between application states or "at rest." A client in a rest state is able to interact with its user, but creates no load and consumes no per-client storage on the set of servers or on the network. The client begins to send requests when it is ready to make the transition to a new state. The Hypertext Transfer Protocol (HTTP) is commonly used as the transport layer basis for REST communication. See: http://en.wikipedia.org/wiki/Representational_State_Transfer.
Samba: Samba is the standard Windows interoperability suite of programs for Linux and Unix. See: http://www.samba.org/.
SAN: Storage Area Network: A storage server or appliance that uses block-based protocols, typically based on SCSI, to access files over a Fibre Channel or TCP/IP network. See: http://en.wikipedia.org/wiki/Storage_area_network.
SATA: Serial Advanced Technology Attachment: A storage interface to connect host bus adapters to hard disk drives and solid state drives. Desktop and laptop computers use SATA hard disk drives, which typically have the largest capacity at the lowest cost (dollars per gigabyte). http://en.wikipedia.org/wiki/Serial_ATA.
Scale-out Storage (SOS): SOS is a usage model for storage that enables an enterprise to grow capacity incrementally as it adds more storage nodes (typically as a new server on an IP network). The goal of scale-out storage is to grow capacity with near linear (versus lump sum) investment.
SSH: The Secure Shell client is a program for logging into a remote machine and for executing commands on the remote machine. It is a secure version of the rlogin, rsh, and telnet commands. It provides secure encrypted communications between two untrusted hosts over an insecure network. http://en.wikipedia.org/wiki/Secure_Shell.
Spin-down: Spin-down refers to turning off a hard disk drive after a specific period of time to conserve energy.
Endnotes
1. Source: IDC Digital Universe Study, sponsored by EMC, May 2010.
2. http://www.emc.com/domains/documentum/
3. http://www.emc.com/products/family/atmos.htm
4. http://hadoop.apache.org/
5. http://www.emc.com/collateral/demos/microsites/mediaplayer-video/emc-atmos-geoprotect.htm
6. http://content.dell.com/us/en/enterprise/d/virtualization/PowerEdge-R610.aspx
7. http://sourceforge.net/projects/fuse/files/fuse-2.X/2.7.4/fuse-2.7.4.tar.gz/download
8. For details refer to Atmos Installation and Upgrade Guide 1.3.2A.pdf.
9. See number 6.
10. http://www.samba.org/rsync/
11. http://www.emc.com/products/detail/software/file-management-appliance.htm
12. http://www.symantec.com/business/netbackup
13. http://fuse.sourceforge.net/
14. http://www.delltechcenter.com/page/DVD+Store
15. http://www.intel.com/go/ssd
16. http://www.intel.com/products/server/chipsets/5400/5400-overview.htm
17. http://github.com/facebook/flashcache/
18. http://en.wikipedia.org/wiki/CacheFS
19. http://www.intel.com/technology/intelligentpower
Additional Information
1. Atmos Conceptual Overview 1.3.2A.pdf
2. Atmos Installation and Upgrade Guide 1.3.2A.pdf
3. Atmos Admin Guide 1.3.2A.pdf
4. Symantec NetBackup Administrator's Guide, Volume 1, UNIX and Linux, Release 7.0.1
5. Symantec NetBackup Install Guide for Windows, Release 7.0.1
6. Symantec NetBackup Install Guide for UNIX and Linux, Release 7.0.1
To learn more about deployment of cloud solutions, visit www.intel.com/cloudbuilders
Disclaimers
Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. See www.intel.com/products/processor_number for details.
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications before placing your product order. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel's Web site at www.intel.com.
Copyright © 2010 Intel Corporation. All rights reserved. Intel, the Intel logo, Xeon, Xeon Inside, and Intel Intelligent Power Node Manager are trademarks of Intel Corporation in the U.S. and other countries.
*Other names and brands may be claimed as the property of others.
Printed in USA 1010/MS/PRW/PDF   Please Recycle   321138-001US