Data ONTAP

NetApp Accredited Storage Architect Program (ASAP)

Module Overview
In this module, we will cover the following:
 NetApp® core software technology
 Specific on-box and off-box features of Data ONTAP®
 NVRAM functionality
 NetApp Snapshot technology
 The NetApp RAID-DP® implementation

© 2008 NetApp. All rights reserved.

2

Module Objectives
By the end of this module, you should be able to:
 Identify NetApp core software
– On-box features of Data ONTAP
– Off-box features of Data ONTAP
 Describe the functionality of NVRAM
 Describe the value of NetApp RAID 4 and RAID-DP technology
 Demonstrate NetApp Snapshot technology on a whiteboard

© 2008 NetApp. All rights reserved.

3

Core Software Technology

© 2008 NetApp. All rights reserved.

4

Core Software Technology
Data ONTAP
 Off-box storage management and off-box administration tools
 Data ONTAP 7.X for FAS/NearStore®, Data ONTAP 7.X for V-Series, and Data ONTAP GX 10.X
 Protocol support
 On-box value-added software: WAFL® (the core technology), NVRAM, RAID 4 or RAID-DP, Snapshot, and FlexVol®
 FC and Ethernet network connectivity

© 2008 NetApp. All rights reserved.

5


Data ONTAP Components: WAFL Versus “Traditional” File Systems
 File data location: WAFL writes anywhere on disk; traditional file systems use most of the disk
 Metadata location: WAFL writes anywhere on disk (except the root inode); traditional file systems use dedicated regions
 Updates to existing data and metadata: WAFL puts them in unallocated blocks, leaving the originals intact (sketched below); traditional file systems overwrite the existing data
 File system consistency: guaranteed by design in WAFL; traditional file systems require careful ordering of all writes
 Crash recovery: WAFL reboots and is ready to go; traditional file systems need a slow, complicated fsck
 Interaction with RAID: WAFL can write full stripes, using the available bandwidth; traditional file systems have to seek for all updates
 Snapshot copies and versions: by design in WAFL; traditional file systems require an extra copy on write
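To make the write-anywhere idea concrete, here is a minimal, illustrative Python sketch (not NetApp code) of the difference between overwriting a block in place and writing the new version to an unallocated block and then switching a single root pointer. Because old blocks are never overwritten, the previous consistent image survives until the new root is committed, which is why no fsck is needed after a crash.

    # Illustrative sketch only: a toy "disk" as a dict of block numbers -> data.
    disk = {}
    free_blocks = iter(range(100))          # unallocated block numbers

    def write_in_place(block_map, name, data):
        """Traditional style: overwrite the existing block (old data is lost)."""
        disk[block_map[name]] = data

    def write_anywhere(block_map, name, data):
        """WAFL style: put the update in an unallocated block, then repoint."""
        new_block = next(free_blocks)
        disk[new_block] = data              # the original block stays intact
        new_map = dict(block_map)
        new_map[name] = new_block
        return new_map                      # becomes active when the root is switched

    root = {"file_a": next(free_blocks)}    # the "root inode" points at the block map
    disk[root["file_a"]] = "version 1"

    new_root = write_anywhere(root, "file_a", "version 2")
    # Until new_root replaces root (a single atomic pointer switch),
    # the on-disk image described by root is still fully consistent.
    print(disk[root["file_a"]], disk[new_root["file_a"]])   # version 1  version 2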

© 2008 NetApp. All rights reserved.

7

NVRAM / RAID-DP

© 2008 NetApp. All rights reserved.

8

NVRAM Operation
Diagram: a client and a storage system, connected by GbE and dual-attached FC links.

© 2008 NetApp. All rights reserved.

9

NVRAM Operation (Cont.)
Diagram: the client's operation travels from its main memory through its NIC to the storage system's NIC; inside the storage system it lands in battery-backed NVRAM and in main memory, and an acknowledgment is returned through the NICs. (NIC = network interface card.)

 The operation is logged in battery-backed RAM (NVRAM) and is now safe from a controller failure
 The operation is also placed in the controller's main memory, where further processing will occur
© 2008 NetApp. All rights reserved.

 The client is free to "forget about it": the operation is done
 The path is purely electronic, memory to memory (see the sketch below)
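A rough Python sketch of the idea (illustrative only, not how Data ONTAP is implemented): the operation is appended to a battery-backed journal and acknowledged immediately, and the actual disk writes happen later, at a consistency point.

    nvram_log = []        # stands in for the battery-backed NVRAM journal
    main_memory = []      # buffered operations awaiting a consistency point

    def handle_write(op):
        nvram_log.append(op)      # survives a controller failure (battery-backed)
        main_memory.append(op)    # further processing happens from memory
        return "ack"              # client can forget about it: no disk I/O yet

    def consistency_point(disk):
        """Every ~10 seconds (or when NVRAM fills), flush and clear the log."""
        for op in main_memory:
            disk.append(op)       # organized, full-stripe writes in the real system
        main_memory.clear()
        nvram_log.clear()         # NVRAM is zeroed once the data is safely on disk

    disk = []
    print(handle_write({"file": "f1", "data": "hello"}))   # ack returned immediately
    consistency_point(disk)
    print(disk, nvram_log)        # [{'file': 'f1', 'data': 'hello'}] []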

10

NVRAM Operation (Cont.)
Diagram: operations accumulate in the controller's main memory (and remain logged in NVRAM) until a consistency point (CP) writes them to disk.

 Activities involving the operation consume main memory
 Up to 10 seconds can elapse between CPs while many other operations arrive (not shown)
© 2008 NetApp. All rights reserved.

 The organized data from the operations are written to disk
 NVRAM is zeroed

11

NVRAM Benefits for Both SAN and NAS Environments
NVRAM5 card: memory with a blinking LED indicator, clustered failover ports, Tavor 4x (10 Gb/s) InfiniBand, and a 5.1 Ah battery (dirty-shutdown life of 3 to 7 days).
NVRAM6 card: 512 MB or 2 GB DIMM, a 3-cell battery, InfiniBand CFO connectors, and a 2-cell battery (added in the 2 GB version only).

Battery life after shutdown:
 "Clean" shutdown: weeks
 "Dirty" shutdown: 3 to 7 days (partial to full charge)

© 2008 NetApp. All rights reserved.

13

Data ONTAP® Components: Data Layout – RAID 4

Diagram: a write chain fills a RAID stripe across the data disks, with parity written to the dedicated parity drive.

 "Tetris" writes: WAFL collects writes and tries to fill whole stripes
 Parity is recalculated per stripe, as sketched below
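As a hedged illustration of the "recalculate parity" step, the sketch below computes RAID 4 style parity for one stripe with XOR and shows how a single failed data disk can be rebuilt from the survivors plus parity. Block values are toy integers; a real implementation works on fixed-size blocks and a dedicated parity disk.

    from functools import reduce

    def parity(blocks):
        """RAID 4 keeps the XOR of all data blocks in a stripe on the parity disk."""
        return reduce(lambda a, b: a ^ b, blocks)

    stripe = [0b1010, 0b0111, 0b1100, 0b0001]   # data blocks on four data disks
    p = parity(stripe)                          # written to the dedicated parity disk

    # Disk 2 fails: rebuild its block by XOR-ing the surviving blocks with parity.
    survivors = stripe[:2] + stripe[3:]
    rebuilt = parity(survivors + [p])
    assert rebuilt == stripe[2]

    # A full-stripe ("Tetris") write recalculates parity once for the whole stripe,
    # instead of doing a read-modify-write parity update for each block individually.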

© 2008 NetApp. All rights reserved.

14

Data ONTAP Components: RAID-DP
Diagram: a RAID-DP group with data disks (D), a row parity disk (P), and a diagonal parity disk (DP); RAID-DP protects against any two-disk failure.

 RAID-DP is dual-parity (row plus diagonal) data protection
 NetApp RAID-DP is an implementation of the industry-standard RAID 6 as defined by SNIA (see the sketch below)
– SNIA definition recently updated to include NetApp RAID-DP
http://www.snia.org/education/dictionary/r/
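The sketch below is a simplified illustration, loosely following the published row-diagonal parity scheme rather than NetApp's actual code, of how a second, diagonal parity set is computed on top of the row parity. Having two independently oriented parity equations per block is what allows reconstruction after any two-disk failure; the full two-disk chain-reconstruction algorithm is omitted here.

    from functools import reduce

    def xor(blocks):
        return reduce(lambda a, b: a ^ b, blocks, 0)

    p = 5                                   # prime used by the row-diagonal layout
    rows, data_disks = p - 1, p - 1         # 4 rows, 4 data disks (toy sizes)
    data = [[(r * 7 + i * 3) & 0xF for i in range(data_disks)] for r in range(rows)]

    # Row parity disk: XOR across each row of data blocks (this alone is RAID 4).
    row_parity = [xor(data[r]) for r in range(rows)]

    # Diagonal parity disk: XOR along diagonals that cross the data disks AND the
    # row parity disk; the diagonal of block (row r, disk i) is (r + i) mod p.
    def diag_members(d):
        members = []
        for r in range(rows):
            for i in range(data_disks + 1):             # include the row parity disk
                if (r + i) % p == d:
                    members.append(data[r][i] if i < data_disks else row_parity[r])
        return members

    diag_parity = {d: xor(diag_members(d)) for d in range(p - 1)}  # one diagonal stays unstored

    # Row parity alone recovers one lost disk; the diagonal parity adds a second,
    # differently oriented equation per block, so any two lost disks can be rebuilt
    # by alternating row and diagonal recoveries (chain reconstruction, not shown).
    lost_disk = 1
    rebuilt = [xor([data[r][i] for i in range(data_disks) if i != lost_disk] + [row_parity[r]])
               for r in range(rows)]
    assert rebuilt == [data[r][lost_disk] for r in range(rows)]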
© 2008 NetApp. All rights reserved. 15

NetApp Snapshot

© 2008 NetApp. All rights reserved.

16

Core Software Technology
Data ONTAP software stack diagram (repeated from the Core Software Technology overview earlier in this module).

© 2008 NetApp. All rights reserved.

17

NetApp Greatest Hits
Features and percentages as charted on the slide:
 RAID-DP (95%), FlexClone™ (95%), Snapshot™ technology (94%), Single OS (89%)
 SnapRestore® (89%), WAFL® integration (86%), Multi-protocol (86%), Data ONTAP simplicity (85%), WAFL file system (85%), FlexVol virtualization (85%), iSCSI leadership (85%), SnapLock® (83%)
 SnapMirror® (81%), SnapVault® (76%), FlexVol performance (73%), SnapManager® (69%)
 Data ONTAP benefits (68%), FlexVol provisioning (68%), V-Series (64%), FlexVol priorities (61%), SnapDrive® software (60%), LockVault™ (60%), NAS leadership (59%), Forced disk consistency (40%)

© 2008 NetApp. All rights reserved.

18

NetApp Snapshot Technology
Diagram: blocks A, B, and C in the LUN or file map to blocks A, B, and C on disk; Snapshot 1 is simply a second set of pointers to the same disk blocks.

 Take snapshot 1
– Copy pointers only
– No data movement

© 2008 NetApp. All rights reserved.

19

NetApp Snapshot Technology
Diagram: block B is overwritten; the new data B1 goes to a free disk block, so the disk now holds A, B, C, and B1. The active file system points to A, B1, C; Snapshot 1 still points to A, B, C; Snapshot 2 copies the current pointers (A, B1, C).

 Take snapshot 1
 Continue writing data
 Take snapshot 2
– Copy pointers only
– No data movement

© 2008 NetApp. All rights reserved.

20

NetApp Snapshot Technology
Diagram: block C is overwritten and the new data C2 goes to another free block, so the disk holds A, B, C, B1, and C2. Snapshot 1 points to A, B, C; Snapshot 2 points to A, B1, C; Snapshot 3 points to A, B1, C2.

 Take snapshot 1
 Continue writing data
 Take snapshot 2
– Copy pointers only
– No data movement
 Continue writing data
 Take snapshot 3
 Simplicity of the model (see the sketch below)
– Best disk utilization
– Fastest performance
– Unlimited snapshots
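A minimal Python sketch (illustrative only) of the pointer-copy model described above: a snapshot is just another set of block pointers, new writes go to free blocks, and no data is moved or copied when a snapshot is taken.

    disk = {0: "A", 1: "B", 2: "C"}                 # blocks already on disk
    active = {"blk_a": 0, "blk_b": 1, "blk_c": 2}   # active file system pointers
    next_free = 3
    snapshots = {}

    def take_snapshot(name):
        snapshots[name] = dict(active)              # copy pointers only, no data movement

    def overwrite(block_name, data):
        global next_free
        disk[next_free] = data                      # new data lands in a free block
        active[block_name] = next_free              # only the active pointer changes
        next_free += 1

    take_snapshot("snap1")                          # snap1 -> A, B, C
    overwrite("blk_b", "B1")
    take_snapshot("snap2")                          # snap2 -> A, B1, C
    overwrite("blk_c", "C2")
    take_snapshot("snap3")                          # snap3 -> A, B1, C2

    print([disk[b] for b in snapshots["snap1"].values()])   # ['A', 'B', 'C']
    print([disk[b] for b in active.values()])                # ['A', 'B1', 'C2']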

© 2008 NetApp. All rights reserved.

21

Snapshot Performance

 Snapshots
– Point-in-time copies
– Created in a few seconds
– No performance penalty
 TPC-C benchmark results published with five active snapshots
NetApp Confidential -- Do Not Distribute 22

© 2008 NetApp. All rights reserved.

Competitors’ Snapshots

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

23

Others’ Snapshots
Diagram: blocks A, B, and C in the LUN or file map to blocks A, B, and C on disk; creating Snapshot 1 also creates copy-out region 1 and a set of pointers to the old blocks and to the copy-out region.

 Take snapshot 1
– Create copy-out region 1
– Create pointers to the old blocks and to the copy-out region

© 2008 NetApp. All rights reserved.

24

Others’ Snapshots
Diagram: when block B changes, the old B is first read and written into copy-out region 1, the snapshot pointer is moved to the copy-out region, and only then is B1 written over B in place on disk.

 Take snapshot 1
 Continue writing data
– A block changes
– Read the old block; write it to the copy-out region
– Update the snapshot pointer to the copy-out region
– Update the block on disk
 One write requires:
– 1 read (old data)
– 1 write (old data)
– 1 write (new data)

© 2008 NetApp. All rights reserved.

25

Others’ Snapshots
Diagram: creating Snapshot 2 adds copy-out region 2 and pointers to the current blocks (A, B1, C); Snapshot 1 keeps its pointer to the old B held in copy-out region 1.

 Take snapshot 1
 Continue writing data
 Take snapshot 2
– Create copy-out region 2
– Create pointers to the old blocks and to the copy-out region

© 2008 NetApp. All rights reserved.

26

Others’ Snapshots
Diagram: when block C changes, the old C is written into both copy-out regions, both snapshots' pointers are updated to their copy-out regions, and then C2 is written over C on disk.

 Take snapshot 1
 Continue writing data
 Take snapshot 2
 Continue writing data
– A block changes
– The old block is written to all copy-out regions
– All snapshot pointers are updated to the copy-out regions
– The block on disk is updated
 One write now requires (see the sketch below):
– 1 read
– 3 writes
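For contrast, here is an equally simplified sketch of the copy-out (copy-on-write) style described on these slides, with I/O operations counted so the 1-read/3-write penalty with two snapshots is visible. This is a generic illustration of the technique, not any particular vendor's implementation.

    disk = {"A": "A", "B": "B", "C": "C"}        # live blocks, updated in place
    snapshots = []                               # each: {"pointers": {...}, "copy_out": {...}}
    io = {"reads": 0, "writes": 0}

    def take_snapshot():
        snapshots.append({"pointers": dict.fromkeys(disk, "live"), "copy_out": {}})

    def overwrite(block, new_data):
        old = disk[block]
        io["reads"] += 1                                 # read the old data once
        for snap in snapshots:
            if snap["pointers"].get(block) == "live":    # snapshot still points at the live block
                snap["copy_out"][block] = old            # write old data to its copy-out region
                snap["pointers"][block] = "copy_out"
                io["writes"] += 1
        disk[block] = new_data                           # finally update the block in place
        io["writes"] += 1

    take_snapshot()                  # snapshot 1
    overwrite("B", "B1")             # 1 read + 2 writes
    take_snapshot()                  # snapshot 2
    io["reads"] = io["writes"] = 0
    overwrite("C", "C2")             # with two snapshots: 1 read + 3 writes
    print(io)                        # {'reads': 1, 'writes': 3}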
27

© 2008 NetApp. All rights reserved.

Snapshot Comparison
Diagram: side-by-side comparison of used disk space after two snapshots. NetApp stores the blocks A, B, C, B1, and C2 exactly once; the copy-out approach stores the active blocks A, B1, and C2 plus copies of B and C in copy-out region 1 and another copy of C in copy-out region 2.

The NetApp approach:
 Absolute minimum overhead
– Guarantees disk space efficiency
 No data movement
– Guarantees disk performance
– Enables more snapshots
Space on disk is better, performance is better, and the number of snapshots is better.
© 2008 NetApp. All rights reserved.

Using Snapshot Copies to Restore Data
Diagram: the active file system points to A, B1, and C2, but block C2 is bad. The disk holds A, B, C, B1, and C2; Snapshot 1 points to A, B, C; Snapshot 2 to A, B1, C; Snapshot 3 to A, B1, C2.

 Block C2 is bad

© 2008 NetApp. All rights reserved.

29

Using Snapshot Copies to Restore Data
Diagram: in a NAS environment each snapshot is visible under the .snapshot directory, so a user can copy a good version of the file back from Snapshot 1 or Snapshot 2 without administrator help.

 Block C2 is bad
 Let users self-restore from the .snapshot directory in NAS environments

© 2008 NetApp. All rights reserved.

30

Using Snapshot Copies to Restore Data
Diagram: SnapRestore repoints the active file system at the block pointers held by a good snapshot; no data is copied.

 Block C2 is bad
 Let users self-restore from the .snapshot directory in NAS environments
 Restore from a snapshot with SnapRestore (see the sketch below)
– Move the pointers from a good snapshot to the active file system
 Single File SnapRestore
– Allows restoration of a single file from a snapshot
– Can take seconds to minutes, depending on file size
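Continuing the toy model from the earlier snapshot sketch, SnapRestore-style recovery can be pictured as repointing the active file system (or a single file) at the pointers saved in a good snapshot; nothing is copied, which is why even large restores complete quickly. Again, this is only an illustration of the idea.

    # Reuses the toy structures from the snapshot sketch above.
    disk = {0: "A", 1: "B", 2: "C", 3: "B1", 4: "C2-bad"}
    active = {"blk_a": 0, "blk_b": 3, "blk_c": 4}            # C2 is bad
    snapshots = {
        "snap1": {"blk_a": 0, "blk_b": 1, "blk_c": 2},
        "snap2": {"blk_a": 0, "blk_b": 3, "blk_c": 2},
    }

    def snaprestore(snapshot_name):
        """Volume-level restore: adopt the snapshot's pointers wholesale."""
        active.clear()
        active.update(snapshots[snapshot_name])

    def single_file_snaprestore(snapshot_name, block_name):
        """Single File SnapRestore: repoint just one file's block(s)."""
        active[block_name] = snapshots[snapshot_name][block_name]

    single_file_snaprestore("snap2", "blk_c")
    print([disk[b] for b in active.values()])    # ['A', 'B1', 'C'] - good data is back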

© 2008 NetApp. All rights reserved.


Exercise
Module 2: Snapshot Whiteboard
Estimated time: 20 minutes

Snapshot Whiteboard Exercise
 The next exercise is a script for demonstrating Snapshot technology on the whiteboard
 Take 10 minutes to study the method of presentation
 Volunteers will come to the whiteboard and deliver the Snapshot presentation to the class
 Be ready to:
– Walk through how NetApp Snapshot technology works and explain what happens when data changes
– Explain how competitors implement snapshots and what happens to their snapshots when data changes
© 2008 NetApp. All rights reserved. NetApp Confidential -- Do Not Distribute 43

Core Software Technology

© 2008 NetApp. All rights reserved.

44

Core Software Technology
Data ONTAP software stack diagram (repeated from the Core Software Technology overview earlier in this module).

© 2008 NetApp. All rights reserved.

45

NOW-site Software Download Page

http://now.netapp.com/NOW/cgi-bin/software/
© 2008 NetApp. All rights reserved. 46

Data ONTAP 7.0 Release Model
Timeline: RC1 through RCn lead to GA, followed by maintenance releases (7.0.1, 7.0.2, 7.0.3, ...) and eventually GD; bi-weekly patch releases (P1 ... Pn) follow each release.

 "Release Candidate" (RC): initial posting to NOW; the final RC arrives roughly 1-2 months before GA
 "General Availability" (GA): key certifications required for solution sets, basic client compatibility, Tier 1 partner certification
 Maintenance releases: 7.0.1 about 3-4 months after GA; 7.0.3 about 6-7 months after GA
 "General Deployment" (GD): adoption metrics, full solution sets, full client compatibility

http://now.netapp.com/NOW/products/ontap_releasemodel/post70.shtml
© 2008 NetApp. All rights reserved. 47

Release Metrics

http://now.netapp.com/NOW/cgi-bin/metrics
© 2008 NetApp. All rights reserved. 48

Data ONTAP Convergence
Convergence timeline:
 2007: 7.2 "GordonBiersch"; 10.0 "Tricky"
 2008: 10.0.x "Tricky.x"
 2009: 7.3 "Iron City" / "Boilermaker"; 10.1 "Whirlwind"
 2010: 7.4 "Kingfisher" / "Rolling Rock"; 10.2 "Stormking" / "Rolling Rock +1"
© 2008 NetApp. All rights reserved. 49

Challenges in High-Performance Computing
 Insatiable demand for performance
 Storage consumption is hard to predict
 Must provide continuous operations
 Recover rapidly from errors and disasters
 Get the most from IT investments
 Reduce the complexity of high-performance applications

Diagram: a Linux cluster running many copies of client code overwhelms traditional storage (CPU at 100%, errors, disk full, hardware failures, idle management interfaces, data loss).

Performance demands stress traditional storage
© 2008 NetApp. All rights reserved. 50

Data ONTAP GX: Global Namespace

Diagram: servers mount a single global namespace of projects A, B, and C (volumes A1-A3, B1-B2, C1-C3); the underlying volumes are spread across the virtual servers of the cluster.

 Simplicity
– All clients see all data
– Simplifies managing mount points
– No client changes
 Transparency (see the sketch below)
– Expansion
– FlexVol move
– Failover
 Scalability
– Grow the namespace to petabytes
– Manageability

A flexible, scalable storage system
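A toy Python sketch (not GX internals) of why a global namespace makes volume moves and expansion transparent: clients resolve paths through a namespace map, so relocating a volume to another node changes only that map, never the path the client mounts. The node and volume names are illustrative.

    # Namespace: junction path -> (node, volume); clients only ever see the paths.
    namespace = {
        "/projects/A/A1": ("node1", "A1"),
        "/projects/A/A2": ("node3", "A2"),
        "/projects/B/B1": ("node2", "B1"),
        "/projects/C/C1": ("node1", "C1"),
    }

    def read(path):
        node, volume = namespace[path]           # resolved inside the cluster
        return f"data served by {node}:{volume}"

    def move_volume(path, new_node):
        node, volume = namespace[path]
        namespace[path] = (new_node, volume)     # data is relocated; the path is unchanged

    print(read("/projects/A/A2"))                # served by node3
    move_volume("/projects/A/A2", "node4")       # e.g. load balancing or expansion
    print(read("/projects/A/A2"))                # same path, now served by node4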
© 2008 NetApp. All rights reserved.

Data ONTAP GX: Striped Flexible Volumes

Diagram: striped volume X has its blocks (1 through 12) distributed across member volumes on several nodes of the GX cluster, alongside the other project volumes.

 Customer benefits
– Scalability and performance: flexible volumes that span many controller nodes (see the sketch below)
– Performance up to multi-GB/s throughput
– Scale FlexVol size to hundreds of terabytes

High levels of performance and scalability
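And a similarly simplified sketch of striping: a striped volume spreads its blocks across member volumes on different nodes, so a large file can use the bandwidth of many controllers at once. The round-robin block-to-member mapping shown here is the simplest possible choice and is only for illustration.

    member_nodes = ["node1", "node2", "node3", "node4"]   # members of striped volume X

    def locate(block_number):
        """Round-robin placement: block i lives on member i mod N."""
        return member_nodes[block_number % len(member_nodes)]

    # Writing a 12-block file touches every node, so throughput scales with the cluster.
    placement = {blk: locate(blk) for blk in range(1, 13)}
    print(placement[1], placement[2], placement[5], placement[12])
    # node2 node3 node2 node1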
© 2008 NetApp. All rights reserved. 52

Data ONTAP GX: Transparent Expansion

Diagram: new nodes are added to the GX cluster and volumes are placed on them; the global namespace seen by the servers is unchanged.

 Customer benefits
– Rapidly and seamlessly deploy new storage and applications
– No downtime required
– Transparent to the compute farm; the namespace is unchanged

© 2008 NetApp. All rights reserved.

53

Data ONTAP GX: Load Balancing

Diagram: volumes are moved between cluster nodes to balance load; the global namespace seen by the servers is unchanged.

 Customer benefits
– Optimize performance
– Maximize disk utilization
– No disruption to the application
– No need to touch clients, because the namespace is unchanged
 Example
– Project A gets dedicated resources to optimize project A response time

© 2008 NetApp. All rights reserved.

54

On-Box Value-Added Software
Data ONTAP software stack diagram (repeated from the Core Software Technology overview earlier in this module).

© 2008 NetApp. All rights reserved.

55

Data ONTAP On-Box Technology
SnapSuite Quick Reference Guide:
 Snapshot: instant self-service file recovery for end users
 SnapRestore: instant recovery of a volume, or of large individual files
 SnapMirror: asynchronous and synchronous remote replication over inexpensive IP; FC is now also supported
 SnapVault: heterogeneous, super-efficient hourly disk-based online archiving, with versioning kept for weeks or months
 SyncMirror: synchronous RAID-1 local mirroring via disk-shelf "plexes" (plex0/plex1); the RAID-1 remote mirroring product for DR is MetroCluster
 SnapLock: SEC-compliant disk-based WORM technology
 LockVault: heterogeneous data retention solution for unstructured data (Windows, Linux, Solaris, HP-UX, AIX)

Location of the Quick Reference Guides: http://www.netapp.com/mycommunities/PartnerCenter/tools/spotlight-presentations.html

© 2008 NetApp. All rights reserved.

56

Other On-Box Technologies
Diagram: FlexVol volumes are carved out of an aggregate that is built from disks.

 FlexVol
 FlexClone
 FlexCache
 FlexShare (priority)

© 2008 NetApp. All rights reserved.

57

Other On-Box Technologies
 SyncMirror (SnapMirror vs. SyncMirror): local mirroring across plex0 and plex1
 Clustering
 MetroCluster
 MultiStore (vFiler) vs. V-Series/gFiler

© 2008 NetApp. All rights reserved.

58

FC and Ethernet Connectivity
Data ONTAP software stack diagram (repeated from the Core Software Technology overview earlier in this module).

© 2008 NetApp. All rights reserved.

59

Data ONTAP Components: The Network Stack
Diagram: clients connect over the network to the storage system running Data ONTAP; requests pass through the network stack and protocol layers into WAFL, then through RAID to the storage (disks), with NVRAM logging incoming operations; the clustering layer links to the cluster partner over the cluster interconnect.

© 2008 NetApp. All rights reserved.

60

Data ONTAP Protocol Support
Data ONTAP software stack diagram (repeated from the Core Software Technology overview earlier in this module).

© 2008 NetApp. All rights reserved.

61

Data ONTAP Components: Protocols
Diagram: the same component stack as on the previous network stack slide (network stack, protocols, WAFL, RAID, storage and disks, NVRAM, clustering with the cluster interconnect to the cluster partner), here labeled Filer/Data ONTAP.

© 2008 NetApp. All rights reserved.

62

Data ONTAP Components: Data Access Protocols
The four core data access protocols:
 CIFS (Common Internet File System, developed by Microsoft)
 NFS (Network File System, developed by Sun Microsystems; v2, v3, v4)
 FCP (Fibre Channel Protocol)
 iSCSI (SCSI over TCP/IP)

© 2008 NetApp. All rights reserved.

63

Data ONTAP Components: Other Protocols
 HTTP, HTTPS
– Not a full-fledged HTTP server
 FTP
– Full FTP and TFTP implementation
 NDMP

 SNMP
 SMTP

 Telnet, RSH, SSH, RPC
© 2008 NetApp. All rights reserved. NetApp Confidential -- Do Not Distribute 64

Off-Box Storage Management
Data ONTAP software stack diagram (repeated from the Core Software Technology overview earlier in this module).

© 2008 NetApp. All rights reserved.

65

External to Data ONTAP® - Products
 SnapDrive
 SnapManager
– SnapManager for Exchange
– SnapManager for Oracle
– SnapManager for SAP
– SnapManager for SharePoint
– SnapManager for SQL Server

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

66

External to Data ONTAP - Products

 Open Systems SnapVault

 VFM (Virtual File Manager)

 NetApp DataFort
 IS1200

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

67

External to Data ONTAP - Products

 File Storage Resource Manager
 VTL
 NearStore Personality License

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

68

External to Data ONTAP - Administration

 Operations Manager
– Protection Manager
– Storage Manager
 Appliance Watch
– MOM
– OpenView
– TSM
 CommandCentral Storage (Symantec)

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

69

Storage Layout Agenda

 Aggregate review

 Volume creation and management
 MultiStore

 Competitive layout

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

70

Data ONTAP 7.0 Storage Terminology: Aggregate
Diagram: an aggregate composed of RAID groups RG 0, RG 1, and RG 2.

 Aggregate: a collection of physical disk space used as a container to support one or more flexible volumes
– Aggregates are the physical layer

© 2008 NetApp. All rights reserved.

71

Basic Aggregate Attributes

 Default RAID type = RAID-DP
– One or more RAID groups
 RAID group size is definable
 Supports SyncMirror®
 Aggregate Snapshot support (enabled by default)
– Targets all flexible volumes contained within the aggregate

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

72

Aggregate Status Information: FilerView

To get aggregate properties, click the aggregate name.

© 2008 NetApp. All rights reserved.

73

Aggregate Status Information: FilerView Aggregate Properties Screen
 Get to other details about the aggregate from the buttons at the bottom of this screen

© 2008 NetApp. All rights reserved.

74

Data ONTAP 7.0 Storage Terminology: Traditional Volume
Diagram: a traditional volume occupying RAID groups RG 0, RG 1, and RG 2.

 Traditional volume: a collection of physical disk space that is used to support a single volume
– Traditional volumes are directly tied to an aggregate
– Traditional volumes are both the physical layer and the logical layer

© 2008 NetApp. All rights reserved.

75

Data ONTAP 7.0 Storage Terminology: Flexible Volume
Diagram: FlexVol1 and FlexVol2 drawing space from an aggregate made up of RAID groups RG 0, RG 1, and RG 2.

 Flexible volume: a collection of disk space allocated as a subset of the available space within an aggregate
– Flexible volumes are loosely tied to their aggregate
– Flexible volumes are the logical layer

© 2008 NetApp. All rights reserved.

76

Flexible Volume Creation: FilerView Example

© 2008 NetApp. All rights reserved.

77

Basic Flexible Volume Status Information: FilerView
 Flexible volume information is provided via FilerView from the Volumes Manage and Volume Properties screens
– Filter by: Volume Type
– If an aggregate name is shown, the volume is a flexible volume
– Click the volume name for volume properties

© 2008 NetApp. All rights reserved.

78

Volume Information Commands
NetApp> vol status
         Volume State      Status            Options
           vol0 online     raid4, trad       root, nosnap=on
          flex1 online     raid_dp, flex

NetApp> vol status flex1
         Volume State      Status            Options
          flex1 online     raid_dp, flex
                Containing aggregate: 'aggr1'

NetApp> vol container flex1
Volume 'flex1' is contained in aggregate 'aggr1'

NetApp> vol size flex1
vol size: Flexible volume 'flex1' has size 10g.

Notes: both traditional and flexible volumes are visible in vol status; for a flexible volume, the containing aggregate name appears. The vol container and vol size commands are unique to flexible volumes.

© 2008 NetApp. All rights reserved.

79

Aggregate: Flexible / Traditional Volume Initial Space Allocation (NAS Defaults)
Diagram: default NAS space allocation.
 Aggregate: 10% WAFL overhead; of the remaining aggregate space, a 5% aggregate Snapshot reserve (adjustable) is set aside, leaving 95% as FlexVol space
 Each FlexVol (FlexVol1 through FlexVol#n): a 20% Snapshot reserve (adjustable), leaving 80% usable
 Traditional volume: 10% WAFL overhead (90% remains), then a 20% Snapshot reserve (adjustable), leaving 80% of that usable
A worked example follows below.
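A small worked example of the default percentages above (assumed values taken from the slide; real systems also subtract disk right-sizing and spare disks, which are ignored here):

    raw_aggregate_gb = 1000

    after_wafl = raw_aggregate_gb * 0.90        # 10% WAFL overhead
    flexvol_space = after_wafl * 0.95           # 5% aggregate Snapshot reserve (adjustable)
    usable_in_volume = flexvol_space * 0.80     # 20% volume Snapshot reserve (adjustable)

    print(round(after_wafl), round(flexvol_space), round(usable_in_volume))
    # 900 855 684  -> roughly 684 GB of a 1000 GB aggregate is usable for data by default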

© 2008 NetApp. All rights reserved.

80

Flexible Volume Creation Space Guarantee Types
 Flexible volume creation can be performed using the CLI or FilerView
 Space is allocated from the containing aggregate's file system
– Controlled by the flexible volume's "space guarantee" option
 Three "space guarantee" types are available:
– Volume
– None
– File

© 2008 NetApp. All rights reserved.

81

Space Guarantee - Volume
Cup analogy: a 50 GB aggregate is pictured as 50 cups of water. Creating a 10 GB flexible volume with the guarantee set to volume immediately reduces the cups available for filling from 50 to 40, even though only 1 GB has actually been used; the space is not used, but the ability to take that space is reserved.

 Space guarantee set to volume
– Space is allocated from the aggregate at creation for the entire size of the volume
– Space allocated within the volume does not affect aggregate space
© 2008 NetApp. All rights reserved. 82

Space Guarantee - None
Cup analogy: with the guarantee set to none, creating the same 10 GB volume takes no cups up front; after 1 GB is written, 49 of the 50 cups are still available for filling. The new volume requests 10 cups, but no "real" space is taken until the volume needs it.

 Guarantee = none
– Space is allocated from the aggregate as it is used
– Does not support file and LUN space reservations
– The volume may run out of aggregate space before reaching its nominal size
© 2008 NetApp. All rights reserved. 83

Space Guarantee - None
Cup analogy: with the guarantee set to none, volumes totaling far more than the aggregate (here approaching 80 GB of volume size against 50 GB of physical space) can coexist; only the cups actually written to are consumed.

 It is possible to have many flexible volumes whose combined size exceeds what the aggregate has available
 This is possible because empty volumes do not take any "real" space until the space is needed by the volume
© 2008 NetApp. All rights reserved. 84

Space Guarantee - File
Cup analogy: with the guarantee set to file, cups are taken from the aggregate only for space-reserved files and LUNs; here a 2 GB space-reserved file inside the 10 GB volume takes 2 of the 50 cups, leaving 48 available for filling.

 Guarantee = file
– Space is reserved from the aggregate only for space-reserved files and LUNs within the volume, not for the volume's entire size (see the sketch below)
– The rest of the volume does not take any "real" space until it is needed
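The cup analogy can be summarized in a few lines of illustrative Python: what differs between the guarantee types is only when space is charged against the aggregate, not how much data is ultimately written. This is a simplified single-volume model, not ONTAP's actual accounting.

    aggregate_gb = 50
    reserved = 0      # space charged to the aggregate up front
    used = 0          # space actually written

    def create_volume(size_gb, guarantee, reserved_files_gb=0):
        global reserved
        if guarantee == "volume":
            reserved += size_gb              # whole volume charged at creation
        elif guarantee == "file":
            reserved += reserved_files_gb    # only space-reserved files/LUNs charged
        # guarantee == "none": nothing charged until data is written

    def write_data(gb):
        global used
        used += gb                           # thin usage grows only as data arrives

    create_volume(10, "none")
    write_data(1)
    print(aggregate_gb - max(reserved, used))   # 49 cups still available

    reserved = used = 0
    create_volume(10, "volume")
    write_data(1)
    print(aggregate_gb - max(reserved, used))   # 40 cups available as soon as the volume exists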
© 2008 NetApp. All rights reserved. 85

Flexible Volume Resizing: CLI Command
 The vol size command is used to resize a flexible volume
Syntax:
vol size <vol-name> [[+|-]<size>[k|m|g|t]]

Command: vol size FlexVol 50m    Result: FlexVol will now be 50M
Command: vol size FlexVol +50m   Result: FlexVol is increased by 50M, to 100M
Command: vol size FlexVol -25m   Result: FlexVol is decreased by 25M, to 75M
(A small parsing sketch follows below.)
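The size argument accepts an optional +/- prefix and a k/m/g/t suffix. Below is a small, hypothetical Python helper (not a NetApp tool) that mimics the arithmetic of the examples above; the function name and behavior are illustrative only, and this sketch requires the unit suffix even though the CLI treats it as optional.

    import re

    UNITS = {"k": 1, "m": 1024, "g": 1024**2, "t": 1024**3}   # sizes in KB

    def resize(current_kb, spec):
        """Apply a vol size style spec such as '50m', '+50m' or '-25m' (illustrative)."""
        match = re.fullmatch(r"([+-]?)(\d+)([kmgt])", spec.lower())
        if not match:
            raise ValueError(f"bad size spec: {spec}")
        sign, amount, unit = match.groups()
        delta_kb = int(amount) * UNITS[unit]
        if sign == "+":
            return current_kb + delta_kb
        if sign == "-":
            return current_kb - delta_kb
        return delta_kb                        # no sign: absolute new size

    size = resize(0, "50m")        # FlexVol is now 50M
    size = resize(size, "+50m")    # increased by 50M, to 100M
    size = resize(size, "-25m")    # decreased by 25M, to 75M
    print(size // 1024, "MB")      # 75 MB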

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

86

Aggregate and Flexible Volume Removal
 Aggregates cannot be removed until all flexible volumes on the aggregate are removed
 Both flexible volumes and aggregates can be removed using CLI commands and FilerView
– For flexible volumes, use the CLI commands:
 vol offline <FlexVol-name> and vol destroy <FlexVol-name>
– For aggregates, use the CLI commands:
 aggr offline <aggr-name> and aggr destroy <aggr-name>
© 2008 NetApp. All rights reserved. NetApp Confidential -- Do Not Distribute 87

What is FlexShare?

 Allows administrators to prioritize how system resources are used
– Standard with Data ONTAP 7.2 – No license required

 Provides workload prioritization
– Prioritizes resource availability for higher priority tasks – No guarantees on performance or resource availability

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

88

FlexShare Key Features

 Relative priority of different volumes
 Per-volume user vs. system priority
 Per-volume cache policies
 Administration using the CLI or the Manage ONTAP API
– Dynamic updates of configuration changes

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

89

MultiStore
 MultiStore is an optional software product available with Data ONTAP
 Allows partitioning of the storage system and network resources into separate "storage containers"
 These "storage containers" are called virtual filers
– Sometimes referred to as "vFilers"

 Each virtual filer offers file services to the clients
 NOT the same as V-Series

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

90

Benefits of MultiStore
 Virtualization
– Provides a logical view of the storage and computing resources – Hides complexity

 Consolidation and ease of management
– Each virtual storage controller appears as a separate physical storage with a unique IP address

 Security
– Delegation of management – Data owned by a virtual storage controller cannot be accessed by other virtual storage controllers
© 2008 NetApp. All rights reserved. NetApp Confidential -- Do Not Distribute 91

Competitive Layout: Static (Them)
Diagram: in a static virtualization system, the storage consumer (file systems, databases, and so on) sits on a file system over a composite disk object created by striping or concatenating data across the underlying physical disks or LUNs (fed from an array or JBOD).

 All tier-1 competitors offer only static/manual virtualization technology
– TagmaStore
– IBM SVC
– EMC Invista
 Start-ups
– DataCore
– FalconStor
 NO disk optimization

© 2008 NetApp. All rights reserved.

92

Competitive Layout: Dynamic (Us)
Diagram: in the NetApp model, users and applications sit on FlexVol volumes inside an aggregate (an advanced, intelligent data container) with advanced WAFL layout heuristics, RAID-DP data protection, rapid RAID recovery, NVRAM data logging, and accelerated checksums, all over the physical disks or LUNs (fed from an array or JBOD). In the competing model, each file system sits on its own composite disk built by a volume manager from striped or concatenated physical LUNs.

 Only NetApp offers dynamic/automatic virtualization technology
– FAS
– NearStore
– V-Series
 Continuous disk optimization
 Most efficient capacity utilization
– Best price/performance
– Minimal management

Source: http://mktg-web.netapp.com/products/launches/nov04/Nov2004LaunchWeb/index.htm
© 2008 NetApp. All rights reserved. 93

Competitive Layout: NetApp Provisioning Advantages
 MetaLUNs and MetaVolumes
– MetaLUN performance is limited by the number of disks spanned
– "Hot" MetaLUNs cannot be helped by disks on other MetaLUNs
– Resistant to change

 NetApp FlexVol volumes
– Spindle sharing makes total aggregate performance available to all volumes
© 2008 NetApp. All rights reserved. 94

Competitive Layout: Quick Comparisons
NetApp FAS
 Aggregate
– 100 per system
– Minimum disks = 2
– Maximum size = 16 TB
– RAID-DP
 Flexible volume
– Create LUNs within a FlexVol
– 200 per system
– 20 MB minimum, 16 TB maximum
– Spread across all disks in the aggregate
 Snapshot technology
– Patented technology without copy-on-write overhead
– User restore capable
 SnapMirror
– Synchronous, semi-synchronous, and asynchronous
 SnapVault

EMC CLARiiON
 Disk groups / RAID
– CX300: 60 disks / 16.4 TB
– CX500: 120 disks / 35.8 TB
– CX700: 240 disks / 74.4 TB
– RAID 0/1/3/5/10 (RAID 6 talked about, not delivered)
 MetaLUN (FLU/FLARE LUN)
– Two or more FLUs joined as one = MetaLUN
– Theoretical limit of 2048
– Cannot shrink without destroying data
– Manually sized
 SnapView
– Copy-on-write = performance impact
– Snapshots cannot be mounted on the originating server
 MirrorView
– Synchronous or limited asynchronous
 BCV / backup to ATA

© 2008 NetApp. All rights reserved.

95

Exercise
Module 2: Aggregate and Volume Creation Demo
Estimated time: 30 minutes

Volume and Aggregate Creation Demo

References
 Competitive Layout Information
http://mktg-web.netapp.com/products/launches/

 PartnerCenter- Solutions
http://www.netapp.com/mycommunities/PartnerCenter/solutions/solution-sets.html

 Capacity Calculator
http://www.netapp.com/seef/mycomm/partnercenter/solutions/roitei/downloads/capacity-calculator.xls

 Thin-provisioning and FlexVol
http://www.netapp.com/mycommunities/PartnerCenter/10245

 MultiStore
http://www.netapp.com/mycommunities/PartnerCenter/10251

© 2008 NetApp. All rights reserved.

NetApp Confidential -- Do Not Distribute

98

Module Summary
In this module, you should have learned to:
 Identify NetApp core software
– On-box features of Data ONTAP
– Off-box features of Data ONTAP
 Describe the functionality of NVRAM
 Describe the value of NetApp RAID 4 and RAID-DP technology
 Demonstrate NetApp Snapshot technology on a whiteboard

© 2008 NetApp. All rights reserved.

99
