
Adaptive Flash Cache

Appendix B
HK902 E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Explain the benefits of Adaptive Flash Cache (AFC)
• Understand what can and cannot be moved into AFC
• Explain the different LRU (Least Recently Used) queues and the concept of LRU queue demotion
• Use the appropriate CLI commands to set up, enable, disable, remove, and monitor AFC
• Understand the guidelines and rules regarding AFC
• Monitor AFC using the statcache and srstatcache commands


Adaptive Flash Cache introduction

When a read request arrives at the array from a host, the read is either sequential or random:
• With a sequential read stream, an HP 3PAR OS algorithm detects the sequential read pattern and pre-fetches data into cache, resulting in a cache-read hit
• With a random read stream that has no predictable pattern, the read usually results in a cache-read miss, and the requested data must be read into cache from the back-end disks, which is non-optimal from a performance point of view

Adaptive Flash Cache adds a second level of cache (between DRAM and the back-end disks), using SSD capacity to increase the probability that a random read can be serviced at a much more effective rate, improving read performance.

Adaptive Flash Cache explained

[Diagram: two side-by-side data paths. Without Adaptive Flash Cache, host reads and writes pass through DRAM cache (16K page size) directly to and from HDD. With Adaptive Flash Cache, an AFC tier on SSD (16K cache page size) sits between DRAM cache and HDD: reads destaged from DRAM cache are copied to AFC and can later be served from it, while writes still go to HDD.]

What does/does not get cached in AFC

• Only small-block random read data (IO size smaller than 64K) from a node's DRAM cache is a candidate for moving to AFC
• Data that is pre-fetched into DRAM cache by the array's sequential read-ahead algorithm, and data in DRAM cache with IO size >= 64K, are not candidates to be moved to AFC
• Data is only placed into AFC after having been resident in DRAM first—data is
never put in AFC directly from FC and NL back end disks
• Data read into DRAM from SSD media will never be placed in AFC
• AFC is not intended to be an extension for write data

Cache Memory Page (CMP) vs. Flash Memory Page (FMP)

DRAM Cache
• DRAM cache on a node is broken down into 16K cache memory pages (CMPs)
• A CMP can be used by host writes, by the mirroring of writes from other controllers, or for sequential or random read data requested by hosts
• When DRAM cache utilization reaches 90% (only 10% of CMPs are free/clean), 16K CMPs used for random read IO become candidates to be moved out to SSD AFC

Adaptive Flash Cache
• AFC is broken down into 16K flash memory pages (FMPs) built from 1 GB chunklets on SSD disks
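The rules from the last two slides can be condensed into a small predicate. This is an illustrative Python sketch, not 3PAR code; the field names (`op`, `pattern`, `io_size_kb`, `source`) are hypothetical, but the thresholds come from the slides:

```python
# Illustrative sketch of AFC destage candidacy, combining the rules above:
# the move to AFC begins once fewer than 10% of CMPs are free/clean, and
# only small-block random read data that did not come from SSD qualifies.

def dram_pressure_high(free_or_clean_cmps, total_cmps):
    """True when DRAM cache utilization has reached 90%."""
    return free_or_clean_cmps / total_cmps < 0.10

def is_afc_candidate(cmp_page):
    """Would this 16K CMP move to AFC when destaged? (hypothetical fields)"""
    return (cmp_page["op"] == "read"             # AFC never holds write data
            and cmp_page["pattern"] == "random"  # read-ahead data excluded
            and cmp_page["io_size_kb"] < 64      # small-block IO only
            and cmp_page["source"] != "SSD")     # SSD-read data never cached

page = {"op": "read", "pattern": "random", "io_size_kb": 16, "source": "FC"}
print(dram_pressure_high(free_or_clean_cmps=50, total_cmps=1000))  # True
print(is_afc_candidate(page))                                      # True
```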

AFC data flow in the controllers

• AFC leverages SSD capacity in an array as a level-2 read cache extension
− Small block random read data that is about to be removed from DRAM cache is copied to AFC
− Provides a second-level caching layer between DRAM cache and spinning disks (HDD)
• If data in flash cache is accessed, it is copied into DRAM cache and remains there until it is once again removed
• AFC is fully compatible with Adaptive Optimization (AO): AO moves 128MB regions between the Nearline (NL), Fast Class (FC), and Solid State (SSD) tiers, while AFC moves 16KB CMPs into flash cache LDs on SSD
• If an array contains both cMLC and eMLC SSDs, AFC will be created on the eMLC SSDs
AFC data flow in the controllers: how it works

[Diagram: server ↔ DRAM read cache (16k CMPs) ↔ flash cache (16k FMPs on the LRU queues: Dormant, Cold, Norm, Warm, Hot) ↔ virtual volume]

1. The server issues a read request for LBA 0x9abc6h
2. The request misses in DRAM cache
3. LBA 0x9abc6h is read from the VV into a 16k CMP in DRAM cache
4. The read request is completed to the server
5. Later, the system wants to remove the 16k CMP for LBA 0x9abc6h from DRAM cache
6. A 16k FMP is allocated from the "Dormant" LRU queue to the "Normal" LRU queue
7. The 16k CMP is copied to the AFC FMP and removed from DRAM cache
8. The server issues another read request for LBA 0x9abc6h, which again misses in DRAM cache
9. LBA 0x9abc6h is read from flash cache into DRAM cache (an AFC hit)
10. The read request is completed to the server
11. The 16k FMP is promoted from the "Normal" to the "Warm" queue because of the flash cache hit
12. The 16k CMP is again copied to its AFC FMP and removed from DRAM cache
LRU (Least Recently Used) Queues

Hot → Warm → Norm → Cold → Dormant

• When a 16K CMP of random read data moves to AFC, it is copied to a 16K FMP and placed on one of five least-recently-used queues that track how hot the data is
• A 16K CMP that needs to be moved from DRAM to AFC (as a 16K FMP) is placed in the NORM LRU queue
• FMPs in AFC that are accessed by a host can be promoted to hotter, higher-priority LRU queues
• FMPs in AFC can be placed in a lower-priority LRU queue as a result of queue demotion
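The queue mechanics above can be sketched as a toy model. This is illustrative Python, not 3PAR code: the queue names, the NORM entry point, hit promotion, and the demote-when-Dormant-runs-dry trigger follow the slides, while everything else (counters, page bookkeeping) is invented for the example:

```python
# Toy model of AFC's five LRU queues: new FMPs enter Norm, a flash cache
# hit promotes an FMP one queue hotter, and when the Dormant (free) queue
# runs out every queue is demoted one level, freeing the Cold pages.

QUEUES = ["dormant", "cold", "norm", "warm", "hot"]

class FlashCacheLRU:
    def __init__(self, total_fmps):
        # Every FMP starts free, i.e. on the Dormant queue.
        self.counts = dict.fromkeys(QUEUES, 0)
        self.counts["dormant"] = total_fmps
        self.level = {}  # lba -> queue currently holding its FMP

    def destage(self, lba):
        """A 16K CMP leaves DRAM: allocate an FMP from Dormant into Norm."""
        while self.counts["dormant"] == 0:
            self.demote_all()  # queue demotion eventually frees FMPs
        self.counts["dormant"] -= 1
        self.counts["norm"] += 1
        self.level[lba] = "norm"

    def hit(self, lba):
        """A flash cache read hit promotes the FMP one queue hotter."""
        cur = self.level[lba]
        hotter = QUEUES[min(QUEUES.index(cur) + 1, len(QUEUES) - 1)]
        self.counts[cur] -= 1
        self.counts[hotter] += 1
        self.level[lba] = hotter

    def demote_all(self):
        """Shift every queue down one level: Hot->Warm, ..., Cold->Dormant."""
        for lower, upper in zip(QUEUES, QUEUES[1:]):
            self.counts[lower] += self.counts[upper]
            self.counts[upper] = 0
        # Pages that were Cold age out; their FMPs become free again.
        self.level = {lba: QUEUES[QUEUES.index(q) - 1]
                      for lba, q in self.level.items() if q != "cold"}

cache = FlashCacheLRU(total_fmps=3)
cache.destage(0x9abc6)      # enters the Norm queue
cache.hit(0x9abc6)          # promoted Norm -> Warm on an AFC hit
cache.destage(0xc619ab)
cache.destage(0x111111)     # the Dormant queue is now empty
cache.destage(0x222222)     # forces queue demotion before allocating
print(cache.counts)  # {'dormant': 1, 'cold': 1, 'norm': 1, 'warm': 0, 'hot': 0}
```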
AFC data flow in the controllers: LRU Queue Demotion

[Diagram: the same server / DRAM read cache / flash cache LRU queues / virtual volume layout as the previous walkthrough]

1. The server issues a read request for LBA 0xc619ab
2. The request misses in DRAM cache
3. LBA 0xc619ab is read from the VV into a 16k CMP in DRAM read cache
4. The read request is completed to the server
5. Later, the system wants to remove the 16k CMP for LBA 0xc619ab from read cache
6. A 16k FMP is allocated from the "Dormant" LRU queue to the "Normal" LRU queue
7. The 16k CMP is written to the AFC FMP and removed from DRAM cache
8. Eventually the AFC "Dormant" LRU queue runs out of FMPs
9. So all AFC LRU queues are demoted one level: Hot FMPs move to Warm, Warm to Norm, Norm to Cold, and Cold to Dormant, making those FMPs available for allocation again
Technet24.ir

LRU movement examples (1 of 2)


Excerpts from statcache output showing LRU queues

----------------- FMP Queue Statistics ------------------

Node Dormant Cold Norm Warm Hot Destage Read Flush WrtBack
0 24056921 0 1107007 1484 412 0 0 0 0

----------------- FMP Queue Statistics ------------------

Node Dormant Cold Norm Warm Hot Destage Read Flush WrtBack
0 0 0 25132171 7000 26653 0 0 0 0

LRU movement examples (2 of 2)
Excerpts from statcache output showing LRU queues
----------------- FMP Queue Statistics ------------------

Node Dormant Cold Norm Warm Hot Destage Read Flush WrtBack
0 0 25132171 7000 26653 0 0 0 0 0

----------------- FMP Queue Statistics ------------------

Node Dormant Cold Norm Warm Hot Destage Read Flush WrtBack
0 25132171 7000 26653 0 0 0 0 0 0


AFC specifics
• No license required
• Must be running HP 3PAR OS 3.2.1 or higher
• Not supported on the HP 3PAR 7450/7450c models
• Minimum amount of AFC configurable per controller node pair is 64 GB for all models
• Maximum amount of AFC configurable per controller node pair depends on the hardware model:
7200/7200c: 768 GB
7400/7400c: 768 GB (1500 GB max for 4-node models)
7440c: 1500 GB (3000 GB max for 4-node models)
V400 and V800: 2064 GB
• A minimum of 4 SSDs per controller node pair for 7000 series models, and 8 SSDs per controller node pair for the V400 and V800, is required for AFC configuration

Configuring and monitoring AFC


Adaptive Flash Cache CLI commands


Command Summary
createflashcache Specify amount of SSD capacity per node pair to be allocated for AFC

setflashcache Enable/disable AFC for VV Sets or per array

showflashcache Display how much AFC has been allocated per controller node

removeflashcache Disable/remove all AFC from array

statcache Display cache statistics, including AFC stats

srstatcache Display historical performance data reports for flash cache and data cache

AFC CLI: createflashcache and showflashcache
• Create 128 GB of Flash Cache for each node pair in the array (must be specified in multiples of 16 GB):
cli% createflashcache 128g

• Display the status of Flash Cache for all nodes on the system (example output shown):
cli% showflashcache
-(MB)-
Node Mode State Size Used%
0 SSD normal 65536 0
1 SSD normal 65536 0
-------------------------------
2 total 131072


AFC CLI: setflashcache System Level Mode (1 of 2)


• There are two “targets” the setflashcache subcommands can be executed against:
“sys:all”
“vvset:<vvset name>”

• The “sys:all” target is used to enter System Level Flash Cache Mode and enables/disables
flash cache for all VVs and VVsets on a system globally
cli% setflashcache enable sys:all
cli% setflashcache disable sys:all

• The system level mode is global and overrides any settings applied to VVset targets while not in
system level mode

• To exit system level flash cache mode and get back into VVset mode you must use the “clear”
subcommand
cli% setflashcache clear sys:all

AFC CLI: setflashcache System Level Mode (2 of 2)
• When flash cache system level mode is entered, any flash cache setting for vvset:<VVset> targets is overridden

• While in system mode, the showflashcache command with either the -vv or -vvset option will not display information for individual VVs or VVsets: these options only display individual VV and VVset data when not in system level mode


AFC CLI: setflashcache when NOT in System Level Mode


• Use the “vvset:<VVset name>” target when not in flash cache system level mode to enable/disable flash cache for VVsets
• Any flash cache setting specified using the “vvset:<VVset name>” target will be overridden (have no effect) if you go into system level mode
• Changes made to a VVset target while in system level mode are recorded but do not affect the active system-mode settings; you cannot see their effect until system level mode is cleared
• When system level mode is cleared, any “vvset:<VVset name>” target configurations that were created or modified while in system level mode are applied
• showflashcache with either the -vv or -vvset option only displays VV and VVset data when not in system mode
• Examples:
cli% setflashcache enable vvset:ESX5ii
cli% setflashcache disable vvset:ESX5ii

AFC CLI command examples (1 of 2)
• Display the status of VVsets with Flash Cache enabled on the system (example output shown)
cli% showflashcache -vvset

Id VVSetName AFCPolicy
1 ESX5ii enabled
----------------------
1 total

• Display the status of VVs with Flash Cache enabled (example output shown)
cli% showflashcache -vv
VVid VVName AFCPolicy
50 ESX5ii.0 enabled
51 ESX5ii.1 enabled
52 ESX5ii.2 enabled
-------------------------
3 total

AFC CLI command examples (2 of 2)

• Display the status of Flash Cache for all nodes on the system (example output shown):
cli% showflashcache
-(MB)-
Node Mode State Size Used%
0 SSD normal 65536 35
1 SSD normal 65536 35
-------------------------------
2 total 131072

• Disable and remove all flash cache from the array:
cli% removeflashcache
Are you sure you want to remove the flash cache?
Select q=quit y=yes n=no: y

• Display the status of Flash Cache for all nodes on the system (example output shown):
cli% showflashcache

Flash Cache is not present.
Monitoring Cache: statcache (1 of 5)
statcache shows CMP (DRAM) statistics and FMP statistics for data held in AFC

statcache
When run with no options, statcache reports CMP and FMP statistics on a per-node basis


Monitoring Cache: statcache (2 of 5)


Read and Write statistics details

• Node: Node ID on the storage system


• Type: Data access type, either Read or Write
AFC does not cache write data so the “write” counter for FMP will always be 0
• Accesses: Number of Current and Total Read/Write I/Os
• Hit%: Hits divided by accesses (displayed in percentages)

Monitoring Cache: statcache (3 of 5)
Read Back and Destaged Write

• Read Back: Data reads from flash cache back into DRAM cache; these represent flash cache read hits
• Destaged Write: Writes of CMPs from DRAM into flash cache; these occur when a CMP is being removed from DRAM read cache and written into flash cache -- these writes are mirrored in flash cache (RAID 1) even though AFC only holds read data


Monitoring Cache: statcache (4 of 5)


FMP Queue Statistics

Displays FMPs in LRU queues per controller node

Monitoring Cache: statcache (5 of 5)
CMP Queue Statistics

• Free: Number of cache pages without valid data on them


• Clean: Number of clean cache pages (valid data on page)
A page is clean when data in cache matches data on disk
• Write1: Number of dirty pages that have been modified exactly 1 time
A page is dirty when it has been modified in cache but not written to disk
• WriteN: Number of dirty pages that have been modified more than 1 time
• WrtSched: Number of pages scheduled to be written to disk
• Writing: Number of pages currently being written by the flusher to disk
• DcowPend: Number of pages waiting for delayed copy on write resolution
• DcowProc: Number of pages currently being processed for delayed copy on write
resolution

Monitoring Cache: statcache -v

statcache -v
reports CMP and FMP
statistics by virtual volume

Monitoring Cache: statcache -v -metadata

statcache -v -metadata
reports CMP, FMP statistics and
metadata statistics by virtual
volume


Monitoring Cache: srstatcache (1 of 2)


Displays historical cache statistic reports for both CMPs and FMPs

Monitoring Cache: srstatcache (2 of 2)

cli% srstatcache -internal_flashcache -fmp_queue -btsecs -10m

----CMP---- ----FMP---- -Read Back- -Dstg Wrt- ----------------------FMP Queue-----------------------


Time Secs r/s w/s rhit% whit% rhit% whit% IO/s MB/s IO/s MB/s Dormant Cold Norm Warm Hot Destage Read Flush WrtBack
2015-02-07 13:05:00 MDT 1404759900 2730.0 2721.0 5.4 0.5 0.1 0.0 2.0 0.0 0.0 0.0 16770454 0 5643 884 235 0 0 0 0
2015-02-07 13:10:00 MDT 1404760200 2580.0 2583.0 5.3 0.5 0.1 0.0 2.0 0.0 0.0 0.0 16770685 0 5147 1193 191 0 0 0 0

Display the internal flashcache activity including FMP queue statistics beginning 10 minutes ago


Warm-up time for AFC

Estimating Adaptive Flash Cache Warmup (1 of 3)
• Just like a normal DRAM-based cache, flash cache requires time to warm up before an application or benchmark sees a noticeable improvement in I/O latency

• The warm-up time can be anywhere from minutes to hours, depending on factors such as workload and cache size

• To estimate how long it will take to warm up flash cache:

First: Estimate the cache size

Cache Size = (DRAM Read Cache) + (FLASH Read Cache)


Estimating Adaptive Flash Cache Warmup (2 of 3)

• Second: Estimate the demand for Flash Cache in MB/sec
This demand depends on the I/O size
All read I/Os to FC drives on HP 3PAR are 16KB aligned and 16KB in size
Sequential reads, and reads that are 64KB or larger, should not be included when estimating flash cache warm-up

IO Size           <=16KB  <=32KB  <=48KB  <64KB
IO Size Modifier  16      32      48      64

Flash Cache Fill Rate = Read IOPS * (IO Size Modifier)
Estimating Adaptive Flash Cache Warmup (3 of 3)
Estimating Cache Hit Rate

• To understand the effectiveness of Flash Cache, you also need to be able to estimate the cache hit rate

• To do this, take the size of the array's cache and divide it by the Working Set Size

Cache Hit Rate = Array Cache Size / Working Set Size

• Working Set Size: a measure of the total amount of unique blocks being accessed over a unit of time

Warmup time and Cache Hit Rate: Example

Configuration:
• 8TB of VVs
• 32GB of read cache
• 256GB of flash cache
• 288GB total of array cache (Array Cache Size = DRAM Read Cache + FLASH Read Cache)

Cache Hit Rate = Array Cache Size / Working Set Size
288GB / 8TB = 0.036, so for a random workload the best flash cache hit % would be 3.6%

Flash Cache Fill Rate = Read IOPS * (IO Size Modifier)
Host workload: 20,000 4KB random reads
20,000 * 16 = 320 MB/sec for filling flash cache

Time to Fill = Array Cache Size / Flash Cache Fill Rate
288GB / (320 MB/sec) = 900 seconds, so the maximum cache hit rate can be obtained in about 15 minutes
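The example's arithmetic can be checked in a few lines of Python. This is a sketch using the slide's numbers and decimal units (1 GB = 1000 MB); nothing here is 3PAR code:

```python
# Reproduce the slide's warmup estimate (sizes in MB, rates in MB/sec).
GB, TB = 1_000, 1_000_000            # decimal units, in MB

array_cache_mb = 32 * GB + 256 * GB  # DRAM read cache + flash cache = 288 GB
working_set_mb = 8 * TB              # 8 TB of VVs

hit_rate = array_cache_mb / working_set_mb
print(f"best flash cache hit rate: {hit_rate:.1%}")   # 3.6%

read_iops = 20_000                   # 4KB random reads
io_size_modifier_kb = 16             # reads <= 16KB fill a whole 16KB page
fill_rate_mb = read_iops * io_size_modifier_kb / 1_000
print(f"fill rate: {fill_rate_mb:.0f} MB/sec")        # 320 MB/sec

time_to_fill_s = array_cache_mb / fill_rate_mb
print(f"time to fill: {time_to_fill_s:.0f} s (~{time_to_fill_s / 60:.0f} min)")
```

Note that the 4KB reads consume a 16KB fill each, because all reads from FC drives are 16KB aligned and 16KB in size.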
AFC Impact Example (1 of 2)

Pre-configuration of AFC
Workload stats: IOs/sec ~1000; service time ~6 ms

Approx. 10 minutes after configuration of AFC
Workload stats: IOs/sec ~1500 (50% increase); service time ~4 ms (33% decrease)

AFC Impact Example (2 of 2)

Approx. 35 minutes after configuration of AFC
Workload stats: IOs/sec ~4500 (4.5x the baseline); service time ~1 ms (83% decrease)
AFC tidbits
• Once the amount of AFC is specified using the createflashcache command, the amount cannot be changed: the removeflashcache command must be used to remove the designated AFC, which must then be re-created using createflashcache
• When adding new SSDs, the tunesys operation cannot be used to rebalance FMPs across all SSDs used for AFC: to use all SSDs (including newly added ones), the removeflashcache command must be used to remove the designated AFC, which must then be re-created using createflashcache
• AFC and Adaptive Optimization can co-exist on the same array but serve different purposes: both improve performance and reduce cost
• createflashcache -sim <size> can be used to track flash cache statistics even on an array without SSDs, to determine whether AFC would be beneficial
• If the array contains both eMLC and cMLC SSDs, flash cache will be created on the eMLC drives
• Flash Cache is not supported on the 480GB cMLC SSDs
• Administration and configuration of AFC can be done using SSMC 2.1 or higher