
NETAPP TECHNICAL REPORT

Predictive Cache Statistics
Best Practices Guide
Paul Updike, NetApp
June 2008 | TR-3681

This guide investigates the use of the Predictive Cache Statistics (PCS) functionality in Data ONTAP®.

TABLE OF CONTENTS

1 INTRODUCTION TO PREDICTIVE CACHE STATISTICS
  1.1 ENABLING PREDICTIVE CACHE STATISTICS
2 COLLECTING PREDICTIVE CACHE STATISTICS
  2.1 DATA COLLECTION FOR HIGH-PRECISION ANALYSIS
  2.2 DATA COLLECTION FOR REAL-TIME ANALYSIS
3 ANALYZING RESULTS
  3.1 DECODING COUNTERS
  3.2 EXAMPLE OF A BASIC COUNTERS-BASED ANALYSIS
  3.3 WORKING WITH THE REAL-TIME FLEXSCALE-PCS DATA
  3.4 EXAMPLE OF REAL-TIME DATA ANALYSIS
  3.5 ANALYSIS TOOLS
4 PREDICTIVE CACHE STATISTICS MODES OF OPERATION
  4.1 METADATA CACHING
  4.2 NORMAL USER DATA CACHING (DEFAULT)
  4.3 LOW-PRIORITY DATA CACHING
  4.4 CHOOSING THE BEST MODE
5 APPENDIX
  5.1 APPENDIX 1: SAMPLE OF EXT_CACHE_OBJ STATISTICS
  5.2 APPENDIX 2: FLEXSCALE-PCS.XML FILE CONTENTS

CONFIDENTIAL
Restricted to NetApp employees and channel partners, each of whom is under NDA
obligations. This document may not be shared with customers without prior written
permission from NetApp.


1 INTRODUCTION TO PREDICTIVE CACHE STATISTICS

Predictive Cache Statistics (PCS) is a way of simulating the effects of additional extended cache memory in a system. PCS simulates caches at three memory points above system memory: 2, 4, or 8 times the base memory. The three caches are named EC0 (2x), EC1 (4x), and EC2 (8x). This technique allows you to collect cache statistics as if an actual cache were installed at each of these memory points.

For example, on a FAS3070 with 8GB of system memory, PCS simulates EC0 = 16GB. EC1 is represented as an additional 16GB, bringing the total to 32GB simulated at that point. EC2 is 32GB and brings the total to 64GB. From those statistics, you can model the behavior of real caches and predict the effectiveness of purchasing and placing one or more Performance Acceleration Modules on the system to improve the workload.

1.1 ENABLING PREDICTIVE CACHE STATISTICS

You can enable PCS with a simple command line option on storage controllers with more than 2GB of system memory. To enable PCS you must be running Data ONTAP 7.2.1 or later:

• Data ONTAP 7.2.1 and later 7.2.x releases: "options ext_cache.enable on"
• Data ONTAP 7.3 and later: "options flexscale.enable pcs"

NOTE: For the remainder of this document, we will use the Data ONTAP 7.3 syntax. You may replace "flexscale" with "ext_cache" for PCS on Data ONTAP 7.2.x releases.

There may be a noticeable difference in the performance of the system while PCS is running. This can be between a 10% and 20% increase in protocol latency as the CPU approaches 100% utilization. Before enabling Predictive Cache Statistics, you should observe the following precautions:

• Monitor CPU via the sysstat command. If the "CPU busy" column stays at or above 80%, either:
  o Run PCS and closely monitor system performance, or
  o Do not run PCS on this system
• If you choose to run PCS and monitor system performance, disable the PCS functionality if there is a negative performance impact.

PCS should not be left enabled all the time. When a collection period is complete, you can disable the statistics with "options flexscale.enable off".

2 COLLECTING PREDICTIVE CACHE STATISTICS

2.1 DATA COLLECTION FOR HIGH-PRECISION ANALYSIS

Predictive Cache Statistics are implemented as Data ONTAP Counter Manager objects. The counters associated with PCS are held in two counter manager objects, ext_cache and ext_cache_obj. They can be started, stopped, and reset just like any other counters by using the stats command.

The ext_cache_obj object contains the performance counters, such as the hits and misses. Each of the simulated caches, EC0, EC1, and EC2, has the same set of counters. The individual counters are too numerous to list here, but an example of the counters is listed in Appendix 1.

COLLECTION PROCESS

To collect Predictive Cache Statistics, follow these steps:

1. Observe the precautions just described and then enable PCS.
2. Allow the simulated caches to warm up. This might take a day or so. One way to tell when the cache has stabilized is to view the Usage column described in section 2.2. When the percentage has stabilized across the three instances, you're probably ready to go.
3. At a time of interest in the workload, collect the perfstat data for the workload in question. Five iterations of 10 minutes each (-t 10 -i 5) tends to work well.
4. If necessary, based on the results of the analysis, repeat the process.
5. Disable PCS by setting the option to Off.
6. Save the perfstat output.
7. Analyze the results. Section 3, "Analyzing Results," looks at some of the counters to understand how the cache is working.

The simplest and recommended method of collecting PCS is through the perfstat tool, available in the "tool chest" at NOW™ (NetApp on the Web). This tool collects all of the statistics at the time intervals given at the command line. For example, to collect a 10-minute sample five times, the command would be:

perfstat.sh -f <storage controller> -t 10 -i 5

The perfstat tool is available here: http://now.netapp.com/NOW/download/tools/perfstat/

Note: For proper data collection, be sure to use the latest version of the perfstat tool. Earlier versions of perfstat on Windows® won't collect PCS stats. Also, do not use the -F option to perfstat when collecting PCS. Some analysis tools don't work with perfstats collected in this way.

2.2 DATA COLLECTION FOR REAL-TIME ANALYSIS

Sometimes it may not be possible to collect a perfstat, or you may not have access to the analysis tools to process it. In these scenarios, there is still a way to understand the performance changes of adding Performance Acceleration Modules to a system. In addition to the perfstat, you can also collect real-time data from the cache by following these steps:

1. Copy the text from Appendix 2 into a plain text file and save the file with the name flexscale-pcs.xml.
2. Move the file to the storage system in the <root volume>/etc/stats/preset/ folder.
3. In the same manner described earlier, run perfstat. At step 3 in the collection process just described, start the real-time counters with:

> stats show -p flexscale-pcs

This lists output to the screen with columns Instance, Blocks, Usage, Hit, Miss, Hit %, Evict, Invalidate, and Insert, in the following manner:

Instance   Blocks     Usage  Hit    Miss   Hit  Evict  Invalidate  Insert
                      %      /s     /s     %    /s     /s          /s
ec0        16777216   99     3451   15091  18   5189   0           5298
ec1        16777216   99     0      348    0    4002   1112        5189
ec2        33554432   99     0      0      0    3095   806         4002

ec0        16777216   99     4152   16519  20   0      198         0
ec1        16777216   99     0      390    0    0      51          0
ec2        33554432   99     0      0      0    0      8           0

ec0        16777216   99     4196   16049  20   5124   67          5233
ec1        16777216   99     0      410    0    3561   1547        5124
ec2        33554432   99     0      0      0    2756   765         356

The columns are defined as follows:

• Instance: Instance of the virtual cache.
• Blocks: The number of 4KB blocks in the cache.
• Usage: The percentage of the instance that is currently filled with data.
• Hit: The hits per second of that instance.
• Miss: The misses per second of that instance.
• Hit %: The percentage of hits to total accesses.
• Evict: The blocks evicted per second from the instance.
• Invalidate: The number of blocks invalidated per second due to data updates.
• Insert: The number of insertions per second into the instance.

Note that the instances of the simulated caches may not line up exactly from sample to sample.

Because the simulated caches are being used to help determine the effect of real caches at similar memory points, it is safe to assume linear relationships. Additionally, for all analysis it is important to remember that the Performance Acceleration Module is available at 16GB per module. For example, if the virtual cache is 64GB, you can divide by four 16GB modules to understand the per-module benefit. Similarly, you can divide the hits/s and so on and interpolate the results in the same way.

3 ANALYZING RESULTS

Perfstat collects a lot of data. This paper doesn't cover in-depth analysis of all perfstat data, so familiarity with NetApp® storage system performance is assumed. In that data, there are many data points that apply to PCS. To find the PCS data in a raw perfstat file, search for perfstat_ext_cache_obj, which is in the header of the PCS object as it is displayed in the perfstat output file. See Appendix 1 (in section 5) for an example of such a listing.

The most important data is in terms of the number of hits or misses in each cache. This helps you to understand the amount of data served by each simulated cache.

3.1 DECODING COUNTERS

There are numerous counters in the ext_cache_obj object. This document does not define the individual counters; you can investigate them by using the Data ONTAP stats explain command. When looking at them in their bare form, it is useful to group them in association with their respective modes of operation:

NORMAL
ext_cache_obj:ec0:hit_normal_lev0:248/s

ext_cache_obj:ec0:miss_normal_lev0:50/s

METADATA
ext_cache_obj:ec0:hit_metadata_file:0/s
ext_cache_obj:ec0:hit_directory:0/s
ext_cache_obj:ec0:hit_indirect:0/s
ext_cache_obj:ec0:miss_metadata_file:1/s
ext_cache_obj:ec0:miss_directory:0/s
ext_cache_obj:ec0:miss_indirect:0/s

LOW PRIORITY
ext_cache_obj:ec0:hit_flushq:246/s
ext_cache_obj:ec0:hit_once:214/s
ext_cache_obj:ec0:hit_age:0/s
ext_cache_obj:ec0:miss_flushq:42/s
ext_cache_obj:ec0:miss_once:16/s
ext_cache_obj:ec0:miss_age:0/s

For basic analysis purposes, it is only necessary to understand which mode the counters are associated with, not their individual definitions. The counters can be summed to get total hits/misses per mode.

TOTALS
ext_cache_obj:ec0:hit:261/s
ext_cache_obj:ec0:miss:129/s

These are the summary counters for each of the simulated caches. Don't expect these numbers to match exactly to the sums of the individual modes; it is expected and normal that they are a little off.

Finally, the amount of cache used in each simulated instance is available in the following object.

CACHE USAGE
ext_cache_obj:ec0:usage:55%

3.2 EXAMPLE OF A BASIC COUNTERS-BASED ANALYSIS

The percentage that the simulated caches are filled helps determine the number of Performance Acceleration Modules for this workload. Assume that a system is a FAS6080 and has 32GB of base memory. The simulated cache at EC0 is 2x the base, or 64GB. This cache is filled to only 55%, and the other two (EC1, EC2) are at 0%. Note that the cache has stabilized at this number and is no longer in the process of warming up. You now know that the total amount of space required to cover this workload in cache is 55% of 64GB, or a little more than two 16GB Performance Acceleration Modules.

Looking back at the individual modes of operation, note that the hits are in the normal and low-priority counters, as well as a fair number of misses. There are no hits or misses in the metadata counters, so you can conclude that the metadata is fitting into system memory well. You can use this information to determine the best run-time mode for a simulated cache and/or Performance Acceleration Module. The two modes of operation that make the most sense here are normal, which does not include low-priority blocks, and low-priority.
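The counters-based analysis above can be sketched in a few lines of Python. This is a hedged illustration, not a NetApp tool: the counter names, the mode grouping, the sample values, and the 16GB module size come from this report, while the parser and helper functions are our own assumptions about how you might automate the bookkeeping.

```python
import math

# Sample ext_cache_obj lines quoted in section 3.1.
SAMPLE = """\
ext_cache_obj:ec0:hit_normal_lev0:248/s
ext_cache_obj:ec0:miss_normal_lev0:50/s
ext_cache_obj:ec0:hit_flushq:246/s
ext_cache_obj:ec0:hit_once:214/s
ext_cache_obj:ec0:miss_flushq:42/s
ext_cache_obj:ec0:miss_once:16/s
ext_cache_obj:ec0:hit_metadata_file:0/s
ext_cache_obj:ec0:miss_metadata_file:1/s
"""

# Counter-name suffix -> mode, following the grouping in section 3.1.
MODE_OF = {
    "normal_lev0": "normal",
    "metadata_file": "metadata", "directory": "metadata", "indirect": "metadata",
    "flushq": "lopri", "once": "lopri", "age": "lopri",
}

def per_mode_totals(text):
    """Sum hit/miss counters per mode, as section 3.1 suggests doing by hand."""
    totals = {}  # (mode, "hit" or "miss") -> per-second total
    for line in text.splitlines():
        _obj, _inst, name, value = line.split(":")
        kind, _, suffix = name.partition("_")      # "hit_flushq" -> "hit", "flushq"
        mode = MODE_OF.get(suffix)
        if mode:
            key = (mode, kind)
            totals[key] = totals.get(key, 0) + int(value.rstrip("/s"))
    return totals

def modules_needed(usage_pct, cache_gb, module_gb=16):
    """Size 16GB modules from the usage figure, as in the FAS6080 example."""
    return math.ceil(usage_pct / 100 * cache_gb / module_gb)

print(per_mode_totals(SAMPLE)[("lopri", "hit")])  # 460 low-priority hits/s
print(modules_needed(55, 64))                     # 3 -> "a little more than two"
```

The 55% of 64GB works out to 35.2GB, which is why "a little more than two" modules rounds up to a third 16GB module in practice.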

3.3 WORKING WITH THE REAL-TIME FLEXSCALE-PCS DATA

You can use the output of the flexscale-pcs stats to determine several things.

Instance   Blocks     Usage  Hit    Miss   Hit  Evict  Invalidate  Insert
                      %      /s     /s     %    /s     /s          /s
ec0        16777216   99     3451   15091  18   5189   0           5298
ec1        16777216   99     0      348    0    4002   1112        5189
ec2        33554432   99     0      0      0    3095   806         4002

ec0        16777216   99     4152   16519  20   0      198         0
ec1        16777216   99     0      390    0    0      51          0
ec2        33554432   99     0      0      0    0      8           0

ec0        16777216   99     4196   16049  20   5124   67          5233
ec1        16777216   99     0      410    0    3561   1547        5124
ec2        33554432   99     0      0      0    2756   765         356

• The hit percentage (hit%) tells you the utilization of the cache at each instance (how full).
• The KB/s that the cache serves is approximately equal to the hit/s × 4KB per block. Understanding this information allows you to estimate the amount of work that could be replaced with a Performance Acceleration Module.
• If the usage is stable and there are a small number of invalidates and evictions, then the working set fits well.
• If the hit/(invalidate+evict) ratio is small, then the caching point at that instance is possibly too small. This means that there is a lot more data being discarded than had a chance to be used.
• If the (hit+miss)/invalidate ratio is too small, it might mean a workload with a large amount of updates. In that case, switching to metadata mode and checking the hit% again is advisable.

Try to find the mode that gives the most hits and replaces the biggest amount of I/O to the disk. Through experimentation, you can turn on the low-priority mode and allow another warm-up period, or you can leave the cache in normal mode. In addition to this data, you may want to observe the output of sysstat over a similar interval to understand the workload and the amount of data that is going to disk. Combining the two gives a picture of the effectiveness of adding Performance Acceleration Modules.

3.4 EXAMPLE OF REAL-TIME DATA ANALYSIS

This section analyzes the data above to see what kind of predictions you can make about the workload.

Instance   Blocks     Usage  Hit    Miss   Hit  Evict  Invalidate  Insert
                      %      /s     /s     %    /s     /s          /s
ec0        16777216   99     3451   15091  18   5189   0           5298
ec1        16777216   99     0      348    0    4002   1112        5189
ec2        33554432   99     0      0      0    3095   806         4002

• First, check to see how big each of the caches is:

ec0 = 16777216 blocks × 4KB/block ÷ 1048576KB/GB = 64GB
ec1 = 16777216 blocks × 4KB/block ÷ 1048576KB/GB = 64GB
ec2 = 33554432 blocks × 4KB/block ÷ 1048576KB/GB = 128GB

• Second, note that all three caches are 99% full: the cache is at a stasis point, and the data is valid.

• Check the hits versus the amount of data churning through the cache. It looks like ec0 is the only cache with any hits, so concentrate there:

3451 / (5189 + 0) = 0.66 to 1

That's a pretty good ratio, so the cache is stable and seems effective in the first 64GB.

• Looking at the hit percentage, you see about 18% hits for 3451 hits/s. That's not bad. In a small block random read intensive workload, you could expect that 3451 blocks/s would represent the IOPS of almost two shelves of disks.

• Finally, look at the amount of data being fed out of the cache:

3451 blocks/s × 4KB/block = 13804 KB/s

So on this system, if you added four Performance Acceleration Modules (64GB in ec0 / 16GB per module), you might expect 13.5 MB/s of disk reads to be replaced.

3.5 ANALYSIS TOOLS

It can be complicated to take the many counters that are available from PCS and turn them into data that can influence a purchase and implementation decision. To that end, the following tools have been developed to assist the analysis of PCS data.

PERFSYS

Perfsys is a tool that can take a perfstat and turn it into a set of concise analytics. It can be used as a standalone tool that is run against a perfstat file. It requires Perl to run and is available here: http://wikid.netapp.com/w/Perfsys#Location

The output of perfsys is an HTML file that can be opened with a Web browser. The output has multiple data points. It collects the data from the hits and misses counters, sums the data, groups it by mode, and displays iteration by iteration for each simulated cache. It also calculates the disk I/Os that would be replaced by using the cache at that caching point. This makes the analysis process much faster and more obvious than the counters-based example in section 3.2. The PCS section takes the following form:

Flex Scale Table Key
EC0 cumulative simulated cache: 64 gigabytes
EC1 cumulative simulated cache: 128 gigabytes
EC2 cumulative simulated cache: 256 gigabytes

Total IOs that would have gone to disk: 5846

Caching Point  Cache % Used  Chain Length  Disk IOs Replaced  Hits in MB/s
64GB           99            1.518         1991               11.750
128GB          99            1.510         254                1
256GB          99            1.510         261                1

Raw Hits        3008    669    395
Raw Misses     15405    390      0
Meta Hits       2873    659    379
Meta Misses     1919   1257    878
Lo-Pri Hits        0      0      0
Lo-Pri Misses      0      0      0

Total IOs that would have gone to disk: 5897

Caching Point  Cache % Used  Chain Length  Disk IOs Replaced  Hits in MB/s
64GB           99            1.518         2044               12.125
128GB          99            1.518         387                2
256GB          99            1.510         442                2

Raw Hits        3104    588    386
Raw Misses     15660    382      0
Meta Hits       2969    579    370
Meta Misses     1846   1266    895
Lo-Pri Hits        0      0      0
Lo-Pri Misses      0      0      0
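The arithmetic in the section 3.4 worked example can be replayed in a few lines. This is a plain-arithmetic sketch using the sample ec0 figures; the 4KB block size and 16GB module size come from this report, and nothing below is a NetApp tool.

```python
# Recompute the section 3.4 example from the sample real-time output.
BLOCK_KB = 4
KB_PER_GB = 1048576
ec0_blocks = 16777216

cache_gb = ec0_blocks * BLOCK_KB / KB_PER_GB  # 64.0 GB at the ec0 caching point
ratio = 3451 / (5189 + 0)                     # hit/(evict+invalidate), about 0.66 to 1
kb_served = 3451 * BLOCK_KB                   # 13804 KB/s fed out of the cache
mb_served = kb_served / 1024                  # roughly 13.5 MB/s of reads replaced

print(cache_gb, kb_served, round(mb_served, 1))
```

Note that the conversion from blocks to gigabytes divides by 1048576 KB/GB; multiplying, as a casual reading of the text might suggest, would give a nonsensical figure.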

This analysis also provides a key factor to observe: the number of disk I/Os being replaced at the caching points.

PROCESS FOR ANALYSIS USING PERFSYS

1. Collect PCS by following the steps in section 2.
2. Run perfsys against the file with: perfsys.pl -perfstat <perfstat file>
3. Open the HTML output file with a Web browser.
4. Compare hits and misses for the modes of operation to determine the best mode.
5. Rerun the tests if necessary.

LATX

Latx is a tool that is being developed by the NetApp technical support organization. It is currently in its infancy, but it adds a lot of capability that is not available in other performance analysis tools. For NetApp internal use, Latx is available at http://latx/ on the NetApp internal network. Latx allows you to upload a file via the Web browser and get the perfsys output without needing to have Perl or the script installed on your local machine. Latx includes in its output a perfsys report, which provides the output described earlier.

4 PREDICTIVE CACHE STATISTICS MODES OF OPERATION

The PCS modes of operation match the three modes of operation for the Performance Acceleration Module, which provide the ability to tune the caching behavior to match the storage system's workload. As described in this section, each of the modes in the simulated cache allows a broader amount of data to be stored in the module than the previous one.

4.1 METADATA CACHING

Metadata mode allows only metadata into the extended cache area. Metadata is an often misunderstood concept in Data ONTAP. On NetApp storage systems, metadata can be defined in two contexts. For file services protocols such as NFS and CIFS, metadata generally means the data required to maintain the file and directory structure. When applied to SAN, the term means the small number of blocks (fewer than 1 in 400) that are used for the bookkeeping of the data in a LUN; LUNs do not have a file and directory structure associated with them.

In many random workloads, the actual application data is seldom reused in a timely fashion that would benefit from a caching technology. However, these workloads tend to reuse metadata, and as a result, gain can often be realized by filtering out other types of data and allowing only metadata into the module.

Metadata-only caching is implemented by restricting user data from entering the module. Normal user data and low-priority user data are kept from the module by setting their options to Off.

flexscale.lopri_blocks        off
flexscale.normal_data_blocks  off

Figure 1) Metadata only: with lopri_blocks=off and normal_data_blocks=off, low-priority and normal user data are excluded and only metadata is cached.

4.2 NORMAL USER DATA CACHING (DEFAULT)

The default mode caches all normal data, just as would be cached by Data ONTAP in main memory. This mode includes user and application data as well as metadata. It does not include low-priority data, which is discussed in the next section. This is the default mode of operation.

The following options set the module for normal user data caching; low-priority user data is not allowed in the module:

flexscale.lopri_blocks        off
flexscale.normal_data_blocks  on

Figure 2) Metadata and normal user data are cached: lopri_blocks=off, normal_data_blocks=on.

4.3 LOW-PRIORITY DATA CACHING

When low-priority mode is enabled, metadata, normal user data, and low-priority data are all allowed in the module. Low-priority data is those types of data that have a high chance of overrunning other cached data and/or have less likelihood of being reused. The majority of low-priority user data falls into two categories:

• Recent user or application writes
• Large sequential reads

In the case of recent writes, the inbound write workload can be high enough that the writes overflow the cache and cause other data to be ejected. In addition, heavy write workloads tend not to be read after writing, so they're not necessarily good candidates for caching at all. Large sequential reads have a similar effect: a large amount of data is brought through the cache, overwhelming the other data in it and resident for a short time itself. Like writes, large sequential reads are seldom reread and also tend to be bad candidates. Data of this nature is less beneficial to keep.

These scenarios apply to the confines of main memory, where space is more limited. Normally, low-priority data isn't kept for long in system memory; it's the first thing to go when more space is needed, and writes can come in so fast that they aren't kept long enough before they have to be ejected to make room for more writes. The extended cache provided by the Performance Acceleration Module has the potential to absorb the low-priority data and keep it resident long enough to have a chance at reuse. With ample space to store recent writes or large sequential reads, there can be benefits to keeping the data resident in a large fast-access location. Examples of this type of workload are large file copies, backups, database dumps, and so on.

Because of the design of the Performance Acceleration Module, enabling low-priority mode for the module does not affect the behavior of main memory. Main memory still treats the data with low priority, whereas in the module it can now be kept with other user data and is retained at a relatively low priority.

To configure this mode, change the lopri_blocks setting to On:

flexscale.lopri_blocks        on
flexscale.normal_data_blocks  on

Note: The flexscale.normal_data_blocks option must be on for low-priority mode to work. Setting this option to Off effectively results in metadata mode.

Figure 3) Metadata, normal user data, and low-priority data are all cached: lopri_blocks=on, normal_data_blocks=on.

4.4 CHOOSING THE BEST MODE

To review, the three modes of operation are:

• Metadata
• Normal user data
• Low-priority data

Each mode can cache more data than the previous one. When you choose a more restrictive mode, more of the working set of that type of data can remain in the cache for a longer time than it would if it were competing with data from other modes. For example, if you choose to use metadata mode, the cache is used exclusively for metadata; no user data is allowed into the cache. Potentially, all of the metadata for a working set can be kept in the cache, providing fast access to it.

When you choose to cache normal user data as well, the total pool of data being cached increases. This has two effects on how long and how much of that data stays in the cache. It may present more chances to have access "hits" to that data, because the data kept has more memory available to it. Or, because a greater amount is now moving through the module, the data needed for the "hit" may not stay around long enough to actually be accessed.

If the working set is very large and active (terabytes), there is little chance of reaccessing user data. However, having the metadata quickly available still improves the performance of the workload.

With this in mind, use the following guidance to determine which mode fits your workload:

1. Measure the performance of the system before enabling the module.
2. Start with the default mode, and measure the performance of the cache with this mode enabled.
3. Using this data, determine the performance of the simulated cache, as described in section 3.
4. If this is a substantial improvement, you may want to do nothing more. If not, change the mode to either metadata or low priority and repeat.

5 APPENDIX

5.1 APPENDIX 1: SAMPLE OF EXT_CACHE_OBJ STATISTICS

=-=-=-=-=-= PERF systemname POSTSTATS =-=-=-=-=-=
stats stop -I perfstat_ext_cache_obj
TIME: 10:31:35
TIME_DELTA: 2:3 (123s)
ext_cache_obj:ec0:type:1
ext_cache_obj:ec0:uptime:3484948981
ext_cache_obj:ec0:blocks:8388608
ext_cache_obj:ec0:associativity:4
ext_cache_obj:ec0:sets:2097152
ext_cache_obj:ec0:usage:55%
ext_cache_obj:ec0:accesses_total:714557
ext_cache_obj:ec0:accesses:390/s
ext_cache_obj:ec0:accesses_sync:0/s
ext_cache_obj:ec0:hit:261/s
ext_cache_obj:ec0:hit_flushq:246/s
ext_cache_obj:ec0:hit_once:214/s
ext_cache_obj:ec0:hit_age:0/s

ext_cache_obj:ec0:hit_normal_lev0:248/s
ext_cache_obj:ec0:hit_metadata_file:0/s
ext_cache_obj:ec0:hit_directory:0/s
ext_cache_obj:ec0:hit_indirect:0/s
ext_cache_obj:ec0:hit_partial:0/s
ext_cache_obj:ec0:hit_sync:0/s
ext_cache_obj:ec0:hit_flushq_sync:0/s
ext_cache_obj:ec0:hit_once_sync:0/s
ext_cache_obj:ec0:hit_age_sync:0/s
ext_cache_obj:ec0:hit_normal_lev0_sync:0/s
ext_cache_obj:ec0:hit_metadata_file_sync:0/s
ext_cache_obj:ec0:hit_directory_sync:0/s
ext_cache_obj:ec0:miss:129/s
ext_cache_obj:ec0:miss_flushq:42/s
ext_cache_obj:ec0:miss_once:16/s
ext_cache_obj:ec0:miss_age:0/s
ext_cache_obj:ec0:miss_normal_lev0:50/s
ext_cache_obj:ec0:miss_metadata_file:1/s
ext_cache_obj:ec0:miss_directory:0/s
ext_cache_obj:ec0:miss_indirect:0/s
ext_cache_obj:ec0:miss_sync:0/s
ext_cache_obj:ec0:miss_flushq_sync:0/s
ext_cache_obj:ec0:miss_once_sync:0/s
ext_cache_obj:ec0:miss_age_sync:0/s
ext_cache_obj:ec0:miss_normal_lev0_sync:0/s
ext_cache_obj:ec0:miss_metadata_file_sync:0/s
ext_cache_obj:ec0:miss_directory_sync:0/s
ext_cache_obj:ec0:lookup_reject:0/s
ext_cache_obj:ec0:lookup_reject_sync:609/s
ext_cache_obj:ec0:lookup_reject_normal_l0:0/s
ext_cache_obj:ec0:lookup_reject_io:0/s
ext_cache_obj:ec0:lookup_chains:14/s
ext_cache_obj:ec0:lookup_chain_cnt:301/s
ext_cache_obj:ec0:hit_percent:66%
ext_cache_obj:ec0:hit_percent_sync:0%
ext_cache_obj:ec0:inserts:73/s
ext_cache_obj:ec0:inserts_flushq:49/s
ext_cache_obj:ec0:inserts_once:0/s
ext_cache_obj:ec0:inserts_age:0/s
ext_cache_obj:ec0:inserts_normal_lev0:51/s
ext_cache_obj:ec0:inserts_metadata_file:3/s
ext_cache_obj:ec0:inserts_directory:7/s
ext_cache_obj:ec0:inserts_indirect:9/s
ext_cache_obj:ec0:insert_rejects_misc:4/s
ext_cache_obj:ec0:insert_rejects_present:218/s
ext_cache_obj:ec0:insert_rejects_flushq:0/s
ext_cache_obj:ec0:insert_rejects_normal_lev0:0/s
ext_cache_obj:ec0:insert_rejects_throttle:0/s
ext_cache_obj:ec0:insert_rejects_throttle_io:0/s
ext_cache_obj:ec0:insert_rejects_throttle_refill:0/s
ext_cache_obj:ec0:insert_rejects_throttle_mem:0/s
ext_cache_obj:ec0:insert_rejects_cache_reuse:0/s
ext_cache_obj:ec0:insert_rejects_vbn_invalid:0/s
ext_cache_obj:ec0:reuse_percent:357%
ext_cache_obj:ec0:evicts:72/s
ext_cache_obj:ec0:evicts_ref:2/s
ext_cache_obj:ec0:readio_solitary:0/s

ext_cache_obj:ec0:readio_chains:0/s ext_cache_obj:ec0:readio_blocks:0/s ext_cache_obj:ec0:readio_in_flight:0 ext_cache_obj:ec0:readio_max_in_flight:0 ext_cache_obj:ec0:readio_avg_chainlength:0 ext_cache_obj:ec0:readio_avg_latency:0ms ext_cache_obj:ec0:writeio_solitary:0/s ext_cache_obj:ec0:writeio_chains:0/s ext_cache_obj:ec0:writeio_blocks:0/s ext_cache_obj:ec0:writeio_in_flight:0 ext_cache_obj:ec0:writeio_max_in_flight:0 ext_cache_obj:ec0:writeio_avg_chainlength:0 ext_cache_obj:ec0:writeio_avg_latency:0ms ext_cache_obj:ec0:blocks_ref0:8345882 ext_cache_obj:ec0:blocks_ref1:41023 ext_cache_obj:ec0:blocks_ref2:608 ext_cache_obj:ec0:blocks_ref3:18446744073709551221 ext_cache_obj:ec0:blocks_ref4:18446744073709551257 ext_cache_obj:ec0:blocks_ref5:539 ext_cache_obj:ec0:blocks_ref6:380 ext_cache_obj:ec0:blocks_ref7:930 ext_cache_obj:ec0:blocks_ref0_arrivals:7905 ext_cache_obj:ec0:blocks_ref1_arrivals:50631 ext_cache_obj:ec0:blocks_ref2_arrivals:3933 ext_cache_obj:ec0:blocks_ref3_arrivals:2088 ext_cache_obj:ec0:blocks_ref4_arrivals:1900 ext_cache_obj:ec0:blocks_ref5_arrivals:2098 ext_cache_obj:ec0:blocks_ref6_arrivals:1437 ext_cache_obj:ec0:blocks_ref7_arrivals:5795 ext_cache_obj:ec0:lru_ticks:242592 ext_cache_obj:ec0:invalidates:0/s ext_cache_obj:ec1:type:1 ext_cache_obj:ec1:uptime:3484948957 ext_cache_obj:ec1:blocks:8388608 ext_cache_obj:ec1:associativity:4 ext_cache_obj:ec1:sets:2097152 ext_cache_obj:ec1:usage:0% ext_cache_obj:ec1:accesses_total:1391 ext_cache_obj:ec1:accesses:5/s ext_cache_obj:ec1:accesses_sync:0/s ext_cache_obj:ec1:hit:0/s ext_cache_obj:ec1:hit_flushq:6/s ext_cache_obj:ec1:hit_once:0/s ext_cache_obj:ec1:hit_age:0/s ext_cache_obj:ec1:hit_normal_lev0:7/s ext_cache_obj:ec1:hit_metadata_file:0/s ext_cache_obj:ec1:hit_directory:0/s ext_cache_obj:ec1:hit_indirect:0/s ext_cache_obj:ec1:hit_partial:0/s ext_cache_obj:ec1:hit_sync:0/s ext_cache_obj:ec1:hit_flushq_sync:0/s ext_cache_obj:ec1:hit_once_sync:0/s ext_cache_obj:ec1:hit_age_sync:0/s 
ext_cache_obj:ec1:hit_normal_lev0_sync:0/s ext_cache_obj:ec1:hit_metadata_file_sync:0/s ext_cache_obj:ec1:hit_directory_sync:0/s ext_cache_obj:ec1:miss:5/s

ext_cache_obj:ec1:miss_flushq:36/s ext_cache_obj:ec1:miss_once:16/s ext_cache_obj:ec1:miss_age:0/s ext_cache_obj:ec1:miss_normal_lev0:43/s ext_cache_obj:ec1:miss_metadata_file:1/s ext_cache_obj:ec1:miss_directory:0/s ext_cache_obj:ec1:miss_indirect:0/s ext_cache_obj:ec1:miss_sync:0/s ext_cache_obj:ec1:miss_flushq_sync:0/s ext_cache_obj:ec1:miss_once_sync:0/s ext_cache_obj:ec1:miss_age_sync:0/s ext_cache_obj:ec1:miss_normal_lev0_sync:0/s ext_cache_obj:ec1:miss_metadata_file_sync:0/s ext_cache_obj:ec1:miss_directory_sync:0/s ext_cache_obj:ec1:lookup_reject:0/s ext_cache_obj:ec1:lookup_reject_sync:0/s ext_cache_obj:ec1:lookup_reject_normal_l0:0/s ext_cache_obj:ec1:lookup_reject_io:0/s ext_cache_obj:ec1:lookup_chains:0/s ext_cache_obj:ec1:lookup_chain_cnt:0/s ext_cache_obj:ec1:hit_percent:0% ext_cache_obj:ec1:hit_percent_sync:0% ext_cache_obj:ec1:inserts:72/s ext_cache_obj:ec1:inserts_flushq:0/s ext_cache_obj:ec1:inserts_once:0/s ext_cache_obj:ec1:inserts_age:0/s ext_cache_obj:ec1:inserts_normal_lev0:0/s ext_cache_obj:ec1:inserts_metadata_file:0/s ext_cache_obj:ec1:inserts_directory:0/s ext_cache_obj:ec1:inserts_indirect:0/s ext_cache_obj:ec1:insert_rejects_misc:0/s ext_cache_obj:ec1:insert_rejects_present:4/s ext_cache_obj:ec1:insert_rejects_flushq:0/s ext_cache_obj:ec1:insert_rejects_normal_lev0:0/s ext_cache_obj:ec1:insert_rejects_throttle:0/s ext_cache_obj:ec1:insert_rejects_throttle_io:0/s ext_cache_obj:ec1:insert_rejects_throttle_refill:0/s ext_cache_obj:ec1:insert_rejects_throttle_mem:0/s ext_cache_obj:ec1:insert_rejects_cache_reuse:0/s ext_cache_obj:ec1:insert_rejects_vbn_invalid:0/s ext_cache_obj:ec1:reuse_percent:0% ext_cache_obj:ec1:evicts:63/s ext_cache_obj:ec1:evicts_ref:0/s ext_cache_obj:ec1:readio_solitary:0/s ext_cache_obj:ec1:readio_chains:0/s ext_cache_obj:ec1:readio_blocks:0/s ext_cache_obj:ec1:readio_in_flight:0 ext_cache_obj:ec1:readio_max_in_flight:0 ext_cache_obj:ec1:readio_avg_chainlength:0 ext_cache_obj:ec1:readio_avg_latency:0ms 
ext_cache_obj:ec1:writeio_solitary:0/s ext_cache_obj:ec1:writeio_chains:0/s ext_cache_obj:ec1:writeio_blocks:0/s ext_cache_obj:ec1:writeio_in_flight:0 ext_cache_obj:ec1:writeio_max_in_flight:0 ext_cache_obj:ec1:writeio_avg_chainlength:0 ext_cache_obj:ec1:writeio_avg_latency:0ms

ext_cache_obj:ec1:blocks_ref0:8384862 ext_cache_obj:ec1:blocks_ref1:3682 ext_cache_obj:ec1:blocks_ref2:825 ext_cache_obj:ec1:blocks_ref3:18446744073709551586 ext_cache_obj:ec1:blocks_ref4:33 ext_cache_obj:ec1:blocks_ref5:18446744073709551498 ext_cache_obj:ec1:blocks_ref6:18446744073709551455 ext_cache_obj:ec1:blocks_ref7:18446744073709551131 ext_cache_obj:ec1:blocks_ref0_arrivals:4438 ext_cache_obj:ec1:blocks_ref1_arrivals:6259 ext_cache_obj:ec1:blocks_ref2_arrivals:2204 ext_cache_obj:ec1:blocks_ref3_arrivals:815 ext_cache_obj:ec1:blocks_ref4_arrivals:653 ext_cache_obj:ec1:blocks_ref5_arrivals:292 ext_cache_obj:ec1:blocks_ref6_arrivals:302 ext_cache_obj:ec1:blocks_ref7_arrivals:816 ext_cache_obj:ec1:lru_ticks:256816 ext_cache_obj:ec1:invalidates:4/s ext_cache_obj:ec2:type:1 ext_cache_obj:ec2:uptime:3484948917 ext_cache_obj:ec2:blocks:16777216 ext_cache_obj:ec2:associativity:8 ext_cache_obj:ec2:sets:2097152 ext_cache_obj:ec2:usage:1% ext_cache_obj:ec2:accesses_total:0 ext_cache_obj:ec2:accesses:0/s ext_cache_obj:ec2:accesses_sync:0/s ext_cache_obj:ec2:hit:0/s ext_cache_obj:ec2:hit_flushq:3/s ext_cache_obj:ec2:hit_once:0/s ext_cache_obj:ec2:hit_age:0/s ext_cache_obj:ec2:hit_normal_lev0:5/s ext_cache_obj:ec2:hit_metadata_file:0/s ext_cache_obj:ec2:hit_directory:0/s ext_cache_obj:ec2:hit_indirect:0/s ext_cache_obj:ec2:hit_partial:0/s ext_cache_obj:ec2:hit_sync:0/s ext_cache_obj:ec2:hit_flushq_sync:0/s ext_cache_obj:ec2:hit_once_sync:0/s ext_cache_obj:ec2:hit_age_sync:0/s ext_cache_obj:ec2:hit_normal_lev0_sync:0/s ext_cache_obj:ec2:hit_metadata_file_sync:0/s ext_cache_obj:ec2:hit_directory_sync:0/s ext_cache_obj:ec2:miss:0/s ext_cache_obj:ec2:miss_flushq:33/s ext_cache_obj:ec2:miss_once:16/s ext_cache_obj:ec2:miss_age:0/s ext_cache_obj:ec2:miss_normal_lev0:38/s ext_cache_obj:ec2:miss_metadata_file:1/s
ext_cache_obj:ec2:miss_directory:0/s ext_cache_obj:ec2:miss_indirect:0/s ext_cache_obj:ec2:miss_sync:0/s ext_cache_obj:ec2:miss_flushq_sync:0/s ext_cache_obj:ec2:miss_once_sync:0/s ext_cache_obj:ec2:miss_age_sync:0/s ext_cache_obj:ec2:miss_normal_lev0_sync:0/s ext_cache_obj:ec2:miss_metadata_file_sync:0/s

ext_cache_obj:ec2:miss_directory_sync:0/s ext_cache_obj:ec2:lookup_reject:0/s ext_cache_obj:ec2:lookup_reject_sync:0/s ext_cache_obj:ec2:lookup_reject_normal_l0:0/s ext_cache_obj:ec2:lookup_reject_io:0/s ext_cache_obj:ec2:lookup_chains:0/s ext_cache_obj:ec2:lookup_chain_cnt:0/s ext_cache_obj:ec2:hit_percent:0% ext_cache_obj:ec2:hit_percent_sync:0% ext_cache_obj:ec2:inserts:63/s ext_cache_obj:ec2:inserts_flushq:0/s ext_cache_obj:ec2:inserts_once:0/s ext_cache_obj:ec2:inserts_age:0/s ext_cache_obj:ec2:inserts_normal_lev0:0/s ext_cache_obj:ec2:inserts_metadata_file:0/s ext_cache_obj:ec2:inserts_directory:0/s ext_cache_obj:ec2:inserts_indirect:0/s ext_cache_obj:ec2:insert_rejects_misc:0/s ext_cache_obj:ec2:insert_rejects_present:1/s ext_cache_obj:ec2:insert_rejects_flushq:0/s ext_cache_obj:ec2:insert_rejects_normal_lev0:0/s ext_cache_obj:ec2:insert_rejects_throttle:0/s ext_cache_obj:ec2:insert_rejects_throttle_io:0/s ext_cache_obj:ec2:insert_rejects_throttle_refill:0/s ext_cache_obj:ec2:insert_rejects_throttle_mem:0/s ext_cache_obj:ec2:insert_rejects_cache_reuse:0/s ext_cache_obj:ec2:insert_rejects_vbn_invalid:0/s ext_cache_obj:ec2:reuse_percent:0% ext_cache_obj:ec2:evicts:11/s ext_cache_obj:ec2:evicts_ref:0/s ext_cache_obj:ec2:readio_solitary:0/s ext_cache_obj:ec2:readio_chains:0/s ext_cache_obj:ec2:readio_blocks:0/s ext_cache_obj:ec2:readio_in_flight:0 ext_cache_obj:ec2:readio_max_in_flight:0 ext_cache_obj:ec2:readio_avg_chainlength:0 ext_cache_obj:ec2:readio_avg_latency:0ms ext_cache_obj:ec2:writeio_solitary:0/s ext_cache_obj:ec2:writeio_chains:0/s ext_cache_obj:ec2:writeio_blocks:0/s ext_cache_obj:ec2:writeio_in_flight:0 ext_cache_obj:ec2:writeio_max_in_flight:0 ext_cache_obj:ec2:writeio_avg_chainlength:0 ext_cache_obj:ec2:writeio_avg_latency:0ms ext_cache_obj:ec2:blocks_ref0:16776473 ext_cache_obj:ec2:blocks_ref1:640 ext_cache_obj:ec2:blocks_ref2:131 ext_cache_obj:ec2:blocks_ref3:18446744073709551615
ext_cache_obj:ec2:blocks_ref4:15 ext_cache_obj:ec2:blocks_ref5:2 ext_cache_obj:ec2:blocks_ref6:3 ext_cache_obj:ec2:blocks_ref7:18446744073709551569 ext_cache_obj:ec2:blocks_ref0_arrivals:579 ext_cache_obj:ec2:blocks_ref1_arrivals:1181 ext_cache_obj:ec2:blocks_ref2_arrivals:457 ext_cache_obj:ec2:blocks_ref3_arrivals:76 ext_cache_obj:ec2:blocks_ref4_arrivals:55

ext_cache_obj:ec2:blocks_ref5_arrivals:30 ext_cache_obj:ec2:blocks_ref6_arrivals:19 ext_cache_obj:ec2:blocks_ref7_arrivals:61 ext_cache_obj:ec2:lru_ticks:87054 ext_cache_obj:ec2:invalidates:1/s
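The raw listing above is dense. As an illustration only (not part of the original report), a short script can fold `object:instance:counter:value` tokens from such a dump into per-instance dictionaries and recompute a hit percentage from the per-second `hit` and `miss` counters. The parser below is a sketch written against the output format shown in this appendix; the function names are ours, not Data ONTAP's.

```python
# Sketch: parse "ext_cache_obj:instance:counter:value" tokens from a PCS
# counter dump and recompute a hit percentage per emulated cache instance.
# Illustrative only; written for the format shown in this appendix.

def parse_counters(lines):
    """Fold counter tokens into {instance: {counter: value}} dictionaries."""
    caches = {}
    for line in lines:
        for token in line.split():
            parts = token.split(":")
            if len(parts) != 4 or parts[0] != "ext_cache_obj":
                continue  # skip anything that is not a counter token
            _, instance, counter, value = parts
            # Drop the unit suffix ("/s", "%", "ms") and keep the number.
            number = value.rstrip("/sms%")
            if number.isdigit():
                caches.setdefault(instance, {})[counter] = int(number)
    return caches

def hit_percent(counters):
    """Hit rate from the hit/miss rate counters (0 when the cache is idle)."""
    hits = counters.get("hit", 0)
    misses = counters.get("miss", 0)
    total = hits + misses
    return 100 * hits // total if total else 0
```

Fed the ec1 lines above (`hit:0/s`, `miss:5/s`), this reproduces the 0% reported by the `hit_percent` counter itself.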

5.2 APPENDIX 2: FLEXSCALE-PCS.XML FILE CONTENTS

<?xml version="1.0" ?>
<!-- Display in column format basic FlexScale PCS performance information -->
<preset orientation="column" interval="5" print_footer="on">
    <object name="ext_cache_obj">
        <counter name="blocks">
            <title>Blocks</title>
            <width>9</width>
        </counter>
        <counter name="usage">
            <title>Usage</title>
            <width>5</width>
        </counter>
        <counter name="hit">
            <title>Hit</title>
            <width>5</width>
        </counter>
        <counter name="miss">
            <title>Miss</title>
            <width>5</width>
        </counter>
        <counter name="hit_percent">
            <title>Hit</title>
            <width>3</width>
        </counter>
        <counter name="evicts">
            <title>Evict</title>
            <width>5</width>
        </counter>
        <counter name="invalidates">
            <title>Invalidate</title>
            <width>10</width>
        </counter>
        <counter name="inserts">
            <title>Insert</title>
            <width>6</width>
        </counter>
    </object>
</preset>

© 2008 NetApp, Inc. All rights reserved. Specifications are subject to change without notice. NetApp, the NetApp logo, Go further, faster, Data ONTAP, and NOW are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Windows is a registered trademark of Microsoft Corporation. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.