
Storage Tiering with FabricPool:

Moving Cold Data from ONTAP to


StorageGRID

October 2020 | SL10637_C Version 1.2.1


TABLE OF CONTENTS

1 Introduction...................................................................................................................................... 3

2 Lab Activities................................................................................................................................... 5

2.1 FabricPool.................................................................................................................................. 5
2.1.1 Inactive Data Reporting........................................................................................................................................ 7

2.1.2 FabricPool Setup for StorageGRID.................................................................................................................... 14

2.1.3 FabricPool Operation.......................................................................................................................................... 33

3 Lab Summary.................................................................................................................................57

4 References......................................................................................................................................58

Storage Tiering with FabricPool: Moving Cold Data from ONTAP to StorageGRID
2 © 2020 NetApp, Inc. All rights reserved. NetApp Proprietary
1 Introduction
FabricPool is a NetApp Data Fabric technology that enables automated tiering of data to low-cost object storage
located in the public and private cloud. SSD storage is fast but expensive, so ideally you want to limit SSD use to
only workloads that actively require it. FabricPool offers a policy-driven means for selecting and moving (tiering)
certain categories of data from your SSD storage (the internal tier) to more cost-effective object storage (the cloud
tier).

Figure 1-1:

FabricPool has three primary use cases:


• Primary storage space reclamation—Snapshots and unstructured secondary data (like completed
projects and old data sets) are inefficient consumers of SSD storage, so it makes sense to transfer this
data to more cost-effective (i.e., slower) object storage, thereby freeing that expensive SSD storage for
better uses.
• Secondary storage space reclamation—Data protection volumes (SnapMirror and SnapVault
destination volumes) are frequently stored on secondary storage clusters. Like snapshots, these data
protection copies are often infrequently used, so by directing the majority of this data to low cost object
storage, customers can significantly reduce the number of disk shelves needed on their secondary
clusters.
• Volume move—Businesses often need to retain historical datasets (like completed projects and legacy
reports) that they only occasionally need to access. It is inefficient to consume expensive primary
storage for such data, so FabricPool allows you to move entire volumes containing this data to much
less expensive object storage.
NetApp’s Cloud Tiering
NetApp now offers a service that simplifies the administration of FabricPool across multiple ONTAP clusters from
the cloud. Cloud Tiering uses exactly the same FabricPool technology that you will learn about in this lab, except
that management across multiple clusters is consolidated into a single service.
Cloud Tiering offers automation, monitoring, reports, and a common management interface:
• Automation makes it easier to set up and manage data tiering from ONTAP clusters to the cloud.
• A single pane of glass removes the need to independently manage FabricPool across several clusters.
• Reports show the amount of active and inactive data on each cluster.

• A tiering health status helps you identify and correct issues as they occur.
While the Cloud Tiering management interface is beyond the scope of this lab, all of the concepts and operations
you will learn about in this lab will help you understand FabricPool technology in any implementation. For more
information about the Cloud Tiering service, see the References section at the end of this guide.

2 Lab Activities
SSD storage is fast, but expensive. Cloud storage can offer a less expensive alternative, but usually with lower
performance. Wouldn't it be nice if you could get SSD latencies at cloud prices?
Storage tiering can be used to create an optimal solution. NetApp FabricPool offers a policy-driven means for
selecting and moving certain categories of data from your performance tier SSD storage to more cost-effective
cloud tier storage. The cloud-tier storage can be hosted external to the organization (e.g., Amazon S3), or hosted
internally on NetApp StorageGRID. The decision to use FabricPool is independent of the decision about where to
host the tiered data.
Imagine you are a storage administrator providing high-performance storage to your users based on NetApp
ONTAP All Flash FAS. Based on your experience, you know that some of the data is accessed frequently ("hot"
data) and some of the data is accessed infrequently ("cold" data).
This lab guide will lead you through three main activities:
1. Suppose you want to quantify the potential benefit of using FabricPool. The Inactive Data Reporting on
page 7 section of this lab shows you how to use Inactive Data Reporting (IDR) to assess how much
data can potentially be tiered.
2. Based on IDR, you have determined that using FabricPool to tier data to cloud storage is useful. The
FabricPool Setup for StorageGRID section of this lab shows you how to configure FabricPool to migrate
data to cloud storage, using NetApp StorageGRID as an example.
3. Once you have data tiered to cloud storage, you may want to optimize your setup. The FabricPool
Operation on page 33 section of this lab shows you how to examine tiering results and make
adjustments.

2.1 FabricPool
In FabricPool, cloud tiers (also known as external tiers, which are basically object store buckets) attach to
aggregates. FlexGroup and FlexVol volumes residing on the aggregate must be configured for thin provisioning
before the cloud tier can be attached to the aggregate.
You can attach more than one cloud tier to a cluster, but a given aggregate can only attach to a single cloud
tier. You can attach multiple aggregates to the same cloud tier (i.e., the same object store bucket), but NetApp
recommends that you not connect multiple clusters to the same object store bucket. When dealing with
aggregates that host FlexGroup volumes, best practice is to have all of the hosting aggregates attached to the
same bucket.
FabricPool uses a tiering policy applied at the volume level to decide which FlexVol and FlexGroup volumes on a
FabricPool-enabled aggregate can participate in tiering. The tiering policy also specifies what types of data blocks
within participating volumes are potentially eligible for relocation out to the cloud tier.
Determining which of a volume’s blocks are potentially eligible for relocation to an attached cloud tier is a function
of each block’s “temperature”. When a block is accessed, ONTAP labels the block as “hot”. Over time, if the block
is not accessed again, ONTAP gradually “cools” the block down. Only blocks that have become sufficiently
“cold” (i.e., inactive) can potentially be relocated out to the cloud tier, and even then only if they comply with any
special eligibility criteria dictated by the volume’s assigned tiering policy.
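The cooling behavior described above can be sketched as a small simulation. This is an illustrative model only, not ONTAP's actual implementation; the day counter and the 31-day default threshold are taken from the tiering-policy descriptions in this guide.

```python
# Illustrative model of FabricPool block cooling -- NOT ONTAP's actual
# implementation. A block's "temperature" is approximated by the number
# of days since it was last accessed; past the volume's cooling
# threshold, the block becomes a tiering candidate.

def is_cold(days_since_access: int, cooling_days: int = 31) -> bool:
    """True once a block has been inactive long enough to be tiered."""
    return days_since_access >= cooling_days

def touch() -> int:
    """Any read or write resets the block to 'hot' (day counter zero)."""
    return 0

days = 0
for _ in range(31):          # a month passes with no access
    days += 1
print(is_cold(days))         # True: eligible for the cloud tier
days = touch()               # a read re-heats the block
print(is_cold(days))         # False
```

Note that in the real system a read does not just re-heat the blocks; as described below, it also pulls tiered blocks back into the internal tier.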
When the blocks comprising a file's contents get moved out to the cloud tier, the file's WAFL metadata remains
in the internal tier (also known as the performance tier). Files with partially or fully tiered blocks still appear to be
in their original locations in the internal tier, so users can still browse directories and list files in the internal tier
without any data transfer from the cloud tier, or even any awareness that the file's blocks reside in the cloud tier. If
a tiered file's contents are read, FabricPool seamlessly reads the blocks back from the cloud tier into the internal
tier, with the only visible impact to the end user being a short delay as FabricPool transfers the blocks back. Once
the user is finished with the files, FabricPool's algorithm for cooling blocks may eventually force the associated
blocks back out to the cloud tier, depending on the volume's assigned tiering policy.

Although FabricPool can significantly reduce storage footprints, it is not a backup solution. If a catastrophic
disaster destroys the internal tier, you cannot recover the data from the cloud tier alone because it contains no
WAFL metadata. You still need a standard backup solution to protect your internal tier.
ONTAP offers the following four volume tiering policies:
• snapshot-only:
• Use Case: Primary storage space reclamation.
• Effect: Only cold data blocks that are exclusively associated with snapshot copies are eligible
for relocation to the cloud tier. Under this tiering policy, it takes two days of inactivity by
default for data blocks to become “cold”. This value can be made greater on a per-volume
basis, but cannot be reduced below two days. This is the default tiering policy that ONTAP
assigns to a volume at volume creation.
• auto: (introduced in ONTAP 9.4)
• Use Case: Primary storage space reclamation.
• Effect: All cold blocks in the volume, not just those associated with snapshot copies, are
eligible for relocation to the cloud tier. Under this tiering policy, it takes 31 days by default for
blocks to become “cold”, but this value is configurable from 2 to 63 days, on a per-volume
basis.
• all:
• Use Case: Secondary storage space reclamation, Volume Move.
• Effect: All data blocks are immediately moved to the cloud tier; they are all “cold” by default.
In addition, as blocks are read they stay “cold” and remain on the cloud tier. This policy
can only be set on Data Protection volumes, or used in conjunction with a volume move
operation.
• none:
• Use Case: N/A.
• Effect: Disables tiering to the object store for the volume, but if the volume resides on a
FabricPool-enabled aggregate then blocks can still cool. Any volume blocks that were moved
to a cloud tier before this policy was assigned will remain in the cloud tier until accessed (at
which time they will be restored to the local tier).
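The four policies above can be summarized in a small lookup table; a sketch, with the default cooling periods and eligible block classes taken from the descriptions in this guide:

```python
# Lookup table of the four ONTAP tiering policies as described above:
# snapshot-only defaults to 2 days of inactivity (and cannot go lower),
# auto defaults to 31 days (configurable 2-63), all tiers immediately,
# and none disables tiering for the volume.
TIERING_POLICIES = {
    "snapshot-only": {"default_cooling_days": 2,
                      "eligible": "cold blocks belonging only to snapshots"},
    "auto":          {"default_cooling_days": 31,
                      "eligible": "all cold blocks in the volume"},
    "all":           {"default_cooling_days": 0,
                      "eligible": "all blocks, immediately"},
    "none":          {"default_cooling_days": None,
                      "eligible": "nothing (tiering disabled)"},
}

def default_cooling_days(policy: str):
    """Default days of inactivity before blocks become 'cold'."""
    return TIERING_POLICIES[policy]["default_cooling_days"]
```

Remember that snapshot-only is the policy ONTAP assigns automatically at volume creation.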
Even if data blocks are cold, they still will not relocate to the cloud tier unless the following additional criteria are
met:
• The volume containing the data blocks must reside on an aggregate with an attached cloud tier, and the
volume must have an assigned tiering policy that makes the blocks eligible for tiering.
• By default, the cloud-attached aggregate must be greater than 50% full. There is usually little benefit in
tiering cold data to the cloud tier if the internal tier is under-utilized, but this threshold is configurable on
a per-aggregate basis.
• There must be enough cold blocks available to concatenate into a single 4MB object for writing to the
cloud tier. ONTAP's WAFL block size is 4KB, so you need at least 1,024 cold WAFL blocks to create the
4MB object required for the cloud tier.
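The two quantitative gates above can be expressed as a short check. This is an illustrative sketch, not ONTAP code; the 50% default threshold comes from the text, and the block count follows from the arithmetic 4 MB / 4 KB = 1,024 blocks per object.

```python
# Sketch of the tiering gates described above (illustrative, not ONTAP code).
OBJECT_SIZE_KB = 4 * 1024    # FabricPool writes 4MB objects to the cloud tier
WAFL_BLOCK_KB = 4            # WAFL block size

# 1,024 cold 4KB blocks are needed to fill one 4MB object
BLOCKS_PER_OBJECT = OBJECT_SIZE_KB // WAFL_BLOCK_KB

def can_tier(aggr_used_pct: float, cold_blocks: int,
             fullness_threshold: float = 50.0) -> bool:
    """Cold blocks move only when the aggregate is fuller than the
    threshold AND at least one full object's worth of cold blocks exists."""
    return (aggr_used_pct > fullness_threshold
            and cold_blocks >= BLOCKS_PER_OBJECT)

print(can_tier(60.0, 2000))   # True
print(can_tier(24.0, 2000))   # False: aggregate under-utilized
print(can_tier(60.0, 500))    # False: not enough cold blocks yet
```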
Important: Block cooling depends on a background cooling scan process in ONTAP. If the ONTAP
system is turned off, cooling does not progress. This is a particularly important point for Cloud Volumes for
ONTAP; turning such an instance off when it is not in use may reduce computing costs, but also prevents
the cooling scan from running.
From an ONTAP perspective, FabricPool supports All Flash FAS systems, all SSD aggregates on non-AFF FAS
systems, and most volume types on Cloud Volumes ONTAP systems.
The supported FabricPool object storage targets are:
• NetApp StorageGRID 10.3 and greater.
No ONTAP FabricPool capacity license is required for StorageGRID.
• Third-party object storage providers:

• Amazon Web Services (AWS) S3
• Microsoft Azure Blob Storage
• IBM Cloud Object Store
To utilize FabricPool on AFF or FAS hybrid Flash systems with third-party object storage providers, customers
must purchase capacity-based ONTAP FabricPool licenses, available in 1 TB increments. Tiering stops when the
amount of data in the capacity tier reaches the licensed capacity.
Attention: Purchases of new ONTAP 9.2+ clusters include a free 10TB FabricPool capacity license!

The FabricPool capacity licenses do not cover usage charges from the third-party object storage providers; those
charges are the exclusive responsibility of the customer.
A FabricPool capacity license is not required when using Amazon S3 or Microsoft Azure Blob Storage as the
cloud tier for Cloud Volumes ONTAP. Cloud Volumes ONTAP is NetApp's cloud-based software-defined storage
offering which runs as a virtual instance on cloud hyperscalers like Amazon Web Services (AWS), Microsoft
Azure, and Google Cloud Platform.
Note: This lab consists of three exercises. The first two exercises demonstrate how to get FabricPool
running on a new cluster. The third exercise demonstrates FabricPool in operation on a cluster that has
already been pre-configured for FabricPool. If desired, you can skip directly to the third exercise without
completing the first two exercises, but you will miss some helpful contextual content if you do.

2.1.1 Inactive Data Reporting

Inactive Data Reporting (IDR) is a tool introduced in ONTAP 9.4 that helps determine how much inactive (or cold)
data there is in your aggregates. No FabricPool license is required to use IDR, and you do not have to attach a
cloud tier to your aggregates or your cluster in order to use it.
You enable IDR on a per-aggregate basis, and ONTAP reports data inactivity for each volume on the enabled
aggregate. However, it takes 31 days after you enable IDR before ONTAP will display the inactivity information.
Until this time period has elapsed, ONTAP will not report a value.
Since IDR requires this much time to elapse before it will report, this exercise shows you how to enable it on one
aggregate, and then has you examine a different aggregate that has been preconfigured and has already aged
enough for IDR to start reporting data.
IDR reporting information can be viewed from either the CLI or from System Manager, but IDR can only be
enabled from the CLI.
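The 31-day reporting delay can be expressed as a one-line date check; a sketch, assuming elapsed time is IDR's only reporting precondition:

```python
from datetime import date, timedelta

# IDR shows no values until 31 days after it is enabled on an aggregate.
IDR_REPORTING_DELAY = timedelta(days=31)

def idr_reports(enabled_on: date, today: date) -> bool:
    """True once an IDR-enabled aggregate has aged enough to report."""
    return today - enabled_on >= IDR_REPORTING_DELAY

print(idr_reports(date(2020, 9, 1), date(2020, 9, 10)))   # False: too soon
print(idr_reports(date(2020, 9, 1), date(2020, 10, 15)))  # True
```

This is why the exercise below enables IDR on one aggregate but examines a different, pre-aged aggregate for actual results.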
1. On the taskbar of Jumphost, launch PuTTY.

Figure 2-1:

2. In PuTTY, double-click the session profile for cluster2 to launch the SSH session to that cluster.


Figure 2-2:

3. Enter the password Netapp1! to log in as the “admin” user.


4. Display a list of the available volumes.

cluster2::> vol show
Vserver   Volume       Aggregate    State      Type Size       Available  Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
cluster2-01
vol0 aggr0 online RW 10.04GB 2.71GB 74%
svm3 app1 aggr1_cluster2
online RW 1GB 1015MB 0%
svm3 app2 aggr1_cluster2
online RW 1GB 1015MB 0%
svm3 proj1 aggr1_cluster2
online RW 1GB 333.3MB 65%
svm3 proj2 aggr1_cluster2
online RW 1GB 573.9MB 41%
svm3 proj3 aggr3_cluster2
online RW 2GB 339.2MB 82%
svm3 proj4 aggr4_cluster2
online RW 2GB 137.5MB 92%
svm3 proj5 aggr2_cluster2
online RW 1GB 621.2MB 36%
svm3 proj6 aggr5_cluster2
online RW 2GB 497.7MB 74%
svm3 proj7 aggr5_cluster2
online RW 2GB 979.9MB 49%
svm3 svm3_root aggr3_cluster2
online RW 20MB 17.43MB 8%
11 entries were displayed.

cluster2::>

In this activity, the goal is to find out how much inactive data there is on the “proj3” volume, which the vol
show command says is hosted on the “aggr3_cluster2” aggregate. Note that this volume is 2 GB in size,
and 82% used.
5. Display the list of aggregates on the cluster.

cluster2::> aggr show

Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0 10.62GB 516.3MB 95% online 1 cluster2-01 raid_dp,
normal
aggr1_cluster2
7.13GB 5.62GB 21% online 4 cluster2-01 raid_dp,
normal
aggr2_cluster2
7.13GB 6.72GB 6% online 1 cluster2-01 raid_dp,
normal
aggr3_cluster2
7.13GB 5.41GB 24% online 2 cluster2-01 raid_dp,
normal
aggr4_cluster2
7.13GB 5.34GB 25% online 1 cluster2-01 raid_dp,
normal
aggr5_cluster2
7.13GB 4.72GB 34% online 2 cluster2-01 raid_dp,
normal
6 entries were displayed.

cluster2::>

The fact that “aggr3_cluster2” aggregate is only 24% full, well below the 50% minimum threshold usually
required for FabricPool, is not a problem as IDR does not require any minimum capacity utilization.
6. Enable IDR on aggr3_cluster2.

cluster2::> aggr show -aggregate aggr3_cluster2,aggr4_cluster2 -fields is-inactive-data-reporting-enabled
aggregate      is-inactive-data-reporting-enabled
-------------- ----------------------------------
aggr3_cluster2 false
aggr4_cluster2 true

cluster2::> aggr modify -aggregate aggr3_cluster2 -is-inactive-data-reporting-enabled true

cluster2::> aggr show -aggregate aggr3_cluster2,aggr4_cluster2 -fields is-inactive-data-reporting-enabled
aggregate      is-inactive-data-reporting-enabled
-------------- ----------------------------------
aggr3_cluster2 true
aggr4_cluster2 true

cluster2::>

Note: IDR only starts tracking inactivity starting at the time you enable it on the aggregate, and it
assumes that all files in the aggregate, including those only present in snapshots, start as active.
Since IDR requires 31 days to report data once it has been enabled on an aggregate, for this lab you will
need to examine a pre-configured aggregate to see what the results look like. The aggr show commands
you just ran indicate that “aggr4_cluster2” also has IDR enabled.
7. Determine what volume(s) reside on “aggr4_cluster2”.

cluster2::> vol show -aggregate aggr4_cluster2
Vserver   Volume       Aggregate    State      Type Size       Available  Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm3 proj4 aggr4_cluster2
online RW 2GB 137.6MB 92%

cluster2::>

“proj4” is also a 2 GB volume, and is about 92% full.

8. View the inactive data report for both “proj3” and “proj4”.

cluster2::> vol show -volume proj3,proj4 -fields performance-tier-inactive-user-data,performance-tier-inactive-user-data-percent
vserver volume performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----------------------------------- -------------------------------------------
svm3    proj3  -                                   -
svm3    proj4  1.76GB                              88%
2 entries were displayed.

cluster2::>

At this time, almost 1.76 GB of data on proj4 is classified as inactive, 88% of proj4’s total space
consumption.
Notice that proj3 only reports “-” characters for the values for its output. In this command’s output, the “-”
character indicates one of the following conditions:
• The underlying aggregate is not configured for IDR.
• The underlying aggregate is configured for IDR, but insufficient time has passed since IDR
was enabled for it to report inactivity data.
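The percentage in the CLI output can be reproduced with simple arithmetic. In this sketch the denominator is assumed to be the 2 GB volume size, which reproduces the 88% figure shown for proj4; a "-" in the CLI output is represented as None.

```python
def inactive_percent(inactive_gb, total_gb):
    """Mirror the CLI's inactive-data percentage. None stands in for
    the '-' shown when IDR has no data yet for the volume."""
    if inactive_gb is None:
        return None
    return round(100 * inactive_gb / total_gb)

print(inactive_percent(1.76, 2.0))   # 88, matching proj4's CLI output
print(inactive_percent(None, 2.0))   # None, like proj3
```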
You can also view inactive data reporting information in System Manager.
9. On the taskbar of Jumphost, launch Firefox.

Figure 2-3:

10. On the Firefox browser toolbar, navigate to NetApp Bookmarks > NetApp ONTAP Cluster 2.
11. Log in as the user admin with the password Netapp1!.

Figure 2-4:

12. In the left pane, navigate to Storage > Tiers.


13. On the right side of the page, there is an informational message with the heading “No cloud tiers are
configured.” Within this message it is reported that about 1.76 GB of data is inactive. This is a summary
across all aggregates in the cluster.
14. On the left side of the page is the SSD section which contains a short summary of each aggregate.

Figure 2-5:

15. Locate and click the aggr4_cluster2 aggregate.

Figure 2-6:

16. Within the Overview tab on the aggr4_cluster2 page, observe that it also reports about 1.76 GB of
inactive data. This of course is the same value reported by the CLI for this aggregate.
17. Click All Tiers near the top of the page to return to the Storage Tiers listing.

Figure 2-7:

18. Within the SSD section on the Storage Tiers page, locate and click the aggr3_cluster2 aggregate.

Figure 2-8:

19. Observe that there is no mention of inactive data on the page for “aggr3_cluster2”, once again because
insufficient time has passed since IDR was enabled on the aggregate for reporting to start.

Figure 2-9:

2.1.2 FabricPool Setup for StorageGRID

This section of the lab guide will show you how to configure FabricPools to migrate data to cloud storage using
NetApp StorageGRID as an example.
In the following activities you will set up FabricPool with a StorageGRID S3 target. You will:
• Verify that FabricPool has not yet been set up on the cluster.
• Create an intercluster LIF for FabricPool.
• Create a bucket in StorageGRID for the FabricPool cloud tier you will be creating.
• Add a new FabricPool cloud tier to the cluster.
• Attach an existing aggregate to the new cloud tier.
Each exercise in this section is dependent upon the exercises that come before it, so you should complete them
all in the order in which they appear in this guide.

2.1.2.1 Determine if ONTAP has an Object Store


1. If Firefox is not already running on Jumphost, launch Firefox from the taskbar.

Figure 2-10:

2. From the browser toolbar, navigate to NetApp Bookmarks > NetApp ONTAP Cluster 2.
3. If not already logged in to System Manager, log in with the following credentials:
• Username: admin
• Password: Netapp1!

Figure 2-11:

System Manager displays the Dashboard page.


4. In the left pane, navigate to Storage > Tiers tab.
5. At the top of the page, in the box located under the “Tiers” title, there is a utilization bar that displays
how much space is used on all of the cluster’s internal tiers. If there are no object stores attached to the
cluster, there is no information displayed about cloud tiers. However, if there are already object stores,
information about the utilization for each one is displayed.
6. If there are no object stores attached to the cluster, there is an informational message at the top titled
“No cloud tiers are configured.” There is also a note that indicates how much inactive data there is in
the cluster. This message is only present because (as a time-saving measure for a later lab exercise)
this cluster includes a volume that is preconfigured for Inactive Data Reporting. In a cluster that does not
have Inactive Data Reporting configured, this message would not appear.
Tip: System Manager arranges screen elements based on the size of the in-lab browser window.
If an element does not appear where indicated in this guide, try expanding your browser window.

Figure 2-12:

2.1.2.2 Create an Intercluster LIF for FabricPool


FabricPool utilizes intercluster LIFs for transferring data into and out of external capacity tiers. An intercluster
LIF is an ONTAP logical network interface that is used for data replication, typically between clusters but also
between clusters and object stores.
Now create an intercluster LIF that FabricPool will use when tiering data between the cluster and object storage.
1. In the left pane of System Manager, navigate to Network > Overview tab.
2. In the “Network Interfaces” section, observe that there are no LIFs with the type “Intercluster” currently
defined for the cluster.
3. Click the plus icon in the top right of that section.

Figure 2-13:

The “Add Network Interface” dialog opens.


4. Set the fields in this window as follows:
• “Interface Role:” Select the Intercluster option.
• “Name:” intercluster1
• “IP Address:” 192.168.0.154
• “Subnet Mask:” 24 (automatically set after entering “IP Address”).
5. Click Save.

Figure 2-14:

The “Add Network Interface” dialog closes, and focus returns to the Network Overview page in System
Manager.
6. Verify that the new “intercluster1” LIF is present in the list of network interfaces.

Figure 2-15:

Tip: On a single-node cluster like the one used in this lab, FabricPool only requires a single
intercluster LIF. For high-availability (HA) pairs, FabricPool requires two intercluster LIFs, one per
node.

2.1.2.3 Create a StorageGRID Bucket For FabricPool


Before you can migrate cold data to StorageGRID, you need to create S3 credentials and a bucket for use by
FabricPool.
1. From the Firefox browser toolbar, navigate to NetApp Bookmarks > NetApp StorageGRID.
2. Click Tenant Login near the upper right corner of the window.

Figure 2-16:

3. Select the “Lab on Demand StorageGRID” Tenant from the Recent drop-down menu.
4. Sign in as the StorageGRID tenant named “Lab on Demand StorageGRID Tenant”, with username root
and password netapp01.

Figure 2-17:

5. To create credentials, click on the S3 tab at the top of the page, then on My Credentials.

Figure 2-18:

6. Click on + Create to create a new access key.

Figure 2-19:

7. In the Create Access Key dialog, leave the “Expires” field blank so that the message “This access key
will never expire” remains.
8. Click Save.

Figure 2-20:

9. The new Access Key ID and Secret Access Key will appear.


Important: You need to save these before closing this dialog.

10. Click Download. This will download a file to the Downloads folder that contains the Tenant Account
Name, the Access Key ID, and the Secret Access Key.
11. Click Finish.

Figure 2-21:

12. To create an S3 bucket in StorageGRID for FabricPool use, click the S3 tab at the top of the page, then
click Buckets.
13. Click + Create Bucket to create a new bucket.

Figure 2-22:

14. Enter fabricpool-bucket for the bucket “Name”.


15. Leave Region set to “us-east-1”.
16. Click Save.

Figure 2-23:
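Bucket creation can also be scripted with any S3-compatible client instead of the Tenant Manager UI. Whichever route you take, the bucket name must satisfy S3 naming rules; the check below encodes the common constraints (3-63 characters; lowercase letters, digits, and hyphens; starting and ending alphanumeric) as an assumption — StorageGRID's exact validation may differ.

```python
import re

# Common S3 bucket-name rules, encoded as an assumption; consult the
# StorageGRID documentation for the authoritative limits.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def valid_bucket_name(name: str) -> bool:
    """True when the name fits the common 3-63 char lowercase S3 rules."""
    return bool(BUCKET_NAME_RE.match(name))

print(valid_bucket_name("fabricpool-bucket"))   # True: the lab's bucket
print(valid_bucket_name("FabricPool_Bucket"))   # False: uppercase/underscore
```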

2.1.2.4 Add a StorageGRID Object Store for FabricPool to ONTAP


In this activity you attach a NetApp StorageGRID S3 object store to ONTAP.
1. If you do not already have System Manager for Cluster 2 open in Firefox, from the browser toolbar,
navigate to NetApp Bookmarks > NetApp ONTAP Cluster 2.
2. Log in as the user admin with the password Netapp1!.

Figure 2-24:

3. In the left pane, navigate to Storage > Tiers.


4. Click the Add Cloud Tier button near the top of the page.
5. From the drop-down, select StorageGRID.

Figure 2-25:

The “Add Cloud Tier” page displays.

6. Set the fields on this page as follows:
• “Name”: Accept the pre-populated value.
• “Server Name (FQDN):” dc1-g1.demo.netapp.com (this is the Gateway Node for
StorageGRID).
• “SSL:” checked (default).
• “Object store certificate”: unchecked
• “Port:” 8082 (default).
• “Access Key ID:” Retrieve this value from the “Secret Access Key...” file in the Downloads
directory. This file is in Comma-Separated Values (CSV) format. The file’s first line describes
the CSV fields, and the second line contains the values you will be retrieving.
Tip: If you use Ctrl-C and Ctrl-V to copy and paste this value, note that
sometimes Ctrl-V results in the character “v” being placed in the target field.
If you backspace and press Ctrl-V a second time, the paste operation will work as
expected.
• “Secret Key:” Retrieve this value from the “Secret Access Key...” file in the Downloads
directory.
• “Container Name:” fabricpool-bucket.
• Skip the remaining fields.

Figure 2-26:

7. Scroll to the bottom of the window, click Save.

Figure 2-27:

This will configure the external capacity tier without attaching it to any aggregates. You will see how to
attach aggregates in a later step in this exercise.
8. A warning dialog appears regarding the “object store certificate.” This is not an issue within this lab
environment so click OK.

Figure 2-28:

System Manager processes the addition of the specified external capacity tier and then displays the
“Tiers” page.

9. On the right side of the page there is a “Clouds” pane showing information about cloud tiers now that you
have created one. At the top, the “Used Capacity” display shows 0 bytes used.
10. Locate the “StorageGRID” box within the “Clouds” pane. This box displays the configuration details
of the StorageGRID cloud tier that you just added to the cluster, along with the “local tiers” that are
connected to it.

Figure 2-29:

2.1.2.5 Attach a StorageGRID Cloud Tier to an Aggregate


In this activity you attach a cloud tier to an existing aggregate, in this case “aggr5_cluster2”.
1. Still on the storage “Tiers” page from the last exercise, locate the “StorageGRID” box within the “Clouds”
pane. Click the menu icon.
2. In the drop-down menu select Attach Local Tiers.

Figure 2-30:

System Manager displays the “Attach Local Tiers” page.


3. Under the heading “Add as Primary”, there is a pre-selected list of all of the aggregates eligible for
attachment.
4. Uncheck the checkboxes for aggr2_cluster2, aggr3_cluster2, and aggr4_cluster2 leaving only the
“aggr5_cluster2” checkbox selected.


Figure 2-31:

5. Locate the table of tiering policies under the heading “Update Tiering Policy”.
6. Select the checkbox for the proj6 volume, which resides on aggr5_cluster2.
Note: No checkboxes are visible until the cursor hovers over the area just left of the volume
name.
7. The “proj6” volume’s current tiering policy is set to “auto”, indicating that FabricPool will choose the
blocks to tier.
8. You can change the volume’s assigned tiering policy from this page by clicking on Edit, but for this part
of the lab leave the policy set to “auto”.
9. Click Save.


Figure 2-32:

10. A “Warning” dialog opens indicating that you cannot detach an object store once it is attached to a local
tier. Click OK to continue.
Tip: It is possible to disconnect an object store after it has been attached to an aggregate,
but doing so requires that you move all the volumes off of the aggregate, and then delete the
aggregate. Any data blocks that reside in the cloud tier will be pulled back to the internal tier as
part of the volume move operation.
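For reference, the attach operation you just performed in System Manager has a one-line CLI equivalent (the object-store name "sgws_tier" below is a placeholder for whatever name you assigned to the cloud tier). The same caveat applies: once attached, the object store cannot simply be detached.

cluster2::> storage aggregate object-store attach -aggregate aggr5_cluster2 -object-store-name sgws_tier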


Figure 2-33:

The “Warning” dialog closes, and the storage “Tiers” page displays.
11. The “StorageGRID” box within the “Clouds” pane now indicates that “aggr5_cluster2” is attached.
Note: The option to attach local tiers is still available because attaching multiple tiers to the
same bucket is supported.


Figure 2-34:

12. Within the SSD pane on the same page, locate the box for “aggr5_cluster2”.
13. There is now a StorageGRID utilization count for “aggr5_cluster2” in the bottom right of the box. This
count only displays the utilization for this specific aggregate, and currently displays 0 bytes used, since
insufficient time has elapsed for any data blocks in any volumes on the aggregate to get cool enough to
relocate out to the cloud tier.


Figure 2-35:

2.1.3 FabricPool Operation

This section of the lab guide will show you how to examine tiering results and make adjustments.
The activities in the remainder of the lab guide utilize a cluster that has pre-configured AWS cloud tiers and pre-
cooled/tiered data blocks, but the concepts and procedures they describe are equally applicable to StorageGRID
cloud tiers. This section includes the following activities:

• Analyze Tiering Results.
• Changing a Volume’s Tiering Policy.
• Customize Cooling Periods and Tiering Thresholds.

2.1.3.1 Analyze Tiering Results


In this activity you examine the tiering results for two volumes that have already moved data out to a cloud tier.
1. On the Firefox browser toolbar, navigate to NetApp Bookmarks > NetApp ONTAP Cluster 1.
Important: You are using a different ONTAP cluster for the remainder of the lab, so make sure
you have selected the bookmark for Cluster 1!
2. If not already logged in to System Manager for cluster1, log in with the following credentials:
• Username: admin
• Password: Netapp1!

Figure 2-36:

3. In the left pane, navigate to Storage > Tiers.


4. The bar graph at the top of the “SSD section” provides a visual summary of the internal tier utilization for
the entire cluster. The internal tier holds almost 100 GB of data, and is approximately 65% full.
5. Beneath that is a list of the aggregates within this cluster.
6. On the right side, within this “Clouds” section, the AWS cloud tier already contains approximately 7.44
GB of data.
7. The StorageGRID tier already contains approximately 2.59 GB of data.


Figure 2-37:

Many of the listed aggregates already have cloud tiers attached. The number under the cloud icon
indicates how much of the data belonging to the respective aggregate has already moved to that cloud
tier.
For all of the aggregates that have attached cloud tiers, the used capacity shown for the internal tier
does not include data that has been moved to the cloud tier.
All of the aggregates that have an attached cloud tier also report how much of the data in the aggregate
has been classified by ONTAP as inactive (i.e., cold), which indicates how much space a change in
tiering policy could free up on the internal tier. What data is classified as inactive is a function of the
tiering policies assigned to each aggregate’s volumes, as well as the results of the background block
cooling scans run by ONTAP. The reported inactive data value encompasses data that has already
moved to the cloud tier, as well as data that still resides on the internal tier.
8. In the list of aggregates, find “Aggr2”. Notice that Aggr2 does not have an attached cloud tier. Click on
aggr2.


Figure 2-38:

9. On the “Overview” page that appears, you can see that Aggr2 does not include a report of how much
inactive data is on the aggregate. This is because Inactive Data Reporting (IDR) has not been enabled
on this aggregate.
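Although this lab leaves aggr2 as-is, Inactive Data Reporting can be enabled per aggregate from the CLI. A sketch:

cluster1::> storage aggregate modify -aggregate aggr2 -is-inactive-data-reporting-enabled true

Once enabled, IDR needs time to observe access patterns before it can report meaningful inactive data figures.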
10. Click All Tiers near the top of the page to return to the storage “Tiers” page.


Figure 2-39:

11. Locate the “aggr7” box within the SSD pane, that also does not have an attached cloud tier. Click on
aggr7.


Figure 2-40:

12. On this “Overview” page for Aggr7, there is a report of how much inactive data is on that aggregate.
This is because Inactive Data Reporting (IDR) has been enabled on this aggregate.
13. Click All Tiers to return to the storage “Tiers” page.


Figure 2-41:

14. Locate the “aggr1” box within the SSD pane.


15. Observe that “aggr1” currently has 1.76GB tiered out to the cloud, indicated by the counter in the
bottom right of the box.


Figure 2-42:

Now look at the volumes within “aggr1” to get an idea of what data is being tiered to the cloud.
16. From the task bar of Jumphost, launch PuTTY.


Figure 2-43:

17. In PuTTY, double-click the session profile for cluster1 to launch the SSH session to that cluster.


Figure 2-44:

18. Enter the password Netapp1! to log in as the “admin” user.


19. Display a list of volumes within the “aggr1” local tier.

cluster1::> vol show -aggregate aggr1


Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1 alpha aggr1 online RW 3GB 259.1MB 91%
svm1 beta aggr1 online RW 3GB 507.6MB 82%
2 entries were displayed.

Notice that there are two volumes within the “aggr1” local tier, “alpha” and “beta”.
20. Back in Firefox, from the left-side menu, navigate to Storage > Volumes.
21. Within the list of volumes, click the name alpha.


Figure 2-45:

The “alpha” volume page appears.


22. The volume’s tiering policy is set to “Snapshot_only” as indicated in the field on the left. This means that
only cold blocks that are captured in snapshots (and that are not referenced by the active filesystem)
are eligible for tiering.
23. In the “Capacity” pane, you can see the alpha volume is 3 GB in size and about 90% used.
24. Further down is a visualization labeled “Inactive Data Stored Locally.” Observe that the volume contains
approximately 765 MB of inactive (cold) data.
25. Just above that, the “Snapshot Capacity” box has information about how much snapshot data is used
by the volume. Observe that the volume has 980 MB of snapshot data.


Figure 2-46:

Since this volume uses the “Snapshot-only” tiering policy, if all of the blocks in the snapshots are
cold, less than 1GB of the volume’s overall data is eligible for tiering to the cloud tier. In this case, the
snapshot data was cooled in advance and is tiered out to the Amazon S3 cloud tier.
The inactive data report does not include this snapshot data, but rather the data that could be tiered out
if the tiering policy changed from the current setting of “Snapshot-only.”
26. Click All volumes above the “Overview” tab.


Figure 2-47:

The “alpha” volume page closes and System Manager displays the “Volumes” page with the list of
volumes again.
27. Click on the beta volume.


Figure 2-48:

The “beta” volume page displays.


28. Observe that the beta volume is 3 GB in size, and is currently about 82% used.
29. The volume’s tiering policy is set to “auto”, which means that any cold blocks in the volume are eligible
for tiering, whether or not they are part of the active filesystem or snapshots.
The volume does not report any inactive data, meaning if any data on this volume was cold, it has
already been tiered out to the cloud. In this case, although not visible from this interface, about 800MB
of inactive data has been tiered to the cloud from the “beta” volume.
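Although System Manager does not show beta’s tiered capacity here, the per-volume split between the internal and cloud tiers is visible from the CLI with the volume show-footprint command, which is used later in this lab:

cluster1::> volume show-footprint -volume beta

The “Footprint in <object-store-name>” line in the output reports how much of the volume’s data resides in the cloud tier.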
30. Click Back to All volumes.


Figure 2-49:

The “beta” volume page closes and System Manager displays the “Volumes” page again.

2.1.3.2 Changing a Volume’s Tiering Policy


In a previous exercise in this lab, you saw how to change a volume’s tiering policy at the same time you attach an
aggregate to the cloud tier. Now you will see how to change a volume’s tiering policy independently.
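For reference, the System Manager steps below have a single-command CLI equivalent:

cluster1::> volume modify -vserver svm1 -volume beta -tiering-policy snapshot-only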
1. You should still be on the “Volumes” page from the preceding exercise. If you are not, navigate to
Storage > Volumes.
2. In the volumes list, hover over the entry for the beta volume and click the menu icon.
3. Click Edit from the drop-down menu.


Figure 2-50:

The “Edit Volume” page opens.


4. Scroll down to the “Tiering Policy” drop-down menu and select Snapshot only from the menu.


Figure 2-51:

5. Scroll to the bottom and click Save.


Figure 2-52:

The “Edit Volume” page closes, and focus returns to the “Volumes” page.
6. Click beta.

Figure 2-53:

The “beta” volume page displays.

7. The “Tiering Policy” field reports the new “Snapshot_only” value you just assigned.
8. Changing a volume’s tiering policy can have an effect on the cold data reported for a volume. Observe
that in this case nearly 1 GB of data is now reported as inactive now that the tiering policy has
changed to “Snapshot_only”. While this data remains in the cloud tier until warmed by file access
to the volume, this figure reflects the potential benefit of using the “auto” tiering policy.
No additional data blocks on beta will tier out to the cloud tier until snapshots are created on the volume,
and even then only if the blocks in those snapshots match the cooling and tiering criteria dictated by the
snapshot-only tiering policy.

Figure 2-54:

If you had instead changed the tiering policy of the alpha volume from “Snapshot-only” to “Auto”,
snapshot blocks that were already out in alpha’s cloud tier would stay there until either the snapshots
were deleted, or the snapshot blocks were accessed. On the other hand, you saw that alpha has some
cold blocks in the volume’s active filesystem. Those blocks were not able to tier out under the snapshot-
only tiering policy, but would be able to now under the “Auto” tiering policy. However, because the tiering
scan process only runs once every 24 hours, those blocks would not move out until the next time the
tiering scan process runs.
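If waiting for file access to warm tiered blocks is impractical, newer ONTAP releases (9.8 and later, at the advanced privilege level) also provide a cloud retrieval policy that can proactively pull a volume’s tiered blocks back to the internal tier. This capability is beyond what this lab demonstrates and is noted here only as a pointer; for example:

cluster1::*> volume modify -vserver svm1 -volume alpha -cloud-retrieval-policy promote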

2.1.3.3 Customize Cooling Periods and Tiering Thresholds
By default, ONTAP will not automatically relocate a volume’s data blocks to an external tier unless the following
criteria have been met:
• The volume must reside on a FabricPool aggregate (an aggregate with an attached cloud tier).
• The volume’s assigned tiering policy must permit some form of tiering.
• Potentially eligible blocks in the volume must have gone unaccessed long enough to get “cold”. The
default cooling period is 2 days for volumes using the “snapshot-only” tiering policy, and 31 days for
volumes using the “auto” tiering policy.
• The aggregate must be greater than 50% full.
These last two fixed durations/thresholds might not be ideal for all situations, so ONTAP 9.4 introduced the
capability to change the cooling period for blocks in a volume, and the capability to change an aggregate’s
“fullness” threshold.
1. If you do not already have a PuTTY session open to Cluster 1, from the taskbar of Jumphost, launch
PuTTY.

Figure 2-55:

2. In PuTTY, double-click the session profile for cluster1 to launch the SSH session to that cluster.


Figure 2-56:

3. Enter the password Netapp1! to log in as the “admin” user.


4. Elevate your privileges to the “Advanced” level, which is necessary to interact with the properties that
control a volume’s cooling period and an aggregate’s tiering fullness threshold.

cluster1::> set -priv advanced

Warning: These advanced commands are potentially dangerous; use them only when directed to do
so by NetApp personnel.
Do you want to continue? {y|n}: y

cluster1::*> volume show -fields aggregate,tiering-policy,tiering-minimum-cooling-days


vserver volume aggregate tiering-policy tiering-minimum-cooling-days
----------- ------ --------- -------------- ----------------------------
cluster1-01 vol0 aggr0 none -
svm1 alpha aggr1 snapshot-only 2
svm1 alpha_sg
aggr9 auto 2
svm1 beta aggr1 snapshot-only -
svm1 beta_sg
aggr9 auto 2
svm1 delta aggr1 auto -
svm1 epsilon
aggr8 auto 2
svm1 fg1 - snapshot-only -
svm1 fg2 - auto -
svm1 svm1_root
aggr2 none -
svm1 zeta aggr7 auto 2
svm2 svm2_root
aggr2 none -
12 entries were displayed.

A “-” character in the “aggregate” column indicates that the volume is a FlexGroup.

A “-” character in the “tiering-minimum-cooling-days” column indicates that the volume uses the assigned
tiering policy’s default cooling value. For the “snapshot-only” tiering policy that default is 2 days, and for the
“auto” tiering policy that default is 31 days.
The tiering-minimum-cooling-days value is adjustable from 2 to 63 days.
In this lab, the tiering-minimum-cooling-days value was set to 2 for all volumes using the “auto” tiering
policy in order to speed up block cooling during lab development.
5. Display the inactive data report for the “alpha” volume:

cluster1::*> vol show -volume alpha -fields performance-tier-inactive-user-data,performance-tier-inactive-user-data-percent
vserver volume performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----------------------------------- -------------------------------------------
svm1 alpha 765.0MB 25%

cluster1::*>

6. Increase the cooling days value for the “alpha” volume to 63.

cluster1::*> volume modify -vserver svm1 -volume alpha -tiering-minimum-cooling-days 63


Volume modify successful on volume alpha of Vserver svm1.

cluster1::*> volume show -fields aggregate,tiering-minimum-cooling-days


vserver volume aggregate tiering-minimum-cooling-days
----------- ------ --------- ----------------------------
cluster1-01 vol0 aggr0 -
svm1 alpha aggr1 63
svm1 alpha_sg
aggr9 2
svm1 beta aggr1 -
svm1 beta_sg
aggr9 2
svm1 delta aggr1 -
svm1 epsilon
aggr8 2
svm1 fg1 - -
svm1 fg2 - -
svm1 svm1_root
aggr2 -
svm1 zeta aggr7 2
svm2 svm2_root
aggr2 -
12 entries were displayed.

cluster1::*>

7. Display the inactivity report for the “alpha” volume again.

cluster1::*> vol show -volume alpha -fields performance-tier-inactive-user-data,performance-tier-inactive-user-data-percent
vserver volume performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----------------------------------- -------------------------------------------
svm1 alpha 765.0MB 25%

cluster1::*>

There is no reduction in the reported amount of inactive data. Changes to a volume’s tiering-minimum-
cooling-days value only affect subsequent cooling behavior. When tiering-minimum-cooling-days is
raised, blocks that are already cold stay cold. When tiering-minimum-cooling-days is lowered, blocks
that are in the process of cooling will subsequently cool faster, but will not instantly become cold.
8. Display the list of aggregates that have object storage attached.

cluster1::*> storage aggregate object-store show


Aggregate Object Store Name Availability Mirror Type
-------------- ----------------- ------------- -----------
aggr1 aws_s3_734 available primary
aggr5 aws_s3_734 available primary
aggr6 aws_s3_734 available primary
aggr8 aws_s3_734 available primary
aggr9 sgws_164 available primary
5 entries were displayed.

cluster1::*>

9. Display the details of the object store instance attached to “aggr1”.

cluster1::*> storage aggregate object-store show -aggregate aggr1 -instance

Aggregate Name: aggr1


ONTAP Name for this Object Store Config: aws_s3_734
Availability of the Object Store: available
Type of the Object Store Provider: AWS_S3
License Space Used Percent: 0%
Threshold for Reclaiming Unreferenced Space: 20%
Aggregate Fullness Threshold Required for Tiering: 50%
Object Store Mirror Type: primary
This object store is in mirror degraded mode: -
Force Tiering with no Mirror in a MetroCluster Configuration: false
The name of the Cluster to which the bin belongs: cluster1

cluster1::*>

The “Aggregate Fullness Threshold Required for Tiering” property specifies how full the aggregate must
be for tiering to operate. You can change this value for each aggregate with a connected object store.
10. Determine how much data “aggr1” has stored on the internal and cloud tiers.

cluster1::*> aggr show-space -aggregate aggr1

Aggregate : aggr1
Performance Tier
Feature Used Used%
-------------------------------- ---------- ------
Volume Footprints 3.40GB 48%
Aggregate Metadata 20.75MB 0%
Snapshot Reserve 0B 0%
Total Used 3.42GB 48%

Total Physical Used 3.49GB 49%

Aggregate : aggr1
Object Store: aws_s3_734
Feature Used Used%
-------------------------------- ---------- ------
Referenced Capacity 1.76GB -
Metadata 0B -
Unreclaimed Space 0B -
Space Saved by Storage Efficiency 7.53MB -

Total Physical Used 1.76GB -

2 entries were displayed.

cluster1::*>

Make note of the referenced capacity for aggr1’s object store, which indicates that 1.76 GB of aggr1’s
data is in the object store. So clearly some of aggr1’s data has already transferred out to object storage.
Observe aggr1’s Total Physical Used percentage, which is 49%. Remember from this activity’s
introduction that, by default, an aggregate must be greater than 50% full for tiering to take place.
Tiering has already brought this aggregate below 50% full, so further tiering activity is suspended
until that fullness threshold is crossed again.
11. Change the minimum fullness threshold for “aggr1” to 25%.

cluster1::*> storage aggregate object-store modify -aggregate aggr1 -object-store-name aws_s3_734 -tiering-fullness-threshold 25

cluster1::*>

12. Determine again how much data “aggr1” has stored on the internal and cloud tiers.

cluster1::*> aggr show-space -aggregate aggr1

Aggregate : aggr1
Performance Tier
Feature Used Used%
-------------------------------- ---------- ------
Volume Footprints 3.40GB 48%
Aggregate Metadata 20.75MB 0%
Snapshot Reserve 0B 0%
Total Used 3.42GB 48%

Total Physical Used 3.49GB 49%

Aggregate : aggr1
Object Store: aws_s3_734
Feature Used Used%
-------------------------------- ---------- ------
Referenced Capacity 1.76GB -
Metadata 0B -
Unreclaimed Space 0B -
Space Saved by Storage Efficiency 7.53MB -

Total Physical Used 1.76GB -

2 entries were displayed.

cluster1::*>

The amount of data in the internal and cloud tiers is the same as before you decreased the fullness
threshold for aggr1. This is because the ONTAP tiering scan, the process responsible for transferring
cold blocks out to the cloud tier, only runs once a day. It may take up to 24 hours for the internal and
cloud tier’s space used values to reflect the fullness threshold change you just applied.
In some cases, tiering activity may drive an aggregate’s fullness well below the aggregate’s configured
minimum fullness threshold. The remainder of this activity illustrates how this can occur.
13. Display the fullness threshold for all of the cluster’s attached cloud tiers.

cluster1::*> storage aggregate object-store show -fields tiering-fullness-threshold


aggregate object-store-name tiering-fullness-threshold
--------- ----------------- --------------------------
aggr1 aws_s3_734 25%
aggr5 aws_s3_734 50%
aggr6 aws_s3_734 50%
aggr8 aws_s3_734 30%
aggr9 sgws_164 50%
5 entries were displayed.

cluster1::*>

The aggr8 aggregate has already been modified to start tiering when the aggregate becomes greater
than 30% full. Examine that aggregate’s tiering metrics more closely.
14. Set your privilege back to the admin level for the remainder of the activity.

cluster1::*> set -priv admin


cluster1::>

15. Determine how large aggr8 is.

cluster1::> aggr show -aggregate aggr8 -fields size


aggregate size
--------- ------
aggr8 7.13GB

cluster1::>

aggr8 has a size of 7.13 GB.


16. List the volumes hosted on “aggr8”.

cluster1::> volume show -aggregate aggr8


Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----

svm1 epsilon aggr8 online RW 5GB 846.2MB 82%

cluster1::>

The aggregate’s only volume (epsilon) is 5 GB in size, and 82% utilized, meaning it consumes
approximately 4 GB of space.
17. Determine how much of epsilon’s data is stored in aggr8’s cloud tier.

cluster1::> vol show-footprint -volume epsilon

Vserver : svm1
Volume : epsilon

Feature Used Used%


-------------------------------- ---------- -----
Volume Data Footprint 3.92GB 55%
Footprint in Performance Tier
1.71GB 44%
Footprint in aws_s3_734 2.22GB 56%
Volume Guarantee 0B 0%
Flexible Volume Metadata 28.44MB 0%
Delayed Frees 15.98MB 0%

Total Footprint 3.97GB 56%

cluster1::>

The Volume Data Footprint “Used” column value indicates the amount of data the epsilon volume
contains, and the Total Footprint “Used%” value indicates the percentage of the containing
aggregate’s space that the volume consumes: 3.97 GB divided by 7.13 GB (the size of aggr8)
= 56%. Of that footprint, 1.71 GB is stored on the aggregate’s SSDs, and 2.22 GB is stored in the external
object store. The “Used%” values on these two lines are relative to the volume’s total data footprint,
which is why these two percentages total to 100%.
18. Display “aggr8’s” space usage.

cluster1::> aggr show-space -aggregate aggr8

Aggregate : aggr8
Performance Tier
Feature Used Used%
-------------------------------- ---------- ------
Volume Footprints 1.74GB 24%
Aggregate Metadata 13.28MB 0%
Snapshot Reserve 0B 0%
Total Used 1.75GB 25%

Total Physical Used 1.82GB 26%

Aggregate : aggr8
Object Store: aws_s3_734
Feature Used Used%
-------------------------------- ---------- ------
Referenced Capacity 2.23GB -
Metadata 0B -
Unreclaimed Space 0B -
Space Saved by Storage Efficiency 1.74MB -

Total Physical Used 2.23GB -

2 entries were displayed.

cluster1::>

You may be wondering why aggr8’s Total Physical Used percent is 26% rather than the 30% minimum
tiering threshold you saw configured for the aggregate in step 13. After all, FabricPool should
have stopped relocating cold blocks out to object storage once the aggregate’s fullness dropped below
that 30% threshold.

The answer has to do with the nature of the data inside the epsilon volume. When all of a file’s blocks
get cold at the same time, the tiering scan process moves the file’s entire set of blocks out to object
storage as a unit in order to keep them together; in this scenario it won’t split a single file’s blocks
between the performance and object storage tiers. All of the files on epsilon are 100 MB in size, so in
this specific situation the internal tier’s used space dropped in approximately 100 MB increments as
each file’s complete set of blocks was moved to the cloud tier.

3 Lab Summary
This lab demonstrated how a storage administrator can use ONTAP to determine what data is a good
candidate to move to FabricPool, offloading it from All-Flash FAS. The lab illustrated how easy it is to configure
StorageGRID as a private cloud object storage target for FabricPool. And finally, the lab showed you how to
examine the tiering results and make adjustments to better optimize storage for your application needs.
With a simple setup process, the usage of your storage systems is seamlessly optimized for cost, performance,
and efficiency. FabricPool helps you make the most of your Data Fabric.

4 References
The following references were used to write this lab guide.
• FabricPool:
• TR-4598: FabricPool Best Practices
http://www.netapp.com/us/media/tr-4598.pdf
• NetApp’s Cloud Tiering
https://cloud.netapp.com/cloud-tiering

Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact
product and feature versions described in this document are supported for your specific environment.
The NetApp IMT defines the product components and versions that can be used to construct
configurations that are supported by NetApp. Specific results depend on each customer's installation in
accordance with published specifications.

NetApp provides no representations or warranties regarding the accuracy, reliability, or serviceability of any
information or recommendations provided in this publication, or with respect to any results that may be obtained
by the use of the information or observance of any recommendations provided herein. The information in this
document is distributed AS IS, and the use of this information or the implementation of any recommendations or
techniques herein is a customer’s responsibility and depends on the customer’s ability to evaluate and integrate
them into the customer’s operational environment. This document and the information contained herein may be
used solely in connection with the NetApp products discussed in this document.


© 2020 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent
of NetApp, Inc. Specifications are subject to change without notice. NetApp, the NetApp logo, Data ONTAP®,
ONTAP®, OnCommand®, SANtricity®, FlexPod®, SnapCenter®, and SolidFire® are trademarks or registered
trademarks of NetApp, Inc. in the United States and/or other countries. All other brands or products are trademarks or
registered trademarks of their respective holders and should be treated as such.
