Azure Storage Types For SAP Workload - Microsoft Docs
Azure has numerous storage types that differ vastly in capabilities, throughput, latency, and price. Some of the storage types are not usable, or only of limited use, for SAP scenarios, whereas several Azure storage types are well suited or optimized for specific SAP workload scenarios. Especially for SAP HANA, some Azure storage types are certified for use with SAP HANA. In this document, we go through the different types of storage and describe their capability and usability with SAP workloads and SAP components.
A remark about the units used throughout this article: the public cloud vendors moved to using GiB (gibibyte) or TiB (tebibyte) as size units, instead of gigabyte or terabyte. Therefore all Azure documentation and pricing use those units. Throughout the document, we reference the size units MiB, GiB, and TiB exclusively. You might need to plan with MB, GB, and TB. So, be aware of some small differences in the calculations if you need to size for a 400 MiB/sec throughput instead of a 250 MiB/sec throughput.
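As a quick illustration of the difference between binary and decimal size units, the following Python sketch converts between GiB and GB. The example sizes are arbitrary and only meant to show the roughly 7 percent gap; they are not taken from any Azure SKU.

```python
# Sketch: difference between binary (GiB) and decimal (GB) size units.

GIB = 1024 ** 3          # bytes in one GiB
GB = 1000 ** 3           # bytes in one GB

def gib_to_gb(gib: float) -> float:
    """Convert a size given in GiB to GB."""
    return gib * GIB / GB

def gb_to_gib(gb: float) -> float:
    """Convert a size given in GB to GiB."""
    return gb * GB / GIB

if __name__ == "__main__":
    # Example: a 512 GiB disk expressed in GB, and a 1,000 GB requirement expressed in GiB.
    print(f"512 GiB  = {gib_to_gb(512):.1f} GB")     # ~549.8 GB
    print(f"1000 GB  = {gb_to_gib(1000):.1f} GiB")   # ~931.3 GiB
```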
Azure block storage keeps the data in three copies on three different storage nodes. Failing over to another replica and seeding a new replica in case of a storage node failure is transparent. As a result of this redundancy, it is NOT required to use any kind of storage redundancy layer across multiple Azure disks. This capability is called locally redundant storage (LRS). LRS is the default for these types of storage in Azure. Azure NetApp Files provides sufficient redundancy to achieve the same SLAs as other native Azure storage.
There are several more redundancy methods, all described in the article Azure Storage replication, that apply to some of the different storage types Azure has to offer.
Managed disks are a resource type in Azure Resource Manager that can be used instead of VHDs that are stored in Azure Storage Accounts. Managed disks automatically align with the availability set of the virtual machine they are attached to and therefore increase the availability of your virtual machine and the services that are running on the virtual machine. For more information, read the overview article.
You are deploying your two DBMS VMs for your SAP system in an Azure availability set.
As Azure deploys the VMs, the disk with the OS image is placed in a different storage cluster for each VM. This avoids both VMs being impacted by an issue in a single Azure storage cluster.
As you create new managed disks that you assign to these VMs to store the data and log files of your database, these new disks for the two VMs are also deployed in separate storage clusters, so that none of the disks of the first VM share storage clusters with the disks of the second VM.
When deploying without managed disks into customer-defined storage accounts, disk allocation is arbitrary and has no awareness of the fact that the VMs are deployed within an availability set for resiliency purposes.
Note
For this reason and several other improvements that are exclusively available through managed disks, we require that new deployments of VMs that use Azure block storage for their disks (all Azure storage except Azure NetApp Files) use Azure managed disks for the base VHD/OS disks and for the data disks that contain SAP database files. This applies independent of whether you deploy the VMs through an availability set, across Availability Zones, or independent of the sets and zones. Disks that are used for the purpose of storing backups are not necessarily required to be managed disks.
Note
Persisting the base VHD of your VM that holds the operating system and other software you install on that disk. This disk/VHD is the root of your VM. Any changes made to it need to be persisted, so that the next time you stop and restart the VM, all the changes made before still exist, especially in cases where the VM gets deployed by Azure onto another host than the one it was originally running on
Persisted data disks. These disks are VHDs you attach to store application data in. This application data could be data and log/redo files of a database, backup files, or software installations. This means any disk beyond your base VHD that holds the operating system
File shares or shared disks that contain your global transport directory for NetWeaver
or S/4HANA. Content of those shares is either consumed by software running in
multiple VMs or is used to build high-availability failover cluster scenarios
The /sapmnt directory or common file shares for EDI processes or similar. Content of
those shares is either consumed by software running in multiple VMs or is used to
build high-availability failover cluster scenarios
In the next few sections, the different Azure storage types and their usability for SAP workload are discussed as they apply to the scenarios above. A general categorization of how the different Azure storage types should be used is documented in the article What disk types are available in Azure?. The recommendations for using the different Azure storage types for SAP workload are not going to be majorly different.
For support restrictions on Azure storage types for the SAP NetWeaver/application layer of S/4HANA, read SAP support note 2015553. For SAP HANA certified and supported Azure storage types, read the article SAP HANA Azure virtual machine storage configurations.
The sections describing the different Azure storage types will give you more background about the restrictions and possibilities of using the SAP supported storage.
[Table: SAP usage scenarios and VM families (including M/Mv2) mapped against the storage types Standard HDD, Standard SSD, Premium Storage, Ultra disk, and Azure NetApp Files]
1 With usage of Azure Write Accelerator for M/Mv2 VM families for log/redo log volumes
2 Using ANF requires /hana/data as well as /hana/log to be on ANF
Characteristics you can expect from the different storage types:
[Table: storage type characteristics]
1 With usage of Azure Write Accelerator for M/Mv2 VM families for log/redo log volumes
3 Creation of different ANF capacity pools does not guarantee deployment of capacity pools onto different storage units
Important
To achieve less than 1 millisecond I/O latency using Azure NetApp Files (ANF), you need to work with Microsoft to arrange the correct placement between your VMs and the NFS shares based on ANF. So far there is no mechanism in place that automatically provides proximity between a deployed VM and the NFS volumes hosted on ANF. Given the different setup of the different Azure regions, the added network latency could push the I/O latency beyond 1 millisecond if the VM and the NFS share are not allocated in proximity.
Important
None of the currently offered Azure block storage based managed disks or Azure NetApp Files offers any zonal or geographical redundancy. As a result, you need to make sure that your high availability and disaster recovery architectures are not relying on any type of Azure native storage replication for these managed disks, NFS shares, or SMB shares.
Azure premium SSD storage was introduced with the goal of providing low latency and SLAs on IOPS and throughput. This type of storage targets DBMS workloads and storage traffic that requires low single-digit millisecond latency. The cost basis in the case of Azure premium storage is not the actual data volume stored in such disks, but the size category of such a disk, independent of the amount of data that is stored within the disk. You also can create disks on premium storage that do not directly map into the size categories shown in the article Premium SSD. Conclusions out of that article are:
The storage is organized in ranges. For example, disks in the range of 513 GiB to 1,024 GiB capacity share the same capabilities and the same monthly costs
The IOPS per GiB do not track linearly across the size categories. Smaller disks below 32 GiB have higher IOPS rates per GiB. For disks from 32 GiB to 1,024 GiB, the IOPS rate per GiB is between 4 and 5 IOPS per GiB. For larger disks up to 32,767 GiB, the IOPS rate per GiB drops below 1
The I/O throughput for this storage is not linear with the size of the disk category. For smaller disks, like the category between 65 GiB and 128 GiB capacity, the throughput is around 780 KB per GiB. Whereas for the extremely large disks, like a 32,767 GiB disk, the throughput is around 28 KB per GiB (see the sketch after this list)
The IOPS and throughput SLAs cannot be changed without changing the capacity of the disk
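The nonlinearity can be made tangible with a small calculation. The disk sizes, IOPS, and throughput figures in the sketch below are illustrative assumptions consistent with the ranges quoted above, not an authoritative SLA or price list; check the Premium SSD documentation for the actual numbers.

```python
# Sketch: IOPS and throughput per GiB for a few assumed premium SSD size categories.
# The (capacity, IOPS, throughput) tuples are illustrative assumptions, not official SLAs.

assumed_disks = [
    # name, capacity GiB, IOPS, throughput MB/sec (assumed values for illustration)
    ("small",    128,    500, 100),
    ("medium",  1024,   5000, 200),
    ("large",  32767,  20000, 900),
]

for name, gib, iops, mbps in assumed_disks:
    print(f"{name:>6}: {iops / gib:5.2f} IOPS/GiB, "
          f"{mbps * 1000 / gib:7.1f} KB/GiB throughput")
```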
Azure has a single-instance VM SLA of 99.9% that is tied to the usage of Azure premium storage or Azure Ultra disk storage. The SLA is documented in SLA for Virtual Machines. In order to comply with this single VM SLA, the base VHD disk as well as all attached disks need to be either Azure premium storage or Azure Ultra disk storage.
Capability matrix for SAP workload (fragment): Costs: MEDIUM
Azure premium storage does not fulfill the SAP HANA storage latency KPIs with the common caching types it offers. In order to fulfill the storage latency KPIs for SAP HANA log writes, you need to use Azure Write Accelerator caching as described in the article Enable Write Accelerator. Azure Write Accelerator also benefits all other DBMS systems for their transaction log and redo log writes. Therefore, it is recommended to use it across all SAP DBMS deployments. For SAP HANA, the usage of Azure Write Accelerator in conjunction with Azure premium storage is mandatory.
Summary: Azure premium storage is one of the Azure storage types recommended for SAP workload. This recommendation applies to non-production as well as production systems. Azure premium storage is suited to handle database workloads. The usage of Azure Write Accelerator is going to improve write latency against Azure premium disks substantially. However, for DBMS systems with high IOPS and throughput rates, you need to either over-provision storage capacity or use functionality like Windows Storage Spaces or logical volume managers in Linux to build stripe sets that give you the desired capacity on the one side, and the necessary IOPS or throughput at the best cost efficiency on the other.
For Azure premium storage disks smaller than or equal to 512 GiB in capacity, burst functionality is offered. The exact way disk bursting works is described in the article Disk bursting. When you read the article, you understand the concept of accruing IOPS and throughput in the times when your I/O workload is below the nominal IOPS and throughput of the disks (for details on the nominal throughput, see Managed Disk pricing). You accrue the delta of IOPS and throughput between your current usage and the nominal values of the disk. The bursts are limited to a maximum of 30 minutes.
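To make the accrual idea concrete, here is a purely conceptual Python sketch of a credit bucket: whenever the workload stays below the nominal throughput, unused capacity is banked, and bursts drain the bucket again. The bucket size of 30 minutes of nominal throughput and all the numbers used are assumptions for illustration; the article Disk bursting describes the actual behavior.

```python
# Conceptual sketch of disk burst credit accrual (not Azure's actual implementation).

class BurstBucket:
    def __init__(self, nominal_mbps: float, max_burst_minutes: float = 30):
        self.nominal = nominal_mbps
        # Credits capped at roughly 30 minutes of nominal throughput (assumption).
        self.max_credits = nominal_mbps * max_burst_minutes * 60  # in MB
        self.credits = self.max_credits

    def tick(self, used_mbps: float, seconds: float = 1.0) -> None:
        """Accrue credits when below nominal throughput, consume them when above."""
        delta = (self.nominal - used_mbps) * seconds
        self.credits = max(0.0, min(self.max_credits, self.credits + delta))

# Example: a disk with 25 MB/sec nominal throughput idling, then bursting at 170 MB/sec.
bucket = BurstBucket(nominal_mbps=25)
for _ in range(600):           # 10 minutes of light load at 5 MB/sec
    bucket.tick(used_mbps=5)
for _ in range(300):           # 5 minutes of bursting at 170 MB/sec
    bucket.tick(used_mbps=170)
print(f"Remaining burst credits: {bucket.credits:,.0f} MB")
```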
The ideal candidates for planning in this burst functionality are likely the volumes or disks that contain data files for the different DBMS. The I/O workload expected against those volumes, especially with small to mid-ranged systems, is expected to look like:
Low to moderate read workload, since data ideally is cached in memory or, as in the case of HANA, should be completely in memory
Bursts of writes triggered by database checkpoints or savepoints that are issued on a regular basis
Backup workload that reads in a continuous stream in cases where backups are not executed via storage snapshots
For SAP HANA, loading of the data into memory after an instance restart
Especially on smaller DBMS systems where your workload is handling only a few hundred transactions per second, such burst functionality can make sense as well for the disks or volumes that store the transaction or redo log. The expected workload against such a disk or volume looks like:
Regular writes to the disk that are dependent on the workload and the nature of the workload, since every commit issued by the application is likely to trigger an I/O operation
Higher workload in throughput for cases of operational tasks, like creating or
rebuilding indexes
Read bursts when performing transaction log or redo log backups
As you create an ultra disk, you have three dimensions you can define:
The capacity of the disk. The range is from 4 GiB to 65,536 GiB
Provisioned IOPS for the disk. Different maximum values apply depending on the capacity of the disk. Read the article Ultra disk for more details
Provisioned storage bandwidth. Different maximum bandwidths apply depending on the capacity of the disk. Read the article Ultra disk for more details
The cost of a single disk is determined by the three dimensions you define for the particular disk, each of which is billed separately.
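Since capacity, IOPS, and throughput are provisioned and billed independently, a simple cost model multiplies each provisioned dimension by its unit price. The unit prices in the sketch below are placeholders, not actual Azure list prices; look them up per region before relying on such a calculation.

```python
# Sketch: Ultra disk monthly cost from the three independently provisioned dimensions.
# The unit prices are placeholders (assumptions), not actual Azure pricing.

def ultra_disk_monthly_cost(capacity_gib: int, iops: int, mbps: int,
                            price_per_gib: float = 0.15,
                            price_per_iops: float = 0.05,
                            price_per_mbps: float = 0.50) -> float:
    """Return an estimated monthly cost; all unit prices are hypothetical."""
    return (capacity_gib * price_per_gib
            + iops * price_per_iops
            + mbps * price_per_mbps)

# Example: a 1 TiB log disk provisioned with 5,000 IOPS and 400 MB/sec.
print(f"Estimated cost: {ultra_disk_monthly_cost(1024, 5000, 400):.2f} per month")
```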
Summary: Azure ultra disks are a suitable low-latency storage for all kinds of SAP workload. So far, Ultra disk can only be used in combination with VMs that have been deployed through Availability Zones (zonal deployment). Ultra disk does not support storage snapshots at this point in time. In contrast to all other storage types, Ultra disk cannot be used for the base VHD disk. Ultra disk is ideal for cases where the I/O workload fluctuates a lot and you want to adapt deployed storage throughput or IOPS to the storage workload patterns instead of sizing for maximum usage of bandwidth and IOPS.
Azure NetApp Files is the result of a cooperation between Microsoft and NetApp with the goal of providing high-performing, Azure-native NFS and SMB shares. The emphasis is to provide high-bandwidth and low-latency storage that enables DBMS deployment scenarios, and over time to enable typical operational functionality of the NetApp storage through Azure as well. NFS/SMB shares are offered in three different service levels that differ in storage throughput and in price. The service levels are documented in the article Service levels for Azure NetApp Files. For the different types of SAP workload, specific service levels are highly recommended.
Note
The minimum provisioning size is a 4 TiB unit that is called a capacity pool. You then create volumes out of this capacity pool, whereas the smallest volume you can build is 100 GiB. You can expand a capacity pool in TiB steps. For pricing, check the article Azure NetApp Files Pricing.
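A small sizing helper can make the note concrete: volumes of at least 100 GiB are carved out of a capacity pool, and the pool itself must be at least 4 TiB and grows in TiB steps. The helper below is a hedged sketch of that arithmetic only, not of any Azure NetApp Files API.

```python
# Sketch: minimum ANF capacity pool size for a set of planned volumes.
# Implements only the sizing rules quoted in the note above, not any Azure API.
import math

MIN_VOLUME_GIB = 100        # smallest volume you can create
MIN_POOL_TIB = 4            # minimum capacity pool size
TIB_IN_GIB = 1024

def required_pool_tib(volume_sizes_gib: list[int]) -> int:
    for size in volume_sizes_gib:
        if size < MIN_VOLUME_GIB:
            raise ValueError(f"Volume of {size} GiB is below the {MIN_VOLUME_GIB} GiB minimum")
    total_tib = math.ceil(sum(volume_sizes_gib) / TIB_IN_GIB)   # pools grow in TiB steps
    return max(MIN_POOL_TIB, total_tib)

# Example: /hana/data, /hana/log, and /hana/shared volumes for a mid-sized system.
print(required_pool_tib([2048, 512, 1024]), "TiB capacity pool needed")
```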
Note
Other than SAP HANA, no other DBMS workload is supported for Azure NetApp Files based NFS or SMB shares. Updates and changes will be provided if this is going to change.
As already with Azure premium storage, a fixed or linear throughput per GiB can be a problem when you are required to adhere to certain minimum throughput numbers, as is the case for SAP HANA. With ANF, this problem can become more pronounced than with Azure premium disk. In the case of Azure premium disk, you can take several smaller disks with a relatively high throughput per GiB and stripe across them to be cost efficient and have higher throughput at lower capacity. This kind of striping does not work for NFS or SMB shares hosted on ANF. This restriction can result in the deployment of overcapacity.
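Because ANF throughput scales linearly with the provisioned volume size and cannot be aggregated by striping, the volume capacity needed to reach a throughput target can be larger than the capacity needed for the data itself. The throughput-per-TiB figures per service level in the sketch below are assumptions for illustration; check Service levels for Azure NetApp Files for the actual values.

```python
# Sketch: ANF volume size required to reach a throughput target, given the
# linear throughput-per-TiB of a service level. Service-level figures are assumed.
import math

ASSUMED_MIB_PER_SEC_PER_TIB = {   # illustrative values, not official numbers
    "Standard": 16,
    "Premium": 64,
    "Ultra": 128,
}

def required_volume_gib(target_mib_per_sec: float, service_level: str,
                        data_size_gib: float) -> float:
    """Return the larger of the data size and the size needed for the throughput target."""
    per_tib = ASSUMED_MIB_PER_SEC_PER_TIB[service_level]
    throughput_driven_gib = math.ceil(target_mib_per_sec / per_tib * 1024)
    return max(data_size_gib, throughput_driven_gib)

# Example: 250 MiB/sec needed for a 512 GiB /hana/data volume on the Ultra level.
print(required_volume_gib(250, "Ultra", 512), "GiB volume required")
```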
Capability matrix for SAP workload (fragment):
Shares/shared disk: YES (SMB 3.0, NFS v3, and NFS v4.1)
Azure Backup VM snapshots: NO (ANF snapshots possible)
Summary: Azure NetApp Files is a HANA-certified low-latency storage that allows you to deploy NFS and SMB volumes or shares. The storage comes with three different service levels that provide different throughput and IOPS in a linear manner per GiB capacity of the volume. ANF storage enables the deployment of SAP HANA scale-out scenarios with a standby node. The storage is suitable for providing the file shares needed for /sapmnt or the SAP global transport directory. ANF storage also comes with functionality that is available as native NetApp functionality.
Azure standard SSD storage delivers more consistent performance at lower IOPS levels. This storage is the minimum storage used for non-production SAP systems that have low IOPS and throughput demands. The capability matrix for SAP workload looks like:
Data disk: restricted suitable (some non-production systems with low IOPS and latency demands)
IOPS SLA: NO
Throughput SLA: NO
HANA certified: NO
Costs: LOW
Summary: Azure standard SSD storage is the minimum recommendation for non-production VMs for the base VHD and for eventual DBMS deployments with relative latency insensitivity and/or low IOPS and throughput rates. This Azure storage type is not supported anymore for hosting the SAP Global Transport Directory.
Capability matrix for SAP workload (fragment, Azure standard HDD storage):
Latency: high (too high for DBMS usage, the SAP Global Transport Directory, or sapmnt/saploc)
IOPS SLA: NO
Throughput SLA: NO
HANA certified: NO
Costs: LOW
Summary: Standard HDD is an Azure storage type that should only be used to store SAP backups, or as the base VHD for rather inactive systems, like retired systems used for looking up data here and there. No active development, QA, or production VMs should be based on that storage, nor should database files be hosted on that storage.
Standard HDD: Sizes for Linux VMs in Azure / Sizes for Windows VMs in Azure. Likely hard to touch the storage limits of medium or large VMs.
Standard SSD: Sizes for Linux VMs in Azure / Sizes for Windows VMs in Azure. Likely hard to touch the storage limits of medium or large VMs.
Premium Storage: Sizes for Linux VMs in Azure / Sizes for Windows VMs in Azure. Easy to hit IOPS or storage throughput VM limits with storage configuration.
Ultra disk storage: Sizes for Linux VMs in Azure / Sizes for Windows VMs in Azure. Easy to hit IOPS or storage throughput VM limits with storage configuration.
Azure NetApp Files: Sizes for Linux VMs in Azure / Sizes for Windows VMs in Azure. Storage traffic is using network throughput bandwidth and not storage bandwidth!
The smaller the VM, the fewer disks you can attach. This does not apply to ANF; since you mount NFS or SMB shares, you don't encounter a limit on the number of shared volumes that can be attached
VMs have I/O throughput and IOPS limits that could easily be exceeded with premium storage disks and Ultra disks
With ANF, the traffic to the shared volumes is consuming the VM's network bandwidth and not storage bandwidth
With large NFS volumes in the double-digit TiB capacity space, the throughput accessing such a volume out of a single VM is going to plateau based on the limits of Linux for a single session interacting with the shared volume.
As you up-size Azure VMs over the lifecycle of an SAP system, you should evaluate the IOPS and storage throughput limits of the new, larger VM type. In some cases, it can also make sense to adjust the storage configuration to the new capabilities of the Azure VM.
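A simple check like the following can catch configurations where the sum of the provisioned disk throughput exceeds what the VM itself can drive. The VM limit and disk values in the sketch are placeholders and have to be looked up for the concrete VM type and disks.

```python
# Sketch: compare aggregate disk throughput against the VM-level storage throughput limit.
# The VM limit and disk values below are placeholders for illustration.

def check_vm_storage_fit(vm_max_mbps: float, disk_mbps: list[float]) -> None:
    total = sum(disk_mbps)
    if total > vm_max_mbps:
        print(f"Warning: disks provide {total} MB/sec, VM caps out at {vm_max_mbps} MB/sec")
    else:
        print(f"OK: {total} MB/sec of disk throughput fits under the {vm_max_mbps} MB/sec VM limit")

# Example: a VM with an assumed 768 MB/sec limit and four disks of 250 MB/sec each.
check_vm_storage_fit(768, [250, 250, 250, 250])
```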
Striping across two P15 premium storage disks gets you to a throughput of 250 MiB/sec. Such a volume will have 512 GiB of capacity. If you want a single disk that gives you 250 MiB/sec of throughput, you would need to pick a P40 disk with 2 TiB of capacity.
Or you could achieve a throughput of 400 MiB/sec by striping four P10 premium storage disks with an overall capacity of 512 GiB. If you would like a single disk with a minimum of 500 MiB/sec of throughput, you would need to pick a P60 premium storage disk with 8 TiB. Since the cost of premium storage is nearly linear with capacity, you can sense the cost savings achieved by striping (see the sketch after these examples).
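The arithmetic of the two examples above can be written down as a small comparison of striped configurations against single larger disks. The per-disk capacity and throughput figures are taken from the examples in this section and should be verified against the current Premium SSD documentation.

```python
# Sketch: compare striped premium SSD configurations against single larger disks,
# using the disk figures quoted in the examples above (verify against current docs).

disks = {
    # name: (capacity GiB, throughput MiB/sec)
    "P10": (128, 100),
    "P15": (256, 125),
    "P40": (2048, 250),
    "P60": (8192, 500),
}

def striped(name: str, count: int):
    """Aggregate capacity and throughput of a stripe set built from identical disks."""
    cap, mbps = disks[name]
    return count * cap, count * mbps

for label, (cap, mbps) in [("2 x P15 stripe", striped("P15", 2)),
                           ("1 x P40 disk", disks["P40"]),
                           ("4 x P10 stripe", striped("P10", 4)),
                           ("1 x P60 disk", disks["P60"])]:
    print(f"{label:>15}: {cap:5d} GiB, {mbps:4d} MiB/sec")
```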
No in-VM configured storage redundancy should be used, since Azure storage already keeps the data redundant
The disks the stripe set is built from need to be of the same size
Striping across multiple smaller disks is the best way to achieve a good price/performance
ratio using Azure premium storage. It is understood that striping has some additional
deployment and management overhead.
For specific stripe size recommendations, read the documentation for the different DBMS,
like SAP HANA Azure virtual machine storage configurations.
Next steps
Read the articles:
Considerations for Azure Virtual Machines DBMS deployment for SAP workload
SAP HANA Azure virtual machine storage configurations