Azure Virtual Machines (VM) is one of several types of on-demand, scalable computing resources that Azure offers.
Typically, you choose a VM when you need more control over the computing environment than the other choices
offer. This article gives you information about what you should consider before you create a VM, how you create it,
and how you manage it.
An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware
that runs it. However, you still need to maintain the VM by performing tasks, such as configuring, patching, and
installing the software that runs on it.
Azure virtual machines can be used in various ways. Some examples are:
Development and test – Azure VMs offer a quick and easy way to create a computer with specific
configurations required to code and test an application.
Applications in the cloud – Because demand for your application can fluctuate, it might make economic
sense to run it on a VM in Azure. You pay for extra VMs when you need them and shut them down when you
don’t.
Extended datacenter – Virtual machines in an Azure virtual network can easily be connected to your
organization’s network.
The number of VMs that your application uses can scale up and out to whatever is required to meet your needs.
Method | Description
Azure portal | Select a location from the list when you create a VM.
Availability
Azure announced an industry-leading single-instance virtual machine Service Level Agreement of 99.9%, provided
you deploy the VM with premium storage for all disks. In order for your deployment to qualify for the standard
99.95% VM Service Level Agreement, you still need to deploy two or more VMs running your workload inside of
an availability set. An availability set ensures that your VMs are distributed across multiple fault domains in the
Azure data centers as well as deployed onto hosts with different maintenance windows. The full Azure SLA explains
the guaranteed availability of Azure as a whole.
VM size
The size of the VM that you use is determined by the workload that you want to run. The size that you choose then
determines factors such as processing power, memory, and storage capacity. Azure offers a wide variety of sizes to
support many types of uses.
Azure charges an hourly price based on the VM’s size and operating system. For partial hours, Azure charges only
for the minutes used. Storage is priced and charged separately.
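Because billing for partial hours is per-minute, the compute cost of a short-lived VM can be estimated directly from its runtime. A minimal sketch, using a made-up hourly rate (real prices vary by VM size and operating system):

```python
# Illustrative sketch of Azure's per-minute VM billing for partial hours.
# The hourly rate below is a hypothetical example value, not a real price.
def vm_cost(hourly_rate: float, minutes_used: int) -> float:
    """Charge only for the minutes the VM actually runs."""
    return round(hourly_rate * minutes_used / 60, 4)

# A VM that runs for 90 minutes is billed for 1.5 hours, not 2 full hours.
print(vm_cost(0.10, 90))  # 0.15
```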
VM Limits
Your subscription has default quota limits in place that could impact the deployment of many VMs for your project.
The current limit on a per-subscription basis is 20 VMs per region. Limits can be raised by filing a support ticket
requesting an increase.
Operating system disks and images
Virtual machines use virtual hard disks (VHDs) to store their operating system (OS) and data. VHDs are also used
for the images you can choose from to install an OS.
Azure provides many marketplace images to use with various versions and types of Windows Server operating
systems. Marketplace images are identified by image publisher, offer, sku, and version (typically version is specified
as latest). Only 64-bit operating systems are supported. For more information on the supported guest operating
systems, roles, and features, see Microsoft server software support for Microsoft Azure virtual machines.
This table shows some ways that you can find the information for an image.
Method | Description
Azure portal | The values are automatically specified for you when you select an image to use.
You can choose to upload and use your own image; when you do, the publisher name, offer, and SKU aren't used.
Extensions
VM extensions give your VM additional capabilities through post deployment configuration and automated tasks.
These common tasks can be accomplished using extensions:
Run custom scripts – The Custom Script Extension helps you configure workloads on the VM by running
your script when the VM is provisioned.
Deploy and manage configurations – The PowerShell Desired State Configuration (DSC) Extension helps
you set up DSC on a VM to manage configurations and environments.
Collect diagnostics data – The Azure Diagnostics Extension helps you configure the VM to collect diagnostics
data that can be used to monitor the health of your application.
Related resources
The resources in this table are used by the VM and need to exist or be created when the VM is created.
Azure virtual machines (VMs) can be created through the Azure portal. This method provides a browser-based
user interface to create VMs and their associated resources. This quickstart shows you how to use the Azure
portal to deploy a virtual machine (VM) in Azure that runs Windows Server 2019. To see your VM in action, you
then RDP to the VM and install the IIS web server.
If you don't have an Azure subscription, create a free account before you begin.
Sign in to Azure
Sign in to the Azure portal at https://portal.azure.com.
5. Under Instance details, type myVM for the Virtual machine name, choose East US for your
Region, and then choose Windows Server 2019 Datacenter for the Image. Leave the other defaults.
6. Under Administrator account, provide a username, such as azureuser, and a password. The password
must be at least 12 characters long and meet the defined complexity requirements.
7. Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP (80)
from the drop-down.
8. Leave the remaining defaults and then select the Review + create button at the bottom of the page.
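The password rule from step 6 can be sketched as a small validator. This is an illustration only: the three-of-four-category rule below is an assumption based on Azure's documented password guidance, not something stated in this quickstart.

```python
import re

# Sketch of the VM admin password policy described above: at least 12
# characters and (assumed here) at least three of these four categories:
# lowercase, uppercase, digit, special character.
def meets_vm_password_policy(password: str) -> bool:
    if len(password) < 12:
        return False
    categories = [
        re.search(r"[a-z]", password),
        re.search(r"[A-Z]", password),
        re.search(r"\d", password),
        re.search(r"[^a-zA-Z0-9]", password),
    ]
    return sum(1 for c in categories if c) >= 3

print(meets_vm_password_policy("Sh0rt!"))          # False (too short)
print(meets_vm_password_policy("CorrectHorse9!"))  # True
```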
2. In the Connect to virtual machine page, keep the default options to connect by IP address, over port
3389, and click Download RDP file.
3. Open the downloaded RDP file and click Connect when prompted.
4. In the Windows Security window, select More choices and then Use a different account. Type the
username as localhost\username, enter the password you created for the virtual machine, and then click OK.
5. You may receive a certificate warning during the sign-in process. Click Yes or Continue to create the
connection.
Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources.
Select the resource group for the virtual machine, then select Delete . Confirm the name of the resource group to
finish deleting the resources.
Next steps
In this quickstart, you deployed a simple virtual machine, opened a network port for web traffic, and installed a basic
web server. To learn more about Azure virtual machines, continue to the tutorial for Windows VMs.
Azure Windows virtual machine tutorials
Quickstart: Create a Windows virtual machine in
Azure with PowerShell
11/2/2020 • 2 minutes to read
The Azure PowerShell module is used to create and manage Azure resources from the PowerShell command line
or in scripts. This quickstart shows you how to use the Azure PowerShell module to deploy a virtual machine
(VM) in Azure that runs Windows Server 2016. You will also RDP to the VM and install the IIS web server, to show
the VM in action.
If you don't have an Azure subscription, create a free account before you begin.
New-AzVm `
-ResourceGroupName "myResourceGroup" `
-Name "myVM" `
-Location "East US" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-PublicIpAddressName "myPublicIpAddress" `
-OpenPorts 80,3389
Use the following command to create a remote desktop session from your local computer. Replace the IP address
with the public IP address of your VM.
mstsc /v:publicIpAddress
In the Windows Security window, select More choices, and then select Use a different account. Type the
username as localhost\username, enter the password you created for the virtual machine, and then click OK.
You may receive a certificate warning during the sign-in process. Click Yes or Continue to create the connection.
Next steps
In this quickstart, you deployed a simple virtual machine, opened a network port for web traffic, and installed a
basic web server. To learn more about Azure virtual machines, continue to the tutorial for Windows VMs.
Azure Windows virtual machine tutorials
Quickstart: Create a Windows virtual machine with
the Azure CLI
11/2/2020 • 3 minutes to read
The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart
shows you how to use the Azure CLI to deploy a virtual machine (VM) in Azure that runs Windows Server 2016. To
see your VM in action, you then RDP to the VM and install the IIS web server.
If you don't have an Azure subscription, create a free account before you begin.
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image win2016datacenter \
--admin-username azureuser
It takes a few minutes to create the VM and supporting resources. The following example output shows the VM
create operation was successful.
{
"fqdns": "",
"id":
"/subscriptions/<guid>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "52.174.34.95",
"resourceGroup": "myResourceGroup"
}
Note your own publicIpAddress in the output from your VM. This address is used to access the VM in the next
steps.
mstsc /v:publicIpAddress
Next steps
In this quickstart, you deployed a simple virtual machine, opened a network port for web traffic, and installed a basic
web server. To learn more about Azure virtual machines, continue to the tutorial for Windows VMs.
Azure Windows virtual machine tutorials
Quickstart: Create a Windows virtual machine using
an ARM template
11/2/2020 • 5 minutes to read
This quickstart shows you how to use an Azure Resource Manager template (ARM template) to deploy a Windows
virtual machine (VM) in Azure.
An ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for
your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to
write the sequence of programming commands to create it.
If your environment meets the prerequisites and you're familiar with using ARM templates, select the Deploy to
Azure button. The template will open in the Azure portal.
Prerequisites
If you don't have an Azure subscription, create a free account before you begin.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminUsername": {
"type": "string",
"metadata": {
"description": "Username for the Virtual Machine."
}
},
"adminPassword": {
"type": "securestring",
"minLength": 12,
"metadata": {
"description": "Password for the Virtual Machine."
}
},
"dnsLabelPrefix": {
"type": "string",
"defaultValue": "[toLower(concat(parameters('vmName'),'-', uniqueString(resourceGroup().id,
parameters('vmName'))))]",
"metadata": {
"description": "Unique DNS Name for the Public IP used to access the Virtual Machine."
}
},
"publicIpName": {
"type": "string",
"defaultValue": "myPublicIP",
"metadata": {
"description": "Name for the Public IP used to access the Virtual Machine."
}
},
"publicIPAllocationMethod": {
"type": "string",
"defaultValue": "Dynamic",
"allowedValues": [
"Dynamic",
"Static"
],
"metadata": {
"description": "Allocation method for the Public IP used to access the Virtual Machine."
}
},
"publicIpSku": {
"type": "string",
"defaultValue": "Basic",
"allowedValues": [
"Basic",
"Standard"
],
"metadata": {
"description": "SKU for the Public IP used to access the Virtual Machine."
}
},
"OSVersion": {
"type": "string",
"defaultValue": "2019-Datacenter",
"allowedValues": [
"2008-R2-SP1",
"2012-Datacenter",
"2012-R2-Datacenter",
"2016-Nano-Server",
"2016-Datacenter-with-Containers",
"2016-Datacenter",
"2019-Datacenter",
"2019-Datacenter-Core",
"2019-Datacenter-Core-smalldisk",
"2019-Datacenter-Core-with-Containers",
"2019-Datacenter-Core-with-Containers-smalldisk",
"2019-Datacenter-smalldisk",
"2019-Datacenter-with-Containers",
"2019-Datacenter-with-Containers-smalldisk"
],
"metadata": {
"description": "The Windows version for the VM. This will pick a fully patched image of this given
Windows version."
}
},
"vmSize": {
"type": "string",
"defaultValue": "Standard_D2_v3",
"metadata": {
"description": "Size of the virtual machine."
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
},
"vmName": {
"type": "string",
"defaultValue": "simple-vm",
"metadata": {
"description": "Name of the virtual machine."
}
}
},
"variables": {
"storageAccountName": "[concat('bootdiags', uniquestring(resourceGroup().id))]",
"nicName": "myVMNic",
"addressPrefix": "10.0.0.0/16",
"subnetName": "Subnet",
"subnetPrefix": "10.0.0.0/24",
"virtualNetworkName": "MyVNET",
"subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', variables('virtualNetworkName'),
variables('subnetName'))]",
"networkSecurityGroupName": "default-NSG"
},
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2019-06-01",
"name": "[variables('storageAccountName')]",
"location": "[parameters('location')]",
"sku": {
"name": "Standard_LRS"
},
"kind": "Storage",
"properties": {}
},
{
"type": "Microsoft.Network/publicIPAddresses",
"apiVersion": "2020-06-01",
"name": "[parameters('publicIPName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[parameters('publicIpSku')]"
},
"properties": {
"publicIPAllocationMethod": "[parameters('publicIPAllocationMethod')]",
"dnsSettings": {
"domainNameLabel": "[parameters('dnsLabelPrefix')]"
}
}
},
{
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2020-06-01",
"name": "[variables('networkSecurityGroupName')]",
"location": "[parameters('location')]",
"properties": {
"securityRules": [
{
"name": "default-allow-3389",
"properties": {
"priority": 1000,
"access": "Allow",
"direction": "Inbound",
"destinationPortRange": "3389",
"protocol": "Tcp",
"sourcePortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*"
}
}
]
}
},
{
"type": "Microsoft.Network/virtualNetworks",
"apiVersion": "2020-06-01",
"name": "[variables('virtualNetworkName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
],
"properties": {
"addressSpace": {
"addressPrefixes": [
"[variables('addressPrefix')]"
]
},
"subnets": [
{
"name": "[variables('subnetName')]",
"properties": {
"addressPrefix": "[variables('subnetPrefix')]",
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups',
variables('networkSecurityGroupName'))]"
}
}
}
]
}
},
{
"type": "Microsoft.Network/networkInterfaces",
"apiVersion": "2020-06-01",
"name": "[variables('nicName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPName'))]",
"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPName'))]"
},
"subnet": {
"id": "[variables('subnetRef')]"
}
}
}
]
}
},
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2020-06-01",
"name": "[parameters('vmName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"osProfile": {
"computerName": "[parameters('vmName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]"
},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "[parameters('OSVersion')]",
"version": "latest"
},
"osDisk": {
"createOption": "FromImage",
"managedDisk": {
"storageAccountType": "StandardSSD_LRS"
}
},
"dataDisks": [
{
"diskSizeGB": 1023,
"lun": 0,
"createOption": "Empty"
}
]
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts',
variables('storageAccountName'))).primaryEndpoints.blob]"
}
}
}
}
],
"outputs": {
"hostname": {
"type": "string",
"value": "[reference(parameters('publicIPName')).dnsSettings.fqdn]"
}
}
}
2. Select or enter the following values. Use the default values when available.
Subscription : select an Azure subscription.
Resource group : select an existing resource group from the drop-down, or select Create new , enter a
unique name for the resource group, and then click OK .
Location : select a location. For example, Central US .
Admin username : provide a username, such as azureuser.
Admin password : provide a password to use for the admin account. The password must be at least 12
characters long and meet the defined complexity requirements.
DNS label prefix : enter a unique identifier to use as part of the DNS label.
Windows OS version : select which version of Windows you want to run on the VM.
VM size : select the size to use for the VM.
Location : the default is the same location as the resource group, if it already exists.
3. Select Review + create . After validation completes, select Create to create and deploy the VM.
The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use Azure
PowerShell, the Azure CLI, and the REST API. To learn about other deployment methods, see Deploy templates.
Clean up resources
When no longer needed, delete the resource group, which deletes the VM and all of the resources in the resource
group.
1. Select the Resource group .
2. On the page for the resource group, select Delete .
3. When prompted, type the name of the resource group and then select Delete .
Next steps
In this quickstart, you deployed a simple virtual machine using an ARM template. To learn more about Azure virtual
machines, continue to the tutorial for Windows VMs.
Azure Windows virtual machine tutorials
Tutorial: Create and Manage Windows VMs with
Azure PowerShell
11/2/2020 • 7 minutes to read
Azure virtual machines provide a fully configurable and flexible computing environment. This tutorial covers
basic Azure virtual machine (VM) deployment tasks like selecting a VM size, selecting a VM image, and deploying
a VM. You learn how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
New-AzResourceGroup `
-ResourceGroupName "myResourceGroupVM" `
-Location "EastUS"
The resource group is specified when creating or modifying a VM, as you'll see throughout this tutorial.
Create a VM
When creating a VM, several options are available like operating system image, network configuration, and
administrative credentials. This example creates a VM named myVM, running the default version of Windows
Server 2016 Datacenter.
Set the username and password needed for the administrator account on the VM with Get-Credential:
$cred = Get-Credential
Connect to VM
After the deployment has completed, create a remote desktop connection with the VM.
Run the following command to return the public IP address of the VM. Take note of this IP address so you can
connect to it with your browser to test web connectivity in a future step.
Get-AzPublicIpAddress `
-ResourceGroupName "myResourceGroupVM" | Select IpAddress
Use the following command, on your local machine, to create a remote desktop session with the VM. Replace the
IP address with the publicIPAddress of your VM. When prompted, enter the credentials used when creating the
VM.
mstsc /v:<publicIpAddress>
In the Windows Security window, select More choices and then Use a different account . Type the
username and password you created for the VM and then click OK .
Use the Get-AzVMImageOffer command to return a list of image offers. With this command, the returned list is
filtered on the specified publisher, MicrosoftWindowsServer:
Get-AzVMImageOffer `
-Location "EastUS" `
-PublisherName "MicrosoftWindowsServer"
The Get-AzVMImageSku command will then filter on the publisher and offer name to return a list of image
names.
Get-AzVMImageSku `
-Location "EastUS" `
-PublisherName "MicrosoftWindowsServer" `
-Offer "WindowsServer"
This information can be used to deploy a VM with a specific image. This example deploys a VM using the latest
version of a Windows Server 2016 with Containers image.
New-AzVm `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM2" `
-Location "EastUS" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-PublicIpAddressName "myPublicIpAddress2" `
-ImageName "MicrosoftWindowsServer:WindowsServer:2016-Datacenter-with-Containers:latest" `
-Credential $cred `
-AsJob
The -AsJob parameter creates the VM as a background task, so that the PowerShell prompt returns to you
immediately. You can view details of background jobs with the Get-Job cmdlet.
Understand VM sizes
The VM size determines the amount of compute resources like CPU, GPU, and memory that are made available
to the VM. Virtual machines should be created using a VM size appropriate for the workload. If a workload
increases, an existing virtual machine can also be resized.
VM Sizes
The following table categorizes sizes into use cases.
Type | Sizes | Description
General purpose | B, Dsv3, Dv3, DSv2, Dv2, Av2, DC | Balanced CPU-to-memory. Ideal for dev/test and small to medium applications and data solutions.
Memory optimized | Esv3, Ev3, M, DSv2, Dv2 | High memory-to-core. Great for relational databases, medium to large caches, and in-memory analytics.
Storage optimized | Lsv2, Ls | High disk throughput and IO. Ideal for Big Data, SQL, and NoSQL databases.
GPU | NV, NVv2, NC, NCv2, NCv3, ND | Specialized VMs targeted for heavy graphic rendering and video editing.
Resize a VM
After a VM has been deployed, it can be resized to increase or decrease resource allocation.
Before resizing a VM, check if the size you want is available on the current VM cluster. The Get-AzVMSize
command returns a list of sizes.
If the size is available, the VM can be resized from a powered-on state; however, it is rebooted during the
operation.
$vm = Get-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-VMName "myVM"
$vm.HardwareProfile.VmSize = "Standard_DS3_v2"
Update-AzVM `
-VM $vm `
-ResourceGroupName "myResourceGroupVM"
If the size you want isn't available on the current cluster, the VM needs to be deallocated before the resize
operation can occur. Deallocating a VM will remove any data on the temp disk, and the public IP address will
change unless a static IP address is being used.
Stop-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM" -Force
$vm = Get-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-VMName "myVM"
$vm.HardwareProfile.VmSize = "Standard_E2s_v3"
Update-AzVM -VM $vm `
-ResourceGroupName "myResourceGroupVM"
Start-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name $vm.name
VM power states
An Azure VM can have one of many power states.
To get the state of a particular VM, use the Get-AzVM command. Be sure to specify a valid name for a VM and
resource group.
Get-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM" `
-Status | Select @{n="Status"; e={$_.Statuses[1].Code}}
Status
------
PowerState/running
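The Status value returned above is a code of the form PowerState/&lt;state&gt;. A small sketch of extracting just the state token, assuming that code format:

```python
# The -Status output shown above encodes power state as "PowerState/<state>".
# Illustrative helper for extracting the state token from that code string.
def parse_power_state(status_code: str) -> str:
    prefix = "PowerState/"
    if not status_code.startswith(prefix):
        raise ValueError(f"not a power state code: {status_code!r}")
    return status_code[len(prefix):]

print(parse_power_state("PowerState/running"))      # running
print(parse_power_state("PowerState/deallocated"))  # deallocated
```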
To retrieve the power state of all the VMs in your subscription, use the Virtual Machines - List All API with
parameter statusOnly set to true.
Management tasks
During the lifecycle of a VM, you may want to run management tasks like starting, stopping, or deleting a VM.
Additionally, you may want to create scripts to automate repetitive or complex tasks. Using Azure PowerShell,
many common management tasks can be run from the command line or in scripts.
Stop a VM
Stop and deallocate a VM with Stop-AzVM:
Stop-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM" -Force
If you want to keep the VM in a provisioned state, use the -StayProvisioned parameter.
Start a VM
Start-AzVM `
-ResourceGroupName "myResourceGroupVM" `
-Name "myVM"
Remove-AzResourceGroup `
-Name "myResourceGroupVM" `
-Force
Next steps
In this tutorial, you learned about basic VM creation and management such as how to:
Create and connect to a VM
Select and use VM images
View and use specific VM sizes
Resize a VM
View and understand VM state
Advance to the next tutorial to learn about VM disks.
Create and Manage VM disks
Tutorial - Manage Azure disks with Azure PowerShell
11/2/2020 • 6 minutes to read
Azure virtual machines use disks to store the VM's operating system, applications, and data. When creating a VM,
it's important to choose a disk size and configuration appropriate to the expected workload. This tutorial covers
deploying and managing VM disks. You learn about:
OS disks and temporary disks
Data disks
Standard and Premium disks
Disk performance
Attaching and preparing data disks
VM disk types
Azure provides two types of disks:
Standard disks - Backed by HDDs, standard disks deliver cost-effective storage while still being performant. They
are ideal for cost-effective dev and test workloads.
Premium disks - Backed by SSD-based, high-performance, low-latency disks, premium disks are perfect for VMs
running production workloads. VM sizes with an S in the size name typically support Premium Storage. For
example, DS-series, DSv2-series, GS-series, and FS-series VMs support premium storage. When you select a disk
size, the value is rounded up to the next tier. For example, if the disk size is more than 64 GB but less than 128 GB,
the disk type is P10.
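The round-up rule can be sketched as a lookup against the tier sizes. This is an illustrative helper, not part of any Azure SDK; the tier list mirrors the premium SSD size table in this section.

```python
import bisect

# Premium SSD size tiers (GiB), as listed in this section's table.
PREMIUM_TIERS = [
    ("P1", 4), ("P2", 8), ("P3", 16), ("P4", 32), ("P6", 64),
    ("P10", 128), ("P15", 256), ("P20", 512), ("P30", 1024),
    ("P40", 2048), ("P50", 4096), ("P60", 8192), ("P70", 16384),
    ("P80", 32767),
]

def premium_tier(size_gib: int) -> str:
    """Return the smallest premium tier that can hold size_gib (round up)."""
    sizes = [s for _, s in PREMIUM_TIERS]
    i = bisect.bisect_left(sizes, size_gib)
    if i == len(sizes):
        raise ValueError("requested size exceeds the largest tier")
    return PREMIUM_TIERS[i][0]

# A disk request between 64 GiB and 128 GiB rounds up to P10.
print(premium_tier(100))  # P10
```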
Premium SSD sizes (disk size / provisioned IOPS per disk / provisioned throughput per disk / max burst duration):
P1: 4 GiB / 120 IOPS / 25 MB/sec / 30 min
P2: 8 GiB / 120 IOPS / 25 MB/sec / 30 min
P3: 16 GiB / 120 IOPS / 25 MB/sec / 30 min
P4: 32 GiB / 120 IOPS / 25 MB/sec / 30 min
P6: 64 GiB / 240 IOPS / 50 MB/sec / 30 min
P10: 128 GiB / 500 IOPS / 100 MB/sec / 30 min
P15: 256 GiB / 1,100 IOPS / 125 MB/sec / 30 min
P20: 512 GiB / 2,300 IOPS / 150 MB/sec / 30 min
P30: 1,024 GiB / 5,000 IOPS / 200 MB/sec
P40: 2,048 GiB / 7,500 IOPS / 250 MB/sec
P50: 4,096 GiB / 7,500 IOPS / 250 MB/sec
P60: 8,192 GiB / 16,000 IOPS / 500 MB/sec
P70: 16,384 GiB / 18,000 IOPS / 750 MB/sec
P80: 32,767 GiB / 20,000 IOPS / 900 MB/sec
When you provision a premium storage disk, unlike standard storage, you are guaranteed the capacity, IOPS, and
throughput of that disk. For example, if you create a P50 disk, Azure provisions 4,095-GB storage capacity, 7,500
IOPS, and 250-MB/s throughput for that disk. Your application can use all or part of the capacity and performance.
Premium SSD disks are designed to provide low single-digit millisecond latencies and target IOPS and throughput
described in the preceding table 99.9% of the time.
While the preceding table identifies the maximum IOPS per disk, a higher level of performance can be achieved by
striping multiple data disks. For instance, 64 data disks can be attached to a Standard_GS5 VM. If each of these
disks is sized as a P30, a maximum of 80,000 IOPS can be achieved. For detailed information on the maximum
IOPS per VM, see VM types and sizes.
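The striping example above reduces to taking the smaller of the summed per-disk IOPS and the VM-level cap. A minimal sketch, where the 80,000 figure is treated as the Standard_GS5 VM-level IOPS limit per the example in the text:

```python
# Sketch of aggregate IOPS when striping data disks, as described above.
# 64 P30 disks provide 5,000 IOPS each, but the VM itself caps the total
# (80,000 is assumed here to be the Standard_GS5 VM-level limit).
def effective_iops(disk_count: int, iops_per_disk: int, vm_max_iops: int) -> int:
    """Striped IOPS is limited by both the disks and the VM size."""
    return min(disk_count * iops_per_disk, vm_max_iops)

print(effective_iops(64, 5000, 80000))  # 80000
```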
New-AzVm `
-ResourceGroupName "myResourceGroupDisk" `
-Name "myVM" `
-Location "East US" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-PublicIpAddressName "myPublicIpAddress"
Create the initial configuration with New-AzDiskConfig. The following example configures a disk that is 128
gigabytes in size.
$diskConfig = New-AzDiskConfig `
-Location "EastUS" `
-CreateOption Empty `
-DiskSizeGB 128
Get the virtual machine that you want to add the data disk to with the Get-AzVM command.
Add the data disk to the virtual machine configuration with the Add-AzVMDataDisk command.
$vm = Add-AzVMDataDisk `
-VM $vm `
-Name "myDataDisk" `
-CreateOption Attach `
-ManagedDiskId $dataDisk.Id `
-Lun 1
$vm.StorageProfile.DataDisks
Name : myDataDisk
DiskSizeGB : 128
Lun : 1
Caching : None
CreateOption : Attach
SourceImage :
VirtualHardDisk :
Next steps
In this tutorial, you learned about VM disks topics such as:
OS disks and temporary disks
Data disks
Standard and Premium disks
Disk performance
Attaching and preparing data disks
Advance to the next tutorial to learn about automating VM configuration.
Automate VM configuration
Tutorial - Deploy applications to a Windows virtual
machine in Azure with the Custom Script Extension
11/2/2020 • 2 minutes to read
To configure virtual machines (VMs) in a quick and consistent manner, you can use the Custom Script Extension for
Windows. In this tutorial you learn how to:
Use the Custom Script Extension to install IIS
Create a VM that uses the Custom Script Extension
View a running IIS site after the extension is applied
$cred = Get-Credential
Now you can create the VM with New-AzVM. The following example creates a VM named myVM in the EastUS
location. If they do not already exist, the resource group myResourceGroupAutomate and supporting network
resources are created. To allow web traffic, the cmdlet also opens port 80.
New-AzVm `
-ResourceGroupName "myResourceGroupAutomate" `
-Name "myVM" `
-Location "East US" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-PublicIpAddressName "myPublicIpAddress" `
-OpenPorts 80 `
-Credential $cred
Get-AzPublicIPAddress `
-ResourceGroupName "myResourceGroupAutomate" `
-Name "myPublicIPAddress" | select IpAddress
You can then enter the public IP address into a web browser. The website is displayed, including the hostname of
the VM, as in the following example:
Next steps
In this tutorial, you automated the IIS install on a VM. You learned how to:
Use the Custom Script Extension to install IIS
Create a VM that uses the Custom Script Extension
View a running IIS site after the extension is applied
Advance to the next tutorial to learn how to create custom VM images.
Create custom VM images
Tutorial: Create Windows VM images with Azure
PowerShell
11/2/2020 • 6 minutes to read • Edit Online
Images can be used to bootstrap deployments and ensure consistency across multiple VMs. In this tutorial, you
create your own specialized image of an Azure virtual machine using PowerShell and store it in a Shared Image
Gallery. You learn how to:
Create a Shared Image Gallery
Create an image definition
Create an image version
Create a VM from an image
Share an image gallery
Overview
A Shared Image Gallery simplifies custom image sharing across your organization. Custom images are like
marketplace images, but you create them yourself. Custom images can be used to bootstrap configurations such
as preloading applications, application configurations, and other OS configurations.
The Shared Image Gallery lets you share your custom VM images with others. Choose which images you want to
share, which regions you want to make them available in, and who you want to share them with.
The Shared Image Gallery feature has multiple resource types:
Resource | Description
Image definition | Image definitions are created within a gallery and carry information about the image and requirements for using it internally. This includes whether the image is Windows or Linux, release notes, and minimum and maximum memory requirements. It is a definition of a type of image.
Get the VM
You can see a list of VMs that are available in a resource group using Get-AzVM. Once you know the VM name and
its resource group, you can use Get-AzVM again to get the VM object and store it in a variable for later use. This
example gets a VM named sourceVM from the myResourceGroup resource group and assigns it to the variable
$sourceVM.
$sourceVM = Get-AzVM `
-Name sourceVM `
-ResourceGroupName myResourceGroup
$resourceGroup = New-AzResourceGroup `
-Name 'myGalleryRG' `
-Location 'EastUS'
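The $gallery object referenced by the later commands is the Shared Image Gallery itself, created with New-AzGallery. A minimal sketch, assuming an authenticated Az session and the resource group created above (the gallery name myGallery and the description are illustrative):

```powershell
# Requires the Az module and an authenticated session (Connect-AzAccount)
$gallery = New-AzGallery `
   -GalleryName 'myGallery' `
   -ResourceGroupName $resourceGroup.ResourceGroupName `
   -Location $resourceGroup.Location `
   -Description 'Shared Image Gallery for my organization'
```

The gallery name becomes part of the image resource IDs, so it must be unique within the resource group.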
$galleryImage = New-AzGalleryImageDefinition `
-GalleryName $gallery.Name `
-ResourceGroupName $resourceGroup.ResourceGroupName `
-Location $gallery.Location `
-Name 'myImageDefinition' `
-OsState specialized `
-OsType Windows `
-Publisher 'myPublisher' `
-Offer 'myOffer' `
-Sku 'mySKU'
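The $targetRegions parameter consumed by New-AzGalleryImageVersion below is a list of hashtables, one per region the image version should be replicated to. A sketch, with illustrative region names and replica counts:

```powershell
# Each entry needs at least Name; ReplicaCount controls how many replicas per region
$region1 = @{ Name = 'East US'; ReplicaCount = 1 }
$region2 = @{ Name = 'West US 2'; ReplicaCount = 2 }
$targetRegions = @($region1, $region2)
```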
New-AzGalleryImageVersion `
-GalleryImageDefinitionName $galleryImage.Name `
-GalleryImageVersionName '1.0.0' `
-GalleryName $gallery.Name `
-ResourceGroupName $resourceGroup.ResourceGroupName `
-Location $resourceGroup.Location `
-TargetRegion $targetRegions `
-Source $sourceVM.Id.ToString() `
-PublishingProfileEndOfLifeDate '2020-12-01'
It can take a while to replicate the image to all of the target regions.
Create a VM
Once you have a specialized image, you can create one or more new VMs using the New-AzVM cmdlet. To use the
image, use Set-AzVMSourceImage and set -Id to the image definition ID ($galleryImage.Id in this case) to
always use the latest image version.
Replace resource names as needed in this example.
# Create a virtual machine configuration, using $galleryImage.Id so the latest image version is always used.
$vmConfig = New-AzVMConfig -VMName $vmName -VMSize Standard_D1_v2 | `
Set-AzVMSourceImage -Id $galleryImage.Id | `
Add-AzVMNetworkInterface -Id $nic.Id
Clean up resources
When no longer needed, you can use the Remove-AzResourceGroup cmdlet to remove the resource group and all
related resources:
# Delete the gallery
Remove-AzResourceGroup -Name myGalleryRG
# Delete the VM
Remove-AzResourceGroup -Name myResourceGroup
Next steps
In this tutorial, you created a specialized VM image. You learned how to:
Create a Shared Image Gallery
Create an image definition
Create an image version
Create a VM from an image
Share an image gallery
Advance to the next tutorial to learn about how to create highly available virtual machines.
Create highly available VMs
Tutorial: Create and deploy highly available virtual
machines with Azure PowerShell
11/2/2020 • 4 minutes to read
In this tutorial, you learn how to increase the availability and reliability of your Virtual Machines (VMs) using
Availability Sets. Availability Sets make sure the VMs you deploy on Azure are distributed across multiple, isolated
hardware nodes in a cluster.
In this tutorial, you learn how to:
Create an availability set
Create a VM in an availability set
Check available VM sizes
Check Azure Advisor
Create a managed availability set using New-AzAvailabilitySet with the -Sku aligned parameter:
New-AzAvailabilitySet `
-Location "EastUS" `
-Name "myAvailabilitySet" `
-ResourceGroupName "myResourceGroupAvailability" `
-Sku aligned `
-PlatformFaultDomainCount 2 `
-PlatformUpdateDomainCount 2
$cred = Get-Credential
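The two VMs themselves can then be created with New-AzVM, passing -AvailabilitySetName so they are placed in the set. A sketch consistent with the names used above (the VM names myVM1 and myVM2, the virtual network, and the NSG names are illustrative):

```powershell
# Create two VMs inside the availability set; requires an authenticated Az session
for ($i = 1; $i -le 2; $i++) {
    New-AzVm `
        -ResourceGroupName "myResourceGroupAvailability" `
        -Name "myVM$i" `
        -Location "EastUS" `
        -VirtualNetworkName "myVnet" `
        -SubnetName "mySubnet" `
        -SecurityGroupName "myNetworkSecurityGroup" `
        -OpenPorts 80 `
        -AvailabilitySetName "myAvailabilitySet" `
        -Credential $cred
}
```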
It takes a few minutes to create and configure both VMs. When finished, you have two virtual machines distributed
across the underlying hardware.
If you look at the availability set in the portal by going to Resource Groups > myResourceGroupAvailability
> myAvailabilitySet, you should see how the VMs are distributed across the two fault and update domains.
Check for available VM sizes
When you create a VM inside an availability set, you need to know what VM sizes are available on the hardware.
Use the Get-AzVMSize command to get all available sizes for virtual machines that you can deploy in the
availability set.
Get-AzVMSize `
-ResourceGroupName "myResourceGroupAvailability" `
-AvailabilitySetName "myAvailabilitySet"
Next steps
In this tutorial, you learned how to:
Create an availability set
Create a VM in an availability set
Check available VM sizes
Check Azure Advisor
Advance to the next tutorial to learn about virtual machine scale sets.
Create a VM scale set
Tutorial: Create a virtual machine scale set and
deploy a highly available app on Windows with
Azure PowerShell
11/2/2020 • 7 minutes to read
A virtual machine scale set allows you to deploy and manage a set of identical, autoscaling virtual machines. You
can scale the number of VMs in the scale set manually. You can also define rules to autoscale based on resource
usage such as CPU, memory demand, or network traffic. In this tutorial, you deploy a virtual machine scale set in
Azure and learn how to:
Use the Custom Script Extension to define an IIS site to scale
Create a load balancer for your scale set
Create a virtual machine scale set
Increase or decrease the number of instances in a scale set
Create autoscale rules
It takes a few minutes to create and configure all the scale set resources and VMs.
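The $publicSettings variable consumed by Add-AzVmssExtension below holds the extension's public configuration: the script to download and the command to run. A sketch, assuming the Azure Samples IIS configuration script used elsewhere in these tutorials (the script URL is an assumption):

```powershell
# Public settings for the Custom Script Extension: download a script and run it
$publicSettings = @{
    "fileUris"         = (,"https://raw.githubusercontent.com/Azure-Samples/compute-automation-configurations/master/automate-iis.ps1")
    "commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File automate-iis.ps1"
}
```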
# Use Custom Script Extension to install IIS and configure basic website
Add-AzVmssExtension -VirtualMachineScaleSet $vmss `
-Name "customScript" `
-Publisher "Microsoft.Compute" `
-Type "CustomScriptExtension" `
-TypeHandlerVersion 1.8 `
-Setting $publicSettings
# Update the scale set and apply the Custom Script Extension to the VM instances
Update-AzVmss `
-ResourceGroupName "myResourceGroupScaleSet" `
-Name "myScaleSet" `
-VirtualMachineScaleSet $vmss
$vnet = Get-AzVirtualNetwork `
-ResourceGroupName "myResourceGroupScaleSet" `
-Name myVnet
$frontendSubnet = $vnet.Subnets[0]
$frontendSubnetConfig = Set-AzVirtualNetworkSubnetConfig `
-VirtualNetwork $vnet `
-Name mySubnet `
-AddressPrefix $frontendSubnet.AddressPrefix `
-NetworkSecurityGroup $nsgFrontend
# Update the scale set to apply the configuration changes to the VM instances
Update-AzVmss `
-ResourceGroupName "myResourceGroupScaleSet" `
-Name "myScaleSet" `
-VirtualMachineScaleSet $vmss
Get-AzPublicIPAddress `
-ResourceGroupName "myResourceGroupScaleSet" `
-Name "myPublicIPAddress" | select IpAddress
Enter the public IP address into a web browser. The web app is displayed, including the hostname of the VM that
the load balancer distributed traffic to:
To see the scale set in action, you can force-refresh your web browser to see the load balancer distribute traffic
across all the VMs running your app.
Management tasks
Throughout the lifecycle of the scale set, you may need to run one or more management tasks. Additionally, you
may want to create scripts that automate various lifecycle tasks. Azure PowerShell provides a quick way to do
those tasks. Here are a few common tasks.
View VMs in a scale set
To view a list of VM instances in a scale set, use Get-AzVmssVM as follows:
Get-AzVmssVM `
-ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet"
The following example output shows two VM instances in the scale set:
To view additional information about a specific VM instance, add the -InstanceId parameter to Get-AzVmssVM.
The following example views information about VM instance 1:
Get-AzVmssVM `
-ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet" `
-InstanceId "1"
You can then manually increase or decrease the number of virtual machines in the scale set with Update-AzVmss.
The following example sets the number of VMs in your scale set to 3:
# Get current scale set
$scaleset = Get-AzVmss `
-ResourceGroupName "myResourceGroupScaleSet" `
-VMScaleSetName "myScaleSet"
It takes a few minutes to update the specified number of instances in your scale set.
Configure autoscale rules
Rather than manually scaling the number of instances in your scale set, you can define autoscale rules. These rules
monitor the instances in your scale set and respond based on metrics and thresholds you define. The
following example scales out the number of instances by one when the average CPU load is greater than 60% over
a 5-minute period. If the average CPU load then drops below 30% over a 5-minute period, the instances are scaled
in by one instance:
# Define your scale set information
$mySubscriptionId = (Get-AzSubscription)[0].Id
$myResourceGroup = "myResourceGroupScaleSet"
$myScaleSet = "myScaleSet"
$myLocation = "East US"
$myScaleSetId = (Get-AzVmss -ResourceGroupName $myResourceGroup -VMScaleSetName $myScaleSet).Id
# Create a scale up rule to increase the number of instances when average CPU usage exceeds 60% over a 5-minute period
$myRuleScaleUp = New-AzAutoscaleRule `
-MetricName "Percentage CPU" `
-MetricResourceId $myScaleSetId `
-Operator GreaterThan `
-MetricStatistic Average `
-Threshold 60 `
-TimeGrain 00:01:00 `
-TimeWindow 00:05:00 `
-ScaleActionCooldown 00:05:00 `
-ScaleActionDirection Increase `
-ScaleActionValue 1
# Create a scale down rule to decrease the number of instances when average CPU usage drops below 30% over a 5-minute period
$myRuleScaleDown = New-AzAutoscaleRule `
-MetricName "Percentage CPU" `
-MetricResourceId $myScaleSetId `
-Operator LessThan `
-MetricStatistic Average `
-Threshold 30 `
-TimeGrain 00:01:00 `
-TimeWindow 00:05:00 `
-ScaleActionCooldown 00:05:00 `
-ScaleActionDirection Decrease `
-ScaleActionValue 1
# Create a scale profile with your scale up and scale down rules
$myScaleProfile = New-AzAutoscaleProfile `
-DefaultCapacity 2 `
-MaximumCapacity 10 `
-MinimumCapacity 2 `
-Rule $myRuleScaleUp,$myRuleScaleDown `
-Name "autoprofile"
For more design information on the use of autoscale, see autoscale best practices.
Next steps
In this tutorial, you created a virtual machine scale set. You learned how to:
Use the Custom Script Extension to define an IIS site to scale
Create a load balancer for your scale set
Create a virtual machine scale set
Increase or decrease the number of instances in a scale set
Create autoscale rules
Advance to the next tutorial to learn more about load balancing concepts for virtual machines.
Load balance virtual machines
Tutorial: Load balance Windows virtual machines in
Azure to create a highly available application with
Azure PowerShell
11/2/2020 • 8 minutes to read
Load balancing provides a higher level of availability by spreading incoming requests across multiple virtual
machines. In this tutorial, you learn about the different components of the Azure load balancer that distribute
traffic and provide high availability. You learn how to:
Create an Azure load balancer
Create a load balancer health probe
Create load balancer traffic rules
Use the Custom Script Extension to create a basic IIS site
Create virtual machines and attach to a load balancer
View a load balancer in action
Add and remove VMs from a load balancer
$publicIP = New-AzPublicIpAddress `
-ResourceGroupName "myResourceGroupLoadBalancer" `
-Location "EastUS" `
-AllocationMethod "Static" `
-Name "myPublicIP"
$frontendIP = New-AzLoadBalancerFrontendIpConfig `
-Name "myFrontEndPool" `
-PublicIpAddress $publicIP
Create a backend address pool with New-AzLoadBalancerBackendAddressPoolConfig. The VMs attach to this
backend pool in the remaining steps. The following example creates a backend address pool named
myBackEndPool:
$backendPool = New-AzLoadBalancerBackendAddressPoolConfig `
-Name "myBackEndPool"
Now, create the load balancer with New-AzLoadBalancer. The following example creates a load balancer named
myLoadBalancer using the frontend and backend IP pools created in the preceding steps:
$lb = New-AzLoadBalancer `
-ResourceGroupName "myResourceGroupLoadBalancer" `
-Name "myLoadBalancer" `
-Location "EastUS" `
-FrontendIpConfiguration $frontendIP `
-BackendAddressPool $backendPool
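The $probe referenced by the load balancer rule in the next step is a health probe that checks each VM on port 80. A sketch using Add-AzLoadBalancerProbeConfig against the $lb object created above (the probe name and timing values are illustrative):

```powershell
# Add a TCP health probe that checks port 80 every 15 seconds
Add-AzLoadBalancerProbeConfig `
    -Name "myHealthProbe" `
    -LoadBalancer $lb `
    -Protocol Tcp `
    -Port 80 `
    -IntervalInSeconds 15 `
    -ProbeCount 2

# Retrieve the probe object for use in the load balancer rule
$probe = Get-AzLoadBalancerProbeConfig -Name "myHealthProbe" -LoadBalancer $lb
```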
Create a load balancer rule with Add-AzLoadBalancerRuleConfig. The following example balances TCP traffic on port 80 and uses the health probe; to apply the changes, update the load balancer with Set-AzLoadBalancer:
Add-AzLoadBalancerRuleConfig `
-Name "myLoadBalancerRule" `
-LoadBalancer $lb `
-FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
-BackendAddressPool $lb.BackendAddressPools[0] `
-Protocol Tcp `
-FrontendPort 80 `
-BackendPort 80 `
-Probe $probe
Virtual NICs are created with New-AzNetworkInterface. The following example creates three virtual NICs, one for
each VM you create for your app in the following steps. You can create additional virtual NICs and VMs at any
time and add them to the load balancer:
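A sketch of that loop, attaching each NIC to the backend pool of the load balancer. It assumes a $vnet object for the virtual network, created earlier in the full tutorial; the NIC names mirror the VM names so New-AzVM can pick them up:

```powershell
# Create three NICs attached to the load balancer backend pool
for ($i = 1; $i -le 3; $i++) {
    New-AzNetworkInterface `
        -ResourceGroupName "myResourceGroupLoadBalancer" `
        -Name "myVM$i" `
        -Location "EastUS" `
        -Subnet $vnet.Subnets[0] `
        -LoadBalancerBackendAddressPool $lb.BackendAddressPools[0]
}
```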
$availabilitySet = New-AzAvailabilitySet `
-ResourceGroupName "myResourceGroupLoadBalancer" `
-Name "myAvailabilitySet" `
-Location "EastUS" `
-Sku aligned `
-PlatformFaultDomainCount 2 `
-PlatformUpdateDomainCount 2
Set an administrator username and password for the VMs with Get-Credential:
$cred = Get-Credential
Now you can create the VMs with New-AzVM. The following example creates three VMs and the required virtual
network components if they do not already exist:
for ($i=1; $i -le 3; $i++)
{
New-AzVm `
-ResourceGroupName "myResourceGroupLoadBalancer" `
-Name "myVM$i" `
-Location "East US" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-OpenPorts 80 `
-AvailabilitySetName "myAvailabilitySet" `
-Credential $cred `
-AsJob
}
The -AsJob parameter creates the VM as a background task, so the PowerShell prompt returns to you. You can
view details of background jobs with the Get-Job cmdlet. It takes a few minutes to create and configure all three VMs.
Install IIS with Custom Script Extension
In a previous tutorial on How to customize a Windows virtual machine, you learned how to automate VM
customization with the Custom Script Extension for Windows. You can use the same approach to install and
configure IIS on your VMs.
Use Set-AzVMExtension to install the Custom Script Extension. The extension runs
powershell Add-WindowsFeature Web-Server to install the IIS web server, and then updates the Default.htm page to
show the hostname of the VM:
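A sketch of that extension call, looped over the three VMs created above (the extension name IIS and the inline command string are illustrative):

```powershell
# Install IIS on each VM and write the hostname to the default page
for ($i = 1; $i -le 3; $i++) {
    Set-AzVMExtension `
        -ResourceGroupName "myResourceGroupLoadBalancer" `
        -ExtensionName "IIS" `
        -VMName "myVM$i" `
        -Publisher "Microsoft.Compute" `
        -ExtensionType "CustomScriptExtension" `
        -TypeHandlerVersion 1.8 `
        -SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}' `
        -Location "EastUS"
}
```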
Get-AzPublicIPAddress `
-ResourceGroupName "myResourceGroupLoadBalancer" `
-Name "myPublicIP" | select IpAddress
You can then enter the public IP address into a web browser. The website is displayed, including the hostname of
the VM that the load balancer distributed traffic to, as in the following example:
To see the load balancer distribute traffic across all three VMs running your app, you can force-refresh your web
browser.
$nic = Get-AzNetworkInterface `
-ResourceGroupName "myResourceGroupLoadBalancer" `
-Name "myVM2"
$nic.Ipconfigurations[0].LoadBalancerBackendAddressPools=$null
Set-AzNetworkInterface -NetworkInterface $nic
To see the load balancer distribute traffic across the remaining two VMs running your app, you can force-refresh
your web browser. You can now perform maintenance on the VM, such as installing OS updates or performing a
VM reboot.
Add a VM to the load balancer
After performing VM maintenance, or if you need to expand capacity, set the LoadBalancerBackendAddressPools
property of the virtual NIC to the BackendAddressPool from Get-AzLoadBalancer:
Get the load balancer:
$lb = Get-AzLoadBalancer `
-ResourceGroupName myResourceGroupLoadBalancer `
-Name myLoadBalancer
$nic.IpConfigurations[0].LoadBalancerBackendAddressPools=$lb.BackendAddressPools[0]
Set-AzNetworkInterface -NetworkInterface $nic
Next steps
In this tutorial, you created a load balancer and attached VMs to it. You learned how to:
Create an Azure load balancer
Create a load balancer health probe
Create load balancer traffic rules
Use the Custom Script Extension to create a basic IIS site
Create virtual machines and attach to a load balancer
View a load balancer in action
Add and remove VMs from a load balancer
Advance to the next tutorial to learn how to manage VM networking.
Manage VMs and virtual networks
Tutorial: Create and manage Azure virtual networks
for Windows virtual machines with Azure PowerShell
11/2/2020 • 7 minutes to read
Azure virtual machines use Azure networking for internal and external network communication. This tutorial walks
through deploying two virtual machines and configuring Azure networking for these VMs. The examples in this
tutorial assume that the VMs are hosting a web application with a database back-end; however, an application isn't
deployed in the tutorial. In this tutorial, you learn how to:
Create a virtual network and subnet
Create a public IP address
Create a front-end VM
Secure network traffic
Create a back-end VM
VM networking overview
Azure virtual networks enable secure network connections between virtual machines, the internet, and other Azure
services such as Azure SQL Database. Virtual networks are broken down into logical segments called subnets.
Subnets are used to control network flow and as a security boundary. A VM deployment generally includes
a virtual network interface, which is attached to a subnet.
As you complete this tutorial, the following resources are created:
myVNet - The virtual network that the VMs use to communicate with each other and the internet.
myFrontendSubnet - The subnet in myVNet used by the front-end resources.
myPublicIPAddress - The public IP address used to access myFrontendVM from the internet.
myFrontendNic - The network interface used by myFrontendVM to communicate with myBackendVM.
myFrontendVM - The VM used to communicate between the internet and myBackendVM.
myBackendNSG - The network security group that controls communication between the myFrontendVM and
myBackendVM.
myBackendSubnet - The subnet associated with myBackendNSG and used by the back-end resources.
myBackendNic - The network interface used by myBackendVM to communicate with myFrontendVM.
myBackendVM - The VM that uses port 1433 to communicate with myFrontendVM.
Create subnet
For this tutorial, a single virtual network is created with two subnets: a front-end subnet for hosting a web
application and a back-end subnet for hosting a database server.
Before you can create a virtual network, create a resource group using New-AzResourceGroup. The following
example creates a resource group named myRGNetwork in the EastUS location:
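That resource group command looks like the following sketch:

```powershell
# Create the resource group that holds all networking resources for this tutorial
New-AzResourceGroup `
    -Name "myRGNetwork" `
    -Location "EastUS"
```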
$frontendSubnet = New-AzVirtualNetworkSubnetConfig `
-Name myFrontendSubnet `
-AddressPrefix 10.0.0.0/24
$backendSubnet = New-AzVirtualNetworkSubnetConfig `
-Name myBackendSubnet `
-AddressPrefix 10.0.1.0/24
$vnet = New-AzVirtualNetwork `
-ResourceGroupName myRGNetwork `
-Location EastUS `
-Name myVNet `
-AddressPrefix 10.0.0.0/16 `
-Subnet $frontendSubnet, $backendSubnet
At this point, a network has been created and segmented into two subnets, one for front-end services, and another
for back-end services. In the next section, virtual machines are created and connected to these subnets.
$pip = New-AzPublicIpAddress `
-ResourceGroupName myRGNetwork `
-Location EastUS `
-AllocationMethod Dynamic `
-Name myPublicIPAddress
You could change the -AllocationMethod parameter to Static to assign a static public IP address.
Create a front-end VM
For a VM to communicate in a virtual network, it needs a virtual network interface (NIC). Create a NIC using
New-AzNetworkInterface:
$frontendNic = New-AzNetworkInterface `
-ResourceGroupName myRGNetwork `
-Location EastUS `
-Name myFrontend `
-SubnetId $vnet.Subnets[0].Id `
-PublicIpAddressId $pip.Id
Set the username and password needed for the administrator account on the VM using Get-Credential. You use
these credentials to connect to the VM in additional steps:
$cred = Get-Credential
New-AzVM `
-Credential $cred `
-Name myFrontend `
-PublicIpAddressName myPublicIPAddress `
-ResourceGroupName myRGNetwork `
-Location "EastUS" `
-Size Standard_D1 `
-SubnetName myFrontendSubnet `
-VirtualNetworkName myVNet
$nsgFrontendRule = New-AzNetworkSecurityRuleConfig `
-Name myFrontendNSGRule `
-Protocol Tcp `
-Direction Inbound `
-Priority 200 `
-SourceAddressPrefix * `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 80 `
-Access Allow
You can limit internal traffic to myBackendVM from only myFrontendVM by creating an NSG for the back-end
subnet. The following example creates an NSG rule named myBackendNSGRule:
$nsgBackendRule = New-AzNetworkSecurityRuleConfig `
-Name myBackendNSGRule `
-Protocol Tcp `
-Direction Inbound `
-Priority 100 `
-SourceAddressPrefix 10.0.0.0/24 `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 1433 `
-Access Allow
$nsgFrontend = New-AzNetworkSecurityGroup `
-ResourceGroupName myRGNetwork `
-Location EastUS `
-Name myFrontendNSG `
-SecurityRules $nsgFrontendRule
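The back-end NSG referenced later as $nsgBackend is created the same way, from the myBackendNSGRule defined above. A sketch:

```powershell
# Create the back-end NSG that limits inbound traffic to port 1433 from the front-end subnet
$nsgBackend = New-AzNetworkSecurityGroup `
    -ResourceGroupName myRGNetwork `
    -Location EastUS `
    -Name myBackendNSG `
    -SecurityRules $nsgBackendRule
```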
$vnet = Get-AzVirtualNetwork `
-ResourceGroupName myRGNetwork `
-Name myVNet
$frontendSubnet = $vnet.Subnets[0]
$backendSubnet = $vnet.Subnets[1]
$frontendSubnetConfig = Set-AzVirtualNetworkSubnetConfig `
-VirtualNetwork $vnet `
-Name myFrontendSubnet `
-AddressPrefix $frontendSubnet.AddressPrefix `
-NetworkSecurityGroup $nsgFrontend
$backendSubnetConfig = Set-AzVirtualNetworkSubnetConfig `
-VirtualNetwork $vnet `
-Name myBackendSubnet `
-AddressPrefix $backendSubnet.AddressPrefix `
-NetworkSecurityGroup $nsgBackend
Set-AzVirtualNetwork -VirtualNetwork $vnet
Create a back-end VM
The easiest way to create the back-end VM for this tutorial is by using a SQL Server image. This tutorial only
creates the VM with the database server, but doesn't provide information about accessing the database.
Create myBackendNic:
$backendNic = New-AzNetworkInterface `
-ResourceGroupName myRGNetwork `
-Location EastUS `
-Name myBackend `
-SubnetId $vnet.Subnets[1].Id
Set the username and password needed for the administrator account on the VM with Get-Credential:
$cred = Get-Credential
Create myBackendVM.
New-AzVM `
-Credential $cred `
-Name myBackend `
-ImageName "MicrosoftSQLServer:SQL2016SP1-WS2016:Enterprise:latest" `
-ResourceGroupName myRGNetwork `
-Location "EastUS" `
-SubnetName myBackendSubnet `
-VirtualNetworkName myVNet
The image in this example has SQL Server installed, but it isn't used in this tutorial. It's included to show you how
you can configure a VM to handle web traffic and a VM to handle database management.
Next steps
In this tutorial, you created and secured Azure networks as related to virtual machines.
Create a virtual network and subnet
Create a public IP address
Create a front-end VM
Secure network traffic
Create a back-end VM
Advance to the next tutorial to learn about securing data on virtual machines using Azure Backup.
Back up Windows virtual machines in Azure
Tutorial: Back up and restore files for Windows virtual
machines in Azure
11/2/2020 • 4 minutes to read
You can protect your data by taking backups at regular intervals. Azure Backup creates recovery points that are
stored in geo-redundant recovery vaults. When you restore from a recovery point, you can restore the whole VM
or specific files. This article explains how to restore a single file to a VM running Windows Server and IIS. If you
don't already have a VM to use, you can create one using the Windows quickstart. In this tutorial you learn how to:
Create a backup of a VM
Schedule a daily backup
Restore a file from a backup
Backup overview
When the Azure Backup service initiates a backup job, it triggers the backup extension to take a point-in-time
snapshot. The Azure Backup service uses the VMSnapshot extension. The extension is installed during the first VM
backup if the VM is running. If the VM is not running, the Backup service takes a snapshot of the underlying
storage (since no application writes occur while the VM is stopped).
When taking a snapshot of Windows VMs, the Backup service coordinates with the Volume Shadow Copy Service
(VSS) to get a consistent snapshot of the virtual machine's disks. Once the Azure Backup service takes the
snapshot, the data is transferred to the vault. To maximize efficiency, the service identifies and transfers only the
blocks of data that have changed since the previous backup.
When the data transfer is complete, the snapshot is removed and a recovery point is created.
Create a backup
Create a simple scheduled daily backup to a Recovery Services Vault.
1. Sign in to the Azure portal.
2. In the menu on the left, select Virtual machines.
3. From the list, select a VM to back up.
4. On the VM blade, in the Operations section, click Backup. The Enable backup blade opens.
5. In Recovery Services vault, click Create new and provide the name for the new vault. A new vault is
created in the same resource group and location as the virtual machine.
6. Under Choose backup policy, keep the default (New) DailyPolicy, and then click Enable Backup.
7. To create an initial recovery point, on the Backup blade click Backup now.
8. On the Backup Now blade, click the calendar icon, use the calendar control to choose how long the restore
point is retained, and click OK.
9. In the Backup blade for your VM, you'll see the number of restore points that are complete.
The first backup takes about 20 minutes. Proceed to the next part of this tutorial after your backup is finished.
Recover a file
If you accidentally delete or make changes to a file, you can use File Recovery to recover the file from your backup
vault. File Recovery uses a script that runs on the VM to mount the recovery point as a local drive. These drives
remain mounted for 12 hours so that you can copy files from the recovery point and restore them to the VM.
In this example, we show how to recover the image file that is used in the default web page for IIS.
1. Open a browser and connect to the IP address of the VM to show the default IIS page.
Next steps
In this tutorial, you learned how to:
Create a backup of a VM
Schedule a daily backup
Restore a file from a backup
Advance to the next tutorial to learn about monitoring virtual machines.
Govern virtual machines
Tutorial: Enable disaster recovery for Windows VMs
11/6/2020 • 5 minutes to read
This tutorial shows you how to set up disaster recovery for Azure VMs running Windows. In this article, learn how
to:
Enable disaster recovery for a Windows VM
Run a disaster recovery drill
Stop replicating the VM after the drill
When you enable replication for a VM, the Site Recovery Mobility service extension installs on the VM, and
registers it with Azure Site Recovery. During replication, VM disk writes are sent to a cache storage account in the
source region. Data is sent from there to the target region, and recovery points are generated from the data. When
you fail over a VM during disaster recovery, a recovery point is used to restore the VM in the target region.
If you don't have an Azure subscription, create a free account before you begin.
Prerequisites
1. Check that your Azure subscription allows you to create a VM in the target region. If you just created your
free Azure account, you're the administrator of the subscription, and you have the permissions you need.
2. If you're not the subscription administrator, work with the administrator to assign you:
Either the Virtual Machine Contributor built-in role, or specific permissions to:
Create a VM in the selected virtual network.
Write to an Azure storage account.
Write to an Azure-managed disk.
The Site Recovery Contributor built-in role, to manage Site Recovery operations in the vault.
3. We recommend you use a Windows VM running Windows Server 2012 or later. The VM disk shouldn't be
encrypted for the purpose of this tutorial.
4. If VM outbound connections use a URL-based proxy, make sure it can access these URLs. Using an
authenticated proxy isn't supported.
TAG ALLOW
AzureSiteRecovery tag Allows access to the Site Recovery service in any region.
6. On Windows VMs, install the latest Windows updates, to make sure that VMs have the latest root certificates.
6. Select Start replication. Deployment starts, and Site Recovery starts creating target resources. You can
monitor replication progress in the notifications.
Check VM status
After the replication job finishes, you can check the VM replication status.
1. Open the VM properties page.
2. In Operations, select Disaster recovery.
3. Expand the Essentials section to review defaults about the vault, replication policy, and target settings.
4. In Health and status, get information about replication state for the VM, the agent version, failover
readiness, and the latest recovery points.
5. In Infrastructure view, get a visual overview of source and target VMs, managed disks, and the cache
storage account.
Run a drill
Run a drill to make sure disaster recovery works as expected. When you run a test failover, it creates a copy of the
VM, with no impact on ongoing replication, or on your production environment.
1. In the VM disaster recovery page, select Test failover.
2. In Test failover, leave the default Latest processed (low RPO) setting for the recovery point.
This option provides the lowest recovery point objective (RPO), and generally the quickest spin up of the
target VM. It first processes all the data that has been sent to Site Recovery service, to create a recovery
point for each VM, before failing over to it. This recovery point has all the data replicated to Site Recovery
when the failover was triggered.
3. Select the virtual network in which the VM will be located after failover.
4. The test failover process begins. You can monitor the progress in notifications.
After the test failover completes, the VM is in the Cleanup test failover pending state on the Essentials page.
Clean up resources
The VM is automatically cleaned up by Site Recovery after the drill.
1. To begin automatic cleanup, select Cleanup test failover.
2. In Test failover cleanup, type in any notes you want to record for the failover, select Testing is
complete. Delete test failover virtual machine, and then select OK.
The Site Recovery extension installed on the VM during replication isn't removed automatically. If you disable
replication for the VM, and you don't want to replicate it again at a later time, you can remove the Site Recovery
extension manually, as follows:
1. Go to the VM > Settings > Extensions.
2. In the Extensions page, select each Microsoft.Azure.RecoveryServices entry.
3. In the properties page for the extension, select Uninstall.
Next steps
In this tutorial, you configured disaster recovery for an Azure VM, and ran a disaster recovery drill. Now, you can
perform a full failover for the VM.
Fail over a VM to another region
Tutorial: Learn about Windows virtual machine
management with Azure PowerShell
11/2/2020 • 10 minutes to read
When deploying resources to Azure, you have tremendous flexibility when deciding what types of resources to
deploy, where they are located, and how to set them up. However, that flexibility may open more options than you
would like to allow in your organization. As you consider deploying resources to Azure, you might be wondering:
How do I meet legal requirements for data sovereignty in certain countries/regions?
How do I control costs?
How do I ensure that someone does not inadvertently change a critical system?
How do I track resource costs and bill them accurately?
This article addresses those questions. Specifically, you:
Assign users to roles and assign the roles to a scope so users have permission to perform expected actions but
not more actions.
Apply policies that prescribe conventions for resources in your subscription.
Lock resources that are critical to your system.
Tag resources so you can track them by values that make sense to your organization.
This article focuses on the tasks you take to implement governance. For a broader discussion of the concepts, see
Governance in Azure.
Understand scope
Before creating any items, let's review the concept of scope. Azure provides four levels of management:
management groups, subscriptions, resource groups, and resources. Management groups are in a preview release.
The following image shows an example of these layers.
You apply management settings at any of these levels of scope. The level you select determines how widely the
setting is applied. Lower levels inherit settings from higher levels. When you apply a setting to the subscription,
that setting is applied to all resource groups and resources in your subscription. When you apply a setting to the
resource group, that setting is applied to the resource group and all its resources. However, another resource group
does not have that setting.
Usually, it makes sense to apply critical settings at higher levels and project-specific requirements at lower levels.
For example, you might want to make sure all resources for your organization are deployed to certain regions. To
accomplish this requirement, apply a policy to the subscription that specifies the allowed locations. As other users
in your organization add new resource groups and resources, the allowed locations are automatically enforced.
In this tutorial, you apply all management settings to a resource group so you can easily remove those settings
when done.
Let's create that resource group.
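The creation command itself is not shown above; a minimal sketch with New-AzResourceGroup, using the group name and location that appear later in this tutorial:

```powershell
# Create the resource group that holds everything in this tutorial
New-AzResourceGroup -Name myResourceGroup -Location eastus
```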
If you receive an error stating Principal <guid> does not exist in the directory, the new group hasn't propagated throughout Azure Active Directory. Try running the command again.
Typically, you repeat the process for Network Contributor and Storage Account Contributor to make sure users are
assigned to manage the deployed resources. In this article, you can skip those steps.
Azure Policy
Azure Policy helps you make sure all resources in your subscription meet corporate standards. Your subscription already has several policy definitions. To see the available policy definitions, use the Get-AzPolicyDefinition command:
You see the existing policy definitions. The policy type is either BuiltIn or Custom. Look through the definitions for ones that describe a condition you want to assign. In this article, you assign policies that:
Limit the locations for all resources.
Limit the SKUs for virtual machines.
Audit virtual machines that don't use managed disks.
In the following example, you retrieve three policy definitions based on the display name. You use the New-AzPolicyAssignment command to assign those definitions to the resource group. For some policies, you provide parameter values to specify the allowed values.
# Values to use for parameters
$locations = "eastus", "eastus2"
$skus = "Standard_DS1_v2", "Standard_E2s_v2"

# Get policy definitions for allowed locations, allowed SKUs, and auditing VMs that don't use managed disks
$locationDefinition = Get-AzPolicyDefinition | Where-Object {$_.Properties.DisplayName -eq "Allowed locations"}
$skuDefinition = Get-AzPolicyDefinition | Where-Object {$_.Properties.DisplayName -eq "Allowed virtual machine size SKUs"}
$auditDefinition = Get-AzPolicyDefinition | Where-Object {$_.Properties.DisplayName -eq "Audit VMs that do not use managed disks"}
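The assignment step described above might look like the following sketch. The assignment names are illustrative, and the parameter names listOfAllowedLocations and listOfAllowedSKUs are assumptions about the built-in policy definitions; verify them against the definitions returned by Get-AzPolicyDefinition before relying on them:

```powershell
# Use the resource group as the assignment scope
$rg = Get-AzResourceGroup -Name myResourceGroup

# Assign each definition, passing parameter values where the policy requires them
New-AzPolicyAssignment -Name "Set permitted locations" `
    -Scope $rg.ResourceId `
    -PolicyDefinition $locationDefinition `
    -PolicyParameterObject @{ listOfAllowedLocations = $locations }

New-AzPolicyAssignment -Name "Set permitted VM SKUs" `
    -Scope $rg.ResourceId `
    -PolicyDefinition $skuDefinition `
    -PolicyParameterObject @{ listOfAllowedSKUs = $skus }

New-AzPolicyAssignment -Name "Audit unmanaged disks" `
    -Scope $rg.ResourceId `
    -PolicyDefinition $auditDefinition
```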
After your deployment finishes, you can apply more management settings to the solution.
Lock resources
Resource locks prevent users in your organization from accidentally deleting or modifying critical resources. Unlike
role-based access control, resource locks apply a restriction across all users and roles. You can set the lock level to
CanNotDelete or ReadOnly.
To lock the virtual machine and network security group, use the New-AzResourceLock command:
# Add CanNotDelete lock to the VM
New-AzResourceLock -LockLevel CanNotDelete `
-LockName LockVM `
-ResourceName myVM `
-ResourceType Microsoft.Compute/virtualMachines `
-ResourceGroupName myResourceGroup
If you try to delete a locked resource, you see an error stating that the delete operation can't be completed because of a lock. The resource group can only be deleted if you specifically remove the locks. That step is shown in Clean up resources.
Tag resources
You apply tags to your Azure resources to logically organize them by categories. Each tag consists of a name and a
value. For example, you can apply the name "Environment" and the value "Production" to all the resources in
production.
To add two tags to a resource group, use the Set-AzResourceGroup command:
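A minimal sketch of that command; the tag names and values here are illustrative:

```powershell
# Apply two tags to the resource group (this overwrites any existing tags)
Set-AzResourceGroup -Name myResourceGroup -Tag @{ Dept = "IT"; Environment = "Test" }
```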
Let's suppose you want to add a third tag. Every time you apply tags to a resource or a resource group, you
overwrite the existing tags on that resource or resource group. To add a new tag without losing the existing tags,
you must retrieve the existing tags, add a new tag, and reapply the collection of tags:
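That retrieve-add-reapply pattern might look like the following sketch; the Project tag is illustrative:

```powershell
# Retrieve the existing tags, add one, and reapply the whole collection
$tags = (Get-AzResourceGroup -Name myResourceGroup).Tags
$tags.Add("Project", "Documentation")
Set-AzResourceGroup -Name myResourceGroup -Tag $tags
```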
Resources don't inherit tags from the resource group. Currently, your resource group has three tags but the
resources do not have any tags. To apply all tags from a resource group to its resources, and retain existing tags on
resources that are not duplicates, use the following script:
# Get the resource group
$group = Get-AzResourceGroup -Name myResourceGroup
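The rest of that script might look like the following sketch: for each resource, merge in any resource group tags the resource doesn't already have, then reapply the merged set:

```powershell
# For each resource, copy over resource group tags the resource doesn't already have
Get-AzResource -ResourceGroupName $group.ResourceGroupName | ForEach-Object {
    $resourceTags = (Get-AzResource -ResourceId $_.ResourceId).Tags
    if ($resourceTags) {
        foreach ($key in $group.Tags.Keys) {
            if (-not $resourceTags.ContainsKey($key)) {
                $resourceTags.Add($key, $group.Tags[$key])
            }
        }
        Set-AzResource -ResourceId $_.ResourceId -Tag $resourceTags -Force
    }
    else {
        # The resource has no tags yet; apply the resource group's tags as-is
        Set-AzResource -ResourceId $_.ResourceId -Tag $group.Tags -Force
    }
}
```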
Alternatively, you can apply tags from the resource group to the resources without keeping the existing tags:
# Find all the resources in the resource group, and for each resource apply the tags from the resource group
Get-AzResource -ResourceGroupName $group.ResourceGroupName | ForEach-Object {
    Set-AzResource -ResourceId $_.ResourceId -Tag $group.Tags -Force
}
To add a new tag with several values without losing the existing tags, you must retrieve the existing tags, use a
JSON string for the new tag, and reapply the collection of tags:
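That pattern might look like this sketch; the CostCenter tag and its JSON value are illustrative:

```powershell
# A single tag value can hold several values packed into a JSON string
$tags = (Get-AzResourceGroup -Name myResourceGroup).Tags
$tags.Add("CostCenter", '{"Dept":"IT","Environment":"Test"}')
Set-AzResourceGroup -Name myResourceGroup -Tag $tags
```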
You can use the returned values for management tasks like stopping all virtual machines with a tag value.
Clean up resources
The locked network security group can't be deleted until the lock is removed. To remove the lock, use the Remove-
AzResourceLock command:
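A sketch of the removal, using the LockVM lock created earlier. If you also locked the network security group, remove that lock the same way; the LockNSG name below is an assumption for illustration:

```powershell
# Remove the lock on the VM
Remove-AzResourceLock -LockName LockVM `
    -ResourceName myVM `
    -ResourceType Microsoft.Compute/virtualMachines `
    -ResourceGroupName myResourceGroup

# Remove the lock on the network security group (lock name assumed)
Remove-AzResourceLock -LockName LockNSG `
    -ResourceName myNetworkSecurityGroup `
    -ResourceType Microsoft.Network/networkSecurityGroups `
    -ResourceGroupName myResourceGroup
```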
When no longer needed, you can use the Remove-AzResourceGroup command to remove the resource group, VM,
and all related resources.
Manage costs
Azure services cost money. Azure Cost Management helps you set budgets and configure alerts to keep spending
under control. Analyze, manage, and optimize your Azure costs with Cost Management. To learn more, see the
quickstart on analyzing your costs.
Next steps
In this tutorial, you applied governance controls to a resource group and its resources. You learned how to:
Assign users to a role
Apply policies that enforce standards
Protect critical resources with locks
Tag resources for billing and management
Advance to the next tutorial to learn how to identify changes and manage package updates on a Windows virtual machine.
Manage virtual machines
Tutorial: Monitor changes and update a Windows
virtual machine in Azure
11/2/2020 • 9 minutes to read
With Azure Change Tracking and Update Management, you can easily identify changes in your Windows virtual
machines in Azure and manage operating system updates for those VMs.
In this tutorial, you learn how to:
Manage Windows updates.
Monitor changes and inventory.
$cred = Get-Credential
Next, create the VM with New-AzVM. The following example creates a VM named myVM in the East US location. If
they don't already exist, the resource group myResourceGroupMonitor and supporting network resources are created:
New-AzVm `
-ResourceGroupName "myResourceGroupMonitor" `
-Name "myVM" `
-Location "East US" `
-Credential $cred
Schedule settings: Choose the time to start, and select either Once or Recurring.
Pre-scripts + Post-scripts: Choose the scripts to run before and after your deployment.
Maintenance window: Enter the number of minutes set for updates. Valid values range from 30 to 360 minutes.
Reboot control: Select how reboots are handled. Available selections are Reboot if required (the default), Always reboot, Never reboot, and Only reboot. If you select Only reboot, updates aren't installed.
After you have finished configuring the schedule, click Create to return to the status dashboard. The Scheduled
table shows the deployment schedule you created.
You can also create update deployments programmatically. To learn how to create an update deployment with the
REST API, see Software Update Configurations - Create. There's also a sample runbook that you can use to create a
weekly update deployment. To learn more about this runbook, see Create a weekly update deployment for one or
more VMs in a resource group.
View results of an update deployment
After the scheduled deployment starts, you can see the deployment status in the Update deployments tab of the
Update management window.
If the deployment is currently running, its status shows as "In progress." After successful completion, the status
changes to "Succeeded." But if any updates in the deployment fail, the status is "Partially failed."
Select the completed update deployment to see the dashboard for that deployment.
The Update results tile shows a summary of the total number of updates and deployment results on the VM. The
table to the right shows a detailed breakdown of each update and the installation results. Each result has one of the
following values:
Not attempted: The update wasn't installed because there wasn't enough time available, based on the defined maintenance-window duration.
Succeeded: The update succeeded.
Failed: The update failed.
Select All logs to see all log entries that the deployment created.
Select the Output tile to see the job stream of the runbook responsible for managing the update deployment on
the target VM.
Select Errors to see detailed information about any deployment errors.
The previous chart shows changes that have occurred over time. After you add an Azure Activity Log connection,
the line graph at the top displays Azure Activity Log events.
Each row of bar graphs represents a different trackable change type. These types are Linux daemons, files,
Windows registry keys, software, and Windows services. The Change tab shows the change details. Changes
appear in the order of when each occurred, with the most recent change shown first.
Next steps
In this tutorial, you configured and reviewed Change Tracking and Update Management for your VM. You learned
how to:
Create a resource group and VM.
Manage Windows updates.
Monitor changes and inventory.
Go to the next tutorial to learn about monitoring your VM.
Monitor virtual machines
Tutorial: Monitor a Windows virtual machine in Azure
11/2/2020 • 4 minutes to read
Azure monitoring uses agents to collect boot and performance data from Azure VMs, store this data in Azure storage, and make it accessible through the Azure portal, the Azure PowerShell module, and the Azure CLI. Advanced monitoring is delivered with Azure Monitor for VMs, which collects performance metrics, discovers application components installed on the VM, and provides performance charts and a dependency map.
In this tutorial, you learn how to:
Enable boot diagnostics on a VM
View boot diagnostics
View VM host metrics
Enable Azure Monitor for VMs
View VM performance metrics
Create an alert
$cred = Get-Credential
Now create the VM with New-AzVM. The following example creates a VM named myVM in the East US location. If they do not already exist, the resource group myResourceGroupMonitor and supporting network resources are created:
New-AzVm `
-ResourceGroupName "myResourceGroupMonitor" `
-Name "myVM" `
-Location "East US" `
-Credential $cred
NOTE
To create a new Log Analytics workspace to store the monitoring data from the VM, see Create a Log Analytics
workspace. The workspace must belong to one of the supported regions.
After you've enabled monitoring, you might need to wait several minutes before you can view the performance
metrics for the VM.
Create alerts
You can create alerts based on specific performance metrics. Alerts can be used to notify you when average CPU
usage exceeds a certain threshold or available free disk space drops below a certain amount, for example. Alerts
are displayed in the Azure portal or can be sent via email. You can also trigger Azure Automation runbooks or
Azure Logic Apps in response to alerts being generated.
The following example creates an alert for average CPU usage.
1. In the Azure portal, click Resource Groups, select myResourceGroupMonitor, and then select myVM in the resource list.
2. Click Alert rules on the VM blade, then click Add metric alert across the top of the alerts blade.
3. Provide a Name for your alert, such as myAlertRule.
4. To trigger an alert when CPU percentage exceeds 1.0 for five minutes, leave all the other defaults selected.
5. Optionally, check the box for Email owners, contributors, and readers to send an email notification. The default action is to present a notification in the portal.
6. Click the OK button.
Next steps
In this tutorial, you configured and viewed performance of your VM. You learned how to:
Create a resource group and VM
Enable boot diagnostics on the VM
View boot diagnostics
View host metrics
Enable Azure Monitor for VMs
View VM metrics
Create an alert
Advance to the next tutorial to learn about Azure Security Center.
Manage VM security
Tutorial: Use Azure Security Center to monitor
Windows virtual machines
11/2/2020 • 4 minutes to read
Azure Security Center can help you gain visibility into your Azure resource security practices. Security Center offers
integrated security monitoring. It can detect threats that otherwise might go unnoticed. In this tutorial, you learn
about Azure Security Center, and how to:
Set up data collection
Set up security policies
View and fix configuration health issues
Review detected threats
Security Center goes beyond data discovery to provide recommendations for issues that it detects. For example, if a
VM was deployed without an attached network security group, Security Center displays a recommendation, with
remediation steps you can take. You get automated remediation without leaving the context of Security Center.
In this tutorial, we install a SQL, IIS, .NET stack using Azure PowerShell. This stack consists of two VMs running
Windows Server 2016, one with IIS and .NET and the other with SQL Server.
Create a VM
Install IIS and the .NET Core SDK on the VM
Create a VM running SQL Server
Install the SQL Server extension
Create an IIS VM
In this example, we use the New-AzVM cmdlet in the PowerShell Cloud Shell to quickly create a Windows Server 2016 VM and then install IIS and the .NET Framework. The IIS and SQL VMs share a resource group and virtual network, so we create variables for those names.
$vmName = "IISVM"
$vNetName = "myIISSQLvNet"
$resourceGroup = "myIISSQLGroup"
New-AzVm `
-ResourceGroupName $resourceGroup `
-Name $vmName `
-Location "East US" `
-VirtualNetworkName $vNetName `
-SubnetName "myIISSubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-AddressPrefix 192.168.0.0/16 `
-PublicIpAddressName "myIISPublicIpAddress" `
-OpenPorts 80,3389
Install IIS and the .NET Framework using the Custom Script Extension with the Set-AzVMExtension cmdlet.
Set-AzVMExtension `
-ResourceGroupName $resourceGroup `
-ExtensionName IIS `
-VMName $vmName `
-Publisher Microsoft.Compute `
-ExtensionType CustomScriptExtension `
-TypeHandlerVersion 1.4 `
-SettingString '{"commandToExecute":"powershell Add-WindowsFeature Web-Server,Web-Asp-Net45,NET-Framework-Features"}' `
-Location EastUS
$vNet = Get-AzVirtualNetwork `
-Name $vNetName `
-ResourceGroupName $resourceGroup
Add-AzVirtualNetworkSubnetConfig `
-AddressPrefix 192.168.0.0/24 `
-Name mySQLSubnet `
-VirtualNetwork $vNet `
-ServiceEndpoint Microsoft.Sql
Update the vNet with the new subnet information using Set-AzVirtualNetwork
$vNet | Set-AzVirtualNetwork
Azure SQL VM
Use a preconfigured Azure Marketplace image of SQL Server to create the SQL VM. We first create the VM, then we install the SQL Server extension on the VM.
New-AzVm `
-ResourceGroupName $resourceGroup `
-Name "mySQLVM" `
-ImageName "MicrosoftSQLServer:SQL2016SP1-WS2016:Enterprise:latest" `
-Location eastus `
-VirtualNetworkName $vNetName `
-SubnetName "mySQLSubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-PublicIpAddressName "mySQLPublicIpAddress" `
-OpenPorts 3389,1401
Use Set-AzVMSqlServerExtension to add the SQL Server extension to the SQL VM.
Set-AzVMSqlServerExtension `
-ResourceGroupName $resourceGroup `
-VMName mySQLVM `
-Name "SQLExtension" `
-Location "EastUS"
Next steps
In this tutorial, you installed a SQL\IIS\.NET stack using Azure PowerShell. You learned how to:
Create a VM
Install IIS and the .NET Core SDK on the VM
Create a VM running SQL Server
Install the SQL Server extension
Advance to the next tutorial to learn how to secure an IIS web server with TLS/SSL certificates.
Secure IIS web server with TLS/SSL certificates
Tutorial: Secure a web server on a Windows virtual
machine in Azure with TLS/SSL certificates stored in
Key Vault
11/2/2020 • 4 minutes to read
NOTE
Currently this doc only works for generalized images. If you attempt this tutorial using a specialized disk, you will receive an error.
To secure web servers, a Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL), certificate
can be used to encrypt web traffic. These TLS/SSL certificates can be stored in Azure Key Vault, and allow secure
deployments of certificates to Windows virtual machines (VMs) in Azure. In this tutorial you learn how to:
Create an Azure Key Vault
Generate or upload a certificate to the Key Vault
Create a VM and install the IIS web server
Inject the certificate into the VM and configure IIS with a TLS binding
Overview
Azure Key Vault safeguards cryptographic keys and secrets, such as certificates or passwords. Key Vault helps
streamline the certificate management process and enables you to maintain control of keys that access those
certificates. You can create a self-signed certificate inside Key Vault, or upload an existing, trusted certificate that
you already own.
Rather than using a custom VM image that includes certificates baked-in, you inject certificates into a running VM.
This process ensures that the most up-to-date certificates are installed on a web server during deployment. If you
renew or replace a certificate, you don't also have to create a new custom VM image. The latest certificates are
automatically injected as you create additional VMs. During the whole process, the certificates never leave the
Azure platform or are exposed in a script, command-line history, or template.
Next, create a Key Vault with New-AzKeyVault. Each Key Vault requires a unique name and should be all lowercase. Replace mykeyvault in the following example with your own unique Key Vault name:
$keyvaultName="mykeyvault"
New-AzKeyVault -VaultName $keyvaultName `
-ResourceGroupName $resourceGroup `
-Location $location `
-EnabledForDeployment
$policy = New-AzKeyVaultCertificatePolicy `
-SubjectName "CN=www.contoso.com" `
-SecretContentType "application/x-pkcs12" `
-IssuerName Self `
-ValidityInMonths 12
Add-AzKeyVaultCertificate `
-VaultName $keyvaultName `
-Name "mycert" `
-CertificatePolicy $policy
$cred = Get-Credential
Now you can create the VM with New-AzVM. The following example creates a VM named myVM in the EastUS
location. If they do not already exist, the supporting network resources are created. To allow secure web traffic, the
cmdlet also opens port 443.
# Create a VM
New-AzVm `
-ResourceGroupName $resourceGroup `
-Name "myVM" `
-Location $location `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-PublicIpAddressName "myPublicIpAddress" `
-Credential $cred `
-OpenPorts 443
It takes a few minutes for the VM to be created. The last step uses the Azure Custom Script Extension to install the
IIS web server with Set-AzVmExtension.
$PublicSettings = '{
    "fileUris":["https://raw.githubusercontent.com/Azure-Samples/compute-automation-configurations/master/secure-iis.ps1"],
    "commandToExecute":"powershell -ExecutionPolicy Unrestricted -File secure-iis.ps1"
}'
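The Set-AzVMExtension call itself is not shown above; a sketch of what it might look like, where the extension name and the type handler version are assumptions to verify against the current Custom Script Extension documentation:

```powershell
Set-AzVMExtension -ResourceGroupName $resourceGroup `
    -ExtensionName "IIS" `
    -VMName "myVM" `
    -Location $location `
    -Publisher "Microsoft.Compute" `
    -ExtensionType "CustomScriptExtension" `
    -TypeHandlerVersion 1.8 `
    -SettingString $PublicSettings
```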
Now you can open a web browser and enter https://<myPublicIP> in the address bar. To accept the security warning if you used a self-signed certificate, select Details and then Go on to the webpage:
The following table provides links to PowerShell script samples that create and manage Windows virtual
machines (VMs).
Quickly create a virtual machine: Creates a resource group, a virtual machine, and all related resources, with a minimum of prompts.
Create a fully configured virtual machine: Creates a resource group, a virtual machine, and all related resources.
Create highly available virtual machines: Creates several virtual machines in a highly available and load-balanced configuration.
Create a VM and run a configuration script: Creates a virtual machine and uses the Azure Custom Script Extension to install IIS.
Create a VM and run a DSC configuration: Creates a virtual machine and uses the Azure Desired State Configuration (DSC) extension to install IIS.
Upload a VHD and create VMs: Uploads a local VHD file to Azure, creates an image from the VHD, and then creates a VM from that image.
Create a VM from a managed OS disk: Creates a virtual machine by attaching an existing managed disk as the OS disk.
Create a VM from a snapshot: Creates a virtual machine from a snapshot by first creating a managed disk from the snapshot and then attaching the new managed disk as the OS disk.
Manage storage
Create a managed disk from a VHD in the same or a different subscription: Creates a managed disk from a specialized VHD as an OS disk, or from a data VHD as a data disk, in the same or a different subscription.
Create a managed disk from a snapshot: Creates a managed disk from a snapshot.
Copy a managed disk to the same or a different subscription: Copies a managed disk to the same or a different subscription that is in the same region as the parent managed disk.
Export a snapshot as a VHD to a storage account: Exports a managed snapshot as a VHD to a storage account in a different region.
Export the VHD of a managed disk to a storage account: Exports the underlying VHD of a managed disk to a storage account in a different region.
Create a snapshot from a VHD: Creates a snapshot from a VHD and then uses that snapshot to create multiple identical managed disks quickly.
Copy a snapshot to the same or a different subscription: Copies a snapshot to the same or a different subscription that is in the same region as the parent snapshot.
Encrypt a VM and its data disks: Creates an Azure key vault, an encryption key, and a service principal, and then encrypts a VM.
Monitor a VM with Azure Monitor: Creates a virtual machine, installs the Azure Log Analytics agent, and enrolls the VM in a Log Analytics workspace.
Collect details about all VMs in a subscription with PowerShell: Creates a CSV that contains the VM name, resource group name, region, virtual network, subnet, private IP address, OS type, and public IP address of the VMs in the provided subscription.
Create a virtual machine with PowerShell
11/2/2020 • 2 minutes to read
This script creates an Azure Virtual Machine running Windows Server 2016. After running the script, you can
access the virtual machine over RDP.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
# Variables for common values
$resourceGroup = "myResourceGroup"
$location = "westeurope"
$vmName = "myVM"
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
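Assuming the variable values defined above, that command might look like:

```powershell
# Delete the resource group and everything in it
Remove-AzResourceGroup -Name $resourceGroup
```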
Script explanation
This script uses the following commands to create the deployment. Each item in the table links to command
specific documentation.
COMMAND NOTES
Next steps
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Create a fully configured virtual machine with
PowerShell
11/2/2020 • 2 minutes to read
This script creates an Azure Virtual Machine running Windows Server 2016. After running the script, you can
access the virtual machine over RDP.
This sample requires Azure PowerShell Az 1.0 or later. Run Get-Module -ListAvailable Az to see which versions are
installed. If you need to install, see Install Azure PowerShell module.
Run Connect-AzAccount to sign in to Azure.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
# Variables for common values
$resourceGroup = "myResourceGroup"
$location = "westeurope"
$vmName = "myVM"
# Create a virtual network card and associate with public IP address and NSG
$nic = New-AzNetworkInterface -Name myNic -ResourceGroupName $resourceGroup -Location $location `
-SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Script explanation
This script uses the following commands to create the deployment. Each item in the table links to command
specific documentation.
COMMAND NOTES
Next steps
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Load balance traffic between highly available virtual
machines
11/2/2020 • 5 minutes to read
This script sample creates everything needed to run several Windows Server 2016 virtual machines configured in
a highly available and load balanced configuration. After running the script, you will have three virtual machines,
joined to an Azure Availability Set, and accessible through an Azure Load Balancer.
This sample requires Azure PowerShell Az 1.0 or later. Run Get-Module -ListAvailable Az to see which versions are
installed. If you need to install, see Install Azure PowerShell module.
Run Connect-AzAccount to sign in to Azure.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
# Variables for common values
$rgName='MyResourceGroup'
$location='eastus'
# Create three virtual network cards and associate with public IP address and NSG.
$nicVM1 = New-AzNetworkInterface -ResourceGroupName $rgName -Location $location `
-Name 'MyNic1' -LoadBalancerBackendAddressPool $bepool -NetworkSecurityGroup $nsg `
-LoadBalancerInboundNatRule $natrule1 -Subnet $vnet.Subnets[0]
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Script explanation
This script uses the following commands to create the deployment. Each item in the table links to command
specific documentation.
COMMAND NOTES
You can also create the VMs using your own custom managed image. In the VM configuration, for Set-AzVMSourceImage, use the -Id and -VM parameters instead of -PublisherName, -Offer, -Skus, and -Version. For example, creating the VM config would be:
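A sketch of what that config might look like, assuming a custom image named myImage already exists in the resource group (the image and VM names are illustrative):

```powershell
# Look up the custom image, then build the VM config from its Id
$image = Get-AzImage -ImageName "myImage" -ResourceGroupName $rgName
$vmConfig = New-AzVMConfig -VMName "myVM" -VMSize "Standard_DS1_v2"
$vmConfig = Set-AzVMSourceImage -VM $vmConfig -Id $image.Id
```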
Next steps
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Create an IIS VM with PowerShell
11/2/2020 • 2 minutes to read
This script creates an Azure Virtual Machine running Windows Server 2016, and then uses the Azure Virtual
Machine Custom Script Extension to install IIS. After running the script, you can access the default IIS website on
the public IP address of the virtual machine.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
# Variables for common values
$resourceGroup = "myResourceGroup"
$location = "westeurope"
$vmName = "myVM"
# Install IIS
$PublicSettings = '{"commandToExecute":"powershell Add-WindowsFeature Web-Server"}'
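The $PublicSettings string is then passed to Set-AzVMExtension; a sketch of that call, where the extension name and type handler version are assumptions to verify against the current Custom Script Extension documentation:

```powershell
Set-AzVMExtension -ExtensionName "IIS" `
    -ResourceGroupName $resourceGroup `
    -VMName $vmName `
    -Location $location `
    -Publisher "Microsoft.Compute" `
    -ExtensionType "CustomScriptExtension" `
    -TypeHandlerVersion 1.8 `
    -SettingString $PublicSettings
```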
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Script explanation
This script uses the following commands to create the deployment. Each item in the table links to command
specific documentation.
COMMAND NOTES
Next steps
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Use an Azure PowerShell sample script to create an
IIS VM
11/2/2020 • 2 minutes to read
This script creates an Azure Virtual Machine running Windows Server 2016, and then uses the Azure Virtual
Machine DSC Extension to install IIS. After running the script, you can access the default IIS website on the public IP
address of the virtual machine.
This sample requires Azure PowerShell Az 1.0 or later. Run Get-Module -ListAvailable Az to see which versions are
installed. If you need to install, see Install Azure PowerShell module.
Run Connect-AzAccount to sign in to Azure.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
# Variables for common values
$resourceGroup = "myResourceGroup"
$location = "westeurope"
$vmName = "myVM"
# Install IIS
$PublicSettings = '{"ModulesURL":"https://github.com/Azure/azure-quickstart-templates/raw/master/dsc-extension-iis-server-windows-vm/ContosoWebsite.ps1.zip", "configurationFunction": "ContosoWebsite.ps1\\ContosoWebsite", "Properties": {"MachineName": "myVM"} }'
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Remove-AzResourceGroup -Name myResourceGroup
Script explanation
This script uses the following commands to create the deployment. Each item in the table links to command
specific documentation.
COMMAND NOTES
Next steps
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Sample script to upload a VHD to Azure and create a
new VM
11/2/2020 • 3 minutes to read
This script takes a local .vhd file from a generalized VM, uploads it to Azure, creates a managed disk image, and uses that image to create a new VM.
This sample requires Azure PowerShell Az 1.0 or later. Run Get-Module -ListAvailable Az to see which versions are
installed. If you need to install, see Install Azure PowerShell module.
Run Connect-AzAccount to sign in to Azure.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
# Provide values for the variables
$resourceGroup = 'myResourceGroup'
$location = 'EastUS'
$storageaccount = 'mystorageaccount'
$storageType = 'Standard_LRS'
$containername = 'mycontainer'
$localPath = 'C:\Users\Public\Documents\Hyper-V\VHDs\generalized.vhd'
$vmName = 'myVM'
$imageName = 'myImage'
$vhdName = 'myUploadedVhd.vhd'
$diskSizeGB = '128'
$subnetName = 'mySubnet'
$vnetName = 'myVnet'
$ipName = 'myPip'
$nicName = 'myNic'
$nsgName = 'myNsg'
$ruleName = 'myRdpRule'
$computerName = 'myComputerName'
$vmSize = 'Standard_DS1_v2'
# Get the username and password to be used for the administrator account on the VM.
# This is used when connecting to the VM using RDP.
$cred = Get-Credential
# Create the VM
New-AzVM -VM $vm -ResourceGroupName $resourceGroup -Location $location
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Script explanation
This script uses the following commands to create the deployment. Each item in the table links to command
specific documentation.
COMMAND NOTES
Next steps
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Create a virtual machine using an existing managed
OS disk with PowerShell
11/2/2020 • 2 minutes to read
This script creates a virtual machine by attaching an existing managed disk as the OS disk. Use this script in the following scenarios:
Create a VM from an existing managed OS disk that was copied from a managed disk in a different subscription
Create a VM from an existing managed disk that was created from a specialized VHD file
Create a VM from an existing managed OS disk that was created from a snapshot
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id
$subscriptionId = 'yourSubscriptionId'
#Provide the name of the snapshot that will be used to create OS disk
$snapshotName = 'yourSnapshotName'
#Provide the name of the OS disk that will be created using the snapshot
$osDiskName = 'yourOSDiskName'
#Provide the name of an existing virtual network where virtual machine will be created
$virtualNetworkName = 'yourVNETName'
#Set the context to the subscription Id where Managed Disk will be created
Select-AzSubscription -SubscriptionId $SubscriptionId
#Use the Managed Disk Resource Id to attach it to the virtual machine. Change the -Windows switch to -Linux if the OS disk contains a Linux OS
$VirtualMachine = Set-AzVMOSDisk -VM $VirtualMachine -ManagedDiskId $disk.Id -CreateOption Attach -Windows
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
COMMAND NOTES
Get-AzDisk Gets the disk object based on the name and resource group of a disk. The Id property of the returned disk object is used to attach the disk to a new VM.
Next steps
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Create a virtual machine from a snapshot with
PowerShell (Windows)
11/2/2020 • 2 minutes to read
Sample script
#Provide the subscription Id
$subscriptionId = 'yourSubscriptionId'
#Provide the name of the snapshot that will be used to create OS disk
$snapshotName = 'yourSnapshotName'
#Provide the name of the OS disk that will be created using the snapshot
$osDiskName = 'yourOSDiskName'
#Provide the name of an existing virtual network where virtual machine will be created
$virtualNetworkName = 'yourVNETName'
#Set the context to the subscription Id where Managed Disk will be created
Select-AzSubscription -SubscriptionId $SubscriptionId
#Use the Managed Disk Resource Id to attach it to the virtual machine. Change the -Windows switch to -Linux if the OS disk contains a Linux OS
$VirtualMachine = Set-AzVMOSDisk -VM $VirtualMachine -ManagedDiskId $disk.Id -CreateOption Attach -Windows
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
COMMAND NOTES
Next steps
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Create a managed disk from a VHD file in a storage
account in same or different subscription with
PowerShell
11/2/2020 • 2 minutes to read
This script creates a managed disk from a VHD file in a storage account in the same or a different subscription. Use this
script to import a specialized (not generalized/sysprepped) VHD to a managed OS disk to create a virtual machine.
Also, use it to import a data VHD to a managed data disk.
Don't create multiple identical managed disks from a VHD file in a short amount of time. To create managed disks
from a VHD file, a blob snapshot of the VHD file is created and then used to create the managed disks. Only one blob
snapshot can be created per minute, which causes disk-creation failures due to throttling. To avoid this throttling,
create a managed snapshot from the VHD file and then use the managed snapshot to create multiple managed
disks in a short amount of time.
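The snapshot-first flow above can be sketched with the Azure CLI equivalents. The resource names below are placeholders, and the commands are echoed rather than executed so the sequence can be inspected without a subscription:

```shell
#!/bin/bash
# Dry-run sketch of the snapshot-first flow; resource names are hypothetical.
# Swap 'run() { echo "+ $*"; }' for 'run() { "$@"; }' to execute against a real subscription.
run() { echo "+ $*"; }

sourceVHDURI='https://contosostorageaccount1.blob.core.windows.net/vhds/contosovhd123.vhd'

# The blob snapshot is taken only once, when the managed snapshot is created...
run az snapshot create -g myResourceGroup -n mySourceSnapshot --source "$sourceVHDURI"

# ...then any number of identical managed disks come from the managed snapshot,
# sidestepping the one-blob-snapshot-per-minute throttle described above.
for i in 1 2 3; do
  run az disk create -g myResourceGroup -n myDisk$i --source mySourceSnapshot
done
```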
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id where Managed Disks will be created
$subscriptionId = 'yourSubscriptionId'
#Provide the name of your resource group where Managed Disks will be created.
$resourceGroupName ='yourResourceGroupName'
#Provide the size of the disks in GB. It should be greater than the VHD file size.
$diskSize = '128'
#Provide the Azure region (e.g. westus) where Managed Disk will be located.
#This location should be same as the storage account where VHD file is stored
#Get all the Azure location using command below:
#Get-AzLocation
$location = 'westus'
#Provide the URI of the VHD file (page blob) in a storage account. Please note that this is NOT the SAS URI of the storage container where the VHD file is stored.
#e.g. https://contosostorageaccount1.blob.core.windows.net/vhds/contosovhd123.vhd
#Note: VHD file can be deleted as soon as Managed Disk is created.
$sourceVHDURI = 'https://contosostorageaccount1.blob.core.windows.net/vhds/contosovhd123.vhd'
#Provide the resource Id of the storage account where the VHD file is stored.
#e.g. /subscriptions/6472s1g8-h217-446b-b509-314e17e1efb0/resourceGroups/MDDemo/providers/Microsoft.Storage/storageAccounts/contosostorageaccount
#This is an optional parameter if you are creating the managed disk in the same subscription
$storageAccountId = '/subscriptions/yourSubscriptionId/resourceGroups/yourResourceGroupName/providers/Microsoft.Storage/storageAccounts/yourStorageAccountName'
#Set the context to the subscription Id where Managed Disk will be created
Select-AzSubscription -SubscriptionId $SubscriptionId
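The $storageAccountId value follows the fixed Azure resource Id scheme, so it can be assembled from its parts rather than pasted by hand. A minimal sketch, using the same placeholder names as the script:

```shell
#!/bin/bash
subscriptionId='yourSubscriptionId'
resourceGroupName='yourResourceGroupName'
storageAccountName='yourStorageAccountName'

# Azure resource Ids follow /subscriptions/<sub>/resourceGroups/<rg>/providers/<provider>/<type>/<name>
storageAccountId="/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Storage/storageAccounts/$storageAccountName"
echo "$storageAccountId"
```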
Script explanation
This script uses the following commands to create a managed disk from a VHD in a different subscription. Each
command in the table links to command-specific documentation.
COMMAND NOTES
Next steps
Create a virtual machine by attaching a managed disk as OS disk
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Create a managed disk from a snapshot with
PowerShell
11/2/2020 • 2 minutes to read
This script creates a managed disk from a snapshot. Use it to restore a virtual machine from snapshots of OS and
data disks. Create OS and data managed disks from respective snapshots and then create a new virtual machine by
attaching managed disks. You can also restore data disks of an existing VM by attaching data disks created from
snapshots.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id
$subscriptionId = 'yourSubscriptionId'
#Provide the name of the snapshot that will be used to create Managed Disks
$snapshotName = 'yourSnapshotName'
#Provide the size of the disks in GB. It should be greater than the size of the snapshot.
$diskSize = '128'
#Provide the Azure region (e.g. westus) where Managed Disks will be located.
#This location should be same as the snapshot location
#Get all the Azure location using command below:
#Get-AzLocation
$location = 'westus'
#Set the context to the subscription Id where Managed Disk will be created
Select-AzSubscription -SubscriptionId $SubscriptionId
Script explanation
This script uses the following commands to create a managed disk from a snapshot. Each command in the table links
to command-specific documentation.
COMMAND NOTES
Next steps
Create a virtual machine from a managed disk
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Copy managed disks in the same subscription or
different subscription with PowerShell
11/2/2020 • 2 minutes to read
This script creates a copy of an existing managed disk in the same or a different subscription. The new
disk is created in the same region as the parent managed disk.
If needed, install the Azure PowerShell module using the instructions found in the Azure PowerShell guide, and
then run Connect-AzAccount to create a connection with Azure. Also, you need to have an SSH public key named
id_rsa.pub in the .ssh directory of your user profile.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id of the subscription where managed disk exists
$sourceSubscriptionId='yourSourceSubscriptionId'
#Provide the name of your resource group where managed disk exists
$sourceResourceGroupName='mySourceResourceGroupName'
#Provide the subscription Id of the subscription where managed disk will be copied to
#If managed disk is copied to the same subscription then you can skip this step
$targetSubscriptionId='yourTargetSubscriptionId'
#Set the context to the subscription Id where managed disk will be copied to
#If snapshot is copied to the same subscription then you can skip this step
Select-AzSubscription -SubscriptionId $targetSubscriptionId
#Create a new managed disk in the target subscription and resource group
New-AzDisk -Disk $diskConfig -DiskName $managedDiskName -ResourceGroupName $targetResourceGroupName
Script explanation
This script uses the following commands to create a new managed disk in the target subscription using the Id of the
source managed disk. Each command in the table links to command-specific documentation.
COMMAND NOTES
Next steps
Create a virtual machine from a managed disk
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Export/Copy managed snapshots as VHD to a
storage account in different region with PowerShell
11/2/2020 • 2 minutes to read
This script exports a managed snapshot to a storage account in a different region. It first generates the SAS URI of
the snapshot and then uses it to copy the snapshot to a storage account in a different region. Use this script to
maintain backups of your managed disks in a different region for disaster recovery.
If needed, install the Azure PowerShell module using the instructions found in the Azure PowerShell guide, and
then run Connect-AzAccount to create a connection with Azure. Also, you need to have an SSH public key named
id_rsa.pub in the .ssh directory of your user profile.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id of the subscription where snapshot is created
$subscriptionId = "yourSubscriptionId"
#Provide the Shared Access Signature (SAS) expiry duration in seconds, e.g. 3600.
#Know more about SAS here: https://docs.microsoft.com/en-us/Az.Storage/storage-dotnet-shared-access-signature-part-1
$sasExpiryDuration = "3600"
#Provide storage account name where you want to copy the snapshot.
$storageAccountName = "yourstorageaccountName"
#Name of the storage container where the downloaded snapshot will be stored
$storageContainerName = "yourstoragecontainername"
#Provide the key of the storage account where you want to copy snapshot.
$storageAccountKey = 'yourStorageAccountKey'
#Provide the name of the VHD file to which snapshot will be copied.
$destinationVHDFileName = "yourvhdfilename"
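The copy destination the variables above describe is an ordinary Azure blob URL, which follows a fixed scheme. A quick sketch of the URL the copied VHD will land at, using the same placeholder names:

```shell
#!/bin/bash
storageAccountName='yourstorageaccountName'
storageContainerName='yourstoragecontainername'
destinationVHDFileName='yourvhdfilename'

# Standard Azure blob URL scheme: https://<account>.blob.core.windows.net/<container>/<blob>
destinationBlobURL="https://$storageAccountName.blob.core.windows.net/$storageContainerName/$destinationVHDFileName"
echo "$destinationBlobURL"
```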
COMMAND NOTES
Next steps
Create a managed disk from a VHD
Create a virtual machine from a managed disk
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Linux VM documentation.
Export/Copy the VHD of a managed disk to a
storage account in different region with PowerShell
(Windows)
11/2/2020 • 2 minutes to read
This script exports the VHD of a managed disk to a storage account in a different region. It first generates the SAS
URI of the managed disk and then uses it to copy the underlying VHD to a storage account in a different region. Use
this script to copy managed disks to another region for regional expansion.
If needed, install the Azure PowerShell module using the instructions found in the Azure PowerShell guide, and
then run Connect-AzAccount to create a connection with Azure. Also, you need to have an SSH public key named
id_rsa.pub in the .ssh directory of your user profile.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id of the subscription where managed disk is created
$subscriptionId = "yourSubscriptionId"
#Provide the Shared Access Signature (SAS) expiry duration in seconds, e.g. 3600.
#Know more about SAS here: https://docs.microsoft.com/en-us/Az.Storage/storage-dotnet-shared-access-signature-part-1
$sasExpiryDuration = "3600"
#Provide storage account name where you want to copy the underlying VHD of the managed disk.
$storageAccountName = "yourstorageaccountName"
#Name of the storage container where the downloaded VHD will be stored
$storageContainerName = "yourstoragecontainername"
#Provide the key of the storage account where you want to copy the VHD of the managed disk.
$storageAccountKey = 'yourStorageAccountKey'
#Provide the name of the destination VHD file to which the VHD of the managed disk will be copied.
$destinationVHDFileName = "yourvhdfilename"
#Set the value to 1 to use the AzCopy tool to download the data. This is the recommended option for a faster copy.
#Download AzCopy v10 from: https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10
#Ensure that AzCopy is downloaded to the same folder as this file
#If you set the value to 0, Start-AzStorageBlobCopy is used instead; Azure Storage copies the data asynchronously
$useAzCopy = 1
#Create the context of the storage account where the underlying VHD of the managed disk will be copied
$destinationContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
Script explanation
This script uses the following commands to generate the SAS URI of a managed disk and copy the underlying VHD to
a storage account using the SAS URI. Each command in the table links to command-specific documentation.
COMMAND NOTES
Grant-AzDiskAccess Generates a SAS URI for a managed disk, which is used to copy the underlying VHD to a storage account.
Next steps
Create a managed disk from a VHD
Create a virtual machine from a managed disk
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Create a snapshot from a VHD to create multiple
identical managed disks in a short amount of time
with PowerShell (Windows)
11/2/2020 • 2 minutes to read
This script creates a snapshot from a VHD file in a storage account in the same or a different subscription. Use this
script to import a specialized (not generalized/sysprepped) VHD to a snapshot and then use the snapshot to create
multiple identical managed disks in a short amount of time. Also, use it to import a data VHD to a snapshot and then
use the snapshot to create multiple managed disks in a short amount of time.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id where snapshot will be created
$subscriptionId = 'yourSubscriptionId'
#Provide the name of your resource group where snapshot will be created.
$resourceGroupName ='yourResourceGroupName'
#Provide the Azure region (e.g. westus) where snapshot will be located.
#This location should be same as the storage account location where VHD file is stored
#Get all the Azure location using command below:
#Get-AzLocation
$location = 'westus'
#Provide the URI of the VHD file (page blob) in a storage account. Please note that this is NOT the SAS URI of the storage container where the VHD file is stored.
#e.g. https://contosostorageaccount1.blob.core.windows.net/vhds/contosovhd123.vhd
#Note: VHD file can be deleted as soon as Managed Disk is created.
$sourceVHDURI = 'https://yourStorageAccountName.blob.core.windows.net/vhds/yourVHDName.vhd'
#Provide the resource Id of the storage account where the VHD file is stored.
#e.g. /subscriptions/6582b1g7-e212-446b-b509-314e17e1efb0/resourceGroups/MDDemo/providers/Microsoft.Storage/storageAccounts/contosostorageaccount1
#This is an optional parameter if you are creating the snapshot in the same subscription
$storageAccountId = '/subscriptions/yourSubscriptionId/resourceGroups/yourResourceGroupName/providers/Microsoft.Storage/storageAccounts/yourStorageAccountName'
#Set the context to the subscription Id where Managed Disk will be created
Select-AzSubscription -SubscriptionId $SubscriptionId
Copy a snapshot of a managed disk to the same or a
different subscription with PowerShell
This script copies a snapshot of a managed disk to the same or a different subscription. Use this script for the following
scenarios:
1. Migrate a snapshot in Premium storage (Premium_LRS) to Standard storage (Standard_LRS or Standard_ZRS)
to reduce your cost.
2. Migrate a snapshot from locally redundant storage (Premium_LRS, Standard_LRS) to zone redundant storage
(Standard_ZRS) to benefit from the higher reliability of ZRS storage.
3. Move a snapshot to a different subscription in the same region for longer retention.
If needed, install the Azure PowerShell module using the instructions found in the Azure PowerShell guide, and
then run Connect-AzAccount to create a connection with Azure. Also, you need to have an SSH public key named
id_rsa.pub in the .ssh directory of your user profile.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id of the subscription where snapshot exists
$sourceSubscriptionId='yourSourceSubscriptionId'
#We recommend that you store your snapshots in Standard storage to reduce cost. Use Standard_ZRS in regions where zone redundant storage (ZRS) is available; otherwise use Standard_LRS
#Check the availability of ZRS here: https://docs.microsoft.com/en-us/Az.Storage/common/storage-redundancy-zrs#support-coverage-and-regional-availability
$snapshotConfig = New-AzSnapshotConfig -SourceResourceId $snapshot.Id -Location $snapshot.Location -CreateOption Copy -SkuName Standard_LRS
Script explanation
This script uses the following commands to create a snapshot in the target subscription using the Id of the source
snapshot. Each command in the table links to command-specific documentation.
COMMAND NOTES
Next steps
Create a virtual machine from a snapshot
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Encrypt a Windows virtual machine with Azure
PowerShell
11/2/2020 • 3 minutes to read
This script creates a secure Azure Key Vault, encryption keys, Azure Active Directory service principal, and a
Windows virtual machine (VM). The VM is then encrypted using the encryption key from Key Vault and service
principal credentials.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
# Edit these global variables with your unique Key Vault name, resource group name, and location
#Name of the Key Vault
$keyVaultName = "myKeyVault00"
#Resource Group Name
$rgName = "myResourceGroup"
#Region
$location = "East US"
#Password to place within the Key Vault
$password = $([guid]::NewGuid()).Guid
$securePassword = ConvertTo-SecureString -String $password -AsPlainText -Force
#Name for the Azure AD Application
$appName = "My App"
#Name of the VM to be encrypted
$vmName = "myEncryptedVM"
#User name for the admin account on the VM being created and then encrypted
$vmAdminName = "encryptedUser"
# Put the password in the Key Vault as a Key Vault Secret so we can use it later
# We should never put passwords in scripts.
Set-AzKeyVaultSecret -VaultName $keyVaultName -Name adminCreds -SecretValue $securePassword
Set-AzKeyVaultSecret -VaultName $keyVaultName -Name protectValue -SecretValue $securePassword
# Set permissions to allow your AAD service principal to read keys from Key Vault
Set-AzKeyVaultAccessPolicy -VaultName $keyvaultName `
-ServicePrincipalName $app.ApplicationId `
-PermissionsToKeys decrypt,encrypt,unwrapKey,wrapKey,verify,sign,get,list,update `
-PermissionsToSecrets get,list,set,delete,backup,restore,recover,purge
<#
#clean up
Remove-AzResourceGroup -Name $rgName
#removes all of the Azure AD Applications you created w/ the same name
Remove-AzADApplication -ObjectId $app.ObjectId -Force
#>
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Script explanation
This script uses the following commands to create the deployment. Each item in the table links to command-specific
documentation.
COMMAND NOTES
Next steps
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Create an Azure Monitor VM with PowerShell
11/2/2020 • 2 minutes to read
This script creates an Azure Virtual Machine, installs the Log Analytics agent, and enrolls the system with a Log
Analytics workspace. Before running the script, update the Log Analytics workspace ID and workspace key in the
script. Once the script has run, the virtual machine will be visible in Azure Monitor.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
# OMS ID and OMS key
$omsId = "<Replace with your OMS ID>"
$omsKey = "<Replace with your OMS key>"
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
COMMAND NOTES
Next steps
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Collect details about all VMs in a subscription with
PowerShell
11/2/2020 • 2 minutes to read
This script creates a CSV file that contains the VM name, resource group name, region, VM size, virtual network,
subnet, private IP address, OS type, and public IP address of the VMs in the provided subscription.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id where the VMs reside
$subscriptionId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
Select-AzSubscription $subscriptionId
$report = @()
$vms = Get-AzVM
$publicIps = Get-AzPublicIpAddress
$nics = Get-AzNetworkInterface | ?{ $_.VirtualMachine -NE $null}
foreach ($nic in $nics) {
$info = "" | Select VmName, ResourceGroupName, Region, VmSize, VirtualNetwork, Subnet, PrivateIpAddress, OsType, PublicIPAddress, NicName, ApplicationSecurityGroup
$vm = $vms | ? -Property Id -eq $nic.VirtualMachine.id
foreach($publicIp in $publicIps) {
if($nic.IpConfigurations.id -eq $publicIp.ipconfiguration.Id) {
$info.PublicIPAddress = $publicIp.ipaddress
}
}
$info.OsType = $vm.StorageProfile.OsDisk.OsType
$info.VMName = $vm.Name
$info.ResourceGroupName = $vm.ResourceGroupName
$info.Region = $vm.Location
$info.VmSize = $vm.HardwareProfile.VmSize
$info.VirtualNetwork = $nic.IpConfigurations.subnet.Id.Split("/")[-3]
$info.Subnet = $nic.IpConfigurations.subnet.Id.Split("/")[-1]
$info.PrivateIpAddress = $nic.IpConfigurations.PrivateIpAddress
$info.NicName = $nic.Name
$info.ApplicationSecurityGroup = $nic.IpConfigurations.ApplicationSecurityGroups.Id
$report+=$info
}
$report | ft VmName, ResourceGroupName, Region, VmSize, VirtualNetwork, Subnet, PrivateIpAddress, OsType, PublicIPAddress, NicName, ApplicationSecurityGroup
$report | Export-CSV "$home/$reportName"
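The VirtualNetwork and Subnet values come from the Split("/") indexing in the script: the subnet Id's third-from-last segment is the virtual network name and the last segment is the subnet name. That indexing can be checked outside Azure against a hypothetical subnet Id:

```shell
#!/bin/bash
# Hypothetical subnet resource Id, shaped like the value of $nic.IpConfigurations.subnet.Id
subnetId="/subscriptions/0000/resourceGroups/myRg/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet"

IFS='/' read -ra parts <<< "$subnetId"
n=${#parts[@]}
vnetName=${parts[n-3]}    # mirrors Split("/")[-3] in the PowerShell script
subnetName=${parts[n-1]}  # mirrors Split("/")[-1]
echo "$vnetName $subnetName"
```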
Script explanation
This script uses the following commands to create a CSV export of the details of the VMs in a subscription. Each
command in the table links to command-specific documentation.
COMMAND NOTES
Next steps
For more information on the Azure PowerShell module, see Azure PowerShell documentation.
Additional virtual machine PowerShell script samples can be found in the Azure Windows VM documentation.
Azure CLI Samples for Windows virtual machines
11/2/2020 • 2 minutes to read
The following table includes links to bash scripts built using the Azure CLI that deploy Windows virtual machines.
Create a fully configured virtual machine - Creates a resource group, virtual machine, and all related resources.
Create highly available virtual machines - Creates several virtual machines in a highly available and load balanced configuration.
Create a VM and run configuration script - Creates a virtual machine and uses the Azure Custom Script extension to install IIS.
Create a VM and run DSC configuration - Creates a virtual machine and uses the Azure Desired State Configuration (DSC) extension to install IIS.
Manage storage
Create managed disk from a VHD - Creates a managed disk from a specialized VHD as an OS disk or from a data VHD as a data disk.
Create a managed disk from a snapshot - Creates a managed disk from a snapshot.
Copy managed disk to same or different subscription - Copies a managed disk to the same or a different subscription, but in the same region as the parent managed disk.
Export a snapshot as VHD to a storage account - Exports a managed snapshot as VHD to a storage account in a different region.
Export the VHD of a managed disk to a storage account - Exports the underlying VHD of a managed disk to a storage account in a different region.
Copy snapshot to same or different subscription - Copies a snapshot to the same or a different subscription, but in the same region as the parent snapshot.
Secure network traffic between virtual machines - Creates two virtual machines, all related resources, and internal and external network security groups (NSGs).
Encrypt a VM and data disks - Creates an Azure Key Vault, encryption key, and service principal, then encrypts a VM.
Monitor a VM with Azure Monitor - Creates a virtual machine, installs the Log Analytics agent, and enrolls the VM in a Log Analytics workspace.
Quick Create a virtual machine with the Azure CLI
11/2/2020 • 2 minutes to read
This script creates an Azure Virtual Machine running Windows Server 2016. After running the script, you can
access the virtual machine through a Remote Desktop connection.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#!/bin/bash
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Script explanation
This script uses the following commands to create a resource group, virtual machine, and all related resources.
Each command in the table links to command-specific documentation.
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine CLI script samples can be found in the Azure Windows VM documentation.
Create a virtual machine with the Azure CLI
11/2/2020 • 2 minutes to read
This script creates an Azure Virtual Machine running Windows Server 2016. After running the script, you can
access the virtual machine through a Remote Desktop connection.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#!/bin/bash
# Create a virtual network card and associate with public IP address and NSG.
az network nic create \
--resource-group myResourceGroup \
--name myNic \
--vnet-name myVnet \
--subnet mySubnet \
--network-security-group myNetworkSecurityGroup \
--public-ip-address myPublicIP
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
az group delete --name myResourceGroup --yes
Script explanation
This script uses the following commands to create a resource group, virtual machine, and all related resources.
Each command in the table links to command-specific documentation.
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
az network public-ip create Creates a public IP address with a static IP address and an
associated DNS name.
az network nsg create Creates a network security group (NSG), which is a security
boundary between the internet and the virtual machine.
az network nic create Creates a virtual network card and attaches it to the virtual
network, subnet, and NSG.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine CLI script samples can be found in the Azure Windows VM documentation.
Use an Azure CLI sample script to load balance traffic
between highly available virtual machines
11/2/2020 • 4 minutes to read
This script sample creates everything needed to run several Ubuntu virtual machines configured in a highly
available and load balanced configuration. After running the script, you will have three virtual machines, joined to
an Azure Availability Set, and accessible through an Azure Load Balancer.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#!/bin/bash
# Create three virtual network cards and associate with public IP address and NSG.
for i in `seq 1 3`; do
az network nic create \
--resource-group myResourceGroup --name myNic$i \
--vnet-name myVnet --subnet mySubnet \
--network-security-group myNetworkSecurityGroup --lb-name myLoadBalancer \
--lb-address-pools myBackEndPool --lb-inbound-nat-rules myLoadBalancerRuleSSH$i
done
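The loop above stamps out one NIC and one inbound NAT rule per VM. A local dry run of the naming scheme, echoing the generated names instead of calling az:

```shell
#!/bin/bash
# Dry run: print the per-VM resource names the loop generates
for i in $(seq 1 3); do
  echo "NIC myNic$i -> NAT rule myLoadBalancerRuleSSH$i"
done
```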
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Script explanation
This script uses the following commands to create a resource group, virtual machine, availability set, load balancer,
and all related resources. Each command in the table links to command-specific documentation.
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
az network public-ip create Creates a public IP address with a static IP address and an
associated DNS name.
az network lb probe create Creates an NLB probe. An NLB probe is used to monitor each
VM in the NLB set. If any VM becomes inaccessible, traffic is
not routed to the VM.
az network lb rule create Creates an NLB rule. In this sample, a rule is created for port
80. As HTTP traffic arrives at the NLB, it is routed to port 80
on one of the VMs in the NLB set.
az network lb inbound-nat-rule create Creates an NLB Network Address Translation (NAT) rule. NAT
rules map a port of the NLB to a port on a VM. In this sample,
a NAT rule is created for SSH traffic to each VM in the NLB set.
az network nsg create Creates a network security group (NSG), which is a security
boundary between the internet and the virtual machine.
az network nsg rule create Creates an NSG rule to allow inbound traffic. In this sample,
port 22 is opened for SSH traffic.
az network nic create Creates a virtual network card and attaches it to the virtual
network, subnet, and NSG.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine CLI script samples can be found in the Azure Windows VM documentation.
Create a virtual machine with the Azure CLI with a
custom script to install IIS
11/2/2020 • 2 minutes to read
This script creates an Azure Virtual Machine running Windows Server 2016, and uses the Azure Virtual Machine
Custom Script Extension to install IIS. After running the script, you can access the default IIS website on the public
IP address of the virtual machine.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#!/bin/bash
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Script explanation
This script uses the following commands to create a resource group, virtual machine, and all related resources.
Each command in the table links to command specific documentation.
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
az vm extension set Adds and runs a virtual machine extension on a VM. In this
sample, the Custom Script Extension is used to install IIS.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine CLI script samples can be found in the Azure Windows VM documentation.
Create a VM with IIS using DSC
11/2/2020 • 2 minutes to read
This script creates a virtual machine, and uses the Azure Virtual Machine DSC custom script extension to install and
configure IIS.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#!/bin/bash
# Create a VM
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image win2016datacenter \
--admin-username azureuser \
--admin-password $AdminPassword
# Use the Azure DSC extension to apply the ContosoWebsite configuration, which installs and configures IIS
az vm extension set \
--name DSC \
--publisher Microsoft.Powershell \
--version 2.19 \
--vm-name myVM \
--resource-group myResourceGroup \
--settings '{"ModulesURL":"https://github.com/Azure/azure-quickstart-templates/raw/master/dsc-extension-iis-server-windows-vm/ContosoWebsite.ps1.zip", "configurationFunction": "ContosoWebsite.ps1\\ContosoWebsite", "Properties": {"MachineName": "myVM"} }'
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
az vm extension set Adds and runs the DSC extension on the virtual machine,
which invokes a configuration script to install IIS.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine CLI script samples can be found in the Azure Windows VM documentation.
Create a managed disk from a VHD file in a storage
account in the same subscription with CLI (Linux)
11/2/2020 • 2 minutes to read
This script creates a managed disk from a VHD file in a storage account in the same subscription. Use this script to
import a specialized (not generalized/sysprepped) VHD to a managed OS disk to create a virtual machine, or to
import a data VHD to a managed data disk.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id
subscriptionId=mySubscriptionId
#Provide the size of the disks in GB. It should be greater than the VHD file size.
diskSize=128
#Provide the URI of the VHD file that will be used to create Managed Disk.
# VHD file can be deleted as soon as Managed Disk is created.
# e.g. https://contosostorageaccount1.blob.core.windows.net/vhds/contosovhd123.vhd
vhdUri=https://contosostorageaccount1.blob.core.windows.net/vhds/contosoumd78620170425131836.vhd
#Provide the storage type for the Managed Disk. Premium_LRS or Standard_LRS.
storageType=Premium_LRS
#Provide the Azure location (e.g. westus) where Managed Disk will be located.
#The location should be same as the location of the storage account where VHD file is stored.
#Get all the Azure location supported for your subscription using command below:
#az account list-locations
location=westus
#Set the context to the subscription Id where Managed Disk will be created
az account set --subscription $subscriptionId
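The command that actually creates the disk is missing from the extract. Assuming a resource group and disk name (neither is defined above; both are placeholders), the import step might look like:

```shell
# Resource group and disk name are placeholder assumptions.
resourceGroupName=myResourceGroupName
diskName=myDiskName

# Create the managed disk from the VHD file.
az disk create \
  --resource-group $resourceGroupName \
  --name $diskName \
  --location $location \
  --sku $storageType \
  --size-gb $diskSize \
  --source $vhdUri
```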
Script explanation
This script uses the following commands to create a managed disk from a VHD. Each command in the table links to
command specific documentation.
COMMAND NOTES
az disk create Creates a managed disk from the source VHD file in a storage
account in the same subscription.
Next steps
Create a virtual machine by attaching a managed disk as OS disk
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine and managed disks CLI script samples can be found in the Azure Linux VM
documentation.
Create a managed disk from a snapshot with CLI
(Linux)
11/2/2020 • 2 minutes to read
This script creates a managed disk from a snapshot. Use it to restore a virtual machine from snapshots of OS and
data disks. Create OS and data managed disks from respective snapshots and then create a new virtual machine by
attaching managed disks. You can also restore data disks of an existing VM by attaching data disks created from
snapshots.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id of the subscription where you want to create Managed Disks
subscriptionId=dd80b94e-0463-4a65-8d04-c94f403879dc
#Provide the name of the snapshot that will be used to create Managed Disks
snapshotName=mySnapshotName
#Provide the name of the new Managed Disk that will be created
diskName=myDiskName
#Provide the size of the disks in GB. It should be greater than the VHD file size.
diskSize=128
#Set the context to the subscription Id where Managed Disk will be created
az account set --subscription $subscriptionId
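The remainder of the script is missing from the extract. Assuming the snapshot's resource group name (a placeholder), the disk-creation step might look like:

```shell
# Resource group name is a placeholder assumption.
resourceGroupName=myResourceGroupName

# Get the snapshot Id; it is used as the source of the new managed disk.
snapshotId=$(az snapshot show --name $snapshotName \
  --resource-group $resourceGroupName --query id -o tsv)

az disk create \
  --resource-group $resourceGroupName \
  --name $diskName \
  --size-gb $diskSize \
  --sku Premium_LRS \
  --source $snapshotId
```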
Script explanation
This script uses the following commands to create a managed disk from a snapshot. Each command in the table links
to command specific documentation.
COMMAND NOTES
az snapshot show Gets all the properties of a snapshot using the name and
resource group of the snapshot. The Id property is
used to create the managed disk.
Next steps
Create a virtual machine by attaching a managed disk as OS disk
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine and managed disks CLI script samples can be found in the Azure Linux VM
documentation.
Copy managed disks to same or different
subscription with CLI
11/2/2020 • 2 minutes to read
This script copies a managed disk to the same or a different subscription, but in the same region. The copy works
only when the subscriptions are part of the same Azure AD tenant.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id of the subscription where managed disk exists
sourceSubscriptionId=dd80b94e-0463-4a65-8d04-c94f403879dc
#Provide the name of your resource group where managed disk exists
sourceResourceGroupName=mySourceResourceGroupName
#If managedDiskId is blank then it means that managed disk does not exist.
echo 'source managed disk Id is: ' $managedDiskId
#Provide the subscription Id of the subscription where managed disk will be copied to
targetSubscriptionId=6492b1f7-f219-446b-b509-314e17e1efb0
#Set the context to the subscription Id where managed disk will be copied to
az account set --subscription $targetSubscriptionId
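The lookup and copy steps are missing from the extract. A sketch with placeholder names for the disk and target resource group:

```shell
# Disk and target resource group names are placeholder assumptions.
diskName=myDiskName
targetResourceGroupName=myTargetResourceGroupName

# Get the Id of the source managed disk from the source subscription.
managedDiskId=$(az disk show --name $diskName \
  --resource-group $sourceResourceGroupName \
  --subscription $sourceSubscriptionId --query id -o tsv)

# Create a copy of the disk in the target subscription (context set above).
az disk create --resource-group $targetResourceGroupName \
  --name $diskName --source $managedDiskId
```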
Script explanation
This script uses the following commands to create a new managed disk in the target subscription using the Id of the
source managed disk. Each command in the table links to command specific documentation.
COMMAND NOTES
az disk show Gets all the properties of a managed disk using the name and
resource group of the managed disk. The Id property is
used to copy the managed disk to a different subscription.
Next steps
Create a virtual machine from a managed disk
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine and managed disks CLI script samples can be found in the Azure Linux VM
documentation.
Export/Copy a snapshot to a storage account in
different region with CLI
11/2/2020 • 2 minutes to read
This script exports a managed snapshot to a storage account in a different region. It first generates the SAS URI of
the snapshot and then uses it to copy the snapshot to a storage account in a different region. Use this script to
maintain backups of your managed disks in a different region for disaster recovery.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id where snapshot is created
subscriptionId=mySubscriptionId
#Provide Shared Access Signature (SAS) expiry duration in seconds e.g. 3600.
#Know more about SAS here: https://docs.microsoft.com/en-us/azure/storage/storage-dotnet-shared-access-signature-part-1
sasExpiryDuration=3600
#Provide storage account name where you want to copy the snapshot.
storageAccountName=mystorageaccountname
#Name of the storage container where the downloaded snapshot will be stored
storageContainerName=mystoragecontainername
#Provide the key of the storage account where you want to copy snapshot.
storageAccountKey=mystorageaccountkey
#Provide the name of the VHD file to which snapshot will be copied.
destinationVHDFileName=myvhdfilename
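The export steps themselves are missing from the extract. Assuming the snapshot's name and resource group (both placeholders), they might look like:

```shell
# Snapshot name and resource group are placeholder assumptions.
snapshotName=mySnapshotName
resourceGroupName=myResourceGroupName

az account set --subscription $subscriptionId

# Generate a read-only SAS URI for the snapshot's underlying VHD.
sas=$(az snapshot grant-access --resource-group $resourceGroupName \
  --name $snapshotName --duration-in-seconds $sasExpiryDuration \
  --query accessSas -o tsv)

# Copy the VHD to the storage account in the other region.
az storage blob copy start --destination-blob $destinationVHDFileName \
  --destination-container $storageContainerName \
  --account-name $storageAccountName \
  --account-key $storageAccountKey \
  --source-uri $sas
```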
COMMAND NOTES
az snapshot grant-access Generates a read-only SAS that is used to copy the underlying
VHD file to a storage account or download it on-premises
az storage blob copy start Copies a blob asynchronously from one storage account to
another
Next steps
Create a managed disk from a VHD
Create a virtual machine from a managed disk
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine and managed disks CLI script samples can be found in the Azure Linux VM
documentation.
Export/Copy a managed disk to a storage account
using the Azure CLI
11/2/2020 • 2 minutes to read
This script exports the underlying VHD of a managed disk to a storage account in the same or a different region. It
first generates the SAS URI of the managed disk and then uses it to copy the VHD to a storage account. Use this script
to copy managed disks to another region for regional expansion. If you want to publish the VHD file of a managed
disk in Azure Marketplace, you can use this script to copy the VHD file to a storage account and then generate a
SAS URI of the copied VHD to publish it in the Marketplace.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id where managed disk is created
subscriptionId=yourSubscriptionId
#Provide the name of your resource group where managed disk is created
resourceGroupName=myResourceGroupName
#Provide Shared Access Signature (SAS) expiry duration in seconds e.g. 3600.
#Know more about SAS here: https://docs.microsoft.com/en-us/azure/storage/storage-dotnet-shared-access-signature-part-1
sasExpiryDuration=3600
#Provide storage account name where you want to copy the underlying VHD file of the managed disk.
storageAccountName=mystorageaccountname
#Name of the storage container where the downloaded VHD will be stored
storageContainerName=mystoragecontainername
#Provide the key of the storage account where you want to copy the VHD
storageAccountKey=mystorageaccountkey
#Provide the name of the destination VHD file to which the VHD of the managed disk will be copied.
destinationVHDFileName=myvhdfilename.vhd
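The export steps themselves are missing from the extract. Assuming a disk name (a placeholder; the resource group is defined above), they might look like:

```shell
# Disk name is a placeholder assumption.
diskName=myDiskName

az account set --subscription $subscriptionId

# Generate a read-only SAS URI for the managed disk's underlying VHD.
sas=$(az disk grant-access --resource-group $resourceGroupName \
  --name $diskName --duration-in-seconds $sasExpiryDuration \
  --query accessSas -o tsv)

# Copy the VHD to the storage account.
az storage blob copy start --destination-blob $destinationVHDFileName \
  --destination-container $storageContainerName \
  --account-name $storageAccountName \
  --account-key $storageAccountKey \
  --source-uri $sas
```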
Script explanation
This script uses the following commands to generate the SAS URI for a managed disk and copy the underlying VHD
to a storage account using the SAS URI. Each command in the table links to command specific documentation.
COMMAND NOTES
az disk grant-access Generates a read-only SAS that is used to copy the underlying
VHD file to a storage account or download it on-premises
az storage blob copy start Copies a blob asynchronously from one storage account to
another
Next steps
Create a managed disk from a VHD
Create a virtual machine from a managed disk
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine and managed disks CLI script samples can be found in the Azure Linux VM
documentation.
Copy snapshot of a managed disk to same or
different subscription with CLI
11/2/2020 • 2 minutes to read
This script copies a snapshot of a managed disk to the same or a different subscription. Use this script for the
following scenarios:
1. Migrate a snapshot from Premium storage (Premium_LRS) to Standard storage (Standard_LRS or Standard_ZRS)
to reduce your cost.
2. Migrate a snapshot from locally redundant storage (Premium_LRS, Standard_LRS) to zone redundant storage
(Standard_ZRS) to benefit from the higher reliability of ZRS storage.
3. Move a snapshot to a different subscription in the same region for longer retention.
NOTE
Both subscriptions must be in the same Azure AD tenant.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#Provide the subscription Id of the subscription where snapshot exists
sourceSubscriptionId=dd80b94e-0463-4a65-8d04-c94f403879dc
#If snapshotId is blank then it means that snapshot does not exist.
echo 'source snapshot Id is: ' $snapshotId
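The lookup and copy steps are missing from the extract. A sketch, where the snapshot, resource group, and target subscription names are placeholder assumptions:

```shell
# Snapshot, resource group, and target names are placeholder assumptions.
snapshotName=mySnapshotName
sourceResourceGroupName=mySourceResourceGroupName
targetSubscriptionId=myTargetSubscriptionId
targetResourceGroupName=myTargetResourceGroupName

# Get the Id of the source snapshot.
snapshotId=$(az snapshot show --name $snapshotName \
  --resource-group $sourceResourceGroupName \
  --subscription $sourceSubscriptionId --query id -o tsv)

# Switch to the target subscription and copy the snapshot.
az account set --subscription $targetSubscriptionId
az snapshot create --resource-group $targetResourceGroupName \
  --name $snapshotName --source $snapshotId
```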
Script explanation
This script uses the following commands to create a snapshot in the target subscription using the Id of the source
snapshot. Each command in the table links to command specific documentation.
COMMAND NOTES
az snapshot show Gets all the properties of a snapshot using the name and
resource group of the snapshot. The Id property is
used to copy the snapshot to a different subscription.
Next steps
Create a virtual machine from a snapshot
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine and managed disks CLI script samples can be found in the Azure Linux VM
documentation.
Secure network traffic between virtual machines
11/2/2020 • 3 minutes to read
This script creates two virtual machines and secures incoming traffic to both. One virtual machine is accessible on
the internet and has a network security group (NSG) configured to allow traffic on port 3389 and port 80. The
second virtual machine is not accessible on the internet, and has an NSG configured to only allow traffic from the
first virtual machine.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#!/bin/bash
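The script references a $nsgrule variable whose assignment did not survive extraction; based on the script explanation, it would come from az network nsg rule list, roughly:

```shell
# Store the name of the existing back-end NSG rule for the update that follows.
# (Resource names match the commands below; the [0] query index is an assumption.)
nsgrule=$(az network nsg rule list --resource-group myResourceGroup \
  --nsg-name myNetworkSecurityGroupBackEnd --query [0].name -o tsv)
```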
# Update back-end network security group rule to limit SSH to source prefix (priority 100).
az network nsg rule update --resource-group myResourceGroup --nsg-name myNetworkSecurityGroupBackEnd \
--name $nsgrule --protocol tcp --direction inbound --priority 100 \
--source-address-prefix 10.0.1.0/24 --source-port-range '*' --destination-address-prefix '*' \
--destination-port-range 22 --access allow
# Create backend NSG rule to block all incoming traffic (priority 200).
az network nsg rule create --resource-group myResourceGroup --nsg-name myNetworkSecurityGroupBackEnd \
--name denyAll --access Deny --protocol Tcp --direction Inbound --priority 200 \
--source-address-prefix "*" --source-port-range "*" --destination-address-prefix "*" --destination-port-
range "*"
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Script explanation
This script uses the following commands to create a resource group, virtual machine, and all related resources.
Each command in the table links to command specific documentation.
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
az network nsg rule update Updates an NSG rule. In this sample, the back-end rule is
updated to pass through traffic only from the front-end
subnet.
az network nsg rule list Returns information about a network security group rule. In
this sample, the rule name is stored in a variable for use later
in the script.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine CLI script samples can be found in the Azure Windows VM documentation.
Encrypt a Windows virtual machine in Azure
11/2/2020 • 2 minutes to read
This script creates a secure Azure Key Vault, encryption keys, Azure Active Directory service principal, and a
Windows virtual machine (VM). The VM is then encrypted using the encryption key from Key Vault and service
principal credentials.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#!/bin/bash
# Create a Key Vault for storing keys and enabled for disk encryption.
az keyvault create --name $keyvault_name --resource-group myResourceGroup --location eastus \
--enabled-for-disk-encryption True
# Create an Azure Active Directory service principal for authenticating requests to Key Vault.
# Read in the service principal ID and password for use in later commands.
read sp_id sp_password <<< $(az ad sp create-for-rbac --query [appId,password] -o tsv)
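The remaining steps of the sample are missing from the extract. Assuming a VM named myVM created earlier in the script (a placeholder), granting the service principal access and enabling encryption might look like:

```shell
# Allow the service principal to wrap keys and set secrets in the Key Vault.
az keyvault set-policy --name $keyvault_name --spn $sp_id \
  --key-permissions wrapKey --secret-permissions set

# Encrypt the VM's disks using the Key Vault and service principal credentials
# (this is the older AAD-based Azure Disk Encryption flow).
az vm encryption enable --resource-group myResourceGroup --name myVM \
  --aad-client-id $sp_id --aad-client-secret $sp_password \
  --disk-encryption-keyvault $keyvault_name --volume-type All
```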
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Script explanation
This script uses the following commands to create a resource group, Azure Key Vault, service principal, virtual
machine, and all related resources. Each command in the table links to command specific documentation.
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
az keyvault create Creates an Azure Key Vault to store secure data such as
encryption keys.
az keyvault set-policy Sets permissions on the Key Vault to grant the service
principal access to encryption keys.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine CLI script samples can be found in the Azure Windows VM documentation.
Monitor a VM with Azure Monitor logs
11/2/2020 • 2 minutes to read
This script creates an Azure Virtual Machine, installs the Log Analytics agent, and enrolls the system with a Log
Analytics workspace. Once the script has run, the virtual machine is visible in Azure Monitor.
To run this sample, install the latest version of the Azure CLI. To start, run az login to create a connection with
Azure.
Samples for the Azure CLI are written for the bash shell. To run this sample in Windows PowerShell or Command
Prompt, you may need to change elements of the script.
If you don't have an Azure subscription, create a free account before you begin.
Sample script
#!/bin/sh
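The script body did not survive extraction. A minimal sketch, with placeholder resource names and placeholder workspace Id/key values, might look like this:

```shell
# Placeholder names; workspace Id and key would come from your Log Analytics workspace.
az group create --name myResourceGroup --location eastus

az vm create --resource-group myResourceGroup --name myVM \
  --image Win2016Datacenter --admin-username azureuser \
  --admin-password $AdminPassword

# Install the Log Analytics (Microsoft Monitoring Agent) extension and
# enroll the VM in the workspace.
az vm extension set --resource-group myResourceGroup --vm-name myVM \
  --name MicrosoftMonitoringAgent \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --settings '{"workspaceId":"myWorkspaceId"}' \
  --protected-settings '{"workspaceKey":"myWorkspaceKey"}'
```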
Clean up deployment
Run the following command to remove the resource group, VM, and all related resources.
Script explanation
This script uses the following commands to create a resource group, virtual machine, and all related resources.
Each command in the table links to command specific documentation.
COMMAND NOTES
az group create Creates a resource group in which all resources are stored.
Next steps
For more information on the Azure CLI, see Azure CLI documentation.
Additional virtual machine CLI script samples can be found in the Azure Windows VM documentation.
Shared Image Gallery overview
11/2/2020 • 18 minutes to read
Shared Image Gallery is a service that helps you build structure and organization around your images. Shared
Image Galleries provide:
Global replication of images.
Versioning and grouping of images for easier management.
Highly available images with Zone Redundant Storage (ZRS) accounts in regions that support Availability
Zones. ZRS offers better resilience against zonal failures.
Premium storage support (Premium_LRS).
Sharing across subscriptions, and even between Active Directory (AD) tenants, using RBAC.
Scaling your deployments with image replicas in each region.
Using a Shared Image Gallery you can share your images to different users, service principals, or AD groups
within your organization. Shared images can be replicated to multiple regions, for quicker scaling of your
deployments.
An image is a copy of either a full VM (including any attached data disks) or just the OS disk, depending on how it
is created. When you create a VM from the image, a copy of the VHDs in the image is used to create the disks for
the new VM. The image remains in storage and can be used over and over again to create new VMs.
If you have a large number of images that you need to maintain, and would like to make them available
throughout your company, you can use a Shared Image Gallery as a repository.
The Shared Image Gallery feature has multiple resource types:
Image definition: Image definitions are created within a gallery and carry
information about the image and requirements for using it
internally. This includes whether the image is Windows or
Linux, release notes, and minimum and maximum memory
requirements. It is a definition of a type of image.
IMAGE DEFINITION PUBLISHER OFFER SKU
All three of these have unique sets of values. The format is similar to how you can currently specify publisher,
offer, and SKU for Azure Marketplace images in Azure PowerShell to get the latest version of a Marketplace image.
Each image definition needs to have a unique set of these values.
The following parameters determine which types of image versions an image definition can contain:
Operating system state - You can set the OS state to generalized or specialized. This field is required.
Operating system - can be either Windows or Linux. This field is required.
Hyper-V generation - specify whether the image was created from a generation 1 or generation 2 Hyper-V
VHD. Default is generation 1.
The following are other parameters that can be set on your image definition so that you can more easily track
your resources:
Description - use description to give more detailed information on why the image definition exists. For
example, you might have an image definition for your front-end server that has the application pre-installed.
Eula - can be used to point to an end-user license agreement specific to the image definition.
Privacy Statement and Release notes - store release notes and privacy statements in Azure storage and provide
a URI for accessing them as part of the image definition.
End-of-life date - attach an end-of-life date to your image definition to be able to use automation to delete old
image definitions.
Tag - you can add tags when you create your image definition. For more information about tags, see Using
tags to organize your resources
Minimum and maximum vCPU and memory recommendations - if your image has vCPU and memory
recommendations, you can attach that information to your image definition.
Disallowed disk types - you can provide information about the storage needs for your VM. For example, if the
image isn't suited for standard HDD disks, you add them to the disallow list.
Purchase plan information for Marketplace images - -PurchasePlanPublisher , -PurchasePlanName , and
-PurchasePlanProduct . For more information about purchase plan information, see Find images in the Azure
Marketplace and Supply Azure Marketplace purchase plan information when creating images.
Image versions
An image version is what you use to create a VM. You can have multiple versions of an image as needed for
your environment. When you use an image version to create a VM, the image version is used to create new
disks for the VM. Image versions can be used multiple times.
The properties of an image version are:
Version number. This is used as the name of the image version. It is always in the format:
MajorVersion.MinorVersion.Patch. When you specify to use latest when creating a VM, the latest image is
chosen based on the highest MajorVersion, then MinorVersion, then Patch.
Source. The source can be a VM, managed disk, snapshot, managed image, or another image version.
Exclude from latest. You can keep a version from being used as the latest image version.
End of life date. Date after which VMs can't be created from this image.
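The three resource types can be created with the Azure CLI roughly as follows. All names, the managed image Id, and the region/replica pairs are placeholder assumptions:

```shell
# Create the gallery, an image definition, and an image version.
az sig create --resource-group myResourceGroup --gallery-name myGallery

az sig image-definition create --resource-group myResourceGroup \
  --gallery-name myGallery --gallery-image-definition myImageDefinition \
  --publisher myPublisher --offer myOffer --sku mySku \
  --os-type Linux --os-state Specialized

# $managedImageId is the resource Id of an existing managed image.
az sig image-version create --resource-group myResourceGroup \
  --gallery-name myGallery --gallery-image-definition myImageDefinition \
  --gallery-image-version 1.0.0 --managed-image $managedImageId \
  --target-regions "westus=1" "eastus=2"
```

The publisher, offer, and SKU values together must be unique within the gallery, as described above.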
Regional Support
All public regions can be target regions, but to replicate to Australia Central and Australia Central 2 you need to
have your subscription added to the allow list. To request that a subscription be added to the allow list, go to:
https://azure.microsoft.com/global-infrastructure/australia/contact/
Limits
There are limits, per subscription, for deploying resources using Shared Image Galleries:
100 shared image galleries, per subscription, per region
1,000 image definitions, per subscription, per region
10,000 image versions, per subscription, per region
10 image version replicas, per subscription, per region
Any disk attached to the image must be 1 TB or smaller in size
For more information, see Check resource usage against limits for examples on how to check your current usage.
Scaling
Shared Image Gallery allows you to specify the number of replicas you want Azure to keep of the images. This
helps in multi-VM deployment scenarios as the VM deployments can be spread to different replicas reducing the
chance of instance creation processing being throttled due to overloading of a single replica.
With Shared Image Gallery, you can now deploy up to 1,000 VM instances in a virtual machine scale set (up
from 600 with managed images). Image replicas provide better deployment performance, reliability, and
consistency. You can set a different replica count in each target region, based on the scale needs for the region.
Since each replica is a deep copy of your image, this helps scale your deployments linearly with each extra replica.
While we understand no two images or regions are the same, here’s our general guideline on how to use replicas
in a region:
For non-Virtual Machine Scale Set (VMSS) Deployments - For every 20 VMs that you create concurrently, we
recommend you keep one replica. For example, if you are creating 120 VMs concurrently using the same
image in a region, we suggest you keep at least 6 replicas of your image.
For Virtual Machine Scale Set (VMSS) deployments - For every scale set deployment with up to 600 instances,
we recommend you keep at least one replica. For example, if you are creating 5 scale sets concurrently, each
with 600 VM instances using the same image in a single region, we suggest you keep at least 5 replicas of
your image.
We always recommend overprovisioning the number of replicas, due to factors such as image size, content, and
OS type.
Make your images highly available
Azure Zone Redundant Storage (ZRS) provides resilience against an Availability Zone failure in the region. With
the general availability of Shared Image Gallery, you can choose to store your images in ZRS accounts in regions
with Availability Zones.
You can also choose the account type for each of the target regions. The default storage account type is
Standard_LRS, but you can choose Standard_ZRS for regions with Availability Zones. Check the regional
availability of ZRS here.
Replication
Shared Image Gallery also allows you to replicate your images to other Azure regions automatically. Each Shared
Image version can be replicated to different regions depending on what makes sense for your organization. One
example is to always replicate the latest image to multiple regions while keeping all older versions available in only
one region. This can help save on storage costs for Shared Image versions.
The regions a Shared Image version is replicated to can be updated after creation time. The time it takes to
replicate to different regions depends on the amount of data being copied and the number of regions the version
is replicated to. This can take a few hours in some cases. While the replication is happening, you can view the
status of replication per region. Once the image replication is complete in a region, you can then deploy a VM or
scale-set using that image version in the region.
Access
As the Shared Image Gallery, Image Definition, and Image version are all resources, they can be shared using the
built-in native Azure RBAC controls. Using RBAC you can share these resources to other users, service principals,
and groups. You can even share access to individuals outside of the tenant they were created within. Once a user
has access to the Shared Image version, they can deploy a VM or a Virtual Machine Scale Set. Here is the sharing
matrix that helps you understand what the user gets access to:
We recommend sharing at the Gallery level for the best experience. We do not recommend sharing individual
image versions. For more information about RBAC, see Manage access to Azure resources using RBAC.
Images can also be shared, at scale, even across tenants using a multi-tenant app registration. For more
information about sharing images across tenants, see Share gallery VM images across Azure tenants.
Billing
There is no extra charge for using the Shared Image Gallery service. You will be charged for the following
resources:
Storage costs of storing each replica. The storage cost is charged as a snapshot and is based on the occupied
size of the image version, the number of replicas of the image version and the number of regions the version
is replicated to.
Network egress charges for replication of the first image version from the source region to the replicated
regions. Subsequent replicas are handled within the region, so there are no additional charges.
For example, let's say you have an image of a 127 GB OS disk that only occupies 10 GB of storage, and one empty
32 GB data disk. The occupied size of each image would only be 10 GB. The image is replicated to 3 regions and
each region has two replicas. There will be six total snapshots, each using 10 GB. You will be charged the storage
cost for each snapshot based on the occupied size of 10 GB. You will pay network egress charges for the first
replica to be copied to the additional two regions. For more information on the pricing of snapshots in each
region, see Managed disks pricing. For more information on network egress, see Bandwidth pricing.
Updating resources
Once created, you can make some changes to the image gallery resources. These are limited to:
Shared image gallery:
Description
Image definition:
Recommended vCPUs
Recommended memory
Description
End of life date
Image version:
Regional replica count
Target regions
Exclude from latest
End of life date
SDK support
The following SDKs support creating Shared Image Galleries:
.NET
Java
Node.js
Python
Go
Templates
You can create Shared Image Gallery resources using templates. There are several Azure Quickstart Templates
available:
Create a Shared Image Gallery
Create an Image Definition in a Shared Image Gallery
Create an Image Version in a Shared Image Gallery
Create a VM from Image Version
For more information, see Manage gallery resources using the Azure CLI or PowerShell.
Can I move my existing image to the shared image gallery?
Yes. There are three scenarios, based on the type of image you have.
Scenario 1: If you have a managed image, then you can create an image definition and image version from it. For
more information, see Migrate from a managed image to an image version using the Azure CLI or
PowerShell.
Scenario 2: If you have an unmanaged image, you can create a managed image from it, and then create an image
definition and image version from it.
Scenario 3: If you have a VHD in your local file system, then you need to upload the VHD to a managed image,
then you can create an image definition and image version from it.
If the VHD is of a Windows VM, see Upload a VHD.
If the VHD is for a Linux VM, see Upload a VHD.
Can I create an image version from a specialized disk?
Yes, you can create a VM from a specialized image using the CLI, PowerShell, or API.
Can I move the Shared Image Gallery resource to a different subscription after it has been created?
No, you can't move the shared image gallery resource to a different subscription. You can replicate the image
versions in the gallery to other regions or copy an image from another gallery using the Azure CLI or PowerShell.
Can I replicate my image versions across clouds such as Azure China 21Vianet or Azure Germany or Azure
Government Cloud?
No, you cannot replicate image versions across clouds.
Can I replicate my image versions across subscriptions?
No. You can replicate the image versions across regions in a subscription and use them in other subscriptions
through RBAC.
Can I share image versions across Azure AD tenants?
Yes, you can use RBAC to share to individuals across tenants. But, to share at scale, see "Share gallery images
across Azure tenants" using PowerShell or CLI.
How long does it take to replicate image versions across the target regions?
The image version replication time depends entirely on the size of the image and the number of regions it is
being replicated to. As a best practice, keep the image small and the source and target regions close together.
You can check the status of the replication using the -ReplicationStatus flag.
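With the Azure CLI, the equivalent check uses the --expand option (a sketch; the resource group, gallery, image definition, and version names are hypothetical placeholders, and the command requires an Azure subscription):

```shell
# Show per-region replication status for an image version.
# myGalleryRG, myGallery, myImageDefinition, and 1.0.0 are placeholder names.
az sig image-version show \
  --resource-group myGalleryRG \
  --gallery-name myGallery \
  --gallery-image-definition myImageDefinition \
  --gallery-image-version 1.0.0 \
  --expand ReplicationStatus \
  --query replicationStatus
```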
What is the difference between source region and target region?
Source region is the region in which your image version will be created, and target regions are the regions in
which a copy of your image version will be stored. For each image version, you can only have one source region.
Also, make sure that you pass the source region location as one of the target regions when you create an image
version.
How do I specify the source region while creating the image version?
While creating an image version, you can use the --location argument in the CLI and the -Location parameter in
PowerShell to specify the source region. Ensure that the managed image you are using as the base image is in the
same location as the location in which you intend to create the image version. Also, make sure that you pass the
source region location as one of the target regions when you create an image version.
How do I specify the number of image version replicas to be created in each region?
There are two ways you can specify the number of image version replicas to be created in each region:
1. The regional replica count which specifies the number of replicas you want to create per region.
2. The common replica count which is the default per region count in case regional replica count is not specified.
To specify the regional replica count, pass the location along with the number of replicas you want to create in
that region: "South Central US=2".
If regional replica count is not specified with each location, then the default number of replicas will be the
common replica count that you specified.
To specify the common replica count in CLI, use the --replica-count argument in the
az sig image-version create command.
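Both counts can be sketched in a single az sig image-version create call (a hedged sketch; the resource names and the subscription ID placeholder are hypothetical, and the command requires an Azure subscription):

```shell
# Create an image version replicated to two regions.
# --replica-count sets the common (default) per-region replica count;
# "southcentralus=2" overrides it for South Central US only.
az sig image-version create \
  --resource-group myGalleryRG \
  --gallery-name myGallery \
  --gallery-image-definition myImageDefinition \
  --gallery-image-version 1.0.0 \
  --managed-image "/subscriptions/<subID>/resourceGroups/myRG/providers/Microsoft.Compute/images/myImage" \
  --target-regions "southcentralus=2" "eastus" \
  --replica-count 1
```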
Can I create the shared image gallery in a different location than the one for the image definition and image
version?
Yes, it is possible. But, as a best practice, we encourage you to keep the resource group, shared image gallery,
image definition, and image version in the same location.
What are the charges for using the Shared Image Gallery?
There are no charges for using the Shared Image Gallery service, except the storage charges for storing the image
versions and network egress charges for replicating the image versions from source region to target regions.
What API version should I use to create Shared Image Gallery and Image Definition and Image Version?
To work with shared image galleries, image definitions, and image versions, we recommend you use API version
2018-06-01. Zone Redundant Storage (ZRS) requires version 2019-03-01 or later.
What API version should I use to create Shared VM or Virtual Machine Scale Set out of the Image Version?
For VM and Virtual Machine Scale Set deployments using an image version, we recommend you use API version
2018-04-01 or higher.
Can I update my Virtual Machine Scale Set created using managed image to use Shared Image Gallery images?
Yes, you can update the scale set image reference from a managed image to a shared image gallery image, as
long as the OS type, Hyper-V generation, and data disk layout match between the images.
Next steps
Learn how to deploy shared images using Azure PowerShell.
What is Azure Resource Manager?
11/2/2020
Azure Resource Manager is the deployment and management service for Azure. It provides a management layer
that enables you to create, update, and delete resources in your Azure account. You use management features,
like access control, locks, and tags, to secure and organize your resources after deployment.
To learn about Azure Resource Manager templates, see Template deployment overview.
All capabilities that are available in the portal are also available through PowerShell, Azure CLI, REST APIs, and
client SDKs. Functionality initially released through APIs will be represented in the portal within 180 days of
initial release.
Terminology
If you're new to Azure Resource Manager, there are some terms you might not be familiar with.
resource - A manageable item that is available through Azure. Virtual machines, storage accounts, web apps,
databases, and virtual networks are examples of resources. Resource groups, subscriptions, management
groups, and tags are also examples of resources.
resource group - A container that holds related resources for an Azure solution. The resource group
includes those resources that you want to manage as a group. You decide which resources belong in a
resource group based on what makes the most sense for your organization. See Resource groups.
resource provider - A service that supplies Azure resources. For example, a common resource provider is
Microsoft.Compute, which supplies the virtual machine resource. Microsoft.Storage is another common
resource provider. See Resource providers and types.
Resource Manager template - A JavaScript Object Notation (JSON) file that defines one or more resources
to deploy to a resource group, subscription, management group, or tenant. The template can be used to
deploy the resources consistently and repeatedly. See Template deployment overview.
declarative syntax - Syntax that lets you state "Here is what I intend to create" without having to write the
sequence of programming commands to create it. The Resource Manager template is an example of
declarative syntax. In the file, you define the properties for the infrastructure to deploy to Azure. See Template
deployment overview.
Understand scope
Azure provides four levels of scope: management groups, subscriptions, resource groups, and resources. The
following image shows an example of these layers.
You apply management settings at any of these levels of scope. The level you select determines how widely the
setting is applied. Lower levels inherit settings from higher levels. For example, when you apply a policy to the
subscription, the policy is applied to all resource groups and resources in your subscription. When you apply a
policy on the resource group, that policy is applied to the resource group and all its resources. However, another
resource group doesn't have that policy assignment.
You can deploy templates to tenants, management groups, subscriptions, or resource groups.
Resource groups
There are some important factors to consider when defining your resource group:
All the resources in your resource group should share the same lifecycle. You deploy, update, and delete
them together. If one resource, such as a server, needs to exist on a different deployment cycle, it should be
in another resource group.
Each resource can exist in only one resource group.
You can add a resource to, or remove a resource from, a resource group at any time.
You can move a resource from one resource group to another group. For more information, see Move
resources to new resource group or subscription.
The resources in a resource group can be located in different regions than the resource group.
When creating a resource group, you need to provide a location for that resource group. You may be
wondering, "Why does a resource group need a location? And, if the resources can have different
locations than the resource group, why does the resource group location matter at all?" The resource
group stores metadata about the resources. When you specify a location for the resource group, you're
specifying where that metadata is stored. For compliance reasons, you may need to ensure that your data
is stored in a particular region.
If the resource group's region is temporarily unavailable, you can't update resources in the resource group
because the metadata is unavailable. The resources in other regions will still function as expected, but you
can't update them. For more information about building reliable applications, see Designing reliable Azure
applications.
A resource group can be used to scope access control for administrative actions. To manage a resource
group, you can assign Azure Policies, Azure roles, or resource locks.
You can apply tags to a resource group. The resources in the resource group don't inherit those tags.
A resource can connect to resources in other resource groups. This scenario is common when the two
resources are related but don't share the same lifecycle. For example, you can have a web app that
connects to a database in a different resource group.
When you delete a resource group, all resources in the resource group are also deleted. For information
about how Azure Resource Manager orchestrates those deletions, see Azure Resource Manager resource
group and resource deletion.
You can deploy up to 800 instances of a resource type in each resource group. Some resource types are
exempt from the 800 instance limit.
Some resources can exist outside of a resource group. These resources are deployed to the subscription,
management group, or tenant. Only specific resource types are supported at these scopes.
To create a resource group, you can use the portal, PowerShell, Azure CLI, or an Azure Resource Manager
(ARM) template.
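For example, creating a resource group with the Azure CLI is a single command (a sketch; the group name and location are placeholders, and the command requires an Azure subscription):

```shell
# Create a resource group. Its location determines where the resource
# metadata is stored, not where the resources in it must run.
az group create --name myResourceGroup --location eastus
```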
This article describes aspects of an Azure Resource Manager template that apply to virtual machines. This article
doesn’t describe a complete template for creating a virtual machine; for that you need resource definitions for
storage accounts, network interfaces, public IP addresses, and virtual networks. For more information about how
these resources can be defined together, see the Resource Manager template walkthrough.
There are many templates in the gallery that include the VM resource. Not all elements that can be included in a
template are described here.
This example shows a typical resource section of a template for creating a specified number of VMs:
"resources": [
{
"apiVersion": "2016-04-30-preview",
"type": "Microsoft.Compute/virtualMachines",
"name": "[concat('myVM', copyindex())]",
"location": "[resourceGroup().location]",
"copy": {
"name": "virtualMachineLoop",
"count": "[parameters('numberOfInstances')]"
},
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/myNIC', copyindex())]"
],
"properties": {
"hardwareProfile": {
"vmSize": "Standard_DS1"
},
"osProfile": {
"computername": "[concat('myVM', copyindex())]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]"
},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2012-R2-Datacenter",
"version": "latest"
},
"osDisk": {
"name": "[concat('myOSDisk', copyindex())]",
"caching": "ReadWrite",
"createOption": "FromImage"
},
"dataDisks": [
{
"name": "[concat('myDataDisk', copyindex())]",
"diskSizeGB": "100",
"lun": 0,
"createOption": "Empty"
}
]
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces',
concat('myNIC', copyindex()))]"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": "true",
"storageUri": "[concat('https://', variables('storageName'), '.blob.core.windows.net')]"
}
}
},
"resources": [
{
"name": "Microsoft.Insights.VMDiagnosticsSettings",
"type": "extensions",
"location": "[resourceGroup().location]",
"apiVersion": "2016-03-30",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/myVM', copyindex())]"
],
"properties": {
"publisher": "Microsoft.Azure.Diagnostics",
"type": "IaaSDiagnostics",
"typeHandlerVersion": "1.5",
"autoUpgradeMinorVersion": true,
"settings": {
"xmlCfg": "[base64(concat(variables('wadcfgxstart'),
variables('wadmetricsresourceid'),
concat('myVM', copyindex()),
variables('wadcfgxend')))]",
"storageAccount": "[variables('storageName')]"
},
"protectedSettings": {
"storageAccountName": "[variables('storageName')]",
"storageAccountKey": "[listkeys(variables('accountid'),
'2015-06-15').key1]",
"storageAccountEndPoint": "https://core.windows.net"
}
}
},
{
"name": "MyCustomScriptExtension",
"type": "extensions",
"apiVersion": "2016-03-30",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/myVM', copyindex())]"
],
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.7",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [
"[concat('https://', variables('storageName'),
'.blob.core.windows.net/customscripts/start.ps1')]"
],
"commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File start.ps1"
}
}
}
]
}
]
NOTE
This example relies on a storage account that was previously created. You could create the storage account by deploying it
from the template. The example also relies on a network interface and its dependent resources that would be defined in the
template. These resources are not shown in the example.
API Version
When you deploy resources using a template, you have to specify a version of the API to use. The example shows
the virtual machine resource using this apiVersion element:
"apiVersion": "2016-04-30-preview",
The version of the API you specify in your template affects which properties you can define in the template. In
general, you should select the most recent API version when creating templates. For existing templates, you can
decide whether you want to continue using an earlier API version, or update your template for the latest version to
take advantage of new features.
Use these opportunities for getting the latest API versions:
REST API - List all resource providers
PowerShell - Get-AzResourceProvider
Azure CLI - az provider show
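For example, with the Azure CLI you might query the API versions a resource type supports (a sketch; the JMESPath query shape is an assumption, and the command requires an Azure subscription):

```shell
# List the API versions available for Microsoft.Compute/virtualMachines.
az provider show \
  --namespace Microsoft.Compute \
  --query "resourceTypes[?resourceType=='virtualMachines'].apiVersions | [0]" \
  --output table
```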
"parameters": {
"adminUsername": { "type": "string" },
"adminPassword": { "type": "securestring" },
"numberOfInstances": { "type": "int" }
},
When you deploy the example template, you enter values for the name and password of the administrator account
on each VM and the number of VMs to create. You have the option of specifying parameter values in a separate file
that's managed with the template, or providing values when prompted.
Variables make it easy for you to set up values in the template that are used repeatedly throughout it or that can
change over time. This variables section is used in the example:
"variables": {
"storageName": "mystore1",
"accountid": "[concat('/subscriptions/', subscription().subscriptionId,
'/resourceGroups/', resourceGroup().name,
'/providers/','Microsoft.Storage/storageAccounts/', variables('storageName'))]",
"wadlogs": "<WadCfg>
<DiagnosticMonitorConfiguration overallQuotaInMB=\"4096\"
xmlns=\"http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration\">
<DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter=\"Error\"/>
<WindowsEventLog scheduledTransferPeriod=\"PT1M\" >
<DataSource name=\"Application!*[System[(Level = 1 or Level = 2)]]\" />
<DataSource name=\"Security!*[System[(Level = 1 or Level = 2)]]\" />
<DataSource name=\"System!*[System[(Level = 1 or Level = 2)]]\" />
</WindowsEventLog>",
"wadperfcounters": "<PerformanceCounters scheduledTransferPeriod=\"PT1M\">
<PerformanceCounterConfiguration counterSpecifier=\"\\Process(_Total)\\Thread Count\"
sampleRate=\"PT15S\" unit=\"Count\">
<annotation displayName=\"Threads\" locale=\"en-us\"/>
</PerformanceCounterConfiguration>
</PerformanceCounters>",
"wadcfgxstart": "[concat(variables('wadlogs'), variables('wadperfcounters'),
'<Metrics resourceId=\"')]",
"wadmetricsresourceid": "[concat('/subscriptions/', subscription().subscriptionId,
'/resourceGroups/', resourceGroup().name ,
'/providers/', 'Microsoft.Compute/virtualMachines/')]",
"wadcfgxend": "\"><MetricAggregation scheduledTransferPeriod=\"PT1H\"/>
<MetricAggregation scheduledTransferPeriod=\"PT1M\"/>
</Metrics></DiagnosticMonitorConfiguration>
</WadCfg>"
},
When you deploy the example template, variable values are used for the name and identifier of the previously
created storage account. Variables are also used to provide the settings for the diagnostic extension. Use the best
practices for creating Azure Resource Manager templates to help you decide how you want to structure the
parameters and variables in your template.
Resource loops
When you need more than one virtual machine for your application, you can use a copy element in a template. This
optional element loops through creating the number of VMs that you specified as a parameter:
"copy": {
"name": "virtualMachineLoop",
"count": "[parameters('numberOfInstances')]"
},
Also, notice in the example that the loop index is used when specifying some of the values for the resource.
Because copyindex() is zero-based, an instance count of three produces operating system disks named myOSDisk0,
myOSDisk1, and myOSDisk2:
"osDisk": {
"name": "[concat('myOSDisk', copyindex())]",
"caching": "ReadWrite",
"createOption": "FromImage"
}
NOTE
This example uses managed disks for the virtual machines.
Keep in mind that creating a loop for one resource in the template may require you to use the loop when creating
or accessing other resources. For example, multiple VMs can’t use the same network interface, so if your template
loops through creating three VMs it must also loop through creating three network interfaces. When assigning a
network interface to a VM, the loop index is used to identify it:
"networkInterfaces": [ {
"id": "[resourceId('Microsoft.Network/networkInterfaces',
concat('myNIC', copyindex()))]"
} ]
Dependencies
Most resources depend on other resources to work correctly. Virtual machines must be associated with a virtual
network, and to do that a VM needs a network interface. The dependsOn element is used to make sure that the network
interface is ready to be used before the VMs are created:
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', 'myNIC', copyindex())]"
],
Resource Manager deploys in parallel any resources that aren't dependent on another resource being deployed. Be
careful when setting dependencies because you can inadvertently slow your deployment by specifying
unnecessary dependencies. Dependencies can chain through multiple resources. For example, the network
interface depends on the public IP address and virtual network resources.
How do you know if a dependency is required? Look at the values you set in the template. If an element in the
virtual machine resource definition points to another resource that is deployed in the same template, you need a
dependency. For example, your example virtual machine defines a network profile:
"networkProfile": {
"networkInterfaces": [ {
"id": "[resourceId('Microsoft.Network/networkInterfaces',
concat('myNIC', copyindex()))]"
} ]
},
To set this property, the network interface must exist. Therefore, you need a dependency. You also need to set a
dependency when one resource (a child) is defined within another resource (a parent). For example, the diagnostic
settings and custom script extensions are both defined as child resources of the virtual machine. They can't be
created until the virtual machine exists. Therefore, both resources are marked as dependent on the virtual machine.
Profiles
Several profile elements are used when defining a virtual machine resource. Some are required and some are
optional. For example, the hardwareProfile, osProfile, storageProfile, and networkProfile elements are required, but
the diagnosticsProfile is optional. These profiles define settings such as:
size
name and credentials
disk and operating system settings
network interface
boot diagnostics
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2012-R2-Datacenter",
"version": "latest"
},
If you want to create a VM with a Linux operating system, you might use this definition:
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "14.04.2-LTS",
"version": "latest"
},
Configuration settings for the operating system disk are assigned with the osDisk element. The example defines a
new managed disk with the caching mode set to ReadWrite and specifies that the disk is created from a platform
image:
"osDisk": {
"name": "[concat('myOSDisk', copyindex())]",
"caching": "ReadWrite",
"createOption": "FromImage"
},
"osDisk": {
"osType": "Windows",
"managedDisk": {
"id": "[resourceId('Microsoft.Compute/disks', concat('myOSDisk', copyindex()))]"
},
"caching": "ReadWrite",
"createOption": "Attach"
},
"storageProfile": {
"imageReference": {
"id": "[resourceId('Microsoft.Compute/images', 'myImage')]"
},
"osDisk": {
"name": "[concat('myOSDisk', copyindex())]",
"osType": "Windows",
"caching": "ReadWrite",
"createOption": "FromImage"
}
},
"dataDisks": [
{
"name": "[concat('myDataDisk', copyindex())]",
"diskSizeGB": "100",
"lun": 0,
"caching": "ReadWrite",
"createOption": "Empty"
}
],
Extensions
Although extensions are a separate resource, they're closely tied to VMs. Extensions can be added as a child
resource of the VM or as a separate resource. The example shows the Diagnostics Extension being added to the
VMs:
{
"name": "Microsoft.Insights.VMDiagnosticsSettings",
"type": "extensions",
"location": "[resourceGroup().location]",
"apiVersion": "2016-03-30",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/myVM', copyindex())]"
],
"properties": {
"publisher": "Microsoft.Azure.Diagnostics",
"type": "IaaSDiagnostics",
"typeHandlerVersion": "1.5",
"autoUpgradeMinorVersion": true,
"settings": {
"xmlCfg": "[base64(concat(variables('wadcfgxstart'),
variables('wadmetricsresourceid'),
concat('myVM', copyindex()),
variables('wadcfgxend')))]",
"storageAccount": "[variables('storageName')]"
},
"protectedSettings": {
"storageAccountName": "[variables('storageName')]",
"storageAccountKey": "[listkeys(variables('accountid'),
'2015-06-15').key1]",
"storageAccountEndPoint": "https://core.windows.net"
}
}
},
This extension resource uses the storageName variable and the diagnostic variables to provide values. If you want
to change the data that is collected by this extension, you can add more performance counters to the
wadperfcounters variable. You could also choose to put the diagnostics data into a different storage account than
where the VM disks are stored.
There are many extensions that you can install on a VM, but the most useful is probably the Custom Script
Extension. In the example, a PowerShell script named start.ps1 runs on each VM when it first starts:
{
"name": "MyCustomScriptExtension",
"type": "extensions",
"apiVersion": "2016-03-30",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/myVM', copyindex())]"
],
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.7",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [
"[concat('https://', variables('storageName'),
'.blob.core.windows.net/customscripts/start.ps1')]"
],
"commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File start.ps1"
}
}
}
The start.ps1 script can accomplish many configuration tasks. For example, the data disks that are added to the
VMs in the example aren't initialized; you can use a custom script to initialize them. If you have multiple startup
tasks to do, you can use the start.ps1 file to call other PowerShell scripts in Azure storage. The example uses
PowerShell, but you can use any scripting method that is available on the operating system that you're using.
You can see the status of the installed extensions from the Extensions settings in the portal:
You can also get extension information by using the Get-AzVMExtension PowerShell command, the vm
extension get Azure CLI command, or the Get extension information REST API.
Deployments
When you deploy a template, Azure tracks the resources that you deployed as a group and automatically assigns a
name to this deployed group. The name of the deployment is the same as the name of the template.
If you're curious about the status of resources in the deployment, view the resource group in the Azure portal:
It’s not a problem to use the same template to create resources or to update existing resources. When you use
commands to deploy templates, you have the opportunity to say which mode you want to use. The mode can be
set to either Complete or Incremental. The default is to do incremental updates. Be careful when using
Complete mode because you may accidentally delete resources. When you set the mode to Complete, Resource
Manager deletes any resources in the resource group that aren't in the template.
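A template deployment with the Azure CLI can pin the mode explicitly (a sketch; the group name and template file are placeholders, and the command requires an Azure subscription):

```shell
# Incremental (the default) leaves resources not named in the template alone;
# Complete deletes anything in the group that the template doesn't declare.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --mode Incremental
```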
Next Steps
Create your own template using Authoring Azure Resource Manager templates.
Deploy the template that you created using Create a Windows virtual machine with a Resource Manager
template.
Learn how to manage the VMs that you created by reviewing Create and manage Windows VMs with the Azure
PowerShell module.
For the JSON syntax and properties of resource types in templates, see Azure Resource Manager template
reference.
Regions for virtual machines in Azure
11/2/2020
It is important to understand how and where your virtual machines (VMs) operate in Azure, along with your
options to maximize performance, availability, and redundancy. This article provides you with an overview of the
availability and redundancy features of Azure.
Region pairs
Each Azure region is paired with another region within the same geography (such as US, Europe, or Asia). This
approach allows for the replication of resources, such as VM storage, across a geography that should reduce the
likelihood of natural disasters, civil unrest, power outages, or physical network outages affecting both regions at
once. Additional advantages of region pairs include:
In the event of a wider Azure outage, one region is prioritized out of every pair to help reduce the time to
restore for applications.
Planned Azure updates are rolled out to paired regions one at a time to minimize downtime and risk of
application outage.
Data continues to reside within the same geography as its pair (except for Brazil South) for tax and law
enforcement jurisdiction purposes.
Examples of region pairs include:
PRIMARY              SECONDARY
West US              East US
Feature availability
Some services or VM features are only available in certain regions, such as specific VM sizes or storage types. There
are also some global Azure services that do not require you to select a particular region, such as Azure Active
Directory, Traffic Manager, or Azure DNS. To assist you in designing your application environment, you can check
the availability of Azure services across each region. You can also programmatically query the supported VM sizes
and restrictions in each region.
Storage availability
Understanding Azure regions and geographies becomes important when you consider the available storage
replication options. Depending on the storage type, you have different replication options.
Azure Managed Disks
Locally redundant storage (LRS)
Replicates your data three times within the region in which you created your storage account.
Storage account-based disks
Locally redundant storage (LRS)
Replicates your data three times within the region in which you created your storage account.
Zone redundant storage (ZRS)
Replicates your data three times across two to three facilities, either within a single region or across two
regions.
Geo-redundant storage (GRS)
Replicates your data to a secondary region that is hundreds of miles away from the primary region.
Read-access geo-redundant storage (RA-GRS)
Replicates your data to a secondary region, as with GRS, but also then provides read-only access to the
data in the secondary location.
The following table provides a quick overview of the differences between the storage replication types:
REPLICATION STRATEGY                                    LRS    ZRS    GRS    RA-GRS
Number of copies of data maintained on separate nodes   3      3      6      6
You can read more about Azure Storage replication options here. For more information about managed disks, see
Azure Managed Disks overview.
Storage costs
Prices vary depending on the storage type and availability that you select.
Azure Managed Disks
Premium Managed Disks are backed by Solid-State Drives (SSDs) and Standard Managed Disks are backed by
regular spinning disks. Both Premium and Standard Managed Disks are charged based on the provisioned
capacity for the disk.
Unmanaged disks
Premium storage is backed by Solid-State Drives (SSDs) and is charged based on the capacity of the disk.
Standard storage is backed by regular spinning disks and is charged based on the in-use capacity and desired
storage availability.
For RA-GRS, there is an additional Geo-Replication Data Transfer charge for the bandwidth of replicating
that data to another Azure region.
See Azure Storage Pricing for pricing information on the different storage types and availability options.
Next steps
For more information, see Azure regions.
Azure virtual machine extensions and features
11/2/2020
Extensions are small applications that provide post-deployment configuration and automation on Azure VMs. The
Azure platform hosts many extensions covering VM configuration, monitoring, security, and utility applications.
Publishers take an application, wrap it into an extension, and simplify the installation. All you need to do is
provide mandatory parameters.
Troubleshoot extensions
Troubleshooting information for each extension can be found in the Troubleshoot and support section in the
overview for the extension. Here is a list of the troubleshooting information available:
NAMESPACE | TROUBLESHOOTING
Next steps
For more information about how the Linux Agent and extensions work, see Azure VM extensions and features
for Linux.
For more information about how the Windows Guest Agent and extensions work, see Azure VM extensions
and features for Windows.
To install the Windows Guest Agent, see Azure Windows Virtual Machine Agent Overview.
To install the Linux Agent, see Azure Linux Virtual Machine Agent Overview.
Azure Virtual Machine Agent overview
11/2/2020 • 4 minutes to read • Edit Online
The Microsoft Azure Virtual Machine Agent (VM Agent) is a secure, lightweight process that manages virtual
machine (VM) interaction with the Azure Fabric Controller. The VM Agent has a primary role in enabling and
executing Azure virtual machine extensions. VM extensions enable post-deployment configuration of a VM, such as
installing and configuring software. VM extensions also enable recovery features such as resetting the
administrative password of a VM. Without the Azure VM Agent, VM extensions cannot be run.
This article details installation and detection of the Azure Virtual Machine Agent.
The following Resource Manager template snippet shows a Windows VM defined with agent provisioning disabled (provisionVmAgent set to false):
"resources": [{
    "name": "[parameters('virtualMachineName')]",
    "type": "Microsoft.Compute/virtualMachines",
    "apiVersion": "2016-04-30-preview",
    "location": "[parameters('location')]",
    "dependsOn": ["[concat('Microsoft.Network/networkInterfaces/', parameters('networkInterfaceName'))]"],
    "properties": {
        "osProfile": {
            "computerName": "[parameters('virtualMachineName')]",
            "adminUsername": "[parameters('adminUsername')]",
            "adminPassword": "[parameters('adminPassword')]",
            "windowsConfiguration": {
                "provisionVmAgent": "false"
            }
        }
    }
}]
If you do not have the agent installed, you cannot use some Azure services, such as Azure Backup or Azure
Security. These services require an extension to be installed. If you have deployed a VM without the Windows Guest
Agent (WinGA), you can install the latest version of the agent later.
Manual installation
The Windows VM agent can be manually installed with a Windows installer package. Manual installation may be
necessary when you create a custom VM image that is deployed to Azure. To manually install the Windows VM
Agent, download the VM Agent installer. The VM Agent is supported on Windows Server 2008 (64 bit) and later.
NOTE
It is important to update the AllowExtensionOperations option after manually installing the VM Agent on a VM that was
deployed from an image without ProvisionVMAgent enabled.
$vm = Get-AzVM -Name "myVM" -ResourceGroupName "myResourceGroup"
$vm.OSProfile.AllowExtensionOperations = $true
$vm | Update-AzVM
Prerequisites
The Windows VM Agent needs at least Windows Server 2008 SP2 (64-bit) to run, with the .NET Framework
4.0. See Minimum version support for virtual machine agents in Azure.
Ensure your VM has access to IP address 168.63.129.16. For more information, see What is IP address
168.63.129.16.
Ensure that DHCP is enabled inside the guest VM. This is required to get the host or fabric address from
DHCP for the IaaS VM Agent and extensions to work. If you need a static private IP, you should configure it
through the Azure portal or PowerShell, and make sure the DHCP option inside the VM is enabled. Learn
more about setting up a static IP address with PowerShell.
Get-AzVM
The following condensed example output shows the ProvisionVMAgent property nested inside OSProfile. This
property can be used to determine if the VM agent has been deployed to the VM:
OSProfile :
ComputerName : myVM
AdminUsername : myUserName
WindowsConfiguration :
ProvisionVMAgent : True
EnableAutomaticUpdates : True
The following script can be used to return a concise list of VM names and the state of the VM Agent:
$vms = Get-AzVM
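The one-line snippet above can be extended into a short report. The following sketch (the property path follows the condensed output shown earlier) lists each VM and whether the agent was provisioned:

```powershell
# List each VM with the state of its ProvisionVMAgent flag
$vms = Get-AzVM
foreach ($vm in $vms) {
    [PSCustomObject]@{
        Name             = $vm.Name
        ProvisionVMAgent = $vm.OSProfile.WindowsConfiguration.ProvisionVMAgent
    }
}
```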
Manual Detection
When logged in to a Windows VM, Task Manager can be used to examine running processes. To check for the Azure
VM Agent, open Task Manager, click the Details tab, and look for a process named WindowsAzureGuestAgent.exe.
The presence of this process indicates that the VM agent is installed.
Upgrade the VM Agent
The Azure VM Agent for Windows is automatically upgraded on images deployed from the Azure Marketplace. As
new VMs are deployed to Azure, they receive the latest VM agent at VM provision time. If you have installed the
agent manually or are deploying custom VM images, you will need to manually update the images to include the
new VM agent at image creation time.
For more information on Virtual Machine Scale Set certificates, see Virtual Machine Scale Sets - How do I remove
deprecated certificates?
Next steps
For more information about VM extensions, see Azure virtual machine extensions and features overview.
Virtual machine extensions and features for
Windows
11/2/2020 • 13 minutes to read • Edit Online
Azure virtual machine (VM) extensions are small applications that provide post-deployment configuration and
automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus
protection, or to run a script inside of it, a VM extension can be used. Azure VM extensions can be run with the
Azure CLI, PowerShell, Azure Resource Manager templates, and the Azure portal. Extensions can be bundled with
a new VM deployment, or run against any existing system.
This article provides an overview of VM extensions, prerequisites for using Azure VM extensions, and guidance
on how to detect, manage, and remove VM extensions. This article provides generalized information because
many VM extensions are available, each with a potentially unique configuration. Extension-specific details can be
found in each document specific to the individual extension.
Prerequisites
To handle the extension on the VM, you need the Azure Windows Agent installed. Some individual extensions
have prerequisites, such as access to resources or dependencies.
Azure VM agent
The Azure VM agent manages interactions between an Azure VM and the Azure fabric controller. The VM agent is
responsible for many functional aspects of deploying and managing Azure VMs, including running VM
extensions. The Azure VM agent is preinstalled on Azure Marketplace images, and can be installed manually on
supported operating systems. The Azure VM Agent for Windows is known as the Windows Guest agent.
For information on supported operating systems and installation instructions, see Azure virtual machine agent.
Supported agent versions
In order to provide the best possible experience, there are minimum versions of the agent. For more information,
see this article.
Supported OSes
The Windows Guest agent runs on multiple OSes; however, the extensions framework has a limit on the OSes
that extensions support. For more information, see this article.
Some extensions are not supported across all OSes and may emit Error Code 51, 'Unsupported OS'. Check the
individual extension documentation for supportability.
Network access
Extension packages are downloaded from the Azure Storage extension repository, and extension status uploads
are posted to Azure Storage. If you use a supported version of the agent, you do not need to allow access to Azure
Storage in the VM region, because the agent can redirect the communication to the Azure fabric controller for
agent communications (the HostGAPlugin feature through the privileged channel on private IP 168.63.129.16). If
you are on an unsupported version of the agent, you need to allow outbound access to Azure Storage in that
region from the VM.
IMPORTANT
If you have blocked access to 168.63.129.16 using the guest firewall or with a proxy, extensions fail irrespective of the
above. Ports 80, 443, and 32526 are required.
Agents can only be used to download extension packages and report status. For example, if an extension
install needs to download a script from GitHub (Custom Script) or needs access to Azure Storage (Azure Backup),
then additional firewall/Network Security Group ports need to be opened. Different extensions have different
requirements, since they are applications in their own right. For extensions that require access to Azure Storage
or Azure Active Directory, you can allow access using Azure NSG Service Tags to Storage or AzureActiveDirectory.
The Windows Guest Agent does not have proxy server support for you to redirect agent traffic requests through,
which means that the Windows Guest Agent will rely on your custom proxy (if you have one) to access resources
on the internet or on the Host through IP 168.63.129.16.
Discover VM extensions
Many different VM extensions are available for use with Azure VMs. To see a complete list, use Get-
AzVMExtensionImage. The following example lists all available extensions in the WestUS location:
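A sketch of that listing (the location name is an example) chains the image-publisher cmdlets together:

```powershell
# List all available extension types and versions in the West US location
Get-AzVMImagePublisher -Location "WestUS" |
    Get-AzVMExtensionImageType |
    Get-AzVMExtensionImage |
    Select-Object Type, Version
```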
Run VM extensions
Azure VM extensions run on existing VMs, which is useful when you need to make configuration changes or
recover connectivity on an already deployed VM. VM extensions can also be bundled with Azure Resource
Manager template deployments. By using extensions with Resource Manager templates, Azure VMs can be
deployed and configured without post-deployment intervention.
The following methods can be used to run an extension against an existing VM.
PowerShell
Several PowerShell commands exist for running individual extensions. To see a list, use Get-Command and filter
on Extension:
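A filter along these lines returns the extension-related cmdlets in the Az.Compute module:

```powershell
# Show cmdlets in Az.Compute whose names mention VM extensions
Get-Command Get-AzVM*Extension* -Module Az.Compute
```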
The following example uses the Custom Script extension to download a script from a GitHub repository onto the
target virtual machine and then run the script. For more information on the Custom Script extension, see Custom
Script extension overview.
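A sketch of that Custom Script run (the resource names and script URL are placeholders, not real endpoints):

```powershell
# Download a script from a GitHub repo onto the VM and run it
Set-AzVMCustomScriptExtension -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Name "customScript" `
    -Location "WestUS" `
    -FileUri "https://raw.githubusercontent.com/contoso/scripts/master/Create-File.ps1" `
    -Run "Create-File.ps1"
```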
In the following example, the VM Access extension is used to reset the administrative password of a Windows VM
to a temporary password. For more information on the VM Access extension, see Reset Remote Desktop service
in a Windows VM. Once you have run this, you should reset the password at first login:
$cred=Get-Credential
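Building on the credential prompt above, a call along these lines (resource names are placeholders) applies the VMAccess extension to reset the password:

```powershell
# Reset the local administrator credentials via the VMAccess extension
Set-AzVMAccessExtension -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Name "myVMAccess" `
    -Location "WestUS" `
    -UserName $cred.GetNetworkCredential().Username `
    -Password $cred.GetNetworkCredential().Password `
    -typeHandlerVersion "2.0"
```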
The Set-AzVMExtension command can be used to start any VM extension. For more information, see the Set-
AzVMExtension reference.
Azure portal
VM extensions can be applied to an existing VM through the Azure portal. Select the VM in the portal, choose
Extensions , then select Add . Choose the extension you want from the list of available extensions and follow the
instructions in the wizard.
The following example shows the installation of the Microsoft Antimalware extension from the Azure portal:
Azure Resource Manager templates
VM extensions can be added to an Azure Resource Manager template and executed with the deployment of the
template. When you deploy an extension with a template, you can create fully configured Azure deployments. For
example, the following JSON is taken from a Resource Manager template that deploys a set of load-balanced VMs and
an Azure SQL Database, then installs a .NET Core application on each VM. The VM extension takes care of the
software installation.
For more information, see the full Resource Manager template.
{
  "apiVersion": "2015-06-15",
  "type": "extensions",
  "name": "config-app",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'),copyindex())]",
    "[variables('musicstoresqlName')]"
  ],
  "tags": {
    "displayName": "config-app"
  },
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.9",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "https://raw.githubusercontent.com/Microsoft/dotnet-core-sample-templates/master/dotnet-core-music-windows/scripts/configure-music-app.ps1"
      ],
      "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File configure-music-app.ps1 -user ',parameters('adminUsername'),' -password ',parameters('adminPassword'),' -sqlserver ',variables('musicstoresqlName'),'.database.windows.net')]"
    }
  }
}
For more information on creating Resource Manager templates, see Authoring Azure Resource Manager
templates with Windows VM extensions.
Moving the commandToExecute property to the protected configuration secures the execution string, as
shown in the following example:
{
  "apiVersion": "2015-06-15",
  "type": "extensions",
  "name": "config-app",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'),copyindex())]",
    "[variables('musicstoresqlName')]"
  ],
  "tags": {
    "displayName": "config-app"
  },
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.9",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "https://raw.githubusercontent.com/Microsoft/dotnet-core-sample-templates/master/dotnet-core-music-windows/scripts/configure-music-app.ps1"
      ]
    },
    "protectedSettings": {
      "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File configure-music-app.ps1 -user ',parameters('adminUsername'),' -password ',parameters('adminPassword'),' -sqlserver ',variables('musicstoresqlName'),'.database.windows.net')]"
    }
  }
}
On an Azure IaaS VM that uses extensions, in the certificates console, you might see certificates that have the
subject Windows Azure CRP Cer tificate Generator . On a Classic RDFE VM, these certificates have the subject
name Windows Azure Ser vice Management for Extensions .
These certificates secure the communication between the VM and its host during the transfer of protected
settings (password, other credentials) used by extensions. The certificates are built by the Azure fabric controller
and passed to the VM Agent. If you stop and start the VM every day, a new certificate might be created by the
fabric controller. The certificate is stored in the computer's Personal certificates store. These certificates can be
deleted. The VM Agent re-creates certificates if needed.
How do agents and extensions get updated?
The agents and extensions share the same update mechanism, and some updates do not require additional
firewall rules.
When an update is available, it is only installed on the VM when there is a change to extensions or to other parts
of the VM model, such as:
Data disks
Extensions
Boot diagnostics container
Guest OS secrets
VM size
Network profile
Publishers make updates available to regions at different times, so it is possible to have VMs in different
regions running different versions.
Listing Extensions Deployed to a VM
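The listing named in the heading above can be sketched as follows (the resource group and VM names are placeholders):

```powershell
# Show publisher and type for every extension deployed to the VM
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
$vm.Extensions | Select-Object Publisher, VirtualMachineExtensionType
```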
Agent updates
The Windows Guest Agent only contains Extension Handling code, the Windows Provisioning code is separate.
You can uninstall the Windows Guest Agent. You cannot disable the automatic update of the Windows Guest
Agent.
The Extension Handling code is responsible for communicating with the Azure fabric, and handling the VM
extensions operations such as installs, reporting status, updating the individual extensions, and removing them.
Updates contain security fixes, bug fixes, and enhancements to the Extension Handling code.
To check what version you are running, see Detecting installed Windows Guest Agent.
Extension updates
When an extension update is available, the Windows Guest Agent downloads and upgrades the extension.
Automatic extension updates are either minor or hotfix. You can opt in to or out of minor extension updates
when you provision the extension. The following example shows how to automatically upgrade minor versions in
a Resource Manager template with "autoUpgradeMinorVersion": true:
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.9",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [
"https://raw.githubusercontent.com/Microsoft/dotnet-core-sample-templates/master/dotnet-core-music-
windows/scripts/configure-music-app.ps1"
]
},
To get the latest minor-release bug fixes, it is highly recommended that you always select auto-update in your
extension deployments. Hotfix updates that carry security or key bug fixes cannot be opted out of.
How to identify extension updates
Identifying if the extension is set with autoUpgradeMinorVersion on a VM
You can see from the VM model whether the extension was provisioned with autoUpgradeMinorVersion. To check, use
Get-AzVM and provide the resource group and VM name as follows:
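A call along these lines (resource group and VM name are placeholders) surfaces the extension view; condensed output like the following is returned:

```powershell
# Return the extensions provisioned on the VM, including upgrade settings
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
$vm.Extensions
```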
ForceUpdateTag :
Publisher : Microsoft.Compute
VirtualMachineExtensionType : CustomScriptExtension
TypeHandlerVersion : 1.9
AutoUpgradeMinorVersion : True
Agent permissions
To perform its tasks, the agent needs to run as Local System.
Troubleshoot VM extensions
Each VM extension may have troubleshooting steps specific to the extension. For example, when you use the
Custom Script extension, script execution details can be found locally on the VM where the extension was run.
Any extension-specific troubleshooting steps are detailed in extension-specific documentation.
The following troubleshooting steps apply to all VM extensions.
1. To check the Windows Guest Agent log, look at the activity from when your extension was being provisioned in
C:\WindowsAzure\Logs\WaAppAgent.log.
2. Check the actual extension logs for more details in C:\WindowsAzure\Logs\Plugins\<extensionName>.
3. Check extension-specific documentation troubleshooting sections for error codes, known issues, and so on.
4. Look at the system logs. Check for other operations that may have interfered with the extension, such as a
long running installation of another application that required exclusive package manager access.
Common reasons for extension failures
1. Extensions have 20 minutes to run (exceptions are the Custom Script, Chef, and DSC extensions, which have 90
minutes). If your deployment exceeds this time, it is marked as a timeout. This can be caused by low-resource
VMs, or by other VM configuration or startup tasks consuming large amounts of resources while the
extension is trying to provision.
2. Minimum prerequisites not met. Some extensions have dependencies on VM SKUs, such as HPC images.
Extensions may require certain networking access requirements, such as communicating to Azure Storage
or public services. Other examples could be access to package repositories, running out of disk space, or
security restrictions.
3. Exclusive package manager access. In some cases, you may encounter a long running VM configuration
and extension installation conflicting, where they both need exclusive access to the package manager.
View extension status
After a VM extension has been run against a VM, use Get-AzVM to return extension status. In the following example,
Substatuses[0] shows that extension provisioning succeeded, meaning the extension was successfully deployed to the
VM, while Substatuses[1] shows that the execution of the extension inside the VM failed.
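A status query along these lines (names are placeholders) returns output such as the condensed example that follows:

```powershell
# Include instance-view status, which carries per-extension substatuses
Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM" -Status
```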
Extensions[0] :
Name : CustomScriptExtension
Type : Microsoft.Compute.CustomScriptExtension
TypeHandlerVersion : 1.9
Substatuses[0] :
Code : ComponentStatus/StdOut/succeeded
Level : Info
DisplayStatus : Provisioning succeeded
Message : Windows PowerShell \nCopyright (C) Microsoft Corporation. All rights reserved.\n
Substatuses[1] :
Code : ComponentStatus/StdErr/succeeded
Level : Info
DisplayStatus : Provisioning succeeded
Message : The argument 'cseTest%20Scriptparam1.ps1' to the -File parameter does not exist.
Provide the path to an existing '.ps1' file as an argument to the
-File parameter.
Statuses[0] :
Code : ProvisioningState/failed/-196608
Level : Error
DisplayStatus : Provisioning failed
Message : Finished executing command
Extension execution status can also be found in the Azure portal. To view the status of an extension, select the VM,
choose Extensions , then select the desired extension.
Rerun VM extensions
There may be cases in which a VM extension needs to be rerun. You can rerun an extension by removing it, and
then rerunning the extension with an execution method of your choice. To remove an extension, use Remove-
AzVMExtension as follows:
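A removal sketch (the resource names are placeholders):

```powershell
# Remove the Custom Script extension from the VM
Remove-AzVMExtension -ResourceGroupName "myResourceGroup" -VMName "myVM" -Name "customScript"
```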
EXTENSION | DESCRIPTION | MORE INFORMATION
Custom Script Extension for Windows | Run scripts against an Azure virtual machine | Custom Script Extension for Windows
DSC Extension for Windows | PowerShell DSC (Desired State Configuration) Extension | DSC Extension for Windows
Azure VM Access Extension | Manage users and credentials | VM Access Extension for Linux
Next steps
For more information about VM extensions, see Azure virtual machine extensions and features overview.
Availability options for virtual machines in Azure
11/2/2020 • 3 minutes to read • Edit Online
This article provides you with an overview of the availability features of Azure virtual machines (VMs).
High availability
Workloads are typically spread across different virtual machines to gain high throughput and performance, and to
create redundancy in case a VM is impacted by an update or other event.
There are a few options that Azure provides to achieve high availability. First, let's talk about basic constructs.
Availability zones
Availability zones expand the level of control you have to maintain the availability of the applications and data on
your VMs. An Availability Zone is a physically separate zone, within an Azure region. There are three Availability
Zones per supported Azure region.
Each Availability Zone has a distinct power source, network, and cooling. By architecting your solutions to use
replicated VMs in zones, you can protect your apps and data from the loss of a datacenter. If one zone is
compromised, then replicated apps and data are instantly available in another zone.
Availability sets
An availability set is a logical grouping of VMs within a datacenter that allows Azure to understand how your
application is built to provide for redundancy and availability. We recommend that two or more VMs be created
within an availability set to provide for a highly available application and to meet the 99.95% Azure SLA. There is
no cost for the availability set itself; you only pay for each VM instance that you create. When a single VM is using
Azure premium SSDs, the Azure SLA applies for unplanned maintenance events.
In an availability set, VMs are automatically distributed across fault domains. This approach limits the impact
of potential physical hardware failures, network outages, or power interruptions.
For VMs using Azure Managed Disks, VMs are aligned with managed disk fault domains when using a managed
availability set. This alignment ensures that all the managed disks attached to a VM are within the same managed
disk fault domain.
Only VMs with managed disks can be created in a managed availability set. The number of managed disk fault
domains varies by region - either two or three managed disk fault domains per region. You can read more about
these managed disk fault domains for Linux VMs or Windows VMs.
VMs within an availability set are also automatically distributed across update domains.
Next steps
You can now start to use these availability and redundancy features to build your Azure environment. For best
practices information, see Azure availability best practices.
Co-locate resource for improved latency
11/2/2020 • 6 minutes to read • Edit Online
When deploying your application in Azure, spreading instances across regions or availability zones creates network
latency, which may impact the overall performance of your application.
Best practices
For the lowest latency, use proximity placement groups together with accelerated networking. For more
information, see Create a Linux virtual machine with Accelerated Networking or Create a Windows virtual
machine with Accelerated Networking.
Deploy all VM sizes in a single template. In order to avoid landing on hardware that doesn't support all the VM
SKUs and sizes you require, include all of the application tiers in a single template so that they will all be
deployed at the same time.
If you are scripting your deployment using PowerShell, CLI or the SDK, you may get an allocation error
OverconstrainedAllocationRequest . In this case, you should stop/deallocate all the existing VMs, and change the
sequence in the deployment script to begin with the VM SKU/sizes that failed.
When reusing an existing placement group from which VMs were deleted, wait for the deletion to fully complete
before adding VMs to it.
If latency is your first priority, put VMs in a proximity placement group and the entire solution in an availability
zone. But, if resiliency is your top priority, spread your instances across multiple availability zones (a single
proximity placement group cannot span zones).
Next steps
Deploy a VM to a proximity placement group using Azure PowerShell.
Learn how to test network latency.
Learn how to optimize network throughput.
Learn how to use proximity placement groups with SAP applications.
Optimize network throughput for Azure virtual
machines
11/2/2020 • 3 minutes to read • Edit Online
Azure virtual machines (VMs) have default network settings that can be further optimized for network
throughput. This article describes how to optimize network throughput for Microsoft Azure Windows and Linux
VMs, including major distributions such as Ubuntu, CentOS, and Red Hat.
Windows VM
If your Windows VM supports Accelerated Networking, enabling that feature would be the optimal
configuration for throughput. For all other Windows VMs, using Receive Side Scaling (RSS) can achieve higher
maximum throughput than a VM without RSS. RSS may be disabled by default in a Windows VM. To determine
whether RSS is enabled, and enable it if it's currently disabled, complete the following steps:
1. See if RSS is enabled for a network adapter with the Get-NetAdapterRss PowerShell command. In the
following example output returned from the Get-NetAdapterRss , RSS is not enabled.
Name : Ethernet
InterfaceDescription : Microsoft Hyper-V Network Adapter
Enabled : False
2. Enable RSS with the Enable-NetAdapterRss PowerShell command. This command does not produce output.
It changes NIC settings, causing temporary connectivity loss for about one minute. A Reconnecting dialog
box appears during the connectivity loss. Connectivity is typically restored after the third attempt.
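The enable step can be sketched as follows (this applies RSS to every adapter; it assumes an elevated PowerShell session on the VM):

```powershell
# Enable Receive Side Scaling on all network adapters
Get-NetAdapter | ForEach-Object { Enable-NetAdapterRss -Name $_.Name }
```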
3. Confirm that RSS is enabled in the VM by entering the Get-NetAdapterRss command again. If successful,
the following example output is returned:
Name : Ethernet
InterfaceDescription : Microsoft Hyper-V Network Adapter
Enabled : True
Linux VM
RSS is always enabled by default in an Azure Linux VM. Linux kernels released since October 2017 include new
network optimization options that enable a Linux VM to achieve higher network throughput.
Ubuntu for new deployments
The Ubuntu Azure kernel is the most optimized for network performance on Azure. To get the latest
optimizations, first install the latest supported version of 18.04-LTS, as follows:
"Publisher": "Canonical",
"Offer": "UbuntuServer",
"Sku": "18.04-LTS",
"Version": "latest"
After the creation is complete, enter the following commands to get the latest updates. These steps also work
for VMs currently running the Ubuntu Azure kernel.
The following optional command set may be helpful for existing Ubuntu deployments that already have the
Azure kernel but have failed to apply further updates with errors.
#optional steps may be helpful in existing deployments with the Azure kernel
#run as root or preface with sudo
apt-get -f install
apt-get --fix-missing install
apt-get clean
apt-get -y update
apt-get -y upgrade
apt-get -y dist-upgrade
If your VM does not have the Azure kernel (the version number usually begins with "4.4"), run the following
commands as root:
CentOS
In order to get the latest optimizations, it is best to create a VM with the latest supported version by specifying
the following parameters:
"Publisher": "OpenLogic",
"Offer": "CentOS",
"Sku": "7.7",
"Version": "latest"
New and existing VMs can benefit from installing the latest Linux Integration Services (LIS). The throughput
optimization is in LIS, starting from 4.2.2-2, although later versions contain further improvements. Enter the
following commands to install the latest LIS:
Red Hat
In order to get the optimizations, it is best to create a VM with the latest supported version by specifying the
following parameters:
"Publisher": "RedHat"
"Offer": "RHEL"
"Sku": "7-RAW"
"Version": "latest"
New and existing VMs can benefit from installing the latest Linux Integration Services (LIS). The throughput
optimization is in LIS, starting from 4.2. Enter the following commands to download and install LIS:
wget https://aka.ms/lis
tar xvf lis
cd LISISO
sudo ./install.sh #or upgrade.sh if prior LIS was previously installed
Learn more about Linux Integration Services Version 4.2 for Hyper-V by viewing the download page.
Next steps
Deploy VMs close to each other for low latency with Proximity Placement Group
See the optimized result with Bandwidth/Throughput testing Azure VM for your scenario.
Read about how bandwidth is allocated to virtual machines
Learn more with Azure Virtual Network frequently asked questions (FAQ)
Sizes for virtual machines in Azure
11/2/2020 • 2 minutes to read • Edit Online
This article describes the available sizes and options for the Azure virtual machines you can use to run your
apps and workloads. It also provides deployment considerations to be aware of when you're planning to use
these resources.
TYPE | SIZES | DESCRIPTION
General purpose | B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dv4, Dsv4, Ddv4, Ddsv4 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers.
Memory optimized | Esv3, Ev3, Easv4, Eav4, Ev4, Esv4, Edv4, Edsv4, Mv2, M, DSv2, Dv2 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics.
High performance compute | HB, HBv2, HC, H | Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA).
For information about pricing of the various sizes, see the pricing pages for Linux or Windows.
For availability of VM sizes in Azure regions, see Products available by region.
To see general limits on Azure VMs, see Azure subscription and service limits, quotas, and constraints.
For more information on how Azure names its VMs, see Azure virtual machine sizes naming conventions.
REST API
For information on using the REST API to query for VM sizes, see the following:
List available virtual machine sizes for resizing
List available virtual machine sizes for a subscription
List available virtual machine sizes in an availability set
ACU
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Benchmark scores
Learn more about compute performance for Linux VMs using the CoreMark benchmark scores.
Learn more about compute performance for Windows VMs using the SPECInt benchmark scores.
Manage costs
Azure services cost money. Azure Cost Management helps you set budgets and configure alerts to keep
spending under control. Analyze, manage, and optimize your Azure costs with Cost Management. To learn
more, see the quickstart on analyzing your costs.
Next steps
Learn more about the different VM sizes that are available:
General purpose
Compute optimized
Memory optimized
Storage optimized
GPU
High performance compute
Check the Previous generation page for A Standard, Dv1 (D1-4 and D11-14 v1), and A8-A11 series
Support for generation 2 VMs on Azure
11/2/2020 • 6 minutes to read • Edit Online
Support for generation 2 virtual machines (VMs) is now available on Azure. You can't change a virtual
machine's generation after you've created it, so review the considerations on this page before you choose a
generation.
Generation 2 VMs support key features that aren't supported in generation 1 VMs. These features include
increased memory, Intel Software Guard Extensions (Intel SGX), and virtualized persistent memory
(vPMEM). Generation 2 VMs running on-premises have some features that aren't supported in Azure yet.
For more information, see the Features and capabilities section.
Generation 2 VMs use the new UEFI-based boot architecture rather than the BIOS-based architecture used
by generation 1 VMs. Compared to generation 1 VMs, generation 2 VMs might have improved boot and
installation times. For an overview of generation 2 VMs and some of the differences between generation 1
and generation 2, see Should I create a generation 1 or 2 virtual machine in Hyper-V?.
Generation 2 VM sizes
Generation 1 VMs are supported by all VM sizes in Azure (except for Mv2-series VMs). Azure now offers
generation 2 support for the following selected VM series:
B-series
DCsv2-series
DSv2-series
Dsv3-series
Dsv4-series
Dasv4-series
Ddsv4-series
Esv3-series
Easv4-series
Fsv2-series
GS-series
HB-series
HC-series
Ls-series
Lsv2-series
M-series
Mv2-series1
NCv2-series
NCv3-series
ND-series
NVv3-series
1 Mv2-series does not support generation 1 VM images and supports only a subset of generation 2 images.
See the Mv2-series documentation for details.
Generation 2 VM images in Azure Marketplace
Generation 2 VMs support the following Marketplace images:
Windows Server 2019, 2016, 2012 R2, 2012
Windows 10 Pro, Windows 10 Enterprise
SUSE Linux Enterprise Server 15 SP1
SUSE Linux Enterprise Server 12 SP4
Ubuntu Server 16.04, 18.04, 19.04, 19.10
RHEL 8.1, 8.0, 7.7, 7.6, 7.5, 7.4, 7.0
CentOS 8.1, 8.0, 7.7, 7.6, 7.5, 7.4
Oracle Linux 7.7, 7.7-CI
NOTE
Specific VM sizes, such as the Mv2-series, might support only a subset of these images. See the
relevant VM size documentation for complete details.
On-premises Hyper-V features not yet supported in Azure:

FEATURE | ON-PREMISES HYPER-V (GEN 2) | AZURE (GEN 2)
Secure boot | ✔ | ❌
Shielded VM | ✔ | ❌
vTPM | ✔ | ❌
VHDX format | ✔ | ❌

CAPABILITY | GENERATION 1 | GENERATION 2
OS disk > 2 TB | ❌ | ✔
Custom disk/image/swap OS | ✔ | ✔
Backup/restore | ✔ | ✔
Creating a generation 2 VM
Marketplace image
In the Azure portal or Azure CLI, you can create generation 2 VMs from a Marketplace image that supports
UEFI boot.
Azure portal
Below are the steps to create a generation 2 (Gen2) VM in the Azure portal.
1. Sign in to the Azure portal at https://portal.azure.com.
2. Select Create a resource.
3. Click See all from Azure Marketplace on the left.
4. Select an image that supports Gen2.
5. Click Create.
6. In the Advanced tab, under the VM generation section, select the Gen 2 option.
7. In the Basics tab, under Instance details, go to Size and open the Select a VM size blade.
8. Select a supported generation 2 VM size.
9. Go through the rest of the pages to finish creating the VM.
PowerShell
You can also use PowerShell to create a VM by directly referencing the generation 1 or generation 2 SKU.
For example, use the following PowerShell cmdlet to get a list of the SKUs in the WindowsServer offer.
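A sketch of that query with the Az PowerShell module (the publisher and offer names follow the example in the text; the location is an assumption, and the cmdlet requires a signed-in Azure session):

```powershell
# List the SKUs the MicrosoftWindowsServer publisher offers in a region
Get-AzVMImageSku -Location "West US 2" `
    -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" |
    Select-Object Skus
```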
If you're creating a VM with Windows Server 2012 as the OS, then you will select either the generation 1
(BIOS) or generation 2 (UEFI) VM SKU, which looks like this:
2012-Datacenter
2012-datacenter-gensecond
See the Features and capabilities section for a current list of supported Marketplace images.
Azure CLI
Alternatively, you can use the Azure CLI to see the available generation 2 images, listed by publisher.
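One way to sketch that query (assuming a signed-in Azure CLI; the publisher filter is an example, and `--sku gensecond` relies on the CLI's partial-match filtering):

```
az vm image list --all --output table \
    --publisher MicrosoftWindowsServer --sku gensecond
```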
3. Once the disk is available, create a VM by attaching this disk. The VM created will be a
generation 2 VM. When the generation 2 VM is created, you can optionally generalize the
image of this VM. By generalizing the image, you can use it to create multiple VMs.
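The create-by-attaching step can be sketched with the Azure CLI (the resource group, VM, and disk names are placeholders):

```
# Create a VM from an existing managed OS disk; the disk's source
# determines whether the resulting VM boots as generation 2
az vm create --resource-group myResourceGroup --name myGen2VM \
    --attach-os-disk myGen2OSDisk --os-type linux
```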
How do I increase the OS disk size?
OS disks larger than 2 TiB are new to generation 2 VMs. By default, OS disks are smaller than 2 TiB
for generation 2 VMs. You can increase the disk size up to a recommended maximum of 4 TiB. Use
the Azure CLI or the Azure portal to increase the OS disk size. For information about how to expand
disks programmatically, see Resize a disk for Windows or Linux.
To increase the OS disk size from the Azure portal:
1. In the Azure portal, go to the VM properties page.
2. To shut down and deallocate the VM, select the Stop button.
3. In the Disks section, select the OS disk you want to increase.
4. In the Disks section, select Configuration, and update the Size to the value you want.
5. Go back to the VM properties page and Start the VM.
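The same resize can be sketched with the Azure CLI (names are placeholders; the VM must be deallocated before the disk can grow):

```
az vm deallocate --resource-group myResourceGroup --name myVM
az disk update --resource-group myResourceGroup --name myOSDisk --size-gb 4096
az vm start --resource-group myResourceGroup --name myVM
```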
You might see a warning for OS disks larger than 2 TiB. The warning doesn't apply to generation 2
VMs. However, OS disk sizes larger than 4 TiB are not supported.
Do generation 2 VMs support accelerated networking?
Yes. For more information, see Create a VM with accelerated networking.
Do generation 2 VMs support Secure Boot or vTPM in Azure?
No. Neither generation 1 nor generation 2 VMs in Azure support Secure Boot or vTPM.
Is VHDX supported on generation 2?
No, generation 2 VMs support only VHD.
Do generation 2 VMs support Azure Ultra Disk Storage?
Yes.
Can I migrate a VM from generation 1 to generation 2?
No, you can't change the generation of a VM after you create it. If you need to switch between VM
generations, create a new VM of a different generation.
Why is my VM size not enabled in the size selector when I try to create a Gen2 VM?
This may be solved by doing the following:
1. Verify that the VM generation property is set to Gen 2 in the Advanced tab.
2. Verify that you are searching for a VM size that supports Gen2 VMs.
Next steps
Learn about generation 2 virtual machines in Hyper-V.
General purpose virtual machine sizes
11/2/2020 • 3 minutes to read
General purpose VM sizes provide a balanced CPU-to-memory ratio. They are ideal for testing and
development, small to medium databases, and low to medium traffic web servers. This article provides
information about the offerings for general purpose computing.
The Av2-series VMs can be deployed on a variety of hardware types and processors. A-series VMs
have CPU performance and memory configurations best suited for entry level workloads like
development and test. The size is throttled, based upon the hardware, to offer consistent processor
performance for the running instance, regardless of the hardware it is deployed on. To determine the
physical hardware on which this size is deployed, query the virtual hardware from within the Virtual
Machine. Example use cases include development and test servers, low traffic web servers, small to
medium databases, proof-of-concepts, and code repositories.
NOTE
The A8 – A11 VMs are planned for retirement in March 2021. For more information, see the HPC Migration Guide.
B-series burstable VMs are ideal for workloads that do not need the full performance of the CPU
continuously, like web servers, small databases and development and test environments. These
workloads typically have burstable performance requirements. The B-series lets you purchase a VM size
with a price-conscious baseline performance that allows the VM instance to build up credits when it uses
less than its baseline performance. When the VM has accumulated credit, it can burst above its baseline,
using up to 100% of the CPU when your application requires higher CPU performance.
Dav4 and Dasv4-series are new sizes using AMD's 2.35 GHz EPYC™ 7452 processor in a multi-threaded
configuration with up to 256 MB of L3 cache, dedicating 8 MB of that cache to every 8 cores and
increasing customer options for running general purpose workloads. The Dav4-series and Dasv4-series
have the same memory and disk configurations as the D & Dsv3-series.
Dv4 and Dsv4-series The Dv4 and Dsv4-series run on the Intel® Xeon® Platinum 8272CL (Cascade
Lake) processors in a hyper-threaded configuration, providing a better value proposition for most
general-purpose workloads. They feature an all core Turbo clock speed of 3.4 GHz.
Ddv4 and Ddsv4-series The Ddv4 and Ddsv4-series run on the Intel® Xeon® Platinum 8272CL
(Cascade Lake) processors in a hyper-threaded configuration, providing a better value proposition for
most general-purpose workloads. They feature an all core Turbo clock speed of 3.4 GHz, Intel® Turbo
Boost Technology 2.0, Intel® Hyper-Threading Technology and Intel® Advanced Vector Extensions
512 (Intel® AVX-512). They also support Intel® Deep Learning Boost. These new VM sizes have
50% larger local storage, as well as better local disk IOPS for both read and write compared to the
Dv3/Dsv3 sizes with Gen2 VMs.
Dv3 and Dsv3-series VMs run on 2nd Generation Intel® Xeon® Platinum 8272CL (Cascade Lake),
Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the
Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors in a hyper-threaded configuration,
providing a better value proposition for most general purpose workloads. Memory has been expanded
(from ~3.5 GiB/vCPU to 4 GiB/vCPU) while disk and network limits have been adjusted on a per core
basis to align with the move to hyperthreading. The Dv3-series no longer has the high memory VM
sizes of the D/Dv2-series; those have been moved to the memory optimized Ev3 and Esv3-series.
Dv2 and Dsv2-series VMs, a follow-on to the original D-series, feature a more powerful CPU and an
optimal CPU-to-memory configuration, making them suitable for most production workloads. The
Dv2-series is about 35% faster than the D-series. Dv2-series run on 2nd Generation Intel® Xeon®
Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake), Intel® Xeon® E5-2673
v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors with the Intel
Turbo Boost Technology 2.0. The Dv2-series has the same memory and disk configurations as the D-
series.
The DCv2-series can help protect the confidentiality and integrity of your data and code while it’s
processed in the public cloud. These machines are backed by the latest generation of the Intel® Xeon®
E-2288G processor with SGX technology. With Intel Turbo Boost Technology, these machines can go
up to 5.0 GHz. DCv2 series instances enable customers to build secure enclave-based applications to
protect their code and data while it’s in use.
Other sizes
Compute optimized
Memory optimized
Storage optimized
GPU optimized
High performance compute
Previous generations
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across
Azure SKUs.
For more information on how Azure names its VMs, see Azure virtual machine sizes naming conventions.
Av2-series
11/2/2020 • 2 minutes to read
The Av2-series VMs can be deployed on a variety of hardware types and processors. Av2-series VMs have CPU
performance and memory configurations best suited for entry level workloads like development and test. The size
is throttled to offer consistent processor performance for the running instance, regardless of the hardware it is
deployed on. To determine the physical hardware on which this size is deployed, query the virtual hardware from
within the Virtual Machine. Some example use cases include development and test servers, low traffic web servers,
small to medium databases, proof-of-concepts, and code repositories.
ACU: 100
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1
SIZE | VCORE | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX DATA DISKS / THROUGHPUT: IOPS | MAX NICS | EXPECTED NETWORK BANDWIDTH (MBPS)
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
B-series burstable virtual machine sizes
11/2/2020 • 6 minutes to read
The B-series VMs are ideal for workloads that do not need the full performance of the CPU continuously, like web
servers, proof of concepts, small databases and development build environments. These workloads typically have
burstable performance requirements. The B-series provides you with the ability to purchase a VM size with
baseline performance that can build up credits when it is using less than its baseline. When the VM has
accumulated credits, the VM can burst above the baseline using up to 100% of the vCPU when your application
requires higher CPU performance.
The B-series comes in the following VM sizes:
Azure Compute Unit (ACU): Varies*
Premium Storage: Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
*B-series VMs are burstable and thus ACU numbers will vary depending on workloads and core usage.
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | BASE CPU PERF OF VM | MAX CPU PERF OF VM | INITIAL CREDITS | CREDITS BANKED/HOUR | MAX BANKED CREDITS | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS/MBPS | MAX UNCACHED DISK THROUGHPUT: IOPS/MBPS | MAX NICS
Workload example
Consider an office check-in/out application. The application needs CPU bursts during business hours, but not a lot
of computing power during off hours. In this example, the workload requires a 16-vCPU virtual machine
with 64 GiB of RAM to work efficiently.
The table shows the hourly traffic data and the chart is a visual representation of that traffic.
B16 characteristics:
Max CPU perf: 16 vCPU * 100% = 1600%
Baseline: 270%
SCENARIO | TIME | CPU USAGE (%) | CREDITS ACCUMULATED1 | CREDITS AVAILABLE
For a D16s_v3, which has 16 vCPUs and 64 GiB of memory, the hourly rate is $0.936 ($673.92 monthly);
for a B16ms with 16 vCPUs and 64 GiB of memory, the rate is $0.794 per hour ($547.86 monthly). This
results in 15% savings!
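The 15% figure follows directly from the two hourly rates; as a quick check:

```shell
# Percentage saved by choosing B16ms ($0.794/hr) over D16s_v3 ($0.936/hr)
awk -v d=0.936 -v b=0.794 'BEGIN { printf "savings: %.0f%%\n", (d - b) / d * 100 }'
# prints: savings: 15%
```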
Q&A
Q: What happens when my credits run out?
A : When the credits are exhausted, the VM returns to the baseline performance.
Q: How do you get 135% baseline performance from a VM?
A : The 135% is shared amongst the 8 vCPUs that make up the VM size. For example, if your application uses 4 of
the 8 cores for batch processing, with each of those 4 vCPUs running at 30% utilization, the total VM CPU
performance would equal 120%. The VM would then build credit time based on the 15% delta from your
baseline performance. It also means that when you have credits available, the same VM can use 100% of all
8 vCPUs, giving it a max CPU performance of 800%.
Q: How can I monitor my credit balance and consumption?
A : The Credit metric lets you view how many credits your VM has banked, and the ConsumedCredit metric
shows how many CPU credits your VM has consumed from the bank. You can view these metrics from the
metrics pane in the portal or programmatically through the Azure Monitor APIs.
For more information on how to access the metrics data for Azure, see Overview of metrics in Microsoft Azure.
Q: How are credits accumulated and consumed?
A : The VM accumulation and consumption rates are set such that a VM running at exactly its base performance
level will have neither a net accumulation nor a net consumption of bursting credits. A VM will have a net increase in
credits whenever it is running below its base performance level and will have a net decrease in credits whenever
the VM is utilizing the CPU more than its base performance level.
Example : I deploy a VM using the B1ms size for my small time and attendance database application. This size
allows my application to use up to 20% of a vCPU as my baseline, which is 0.2 credits per minute I can use or bank.
My application is busy at the beginning and end of my employees work day, between 7:00-9:00 AM and 4:00 -
6:00PM. During the other 20 hours of the day, my application is typically at idle, only using 10% of the vCPU. For
the non-peak hours, I earn 0.2 credits per minute but only consume 0.1 credits per minute, so my VM will bank 0.1
x 60 = 6 credits per hour. For the 20 hours that I am off-peak, I will bank 120 credits.
During peak hours my application averages 60% vCPU utilization, I still earn 0.2 credits per minute but I consume
0.6 credits per minute, for a net cost of 0.4 credits a minute or 0.4 x 60 = 24 credits per hour. I have 4 hours per
day of peak usage, so it costs 4 x 24 = 96 credits for my peak usage.
If I take the 120 credits I earned off-peak and subtract the 96 credits I used for my peak times, I bank an additional
24 credits per day that I can use for other bursts of activity.
Q: How can I calculate credits accumulated and used?
A : You can use the following formula:
(Base CPU perf of VM - CPU Usage) / 100 = Credits bank or use per minute
For example, in the instance above, your baseline is 20%; if you use 10% of the CPU, you are accumulating
(20% - 10%)/100 = 0.1 credits per minute.
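The formula can be checked in a couple of lines (the 20%/10% values are the B1ms example above):

```shell
base=20   # baseline CPU perf of the VM, in percent
usage=10  # current CPU usage, in percent
# Positive result = credits banked per minute; negative = credits spent
awk -v b="$base" -v u="$usage" 'BEGIN { printf "%.1f\n", (b - u) / 100 }'
# prints: 0.1
```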
Q: Does the B -Series support Premium Storage data disks?
A : Yes, all B-Series sizes support Premium Storage data disks.
Q: Why is my remaining credit set to 0 after a redeploy or a stop/start?
A : When a VM is redeployed and moves to another node, the accumulated credit is lost. If the VM is
stopped/started but remains on the same node, the VM retains the accumulated credit. Whenever the VM starts
fresh on a node, it gets an initial credit; for Standard_B8ms it is 240.
Q: What happens if I deploy an unsupported OS image on B1ls?
A : B1ls supports only Linux images; if you deploy any other OS image, you might not get the best customer
experience.
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
DCsv2-series
11/2/2020 • 2 minutes to read
The DCsv2-series can help protect the confidentiality and integrity of your data and code while it’s processed in the
public cloud. These machines are backed by the latest generation of the Intel® Xeon® E-2288G processor with SGX
technology. With Intel Turbo Boost Technology, these machines can go up to 5.0 GHz. DCsv2 series instances
enable customers to build secure enclave-based applications to protect their code and data while it’s in use.
Example use cases include: confidential multiparty data sharing, fraud detection, anti-money laundering,
blockchain, confidential usage analytics, intelligence analysis and confidential machine learning.
Premium Storage: Supported*
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
*Except for Standard_DC8_v2
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS/MBPS (CACHE SIZE IN GIB) | MAX NICS / EXPECTED NETWORK BANDWIDTH | EPC MEMORY (MIB)
Standard_DC1s_v2 | 1 | 4 | 50 | 1 | 2000/16 | 2 | 28
DCsv2-series VMs are generation 2 VMs and only support Gen2 images.
Currently available in the regions listed here.
Previous generation of Confidential Compute VMs: DC series
Create DCsv2 VMs using the Azure portal or Azure Marketplace
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Dv2 and DSv2-series
11/2/2020 • 3 minutes to read
The Dv2 and Dsv2-series, a follow-on to the original D-series, feature a more powerful CPU and optimal CPU-to-
memory configuration making them suitable for most production workloads. The Dv2-series is about 35% faster
than the D-series. Dv2-series run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M
2.1GHz (Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz
(Haswell) processors with the Intel Turbo Boost Technology 2.0. The Dv2-series has the same memory and disk
configurations as the D-series.
Dv2-series
Dv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz
(Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or Intel® Xeon® E5-2673 v3 2.4 GHz
(Haswell) processors with Intel Turbo Boost Technology 2.0.
ACU: 210-250
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX DATA DISKS / THROUGHPUT: IOPS | MAX NICS | EXPECTED NETWORK BANDWIDTH (MBPS)
DSv2-series
DSv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz
(Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or Intel® Xeon® E5-2673 v3 2.4 GHz
(Haswell) processors with Intel Turbo Boost Technology 2.0, and use premium storage.
ACU: 210-250
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS/MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS/MBPS | MAX NICS | EXPECTED NETWORK BANDWIDTH (MBPS)
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Dv3 and Dsv3-series
11/2/2020 • 4 minutes to read
The Dv3-series run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz
(Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell)
processors in a hyper-threaded configuration, providing a better value proposition for most general purpose
workloads. Memory has been expanded (from ~3.5 GiB/vCPU to 4 GiB/vCPU) while disk and network limits have
been adjusted on a per core basis to align with the move to hyperthreading. The Dv3-series no longer has the
high memory VM sizes of the D/Dv2-series; those have been moved to the memory optimized Ev3 and Esv3-series.
Example D-series use cases include enterprise-grade applications, relational databases, in-memory caching, and
analytics.
Dv3-series
Dv3-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz
(Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell)
processors with Intel® Turbo Boost Technology 2.0. The Dv3-series sizes offer a combination of vCPU, memory,
and temporary storage for most production workloads.
Data disk storage is billed separately from virtual machines. To use premium storage disks, use the Dsv3 sizes. The
pricing and billing meters for Dsv3 sizes are the same as Dv3-series.
Dv3-series VMs feature Intel® Hyper-Threading Technology.
ACU: 160-190
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX DATA DISKS | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX NICS / NETWORK BANDWIDTH
Dsv3-series
Dsv3-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz
(Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or the Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell)
processors with Intel® Turbo Boost Technology 2.0 and use premium storage. The Dsv3-series sizes offer a
combination of vCPU, memory, and temporary storage for most production workloads.
Dsv3-series VMs feature Intel® Hyper-Threading Technology.
ACU: 160-190
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS/MBPS (CACHE SIZE IN GIB)1 | MAX BURST CACHED AND TEMP STORAGE THROUGHPUT: IOPS/MBPS | MAX UNCACHED DISK THROUGHPUT: IOPS/MBPS | MAX BURST UNCACHED DISK THROUGHPUT: IOPS/MBPS1 | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
1 Dsv3-series VMs can burst their disk performance and get up to their bursting max for up to 30 minutes at a
time.
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Dav4 and Dasv4-series
11/2/2020 • 3 minutes to read
The Dav4-series and Dasv4-series are new sizes using AMD's 2.35 GHz EPYC™ 7452 processor in a
multi-threaded configuration with up to 256 MB of L3 cache, dedicating 8 MB of that cache to every 8 cores
and increasing customer options for running general purpose workloads. The Dav4-series and Dasv4-series
have the same memory and disk configurations as the D & Dsv3-series.
Dav4-series
ACU: 230-260
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1
Dav4-series sizes are based on the 2.35 GHz AMD EPYC™ 7452 processor that can achieve a boosted maximum
frequency of 3.35 GHz. The Dav4-series sizes offer a combination of vCPU, memory and temporary storage for
most production workloads. Data disk storage is billed separately from virtual machines. To use premium SSD, use
the Dasv4 sizes. The pricing and billing meters for Dasv4 sizes are the same as the Dav4-series.
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX DATA DISKS | MAX TEMP STORAGE THROUGHPUT: IOPS / READ MBPS / WRITE MBPS | MAX NICS | EXPECTED NETWORK BANDWIDTH (MBPS)
Dasv4-series
ACU: 230-260
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
Dasv4-series sizes are based on the 2.35 GHz AMD EPYC™ 7452 processor that can achieve a boosted maximum
frequency of 3.35 GHz and use premium SSD. The Dasv4-series sizes offer a combination of vCPU, memory and
temporary storage for most production workloads.
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS/MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS/MBPS | MAX NICS | EXPECTED NETWORK BANDWIDTH (MBPS)
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Ddv4 and Ddsv4-series
11/2/2020 • 4 minutes to read
The Ddv4 and Ddsv4-series run on the Intel® Xeon® Platinum 8272CL (Cascade Lake) processors in a
hyper-threaded configuration, providing a better value proposition for most general-purpose workloads. They
feature an all core Turbo clock speed of 3.4 GHz, Intel® Turbo Boost Technology 2.0, Intel® Hyper-Threading
Technology and Intel® Advanced Vector Extensions 512 (Intel® AVX-512). They also support Intel® Deep
Learning Boost. These new VM sizes have 50% larger local storage, as well as better local disk IOPS for both
read and write compared to the Dv3/Dsv3 sizes with Gen2 VMs.
D-series use cases include enterprise-grade applications, relational databases, in-memory caching, and analytics.
Ddv4-series
Ddv4-series sizes run on the Intel® Xeon® Platinum 8272CL (Cascade Lake). The Ddv4-series offer a
combination of vCPU, memory and temporary disk for most production workloads.
The new Ddv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB) and are designed for applications
that benefit from low latency, high-speed local storage, such as applications that require fast reads/writes to temp
storage or that need temp storage for caches or temporary files. You can attach Standard SSDs and Standard
HDDs storage to the Ddv4 VMs. Remote Data disk storage is billed separately from virtual machines.
ACU: 195-210
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS/MBPS | MAX NICS | EXPECTED NETWORK BANDWIDTH (MBPS)
Ddsv4-series
Ddsv4-series run on the Intel® Xeon® Platinum 8272CL (Cascade Lake). The Ddsv4-series offer a combination
of vCPU, memory and temporary disk for most production workloads.
The new Ddsv4 VM sizes include fast, larger local SSD storage (up to 2,400 GiB) and are designed for applications
that benefit from low latency, high-speed local storage, such as applications that require fast reads/writes to temp
storage or that need temp storage for caches or temporary files.
NOTE
The pricing and billing meters for Ddsv4 sizes are the same as Ddv4-series.
ACU: 195-210
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT: IOPS/MBPS (CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT: IOPS/MBPS | MAX NICS | EXPECTED NETWORK BANDWIDTH (MBPS)
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Dv4 and Dsv4-series
11/2/2020 • 2 minutes to read
The Dv4 and Dsv4-series run on the Intel® Xeon® Platinum 8272CL (Cascade Lake) processors in a
hyper-threaded configuration, providing a better value proposition for most general-purpose workloads. They
feature an all core Turbo clock speed of 3.4 GHz.
NOTE
For frequently asked questions, refer to Azure VM sizes with no local temp disk.
Dv4-series
Dv4-series sizes run on the Intel® Xeon® Platinum 8272CL (Cascade Lake). The Dv4-series sizes offer a
combination of vCPU, memory and remote storage options for most production workloads. Dv4-series VMs
feature Intel® Hyper-Threading Technology.
Remote Data disk storage is billed separately from virtual machines. To use premium storage disks, use the Dsv4
sizes. The pricing and billing meters for Dsv4 sizes are the same as Dv4-series.
ACU: 195-210
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX DATA DISKS | MAX NICS | EXPECTED NETWORK BANDWIDTH (MBPS)
Dsv4-series
Dsv4-series sizes run on the Intel® Xeon® Platinum 8272CL (Cascade Lake). The Dsv4-series sizes offer a
combination of vCPU, memory and remote storage options for most production workloads. Dsv4-series VMs
feature Intel® Hyper-Threading Technology. Remote Data disk storage is billed separately from virtual machines.
ACU: 195-210
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
SIZE | VCPU | MEMORY: GIB | TEMP STORAGE (SSD): GIB | MAX DATA DISKS | MAX UNCACHED DISK THROUGHPUT: IOPS/MBPS | MAX NICS | EXPECTED NETWORK BANDWIDTH (MBPS)
Compute optimized VM sizes have a high CPU-to-memory ratio. These sizes are good for medium traffic web
servers, network appliances, batch processes, and application servers. This article provides information about the
number of vCPUs, data disks, and NICs. It also includes information about storage throughput and network
bandwidth for each size in this grouping.
The Fsv2-series runs on 2nd Generation Intel® Xeon® Platinum 8272CL (Cascade Lake) processors and
Intel® Xeon® Platinum 8168 (Skylake) processors. It features a sustained all core Turbo clock speed of 3.4 GHz
and a maximum single-core turbo frequency of 3.7 GHz. Intel® AVX-512 instructions are new on Intel Scalable
Processors. These instructions provide up to a 2X performance boost to vector processing workloads on both
single and double precision floating point operations. In other words, they're really fast for any computational
workload.
At a lower per-hour list price, the Fsv2-series is the best value in price-performance in the Azure portfolio based
on the Azure Compute Unit (ACU) per vCPU.
Other sizes
General purpose
Memory optimized
Storage optimized
GPU optimized
High performance compute
Previous generations
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
For more information on how Azure names its VMs, see Azure virtual machine sizes naming conventions.
Fsv2-series
11/2/2020 • 3 minutes to read
The Fsv2-series runs on the Intel® Xeon® Platinum 8272CL (Cascade Lake) processors and Intel® Xeon®
Platinum 8168 (Skylake) processors. It features a sustained all-core Turbo clock speed of 3.4 GHz and a maximum
single-core turbo frequency of 3.7 GHz. Intel® AVX-512 instructions are new on Intel Scalable Processors. These
instructions provide up to a 2X performance boost on vector processing workloads for both single- and double-
precision floating point operations, making these sizes a strong fit for compute-intensive workloads.
Fsv2-series VMs feature Intel® Hyper-Threading Technology.
ACU: 195 - 210
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
[Table columns: Size | vCPUs | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max cached and temp storage throughput (IOPS/MBps, cache size in GiB) | Max uncached disk throughput (IOPS/MBps) | Max NICs | Expected network bandwidth (Mbps)]
1 Using more than 64 vCPUs requires one of these supported guest operating systems:
Windows Server 2016 or later
Ubuntu 16.04 LTS or later, with Azure tuned kernel (4.15 kernel or later)
SLES 12 SP2 or later
RHEL or CentOS version 6.7 through 6.10, with Microsoft-provided LIS package 4.3.1 (or later) installed
RHEL or CentOS version 7.3, with Microsoft-provided LIS package 4.2.1 (or later) installed
RHEL or CentOS version 7.6 or later
Oracle Linux with UEK4 or later
Debian 9 with the backports kernel, Debian 10 or later
CoreOS with a 4.14 kernel or later
2 Instance is isolated to hardware dedicated to a single customer.
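The guest OS requirement above reduces to a simple rule: sizes with more than 64 vCPUs must run one of the listed images. A minimal sketch of that check follows; the OS labels are ad hoc identifiers for illustration, not Azure SDK values.

```python
# Guest operating systems that support more than 64 vCPUs, per the
# list above. The string labels are ad hoc, chosen for this sketch.
SUPPORTED_GUESTS_OVER_64_VCPUS = {
    "windows-server-2016",
    "ubuntu-16.04-azure-kernel",
    "sles-12-sp2",
    "rhel-centos-6.7-6.10-lis-4.3.1",
    "rhel-centos-7.3-lis-4.2.1",
    "rhel-centos-7.6",
    "oracle-linux-uek4",
    "debian-9-backports",
    "coreos-4.14",
}

def guest_os_supported(vcpus: int, guest_os: str) -> bool:
    """Sizes with 64 or fewer vCPUs can run any supported image;
    larger sizes require one of the guests listed above."""
    return vcpus <= 64 or guest_os in SUPPORTED_GUESTS_OVER_64_VCPUS
```

For example, a 96-vCPU size passes the check only with one of the listed guests, while any guest passes at 64 vCPUs or fewer.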
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Memory optimized virtual machine sizes
11/2/2020 • 3 minutes to read
Memory optimized VM sizes offer a high memory-to-CPU ratio that is great for relational database servers,
medium to large caches, and in-memory analytics. This article provides information about the number of
vCPUs, data disks and NICs as well as storage throughput and network bandwidth for each size in this
grouping.
Dv2 and Dsv2-series, a follow-on to the original D-series, feature a more powerful CPU. The Dv2-
series is about 35% faster than the D-series. It runs on the Intel® Xeon® 8171M 2.1 GHz (Skylake),
Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or Intel® Xeon® E5-2673 v3 2.4 GHz
(Haswell) processors with Intel Turbo Boost Technology 2.0. The Dv2-series has the same
memory and disk configurations as the D-series.
Dv2 and DSv2-series are ideal for applications that demand faster vCPUs, better temporary storage
performance, or have higher memory demands. They offer a powerful combination for many
enterprise-grade applications.
The Eav4 and Easv4-series utilize AMD's 2.35 GHz EPYC™ 7452 processor in a multi-threaded
configuration with up to 256 MB L3 cache, increasing options for running most memory-optimized
workloads. The Eav4-series and Easv4-series have the same memory and disk configurations as the
Ev3 and Esv3-series.
The Ev3 and Esv3-series run on the Intel® Xeon® 8171M 2.1 GHz (Skylake) or the Intel® Xeon® E5-2673
v4 2.3 GHz (Broadwell) processor in a hyper-threaded configuration, providing a better value
proposition for most general purpose workloads, and bringing the Ev3 into alignment with the
general purpose VMs of most other clouds. Memory has been expanded (from 7 GiB/vCPU to 8
GiB/vCPU) while disk and network limits have been adjusted on a per-core basis to align with the
move to hyper-threading. The Ev3 is the follow-up to the high memory VM sizes of the D/Dv2
families.
The Ev4 and Esv4-series run on 2nd Generation Intel® Xeon® Platinum 8272CL (Cascade Lake)
processors in a hyper-threaded configuration, are ideal for various memory-intensive enterprise
applications, and feature up to 504 GiB of RAM. They feature Intel® Turbo Boost Technology 2.0,
Intel® Hyper-Threading Technology, and Intel® Advanced Vector Extensions 512 (Intel AVX-512).
The Ev4 and Esv4-series do not include a local temp disk. For more information, refer to Azure VM
sizes with no local temp disk.
The Edv4 and Edsv4-series run on 2nd Generation Intel® Xeon® Platinum 8272CL (Cascade
Lake) processors and are ideal for extremely large databases or other applications that benefit from high
vCPU counts and large amounts of memory. Additionally, these VM sizes include fast, larger local
SSD storage for applications that benefit from low-latency, high-speed local storage. They feature an all-
core Turbo clock speed of 3.4 GHz, Intel® Turbo Boost Technology 2.0, Intel® Hyper-Threading
Technology, and Intel® Advanced Vector Extensions 512 (Intel AVX-512).
The M-series offers a high vCPU count (up to 128 vCPUs) and a large amount of memory (up to 3.8
TiB). It's also ideal for extremely large databases or other applications that benefit from high vCPU
counts and large amounts of memory.
The Mv2-series offers the highest vCPU count (up to 416 vCPUs) and largest memory (up to 11.4
TiB) of any VM in the cloud. It's ideal for extremely large databases or other applications that benefit
from high vCPU counts and large amounts of memory.
Azure Compute offers virtual machine sizes that are Isolated to a specific hardware type and dedicated to a
single customer. These virtual machine sizes are best suited for workloads that require a high degree of
isolation from other customers for workloads involving elements like compliance and regulatory
requirements. Customers can also choose to further subdivide the resources of these Isolated virtual
machines by using Azure support for nested virtual machines. See the pages for virtual machine families
below for your isolated VM options.
Other sizes
General purpose
Compute optimized
Storage optimized
GPU optimized
High performance compute
Previous generations
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across
Azure SKUs.
For more information on how Azure names its VMs, see Azure virtual machine sizes naming conventions.
Memory optimized Dv2 and Dsv2-series
11/2/2020 • 3 minutes to read
Dv2 and Dsv2-series, a follow-on to the original D-series, feature a more powerful CPU. DSv2-series sizes run on
Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz (Skylake), Intel®
Xeon® E5-2673 v4 2.3 GHz (Broadwell), or Intel® Xeon® E5-2673 v3 2.4 GHz (Haswell) processors. The
Dv2-series has the same memory and disk configurations as the D-series.
Dv2-series 11-15
Dv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz
(Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or Intel® Xeon® E5-2673 v3 2.4 GHz
(Haswell) processors.
ACU: 210 - 250
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max temp storage throughput (IOPS / read MBps / write MBps) | Max data disks / throughput (IOPS) | Max NICs | Expected network bandwidth (Mbps)]
1 Instance is isolated to hardware dedicated to a single customer.
2 25,000 Mbps with Accelerated Networking.
DSv2-series 11-15
DSv2-series sizes run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz
(Skylake), Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell), or Intel® Xeon® E5-2673 v3 2.4 GHz
(Haswell) processors.
ACU: 210 - 250
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max cached and temp storage throughput (IOPS/MBps, cache size in GiB) | Max uncached disk throughput (IOPS/MBps) | Max NICs | Expected network bandwidth (Mbps)]
1 The maximum disk throughput (IOPS or MBps) possible with a DSv2-series VM may be limited by the number,
size, and striping of the attached disk(s). For details, see Designing for high performance.
2 Instance is isolated to the Intel Haswell-based hardware and dedicated to a single customer.
3 Constrained core sizes available.
4 25,000 Mbps with Accelerated Networking.
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Ev3 and Esv3-series
11/2/2020 • 4 minutes to read
The Ev3 and Esv3-series run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1
GHz (Skylake), or Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell) processors in a hyper-threaded
configuration, providing a better value proposition for most general purpose workloads, and bringing the Ev3
into alignment with the general purpose VMs of most other clouds. Memory has been expanded (from 7
GiB/vCPU to 8 GiB/vCPU) while disk and network limits have been adjusted on a per-core basis to align with the
move to hyper-threading. The Ev3 is the follow-up to the high memory VM sizes of the D/Dv2 families.
Ev3-series
Ev3-series instances run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz
(Skylake), or the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell) processors, and feature Intel Turbo Boost
Technology 2.0. Ev3-series instances are ideal for memory-intensive enterprise applications.
Data disk storage is billed separately from virtual machines. To use premium storage disks, use the ESv3 sizes. The
pricing and billing meters for ESv3 sizes are the same as Ev3-series.
Ev3-series VMs feature Intel® Hyper-Threading Technology.
ACU: 160 - 190
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max temp storage throughput (IOPS / read MBps / write MBps) | Max NICs / network bandwidth]
Esv3-series
Esv3-series instances run on Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1 GHz
(Skylake), or the Intel® Xeon® E5-2673 v4 2.3 GHz (Broadwell) processor, feature Intel Turbo Boost Technology
2.0 and use premium storage. Esv3-series instances are ideal for memory-intensive enterprise applications.
Esv3-series VMs feature Intel® Hyper-Threading Technology.
ACU: 160-190
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max cached and temp storage throughput (IOPS/MBps, cache size in GiB) | Burst cached and temp storage throughput (IOPS/MBps) 3 | Max uncached disk throughput (IOPS/MBps) | Burst uncached disk throughput (IOPS/MBps) 3 | Max NICs / expected network bandwidth (Mbps)]
3 Esv3-series VMs can burst their disk performance and get up to their bursting max for up to 30 minutes at a
time.
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Eav4 and Easv4-series
11/2/2020 • 3 minutes to read
The Eav4-series and Easv4-series utilize AMD's 2.35 GHz EPYC™ 7452 processor in a multi-threaded configuration
with up to 256 MB L3 cache, increasing options for running most memory-optimized workloads. The Eav4-series
and Easv4-series have the same memory and disk configurations as the Ev3 and Esv3-series.
Eav4-series
ACU: 230 - 260
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generations 1 and 2
Eav4-series sizes are based on the 2.35 GHz AMD EPYC™ 7452 processor that can achieve a boosted maximum
frequency of 3.35 GHz. The Eav4-series sizes are ideal for memory-intensive enterprise applications. Data disk
storage is billed separately from virtual machines. To use premium SSD, use the Easv4-series sizes. The pricing
and billing meters for Easv4 sizes are the same as the Eav4-series.
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max temp storage throughput (IOPS / read MBps / write MBps) | Max NICs | Expected network bandwidth (Mbps)]
Easv4-series
ACU: 230 - 260
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generations 1 and 2
Easv4-series sizes are based on the 2.35 GHz AMD EPYC™ 7452 processor that can achieve a boosted maximum
frequency of 3.35 GHz and use premium SSD. The Easv4-series sizes are ideal for memory-intensive enterprise
applications.
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max cached and temp storage throughput (IOPS/MBps, cache size in GiB) | Max uncached disk throughput (IOPS/MBps) | Max NICs | Expected network bandwidth (Mbps)]
The Edv4 and Edsv4-series run on the Intel® Xeon® Platinum 8272CL (Cascade Lake) processors in a hyper-
threaded configuration, are ideal for various memory-intensive enterprise applications, and feature up to 504
GiB of RAM, Intel® Turbo Boost Technology 2.0, Intel® Hyper-Threading Technology, and Intel® Advanced
Vector Extensions 512 (Intel® AVX-512). They also support Intel® Deep Learning Boost. These new VM sizes
have 50% larger local storage, as well as better local disk IOPS for both read and write, compared to the
Ev3/Esv3 sizes with Gen2 VMs. They feature an all-core Turbo clock speed of 3.4 GHz.
Edv4-series
Edv4-series sizes run on the Intel® Xeon® Platinum 8272CL (Cascade Lake) processors. The Edv4 virtual
machine sizes feature up to 504 GiB of RAM, in addition to fast and large local SSD storage (up to 2,400 GiB).
These virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from
low latency, high-speed local storage. You can attach Standard SSDs and Standard HDDs disk storage to the Edv4
VMs.
ACU: 195 - 210
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max cached and temp storage throughput (IOPS/MBps) | Max NICs | Expected network bandwidth (Mbps)]
Edsv4-series
Edsv4-series sizes run on the Intel® Xeon® Platinum 8272CL (Cascade Lake) processors. The Edsv4 virtual
machine sizes feature up to 504 GiB of RAM, in addition to fast and large local SSD storage (up to 2,400 GiB).
These virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from
low latency, high-speed local storage.
ACU: 195-210
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1 and 2
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max cached and temp storage throughput (IOPS/MBps, cache size in GiB) | Max uncached disk throughput (IOPS/MBps) | Max NICs | Expected network bandwidth (Mbps)]
The Ev4 and Esv4-series run on the Intel® Xeon® Platinum 8272CL (Cascade Lake) processors in a hyper-
threaded configuration, are ideal for various memory-intensive enterprise applications, and feature up to 504 GiB
of RAM. They feature an all-core Turbo clock speed of 3.4 GHz.
NOTE
For frequently asked questions, refer to Azure VM sizes with no local temp disk.
Ev4-series
Ev4-series sizes run on the Intel Xeon® Platinum 8272CL (Cascade Lake). The Ev4-series instances are ideal for
memory-intensive enterprise applications. Ev4-series VMs feature Intel® Hyper-Threading Technology.
Remote Data disk storage is billed separately from virtual machines. To use premium storage disks, use the Esv4
sizes. The pricing and billing meters for Esv4 sizes are the same as Ev4-series.
ACU: 195 - 210
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max NICs | Expected network bandwidth (Mbps)]
Esv4-series
Esv4-series sizes run on the Intel® Xeon® Platinum 8272CL (Cascade Lake). The Esv4-series instances are ideal
for memory-intensive enterprise applications. Esv4-series VMs feature Intel® Hyper-Threading Technology.
Remote data disk storage is billed separately from virtual machines.
ACU: 195-210
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Supported
Memory Preserving Updates: Supported
VM Generation Support: Generation 1
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max uncached disk throughput (IOPS/MBps) | Max NICs | Expected network bandwidth (Mbps)]
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
M-series
11/2/2020 • 3 minutes to read
The M-series offers a high vCPU count (up to 128 vCPUs) and a large amount of memory (up to 3.8 TiB). It's also
ideal for extremely large databases or other applications that benefit from high vCPU counts and large amounts of
memory. M-series sizes are supported both on the Intel® Xeon® CPU E7-8890 v3 @ 2.50GHz and on the
Intel® Xeon® Platinum 8280M (Cascade Lake).
M-series VMs feature Intel® Hyper-Threading Technology.
ACU: 160-180
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
Write Accelerator: Supported
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max cached and temp storage throughput (IOPS/MBps, cache size in GiB) | Max uncached disk throughput (IOPS/MBps) | Max NICs | Expected network bandwidth (Mbps)]
1 More than 64 vCPUs require one of these supported guest OSes: Windows Server 2016, Ubuntu 16.04 LTS, SLES
12 SP2, Red Hat Enterprise Linux, CentOS 7.3, or Oracle Linux 7.3 with LIS 4.2.1.
2 Instance is isolated to hardware dedicated to a single customer.
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Mv2-series
11/2/2020 • 2 minutes to read
The Mv2-series features a high-throughput, low-latency platform running on a hyper-threaded Intel® Xeon®
Platinum 8180M 2.5 GHz (Skylake) processor with an all-core base frequency of 2.5 GHz and a max turbo
frequency of 3.8 GHz. All Mv2-series virtual machine sizes can use both standard and premium persistent disks.
Mv2-series instances are memory optimized VM sizes providing unparalleled computational performance to
support large in-memory databases and workloads, with a high memory-to-CPU ratio that is ideal for relational
database servers, large caches, and in-memory analytics.
Mv2-series VMs feature Intel® Hyper-Threading Technology.
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
Write Accelerator: Supported
[Table columns: Size | vCPU | Memory (GiB) | Temp storage (SSD, GiB) | Max data disks | Max cached and temp storage throughput (IOPS/MBps, cache size in GiB) | Max uncached disk throughput (IOPS/MBps) | Max NICs | Expected network bandwidth (Mbps)]
1 Mv2-series VMs are generation 2 only and support a subset of generation 2 supported Images. Please see below
for the complete list of supported images for Mv2-series. If you're using Linux, see Support for generation 2 VMs
on Azure for instructions on how to find and select an image. If you're using Windows, see Support for generation
2 VMs on Azure for instructions on how to find and select an image.
Windows Server 2019 or later
SUSE Linux Enterprise Server 12 SP4 and later or SUSE Linux Enterprise Server 15 SP1 and later
Red Hat Enterprise Linux 7.6, 7.7, 8.1 or later
Oracle Enterprise Linux 7.7 or later
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Constrained vCPU capable VM sizes
11/2/2020 • 2 minutes to read
Some database workloads like SQL Server or Oracle require high memory, storage, and I/O bandwidth, but not a
high core count. Many database workloads are not CPU-intensive. Azure offers certain VM sizes where you can
constrain the VM vCPU count to reduce the cost of software licensing, while maintaining the same memory,
storage, and I/O bandwidth.
The vCPU count can be constrained to one half or one quarter of the original VM size. These new VM sizes have a
suffix that specifies the number of active vCPUs to make them easier for you to identify.
For example, the current VM size Standard_GS5 comes with 32 vCPUs, 448 GB RAM, 64 disks (up to 256 TB), and
80,000 IOPS or 2 GB/s of I/O bandwidth. The new VM sizes Standard_GS5-16 and Standard_GS5-8 come with 16
and 8 active vCPUs respectively, while maintaining the rest of the specs of the Standard_GS5 for memory, storage,
and I/O bandwidth.
The licensing fees charged for SQL Server or Oracle are constrained to the new vCPU count, and other products
should be charged based on the new vCPU count. This results in a 50% to 75% increase in the ratio of the VM
specs to active (billable) vCPUs. These new VM sizes allow customer workloads to use the same memory, storage,
and I/O bandwidth while optimizing their software licensing cost. At this time, the compute cost, which includes OS
licensing, remains the same as the original size. For more information, see Azure VM sizes for more cost-
effective database workloads.
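Because the suffix encodes the active vCPU count, constrained size names can be split mechanically. The helper below is an illustrative sketch (not an Azure SDK call) that handles simple "Base-N" names such as the GS5 examples above, using the Standard_GS5 figures from the text:

```python
def parse_constrained_size(size_name: str):
    """Split a constrained-core size name such as 'Standard_GS5-8' into
    (base size, active vCPUs); returns (name, None) when unconstrained.
    Handles simple 'Base-N' names only, as in the GS5 examples."""
    base, sep, suffix = size_name.partition("-")
    return (base, int(suffix)) if sep else (base, None)

# Standard_GS5 figures from the text: 32 vCPUs and 448 GB RAM.
GS5_VCPUS, GS5_RAM_GB = 32, 448

base, active = parse_constrained_size("Standard_GS5-8")
# Licensing follows the active vCPU count, while memory, storage,
# and I/O bandwidth remain at the full Standard_GS5 levels.
ram_per_billable_vcpu = GS5_RAM_GB / active  # 448 GB over 8 billable vCPUs
```

With 8 billable vCPUs the full 448 GB of RAM yields 56 GB per billable vCPU, versus 14 GB on the unconstrained Standard_GS5.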
[Table columns: Name | vCPU | Specs]
Other sizes
Compute optimized
Memory optimized
Storage optimized
GPU
High performance compute
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Storage optimized virtual machine sizes
11/2/2020 • 2 minutes to read
Storage optimized VM sizes offer high disk throughput and IO, and are ideal for Big Data, SQL, NoSQL
databases, data warehousing, and large transactional databases. Examples include Cassandra, MongoDB,
Cloudera, and Redis. This article provides information about the number of vCPUs, data disks, and NICs as
well as local storage throughput and network bandwidth for each optimized size.
The Lsv2-series features high-throughput, low-latency, directly mapped local NVMe storage running on the
AMD EPYC™ 7551 processor with an all-core boost of 2.55 GHz and a max boost of 3.0 GHz. The Lsv2-
series VMs come in sizes from 8 to 80 vCPUs in a simultaneous multi-threading configuration. There is 8
GiB of memory per vCPU, and one 1.92 TB NVMe SSD M.2 device per 8 vCPUs, with up to 19.2 TB
(10 x 1.92 TB) available on the L80s_v2.
Other sizes
General purpose
Compute optimized
Memory optimized
GPU optimized
High performance compute
Previous generations
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across
Azure SKUs.
Learn how to optimize performance on the Lsv2-series virtual machines for Windows or Linux.
For more information on how Azure names its VMs, see Azure virtual machine sizes naming conventions.
Lsv2-series
11/2/2020 • 4 minutes to read
The Lsv2-series features high-throughput, low-latency, directly mapped local NVMe storage running on the AMD
EPYC™ 7551 processor with an all-core boost of 2.55 GHz and a max boost of 3.0 GHz. The Lsv2-series VMs come
in sizes from 8 to 80 vCPUs in a simultaneous multi-threading configuration. There is 8 GiB of memory per vCPU,
and one 1.92 TB NVMe SSD M.2 device per 8 vCPUs, with up to 19.2 TB (10 x 1.92 TB) available on the L80s_v2.
NOTE
The Lsv2-series VMs are optimized to use the local disk on the node attached directly to the VM rather than using durable
data disks. This allows for greater IOPS/throughput for your workloads. The Lsv2 and Ls-series do not support the creation
of a local cache to increase the IOPS achievable by durable data disks.
The high throughput and IOPs of the local disk makes the Lsv2-series VMs ideal for NoSQL stores such as Apache
Cassandra and MongoDB which replicate data across multiple VMs to achieve persistence in the event of the failure of a
single VM.
To learn more, see Optimize performance on the Lsv2-series virtual machines for Windows or Linux.
ACU: 150-175
Premium Storage: Supported
Premium Storage caching: Not Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
Bursting: Supported
[Table columns: Size | vCPU | Memory (GiB) | Temp disk 1 (GiB) | NVMe disks 2 | NVMe disk throughput 3 (read IOPS/MBps) | Max uncached data disk throughput (IOPS/MBps) 4 | Max burst uncached data disk throughput (IOPS/MBps) 5 | Max data disks | Max NICs | Expected network bandwidth (Mbps)]
1 Lsv2-series VMs have a standard SCSI-based temp resource disk for OS paging/swap file use (D: on Windows,
/dev/sdb on Linux). This disk provides 80 GiB of storage, 4,000 IOPS, and an 80 MBps transfer rate for every 8 vCPUs
(for example, Standard_L80s_v2 provides 800 GiB at 40,000 IOPS and 800 MBps). This ensures the NVMe drives can be
fully dedicated to application use. This disk is ephemeral, and all data will be lost on stop/deallocate.
2 Local NVMe disks are ephemeral; data will be lost on these disks if you stop/deallocate your VM.
3 Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the
guest VM space. Achieving maximum performance requires using either the latest WS2019 build or Ubuntu 18.04
or 16.04 from the Azure Marketplace. Write performance varies based on IO size, drive load, and capacity
utilization.
4 Lsv2-series VMs do not provide host cache for data disk as it does not benefit the Lsv2 workloads.
5 Lsv2-series VMs can burst their disk performance for up to 30 minutes at a time.
6 VMs with more than 64 vCPUs require one of these supported guest operating systems:
Windows Server 2016 or later
Ubuntu 16.04 LTS or later, with Azure tuned kernel (4.15 kernel or later)
SLES 12 SP2 or later
RHEL or CentOS version 6.7 through 6.10, with Microsoft-provided LIS package 4.3.1 (or later) installed
RHEL or CentOS version 7.3, with Microsoft-provided LIS package 4.2.1 (or later) installed
RHEL or CentOS version 7.6 or later
Oracle Linux with UEK4 or later
Debian 9 with the backports kernel, Debian 10 or later
CoreOS with a 4.14 kernel or later
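Footnote 1 and the series description both scale linearly per 8 vCPUs, which can be sketched as follows (an illustrative helper, not an Azure API):

```python
def lsv2_local_storage(vcpus: int) -> dict:
    """Per the figures above: one 1.92 TB NVMe device per 8 vCPUs, plus
    80 GiB / 4,000 IOPS / 80 MBps of temp resource disk per 8 vCPUs."""
    units = vcpus // 8  # Lsv2 sizes come in multiples of 8 vCPUs
    return {
        "nvme_devices": units,
        "nvme_capacity_tb": round(units * 1.92, 2),
        "temp_disk_gib": units * 80,
        "temp_disk_iops": units * 4_000,
        "temp_disk_mbps": units * 80,
    }

# Standard_L80s_v2 (80 vCPUs): 10 NVMe devices (19.2 TB) and an
# 800 GiB temp disk at 40,000 IOPS / 800 MBps, matching footnote 1.
l80s_v2 = lsv2_local_storage(80)
```

The same function reproduces the smallest size as well: 8 vCPUs yield one 1.92 TB NVMe device and an 80 GiB temp disk.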
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Optimize performance on the Lsv2-series Windows
virtual machines
11/2/2020 • 5 minutes to read
Lsv2-series virtual machines support a variety of workloads that need high I/O and throughput on local storage
across a wide range of applications and industries. The Lsv2-series is ideal for Big Data, SQL, NoSQL databases,
data warehousing and large transactional databases, including Cassandra, MongoDB, Cloudera, and Redis.
The design of the Lsv2-series Virtual Machines (VMs) maximizes the AMD EPYC™ 7551 processor to provide the
best performance between the processor, memory, NVMe devices, and the VMs. In addition to maximizing the
hardware performance, Lsv2-series VMs are designed to work with the needs of Windows and Linux operating
systems for better performance with the hardware and the software.
Tuning the software and hardware resulted in the optimized version of Windows Server 2019 Datacenter, released
in early December 2018 to the Azure Marketplace, which supports maximum performance on the NVMe devices in
Lsv2-series VMs.
This article provides tips and suggestions to ensure your workloads and applications achieve the maximum
performance designed into the VMs. The information on this page will be continuously updated as more Lsv2
optimized images are added to the Azure Marketplace.
Next steps
See specifications for all VMs optimized for storage performance on Azure
GPU optimized virtual machine sizes
11/2/2020 • 2 minutes to read
GPU optimized VM sizes are specialized virtual machines available with single, multiple, or fractional
GPUs. These sizes are designed for compute-intensive, graphics-intensive, and visualization workloads.
This article provides information about the number and type of GPUs, vCPUs, data disks, and NICs.
Storage throughput and network bandwidth are also included for each size in this grouping.
The NCv3-series and NCasT4_v3-series sizes are optimized for compute-intensive GPU-accelerated
applications. Some examples are CUDA- and OpenCL-based applications and simulations, AI, and
deep learning. The NCasT4_v3-series is focused on inference workloads featuring NVIDIA's Tesla T4
GPU and AMD EPYC2 Rome processor. The NCv3-series is focused on high-performance computing
and AI workloads featuring NVIDIA's Tesla V100 GPU.
The NDv2-series size is focused on scale-up and scale-out deep learning training applications. The
NDv2-series uses the Nvidia Volta V100 and the Intel Xeon Platinum 8168 (Skylake) processor.
NV-series and NVv3-series sizes are optimized and designed for remote visualization, streaming,
gaming, encoding, and VDI scenarios using frameworks such as OpenGL and DirectX. These VMs
are backed by the NVIDIA Tesla M60 GPU.
NVv4-series VM sizes are optimized and designed for VDI and remote visualization. With partitioned
GPUs, NVv4 offers the right size for workloads requiring smaller GPU resources. These VMs are
backed by the AMD Radeon Instinct MI25 GPU. NVv4 VMs currently support only the Windows guest
operating system.
Deployment considerations
For availability of N-series VMs, see Products available by region.
N-series VMs can only be deployed in the Resource Manager deployment model.
N-series VMs differ in the type of Azure Storage they support for their disks. NC and NV VMs only
support VM disks that are backed by Standard Disk Storage (HDD). NCv2, NCv3, ND, NDv2, and
NVv2 VMs only support VM disks that are backed by Premium Disk Storage (SSD).
If you want to deploy more than a few N-series VMs, consider a pay-as-you-go subscription or
other purchase options. If you're using an Azure free account, you can use only a limited number of
Azure compute cores.
You might need to increase the cores quota (per region) in your Azure subscription, and increase
the separate quota for NC, NCv2, NCv3, ND, NDv2, NV, or NVv2 cores. To request a quota increase,
open an online customer support request at no charge. Default limits may vary depending on your
subscription category.
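The disk-type support rules in the considerations above map each N-series family to exactly one storage tier. A small lookup sketch follows; the family strings are shorthand for this illustration, not SDK constants:

```python
# Disk storage supported by each N-series family, per the text above:
# NC and NV support only Standard HDD-backed VM disks, while the
# newer families support only Premium SSD-backed VM disks.
N_SERIES_DISK_SUPPORT = {
    "NC": "Standard HDD",
    "NV": "Standard HDD",
    "NCv2": "Premium SSD",
    "NCv3": "Premium SSD",
    "ND": "Premium SSD",
    "NDv2": "Premium SSD",
    "NVv2": "Premium SSD",
}

def supported_disk(series: str) -> str:
    """Return the disk tier an N-series family supports for VM disks."""
    if series not in N_SERIES_DISK_SUPPORT:
        raise ValueError(f"unknown N-series family: {series}")
    return N_SERIES_DISK_SUPPORT[series]
```

Encoding the rule as data keeps the two HDD-only families visibly separate from the Premium-only families when planning a deployment.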
Other sizes
General purpose
Compute optimized
High performance compute
Memory optimized
Storage optimized
Previous generations
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across
Azure SKUs.
NC-series
11/2/2020 • 2 minutes to read • Edit Online
NC-series VMs are powered by the NVIDIA Tesla K80 card and the Intel Xeon E5-2690 v3 (Haswell) processor. Users
can crunch through data faster by leveraging CUDA for energy exploration applications, crash simulations, ray
traced rendering, deep learning, and more. The NC24r configuration provides a low latency, high-throughput
network interface optimized for tightly coupled parallel computing workloads.
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1
Size         | vCPU | Memory (GiB) | Temp storage SSD (GiB) | GPU | GPU memory (GiB) | Max data disks | Max NICs
Standard_NC6 | 6    | 56           | 340                    | 1   | 12               | 24             | 1
NCv2-series
NCv2-series VMs are powered by NVIDIA Tesla P100 GPUs. These GPUs can provide more than 2x the
computational performance of the NC-series. Customers can take advantage of these updated GPUs for traditional
HPC workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo simulations, and
others. In addition to the GPUs, the NCv2-series VMs are also powered by Intel Xeon E5-2690 v4 (Broadwell) CPUs.
The NC24rs v2 configuration provides a low latency, high-throughput network interface optimized for tightly
coupled parallel computing workloads.
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
IMPORTANT
For this VM series, the vCPU (core) quota in your subscription is initially set to 0 in each region. Request a vCPU quota
increase for this series in an available region.
Size | vCPU | Memory (GiB) | Temp storage SSD (GiB) | GPU | GPU memory (GiB) | Max data disks | Max uncached disk throughput (IOPS/MBps) | Max NICs
NCv3-series
NCv3-series VMs are powered by NVIDIA Tesla V100 GPUs. These GPUs can provide 1.5x the computational
performance of the NCv2-series. Customers can take advantage of these updated GPUs for traditional HPC
workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo simulations, and others.
The NC24rs v3 configuration provides a low latency, high-throughput network interface optimized for tightly
coupled parallel computing workloads. In addition to the GPUs, the NCv3-series VMs are also powered by Intel
Xeon E5-2690 v4 (Broadwell) CPUs.
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
IMPORTANT
For this VM series, the vCPU (core) quota in your subscription is initially set to 0 in each region. Request a vCPU quota
increase for this series in an available region. These SKUs aren't available to trial or Visual Studio Subscriber Azure
subscriptions. Your subscription level might not support selecting or deploying these SKUs.
Size | vCPU | Memory (GiB) | Temp storage SSD (GiB) | GPU | GPU memory (GiB) | Max data disks | Max uncached disk throughput (IOPS/MBps) | Max NICs
NCasT4_v3-series (In Preview)
The NCasT4_v3-series virtual machines are powered by NVIDIA Tesla T4 GPUs and AMD EPYC 7V12 (Rome) CPUs.
The VMs feature up to 4 NVIDIA T4 GPUs with 16 GB of memory each, up to 64 non-multithreaded AMD EPYC
7V12 (Rome) processor cores, and 440 GiB of system memory. These virtual machines are ideal for deploying AI
services, such as real-time inferencing of user-generated requests, or for interactive graphics and visualization
workloads using NVIDIA's GRID driver and virtual GPU technology. Standard GPU compute workloads based
around CUDA, TensorRT, Caffe, ONNX, and other frameworks, or GPU-accelerated graphical applications based
on OpenGL and DirectX, can be deployed economically, with close proximity to users, on the NCasT4_v3 series.
NOTE
Submit a request to be part of the preview program.
ACU: 230-260
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1
Size                 | vCPU | Memory (GiB) | Temp storage SSD (GiB) | GPU | GPU memory (GiB) | Max data disks | Max NICs
Standard_NC4as_T4_v3 | 4    | 28           | 180                    | 1   | 16               | 8              | 2
Standard_NC8as_T4_v3 | 8    | 56           | 360                    | 1   | 16               | 16             | 4
ND-series
The ND-series virtual machines are a new addition to the GPU family, designed for AI and deep learning
workloads. They offer excellent performance for training and inference. ND instances are powered by NVIDIA
Tesla P40 GPUs and Intel Xeon E5-2690 v4 (Broadwell) CPUs. These instances provide excellent performance for
single-precision floating point operations and for AI workloads utilizing Microsoft Cognitive Toolkit, TensorFlow,
Caffe, and other frameworks. The ND-series also offers a much larger GPU memory size (24 GB), making it
possible to fit much larger neural net models. Like the NC-series, the ND-series offers a configuration with a
secondary low-latency, high-throughput network through RDMA and InfiniBand connectivity, so you can run
large-scale training jobs spanning many GPUs.
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
IMPORTANT
For this VM series, the vCPU (core) quota per region in your subscription is initially set to 0. Request a vCPU quota increase
for this series in an available region.
Size | vCPU | Memory (GiB) | Temp storage SSD (GiB) | GPU | GPU memory (GiB) | Max data disks | Max uncached disk throughput (IOPS/MBps) | Max NICs
Updated NDv2-series
The NDv2-series virtual machine is a new addition to the GPU family designed for the needs of the most
demanding GPU-accelerated AI, machine learning, simulation, and HPC workloads.
NDv2 is powered by 8 NVIDIA Tesla V100 NVLINK-connected GPUs, each with 32 GB of GPU memory. Each NDv2
VM also has 40 non-HyperThreaded Intel Xeon Platinum 8168 (Skylake) cores and 672 GiB of system memory.
NDv2 instances provide excellent performance for HPC and AI workloads utilizing CUDA GPU-optimized
computation kernels, and the many AI, ML, and analytics tools that support GPU acceleration 'out-of-box,' such as
TensorFlow, Pytorch, Caffe, RAPIDS, and other frameworks.
Critically, the NDv2 is built for both computationally intense scale-up (harnessing 8 GPUs per VM) and scale-out
(harnessing multiple VMs working together) workloads. The NDv2 series now supports 100-Gigabit InfiniBand
EDR backend networking, similar to that available on the HB series of HPC VM, to allow high-performance
clustering for parallel scenarios including distributed training for AI and ML. This backend network supports all
major InfiniBand protocols, including those employed by NVIDIA’s NCCL2 libraries, allowing for seamless
clustering of GPUs.
IMPORTANT
When enabling InfiniBand on the ND40rs_v2 VM, use the 4.7-1.0.0.1 Mellanox OFED driver.
Due to increased GPU memory, the new ND40rs_v2 VM requires the use of Generation 2 VMs and marketplace images.
Note: The ND40s_v2 featuring 16 GB of per-GPU memory is no longer available for preview and has been superseded
by the updated ND40rs_v2.
Size | vCPU | Memory (GiB) | Temp storage SSD (GiB) | GPU | GPU memory (GiB) | Max data disks | Max uncached disk throughput (IOPS/MBps) | Max network bandwidth | Max NICs
NV-series
The NV-series virtual machines are powered by NVIDIA Tesla M60 GPUs and NVIDIA GRID technology for
desktop-accelerated applications and virtual desktops where customers can visualize their data or simulations.
Users can run graphics-intensive workflows on NV instances for superior graphics capability, and additionally
run single-precision workloads such as encoding and rendering. NV-series VMs are also powered by Intel Xeon
E5-2690 v3 (Haswell) CPUs.
Each GPU in NV instances comes with a GRID license. This license gives you the flexibility to use an NV instance as
a virtual workstation for a single user, or 25 concurrent users can connect to the VM for a virtual application
scenario.
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1
Size         | vCPU | Memory (GiB) | Temp storage SSD (GiB) | GPU | GPU memory (GiB) | Max data disks | Max NICs | Virtual workstations | Virtual applications
Standard_NV6 | 6    | 56           | 340                    | 1   | 8                | 24             | 1        | 1                    | 25
NVv3-series
The NVv3-series virtual machines are powered by NVIDIA Tesla M60 GPUs and NVIDIA GRID technology, with Intel
E5-2690 v4 (Broadwell) CPUs and Intel Hyper-Threading Technology. These virtual machines are targeted at GPU-
accelerated graphics applications and virtual desktops where customers want to visualize their data, simulate
results to view, work on CAD, or render and stream content. Additionally, these virtual machines can run single-
precision workloads such as encoding and rendering. NVv3 virtual machines support Premium Storage and come
with twice the system memory (RAM) of their predecessor, the NV-series.
Each GPU in NVv3 instances comes with a GRID license. This license gives you the flexibility to use an NV instance
as a virtual workstation for a single user, or 25 concurrent users can connect to the VM for a virtual application
scenario.
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
Size | vCPU | Memory (GiB) | Temp storage SSD (GiB) | GPU | GPU memory (GiB) | Max data disks | Max uncached disk throughput (IOPS/MBps) | Max NICs / Expected network bandwidth (Mbps) | Virtual workstations | Virtual applications
NVv4-series
The NVv4-series virtual machines are powered by AMD Radeon Instinct MI25 GPUs and AMD EPYC 7V12 (Rome)
CPUs. With the NVv4-series, Azure is introducing virtual machines with partial GPUs. Pick the right-sized virtual
machine for GPU-accelerated graphics applications and virtual desktops, starting at 1/8th of a GPU with a 2 GiB
frame buffer up to a full GPU with a 16 GiB frame buffer. NVv4 virtual machines currently support only the
Windows guest operating system.
ACU: 230-260
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1
Size | vCPU | Memory (GiB) | Temp storage SSD (GiB) | GPU | GPU memory (GiB) | Max data disks | Max NICs / Expected network bandwidth (Mbps)
Install NVIDIA GPU drivers on N-series VMs running
Windows
To take advantage of the GPU capabilities of Azure N-series VMs backed by NVIDIA GPUs, you must install
NVIDIA GPU drivers. The NVIDIA GPU Driver Extension installs appropriate NVIDIA CUDA or GRID drivers on an
N-series VM. Install or manage the extension using the Azure portal or tools such as Azure PowerShell or Azure
Resource Manager templates. See the NVIDIA GPU Driver Extension documentation for supported operating
systems and deployment steps.
If you choose to install NVIDIA GPU drivers manually, this article provides supported operating systems, drivers,
and installation and verification steps. Manual driver setup information is also available for Linux VMs.
For basic specs, storage capacities, and disk details, see GPU Windows VM sizes.
TIP
As an alternative to manual CUDA driver installation on a Windows Server VM, you can deploy an Azure Data Science
Virtual Machine image. The DSVM editions for Windows Server 2016 pre-install NVIDIA CUDA drivers, the CUDA Deep
Neural Network Library, and other tools.
Driver installation
1. Connect by Remote Desktop to each N-series VM.
2. Download, extract, and install the supported driver for your Windows operating system.
After GRID driver installation on a VM, a restart is required. After CUDA driver installation, a restart is not
required.
To query the GPU device state, run the nvidia-smi command-line utility installed with the driver.
1. Open a command prompt and change to the C:\Program Files\NVIDIA Corporation\NVSMI
directory.
2. Run nvidia-smi . If the driver is installed, the utility lists the GPUs on the VM. GPU-Util shows
0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may
differ from the ones shown here.
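Besides the default summary view, nvidia-smi supports a machine-readable query mode, for example nvidia-smi --query-gpu=name,utilization.gpu,memory.total --format=csv,noheader. A minimal sketch of parsing that output in Python; the sample string below is illustrative output for a single-GPU VM, not captured from a real machine:

```python
def parse_gpu_query(csv_text):
    """Parse `nvidia-smi --query-gpu=name,utilization.gpu,memory.total
    --format=csv,noheader` output into a list of dicts."""
    gpus = []
    for line in csv_text.strip().splitlines():
        name, util, mem = [field.strip() for field in line.split(",")]
        gpus.append({
            "name": name,
            # Utilization is reported like "0 %"; keep just the number.
            "utilization_pct": int(util.rstrip(" %")),
            "memory_total": mem,
        })
    return gpus

# Illustrative sample output (one Tesla K80 on an NC6-sized VM).
sample = "Tesla K80, 0 %, 11441 MiB\n"
for gpu in parse_gpu_query(sample):
    print(gpu)
```

This kind of parsing is useful for lightweight monitoring scripts that alert when a GPU sits idle on a billed N-series VM.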
RDMA network connectivity
RDMA network connectivity can be enabled on RDMA-capable N-series VMs such as NC24r deployed in the
same availability set or in a single placement group in a virtual machine scale set. The HpcVmDrivers extension
must be added to install Windows network device drivers that enable RDMA connectivity. To add the VM
extension to an RDMA-enabled N-series VM, use Azure PowerShell cmdlets for Azure Resource Manager.
To install the latest version 1.1 HpcVmDrivers extension on an existing RDMA-capable VM named myVM in the
West US region:
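A sketch of that installation using the Set-AzVMExtension cmdlet; the resource group name myResourceGroup is a placeholder, and the publisher and type values are assumptions to verify against the current HpcVmDrivers extension documentation:

```powershell
Set-AzVMExtension -ResourceGroupName "myResourceGroup" `
  -Location "westus" `
  -VMName "myVM" `
  -ExtensionName "HpcVmDrivers" `
  -Publisher "Microsoft.HpcCompute" `
  -Type "HpcVmDrivers" `
  -TypeHandlerVersion "1.1"
```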
For more information, see Virtual machine extensions and features for Windows.
The RDMA network supports Message Passing Interface (MPI) traffic for applications running with Microsoft MPI
or Intel MPI 5.x.
Next steps
Developers building GPU-accelerated applications for the NVIDIA Tesla GPUs can also download and install
the latest CUDA Toolkit. For more information, see the CUDA Installation Guide.
Install AMD GPU drivers on N-series VMs running
Windows
To take advantage of the GPU capabilities of the new Azure NVv4 series VMs running Windows, AMD GPU drivers
must be installed. The AMD GPU Driver Extension installs AMD GPU drivers on a NVv4-series VM. Install or
manage the extension using the Azure portal or tools such as Azure PowerShell or Azure Resource Manager
templates. See the AMD GPU Driver Extension documentation for supported operating systems and deployment
steps.
If you choose to install AMD GPU drivers manually, this article provides supported operating systems, drivers, and
installation and verification steps.
Only GPU drivers published by Microsoft are supported on NVv4 VMs. Do not install GPU drivers from
any other source.
For basic specs, storage capacities, and disk details, see GPU Windows VM sizes.
Driver installation
1. Connect by Remote Desktop to each NVv4-series VM.
2. If you need to uninstall the previous driver version, download and run the AMD cleanup utility. Do not
use the utility that comes with the previous version of the driver.
3. Download and install the latest driver.
4. Reboot the VM.
If you are running Windows 10 build 1903 or higher, dxdiag shows no information on the 'Display' tab.
Use the 'Save All Information' option at the bottom; the output file shows the information related to the
AMD MI25 GPU.
High performance computing VM sizes
Azure H-series virtual machines (VMs) are designed to deliver leadership-class performance, scalability,
and cost efficiency for a variety of real-world HPC workloads.
HBv2-series VMs are optimized for applications driven by memory bandwidth, such as fluid dynamics,
finite element analysis, and reservoir simulation. HBv2 VMs feature 120 AMD EPYC 7742 processor cores,
4 GB of RAM per CPU core, and no simultaneous multithreading. Each HBv2 VM provides up to 340
GB/sec of memory bandwidth, and up to 4 teraFLOPS of FP64 compute.
HBv2 VMs feature 200 Gb/sec Mellanox HDR InfiniBand, while both HB and HC-series VMs feature 100
Gb/sec Mellanox EDR InfiniBand. Each of these VM types are connected in a non-blocking fat tree for
optimized and consistent RDMA performance. HBv2 VMs support Adaptive Routing and the Dynamic
Connected Transport (DCT), in addition to standard RC and UD transports. These features enhance
application performance, scalability, and consistency, and their use is strongly recommended.
HB-series VMs are optimized for applications driven by memory bandwidth, such as fluid dynamics,
explicit finite element analysis, and weather modeling. HB VMs feature 60 AMD EPYC 7551 processor
cores, 4 GB of RAM per CPU core, and no hyperthreading. The AMD EPYC platform provides more than
260 GB/sec of memory bandwidth.
HC-series VMs are optimized for applications driven by dense computation, such as implicit finite element
analysis, molecular dynamics, and computational chemistry. HC VMs feature 44 Intel Xeon Platinum 8168
processor cores, 8 GB of RAM per CPU core, and no hyperthreading. The Intel Xeon Platinum platform
supports Intel’s rich ecosystem of software tools such as the Intel Math Kernel Library.
H-series VMs are optimized for applications driven by high CPU frequencies or large memory per core
requirements. H-series VMs feature 8 or 16 Intel Xeon E5 2667 v3 processor cores, 7 or 14 GB of RAM
per CPU core, and no hyperthreading. H-series features 56 Gb/sec Mellanox FDR InfiniBand in a non-
blocking fat tree configuration for consistent RDMA performance. H-series VMs support Intel MPI 5.x and
MS-MPI.
NOTE
The A8 – A11 VMs are planned for retirement in March 2021. For more information, see the HPC Migration Guide.
RDMA-capable instances
Most of the HPC VM sizes (HBv2, HB, HC, H16r, H16mr, A8 and A9) feature a network interface for remote
direct memory access (RDMA) connectivity. Selected N-series sizes designated with 'r' (ND40rs_v2,
ND24rs, NC24rs_v3, NC24rs_v2 and NC24r) are also RDMA-capable. This interface is in addition to the
standard Azure Ethernet network interface available in the other VM sizes.
This interface allows the RDMA-capable instances to communicate over an InfiniBand (IB) network,
operating at HDR rates for HBv2; EDR rates for HB, HC, and NDv2; FDR rates for H16r, H16mr, and other
RDMA-capable N-series virtual machines; and QDR rates for A8 and A9 VMs. These RDMA capabilities
can boost the scalability and performance of certain Message Passing Interface (MPI) applications.
NOTE
In Azure HPC, there are two classes of VMs, depending on whether they are SR-IOV enabled for InfiniBand.
Currently, the SR-IOV-enabled VMs for InfiniBand are HBv2, HB, HC, NCv3, and NDv2. The rest of the
InfiniBand-enabled VMs are not currently SR-IOV enabled. RDMA is only enabled over the InfiniBand (IB)
network and is supported for all RDMA-capable VMs. IP over IB is only supported on the SR-IOV-enabled
VMs. RDMA is not enabled over the Ethernet network.
Operating System - Linux is very well supported for HPC VMs; distros such as CentOS, RHEL,
Ubuntu, and SUSE are commonly used. For Windows, Windows Server 2016 and newer
versions are supported on all the HPC-series VMs. Windows Server 2012 R2 and Windows Server
2012 are also supported on the non-SR-IOV-enabled VMs (H16r, H16mr, A8, and A9). Note that
Windows Server 2012 R2 is not supported on HBv2 and other VMs with more than 64 (virtual or
physical) cores. See VM Images for a list of supported VM images on the Marketplace and how
they can be configured appropriately.
InfiniBand and Drivers - On InfiniBand enabled VMs, the appropriate drivers are required to
enable RDMA. On Linux, for both SR-IOV and non-SR-IOV enabled VMs, the CentOS-HPC VM
images in the Marketplace come pre-configured with the appropriate drivers. The Ubuntu VM
images can be configured with the right drivers using the instructions here. See Configure and
Optimize VMs for Linux OS for more details on ready-to-use VM Linux OS images.
On Linux, the InfiniBandDriverLinux VM extension can be used to install the Mellanox OFED drivers
and enable InfiniBand on the SR-IOV enabled H- and N-series VMs. Learn more about enabling
InfiniBand on RDMA-capable VMs at HPC Workloads.
On Windows, the InfiniBandDriverWindows VM extension installs Windows Network Direct drivers
(on non-SR-IOV VMs) or Mellanox OFED drivers (on SR-IOV VMs) for RDMA connectivity. In
certain deployments of A8 and A9 instances, the HpcVmDrivers extension is added automatically.
Note that the HpcVmDrivers VM extension is being deprecated; it will not be updated.
To add the VM extension to a VM, you can use Azure PowerShell cmdlets. For more information,
see Virtual machine extensions and features. You can also work with extensions for VMs deployed
in the classic deployment model.
MPI - The SR-IOV enabled VM sizes on Azure (HBv2, HB, HC, NCv3, NDv2) allow almost any flavor
of MPI to be used with Mellanox OFED. On non-SR-IOV enabled VMs, supported MPI
implementations use the Microsoft Network Direct (ND) interface to communicate between VMs.
Hence, only Microsoft MPI (MS-MPI) 2012 R2 or later and Intel MPI 5.x versions are supported.
Later versions (2017, 2018) of the Intel MPI runtime library may or may not be compatible with
the Azure RDMA drivers. See Setup MPI for HPC for more details on setting up MPI on HPC VMs
on Azure.
RDMA network address space - The RDMA network in Azure reserves the address space
172.16.0.0/16. To run MPI applications on instances deployed in an Azure virtual network, make
sure that the virtual network address space does not overlap the RDMA network.
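The overlap check above can be automated; a minimal sketch in Python using the standard ipaddress module (the candidate address spaces are hypothetical examples):

```python
import ipaddress

# Azure reserves this range for the RDMA backend network.
RDMA_NETWORK = ipaddress.ip_network("172.16.0.0/16")

def overlaps_rdma(vnet_cidr):
    """True if the proposed virtual network address space collides
    with the reserved RDMA network."""
    return ipaddress.ip_network(vnet_cidr).overlaps(RDMA_NETWORK)

print(overlaps_rdma("172.16.4.0/24"))  # overlaps: inside the reserved range
print(overlaps_rdma("10.0.0.0/16"))    # no overlap: safe to use
```

Running a check like this before deployment avoids hard-to-diagnose MPI connectivity failures caused by address collisions.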
Deployment considerations
Azure subscription – To deploy more than a few compute-intensive instances, consider a pay-as-
you-go subscription or other purchase options. If you're using an Azure free account, you can use
only a limited number of Azure compute cores.
Pricing and availability - These VM sizes are offered only in the Standard pricing tier. Check
Products available by region for availability in Azure regions.
Cores quota – You might need to increase the cores quota in your Azure subscription from the
default value. Your subscription might also limit the number of cores you can deploy in certain VM
size families, including the H-series. To request a quota increase, open an online customer support
request at no charge. (Default limits may vary depending on your subscription category.)
NOTE
Contact Azure Support if you have large-scale capacity needs. Azure quotas are credit limits, not capacity
guarantees. Regardless of your quota, you are only charged for cores that you use.
Virtual network – An Azure virtual network is not required to use the compute-intensive
instances. However, for many deployments you need at least a cloud-based Azure virtual network,
or a site-to-site connection if you need to access on-premises resources. When needed, create a
new virtual network to deploy the instances. Adding compute-intensive VMs to a virtual network
in an affinity group is not supported.
Resizing – Because of their specialized hardware, you can only resize compute-intensive instances
within the same size family (H-series or N-series). For example, you can only resize an H-series VM
from one H-series size to another. Additional considerations around InfiniBand driver support and
NVMe disks may apply for certain VMs.
Other sizes
General purpose
Compute optimized
Memory optimized
Storage optimized
GPU optimized
Previous generations
Next steps
Learn more about configuring your VMs, enabling InfiniBand, setting up MPI and optimizing HPC
applications for Azure at HPC Workloads.
Read about the latest announcements and some HPC examples and results at the Azure Compute Tech
Community Blogs.
For a higher level architectural view of running HPC workloads, see High Performance Computing
(HPC) on Azure.
H-series
H-series VMs are optimized for applications driven by high CPU frequencies or large memory per core
requirements. H-series VMs feature 8 or 16 Intel Xeon E5 2667 v3 processor cores, up to 14 GB of RAM per CPU
core, and no hyperthreading. H-series features 56 Gb/sec Mellanox FDR InfiniBand in a non-blocking fat tree
configuration for consistent RDMA performance. H-series VMs are not SR-IOV enabled currently and support Intel
MPI 5.x and MS-MPI.
ACU: 290-300
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1
Size | vCPU | Processor | Memory (GiB) | Memory bandwidth (GB/s) | Base CPU frequency (GHz) | All-cores frequency (GHz, peak) | Single-core frequency (GHz, peak) | RDMA performance (Gb/s) | MPI support | Temp storage (GiB) | Max data disks | Max disk throughput: IOPS | Max Ethernet vNICs
1 For MPI applications, a dedicated RDMA backend network is enabled by the FDR InfiniBand network.
NOTE
Among the RDMA-capable VMs, the H-series are not SR-IOV enabled. Therefore, the supported VM images, InfiniBand
driver requirements, and supported MPI libraries differ from those for the SR-IOV-enabled VMs.
HB-series
HB-series VMs are optimized for applications driven by memory bandwidth, such as fluid dynamics, explicit finite
element analysis, and weather modeling. HB VMs feature 60 AMD EPYC 7551 processor cores, 4 GB of RAM per
CPU core, and no simultaneous multithreading. An HB VM provides up to 260 GB/sec of memory bandwidth.
HB-series VMs feature 100 Gb/sec Mellanox EDR InfiniBand. These VMs are connected in a non-blocking fat tree
for optimized and consistent RDMA performance. These VMs support Adaptive Routing and the Dynamic
Connected Transport (DCT), in addition to standard RC and UD transports. These features enhance
application performance, scalability, and consistency, and their use is strongly recommended.
ACU: 199-216
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
Size | vCPU | Processor | Memory (GiB) | Memory bandwidth (GB/s) | Base CPU frequency (GHz) | All-cores frequency (GHz, peak) | Single-core frequency (GHz, peak) | RDMA performance (Gb/s) | MPI support | Temp storage (GiB) | Max data disks | Max Ethernet vNICs
Standard_HB60rs | 60 | AMD EPYC 7551 | 240 | 263 | 2.0 | 2.55 | 2.55 | 100 | All | 700 | 4 | 8
HBv2-series
HBv2-series VMs are optimized for applications driven by memory bandwidth, such as fluid dynamics, finite
element analysis, and reservoir simulation. HBv2 VMs feature 120 AMD EPYC 7742 processor cores, 4 GB of RAM
per CPU core, and no simultaneous multithreading. Each HBv2 VM provides up to 340 GB/sec of memory
bandwidth, and up to 4 teraFLOPS of FP64 compute.
HBv2-series VMs feature 200 Gb/sec Mellanox HDR InfiniBand. These VMs are connected in a non-blocking fat tree
for optimized and consistent RDMA performance. These VMs support Adaptive Routing and the Dynamic
Connected Transport (DCT), in addition to standard RC and UD transports. These features enhance
application performance, scalability, and consistency, and their use is strongly recommended.
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1
Size | vCPU | Processor | Memory (GiB) | Memory bandwidth (GB/s) | Base CPU frequency (GHz) | All-cores frequency (GHz, peak) | Single-core frequency (GHz, peak) | RDMA performance (Gb/s) | MPI support | Temp storage (GiB) | Max data disks | Max Ethernet vNICs
Standard_HB120rs_v2 | 120 | AMD EPYC 7V12 | 480 | 350 | 2.45 | 3.1 | 3.3 | 200 | All | 480 + 960 | 8 | 8
HC-series
HC-series VMs are optimized for applications driven by dense computation, such as implicit finite element analysis,
molecular dynamics, and computational chemistry. HC VMs feature 44 Intel Xeon Platinum 8168 processor cores,
8 GB of RAM per CPU core, and no hyperthreading. The Intel Xeon Platinum platform supports Intel’s rich
ecosystem of software tools such as the Intel Math Kernel Library and advanced vector processing capabilities
such as AVX-512.
HC-series VMs feature 100 Gb/sec Mellanox EDR InfiniBand. These VMs are connected in a non-blocking fat tree
for optimized and consistent RDMA performance. These VMs support Adaptive Routing and the Dynamic
Connected Transport (DCT), in addition to the standard RC and UD transports. These features enhance application
performance, scalability, and consistency, and their use is recommended.
ACU: 297-315
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
SIZE | VCPU | PROCESSOR | MEMORY (GIB) | MEMORY BANDWIDTH (GB/S) | BASE CPU FREQUENCY (GHZ) | ALL-CORES FREQUENCY (GHZ, PEAK) | SINGLE-CORE FREQUENCY (GHZ, PEAK) | RDMA PERFORMANCE (GB/S) | MPI SUPPORT | TEMP STORAGE (GIB) | MAX DATA DISKS | MAX ETHERNET VNICS
Standard_HC44rs | 44 | Intel Xeon Platinum 8168 | 352 | 191 | 2.7 | 3.4 | 3.7 | 100 | All | 700 | 4 | 8
Other sizes
General purpose
Memory optimized
Storage optimized
GPU optimized
High performance compute
Previous generations
Next steps
Learn more about configuring your VMs, enabling InfiniBand, setting up MPI, and optimizing HPC applications
for Azure at HPC Workloads.
Read about the latest announcements and some HPC examples and results at the Azure Compute Tech
Community Blogs.
For a high-level architectural view of running HPC workloads, see High Performance Computing (HPC) on
Azure.
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure
SKUs.
Save costs with Azure Reserved VM Instances (Linux)
11/6/2020 • 7 minutes to read • Edit Online
When you commit to an Azure reserved VM instance, you can save money. The reservation discount is applied
automatically to the number of running virtual machines that match the reservation scope and attributes. You don't
need to assign a reservation to a virtual machine to get the discounts. A reserved instance purchase covers only the
compute part of your VM usage. For Windows VMs, the usage meter is split into two separate meters: a
compute meter, which is the same as the Linux meter, and a Windows IP meter. The charges that you see when you
make the purchase are only for the compute costs; they don't include Windows software costs. For more
information about software costs, see Software costs not included with Azure Reserved VM Instances.
Subscription | The subscription used to pay for the reservation. The payment method on the subscription is charged the costs for the reservation. The subscription type must be an enterprise agreement (offer numbers MS-AZR-0017P or MS-AZR-0148P), a Microsoft Customer Agreement, or an individual subscription with pay-as-you-go rates (offer numbers MS-AZR-0003P or MS-AZR-0023P). The charges are deducted from the monetary commitment balance, if available, or charged as overage. For a subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription.
Term | One year or three years. There's also a 5-year term available only for HBv2 VMs.
Next steps
To learn how to manage a reservation, see Manage Azure Reservations.
To learn more about Azure Reservations, see the following articles:
What are Azure Reservations?
Manage Reservations in Azure
Understand how the reservation discount is applied
Understand reservation usage for a subscription with pay-as-you-go rates
Understand reservation usage for your Enterprise enrollment
Windows software costs not included with reservations
Azure Reservations in Partner Center Cloud Solution Provider (CSP) program
Save costs with Azure Dedicated Host reservations
11/2/2020 • 5 minutes to read • Edit Online
When you commit to a reserved instance of Azure Dedicated Hosts, you can save money. The reservation discount
is applied automatically to the number of running dedicated hosts that match the reservation scope and attributes.
You don't need to assign a reservation to a dedicated host to get the discounts. A reserved instance purchase
covers only the compute part of your usage and doesn't include software licensing costs. See the Overview of Azure
Dedicated Hosts for virtual machines.
Buy a reservation
You can buy a reserved instance of an Azure Dedicated Host in the Azure portal.
Pay for the reservation up front or with monthly payments. These requirements apply to buying a reserved
Dedicated Host instance:
You must be in an Owner role for at least one EA subscription or a subscription with a pay-as-you-go rate.
For EA subscriptions, the Add Reserved Instances option must be enabled in the EA portal. Or, if that
setting is disabled, you must be an EA Admin for the subscription.
For the Cloud Solution Provider (CSP) program, only the admin agents or sales agents can buy reservations.
To buy an instance:
1. Sign in to the Azure portal.
2. Select All services > Reservations.
3. Select Add to purchase a new reservation, and then click Dedicated Hosts.
4. Enter the required fields. Running Dedicated Host instances that match the attributes you select qualify for
the reservation discount. The actual number of your Dedicated Host instances that get the discount depends
on the scope and quantity selected.
If you have an EA agreement, you can use the Add more option to quickly add additional instances. The option
isn't available for other subscription types.
Subscription | The subscription used to pay for the reservation. The payment method on the subscription is charged the costs for the reservation. The subscription type must be an enterprise agreement (offer numbers MS-AZR-0017P or MS-AZR-0148P), a Microsoft Customer Agreement, or an individual subscription with pay-as-you-go rates (offer numbers MS-AZR-0003P or MS-AZR-0023P). The charges are deducted from the monetary commitment balance, if available, or charged as overage. For a subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription.
Single resource group scope — Applies the reservation discount to the matching resources in the
selected resource group only.
Single subscription scope — Applies the reservation discount to the matching resources in the selected
subscription.
Shared scope — Applies the reservation discount to matching resources in eligible subscriptions that are
in the billing context. For EA customers, the billing context is the enrollment. For individual subscriptions
with pay-as-you-go rates, the billing scope is all eligible subscriptions created by the account administrator.
Next steps
To learn how to manage a reservation, see Manage Azure Reservations.
To learn more about Azure Reservations, see the following articles:
What are Azure Reservations?
Using Azure Dedicated Hosts
Dedicated Hosts Pricing
Manage Reservations in Azure
Understand how the reservation discount is applied
Understand reservation usage for a subscription with pay-as-you-go rates
Understand reservation usage for your Enterprise enrollment
Windows software costs not included with reservations
Azure Reservations in Partner Center Cloud Solution Provider (CSP) program
What are Azure Reservations?
11/2/2020 • 6 minutes to read • Edit Online
Azure Reservations help you save money by committing to one-year or three-year plans for multiple products.
Committing allows you to get a discount on the resources you use. Reservations can reduce your
resource costs by up to 72% compared with pay-as-you-go prices. Reservations provide a billing discount and don't affect the
runtime state of your resources. After you purchase a reservation, the discount automatically applies to matching
resources.
You can pay for a reservation up front or monthly. The total cost of up-front and monthly reservations is the same
and you don't pay any extra fees when you choose to pay monthly. Monthly payment is available for Azure
reservations, not third-party products.
You can buy a reservation in the Azure portal.
Buying a reservation
You can purchase reservations from the Azure portal, APIs, PowerShell, and CLI.
Go to the Azure portal to make a purchase.
For more information, see Buy a reservation.
How is a reservation billed?
The reservation is charged to the payment method tied to the subscription. The reservation cost is deducted from
your monetary commitment balance, if available. When your monetary commitment balance doesn't cover the cost
of the reservation, you're billed the overage. If you have a subscription from an individual plan with pay-as-you-go
rates, the credit card you have on your account is billed immediately for up-front purchases. Monthly payments
appear on your invoice and your credit card is charged monthly. When you're billed by invoice, you see the charges
on your next invoice.
Next steps
Learn more about Azure Reservations with the following articles:
Manage Azure Reservations
Understand reservation usage for your subscription with pay-as-you-go rates
Understand reservation usage for your Enterprise enrollment
Windows software costs not included with reservations
Azure Reservations in Partner Center Cloud Solution Provider (CSP) program
Learn more about reservations for service plans:
Virtual Machines with Azure Reserved VM Instances
Azure Cosmos DB resources with Azure Cosmos DB reserved capacity
SQL Database compute resources with Azure SQL Database reserved capacity
Azure Cache for Redis resources with Azure Cache for Redis reserved capacity
Learn more about reservations for software plans:
Red Hat software plans from Azure Reservations
SUSE software plans from Azure Reservations
Virtual machine size flexibility with Reserved VM
Instances
11/2/2020 • 2 minutes to read • Edit Online
When you buy a Reserved VM Instance, you can choose to optimize for instance size flexibility or capacity priority.
For more information about setting or changing the optimize setting for reserved VM instances, see Change the
optimize setting for reserved VM instances.
With a reserved virtual machine instance that's optimized for instance size flexibility, the reservation you buy can
apply to the virtual machine (VM) sizes in the same instance size flexibility group. For example, if you buy a
reservation for a VM size that's listed in the DSv2-series, like Standard_DS5_v2, the reservation discount can apply
to the other four sizes that are listed in that same instance size flexibility group:
Standard_DS1_v2
Standard_DS2_v2
Standard_DS3_v2
Standard_DS4_v2
But that reservation discount doesn't apply to VM sizes that are listed in different instance size flexibility groups,
like SKUs in DSv2-series High Memory: Standard_DS11_v2, Standard_DS12_v2, and so on.
Within the instance size flexibility group, the number of VMs the reservation discount applies to depends on the VM
size you pick when you buy a reservation. It also depends on the sizes of the VMs that you have running. The ratio
column compares the relative footprint for each VM size in that instance size flexibility group. Use the ratio value to
calculate how the reservation discount applies to the VMs you have running.
Examples
The following examples use the sizes and ratios in the DSv2-series table.
You buy a reserved VM instance with the size Standard_DS4_v2 where the ratio or relative footprint compared to
the other sizes in that series is 8.
Scenario 1: Run eight Standard_DS1_v2 sized VMs with a ratio of 1. Your reservation discount applies to all eight
of those VMs.
Scenario 2: Run two Standard_DS2_v2 sized VMs with a ratio of 2 each. Also run a Standard_DS3_v2 sized VM
with a ratio of 4. The total footprint is 2+2+4=8. So your reservation discount applies to all three of those VMs.
Scenario 3: Run one Standard_DS5_v2 with a ratio of 16. Your reservation discount applies to half that VM's
compute cost.
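The ratio arithmetic in the scenarios above can be sketched in code. This is a simplified illustration, not the actual billing engine: the ratio values for the DSv2 instance size flexibility group are taken from the scenarios above, and the greedy in-order allocation of the reservation footprint is an assumption (Azure applies the discount hourly across matching VMs).

```python
# Simplified sketch of instance size flexibility (not the real billing logic).
# Ratios for the DSv2 group, per the scenarios above.
RATIOS = {
    "Standard_DS1_v2": 1,
    "Standard_DS2_v2": 2,
    "Standard_DS3_v2": 4,
    "Standard_DS4_v2": 8,
    "Standard_DS5_v2": 16,
}

def discount_coverage(reserved_size, reserved_qty, running_sizes):
    """Return the fraction of each running VM's compute cost the reservation
    covers, spending the footprint budget greedily in list order."""
    budget = RATIOS[reserved_size] * reserved_qty
    coverage = []
    for size in running_sizes:
        ratio = RATIOS[size]
        covered = min(budget, ratio)
        coverage.append(covered / ratio)
        budget -= covered
    return coverage

# Scenario 1: one DS4_v2 reservation (footprint 8) fully covers eight DS1_v2 VMs.
print(discount_coverage("Standard_DS4_v2", 1, ["Standard_DS1_v2"] * 8))
# Scenario 3: the same reservation covers half of one DS5_v2 (ratio 16).
print(discount_coverage("Standard_DS4_v2", 1, ["Standard_DS5_v2"]))
```

Scenario 2 works the same way: footprints 2 + 2 + 4 exhaust the budget of 8, so all three VMs are fully covered.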
The following sections show what sizes are in the same size series group when you buy a reserved VM instance
optimized for instance size flexibility.
Spot VMs
Using Spot VMs allows you to take advantage of Azure's unused capacity at a significant cost savings. At any point in
time when Azure needs the capacity back, the Azure infrastructure will evict Spot VMs. Therefore, Spot VMs are
great for workloads that can handle interruptions, like batch processing jobs, dev/test environments, large
compute workloads, and more.
The amount of available capacity can vary based on size, region, time of day, and more. When deploying Spot VMs,
Azure will allocate the VMs if there is capacity available, but there is no SLA for these VMs. A Spot VM offers no
high availability guarantees. At any point in time when Azure needs the capacity back, the Azure infrastructure will
evict Spot VMs with 30 seconds notice.
Eviction policy
VMs can be evicted based on capacity or the max price you set. When creating a Spot VM, you can set the eviction
policy to Deallocate (default) or Delete.
The Deallocate policy moves your VM to the stopped-deallocated state, allowing you to redeploy it later. However,
there is no guarantee that the allocation will succeed. The deallocated VMs will count against your quota and you
will be charged storage costs for the underlying disks.
If you would like your VM to be deleted when it is evicted, you can set the eviction policy to delete. The evicted
VMs are deleted together with their underlying disks, so you will not continue to be charged for the storage.
You can opt in to receive in-VM notifications through Azure Scheduled Events. These notify you when your VMs are
being evicted, and you will have 30 seconds to finish any jobs and perform shutdown tasks prior to the eviction.
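In practice, these notifications surface through the Scheduled Events endpoint of the Azure Instance Metadata Service (http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01, queried with the Metadata: true header). The sketch below parses a response of the documented shape and extracts the VMs with a pending Preempt (eviction) event; the sample payload itself is illustrative.

```python
import json

# Illustrative Scheduled Events payload. A real one is fetched from the
# Instance Metadata Service at
#   http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01
# with the request header "Metadata: true".
SAMPLE = json.loads("""
{
  "DocumentIncarnation": 2,
  "Events": [
    {
      "EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5",
      "EventType": "Preempt",
      "ResourceType": "VirtualMachine",
      "Resources": ["myspotvm"],
      "EventStatus": "Scheduled",
      "NotBefore": "Mon, 19 Sep 2022 18:29:47 GMT"
    }
  ]
}
""")

def vms_being_preempted(doc):
    """Return the resources named in any pending Preempt (eviction) event."""
    return [
        resource
        for event in doc.get("Events", [])
        if event.get("EventType") == "Preempt"
        for resource in event.get("Resources", [])
    ]

print(vms_being_preempted(SAMPLE))  # ['myspotvm']
```

A shutdown handler would poll this endpoint, and on seeing its own VM name in a Preempt event, use the roughly 30-second window to checkpoint work before eviction.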
OPTION | OUTCOME
Max price is set to >= the current price. | The VM is deployed if capacity and quota are available.
Max price is set to < the current price. | The VM is not deployed. You will get an error message that the max price needs to be >= the current price.
Restarting a stopped/deallocated VM if the max price is >= the current price | If there is capacity and quota, the VM is deployed.
Restarting a stopped/deallocated VM if the max price is < the current price | You will get an error message that the max price needs to be >= the current price.
Price for the VM has gone up and is now > the max price. | The VM gets evicted. You get a 30-second notification before the actual eviction.
After eviction, the price for the VM goes back to being < the max price. | The VM will not be automatically restarted. You can restart the VM yourself, and it will be charged at the current price.
The max price is set to -1. | The VM will not be evicted for pricing reasons. The max price will be the current price, up to the price for standard VMs. You will never be charged above the standard price.
Changing the max price | You need to deallocate the VM to change the max price. Deallocate the VM, set a new max price, then update the VM.
Limitations
The following VM sizes are not supported for Spot VMs:
B-series
Promo versions of any size (like Dv2, NV, NC, H promo sizes)
Spot VMs can be deployed to any region, except Microsoft Azure China 21Vianet.
The following offer types are currently supported:
Enterprise Agreement
Pay-as-you-go
Sponsored
For Cloud Service Provider (CSP), contact your partner
Pricing
Pricing for Spot VMs is variable, based on region and SKU. For more information, see VM pricing for Linux and
Windows.
You can also use the Azure Retail Prices API to query information about Spot pricing. The meterName and
skuName of Spot meters will both contain Spot.
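As a sketch, a query against the Retail Prices API can be built like this. The endpoint and the $filter parameter follow the public Azure Retail Prices API; identifying Spot meters client-side by checking meterName mirrors the note above (server-side contains() filtering is not assumed here).

```python
from urllib.parse import urlencode

ENDPOINT = "https://prices.azure.com/api/retail/prices"

def vm_price_query_url(region):
    """Build a Retail Prices API URL for Virtual Machines meters in one region."""
    odata_filter = (
        f"serviceName eq 'Virtual Machines' and armRegionName eq '{region}'"
    )
    return f"{ENDPOINT}?{urlencode({'$filter': odata_filter})}"

def spot_meters(items):
    """Keep only Spot meters; per the docs, meterName contains 'Spot'."""
    return [item for item in items if "Spot" in item.get("meterName", "")]

print(vm_price_query_url("eastus"))

# Client-side filtering over a mocked response page (field names per the API):
sample_items = [
    {"meterName": "D2s v3 Spot", "retailPrice": 0.0123},
    {"meterName": "D2s v3", "retailPrice": 0.096},
]
print(spot_meters(sample_items))
```

Fetching the URL (for example with urllib.request) returns paged JSON whose Items array carries the meter records; the prices shown in sample_items are placeholders, not real rates.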
With variable pricing, you have the option to set a max price, in US dollars (USD), using up to 5 decimal places. For
example, the value 0.98765 would be a max price of $0.98765 USD per hour. If you set the max price to -1, the
VM won't be evicted based on price. The price for the VM will be the current Spot price or the price for a
standard VM, whichever is less, as long as there is capacity and quota available.
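The deployment and eviction rules in the table above can be condensed into a small decision sketch. This mirrors the documented behavior (a max price of -1 means the VM is never evicted for price reasons); the function names are illustrative.

```python
def can_deploy_spot(max_price, current_price, has_capacity, has_quota):
    """Deployment rule: a finite max price must be >= the current Spot price,
    and both capacity and quota must be available. -1 opts out of the cap."""
    if max_price != -1 and max_price < current_price:
        return False  # error: max price needs to be >= the current price
    return has_capacity and has_quota

def evicted_for_price(max_price, current_price):
    """Eviction-for-price rule: only a finite max price below the current
    price triggers eviction (with a 30-second notification)."""
    return max_price != -1 and current_price > max_price

print(can_deploy_spot(0.05000, 0.01234, True, True))   # True
print(can_deploy_spot(0.00100, 0.01234, True, True))   # False
print(evicted_for_price(-1, 99.0))                     # False: never price-evicted
```

Note that capacity-based eviction is separate: even with a max price of -1, Azure can still reclaim the VM when it needs the capacity back.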
Next steps
Use the CLI, portal, ARM template, or PowerShell to deploy Spot VMs.
You can also deploy a scale set with Spot VM instances.
If you encounter an error, see Error codes.
Azure VM sizes with no local temporary disk
11/2/2020 • 2 minutes to read • Edit Online
This article provides answers to frequently asked questions (FAQ) about Azure VM sizes that do not have a local
temporary disk (i.e. no local temp disk). For more information on these VM sizes, see Specifications for Dv4 and
Dsv4-series (General Purpose Workloads) or Specifications for Ev4 and Esv4-series (Memory Optimized
Workloads).
NOTE
Local temporary disk is not persistent; to ensure your data is persistent, please use Standard HDD, Premium SSD or Ultra
SSD options.
What are the differences between these new VM sizes and the General
Purpose Dv3/Dsv3 or the Memory Optimized Ev3/Esv3 VM sizes that I
am used to?
V3 VM FAMILIES WITH LOCAL TEMP DISK | NEW V4 FAMILIES WITH LOCAL TEMP DISK | NEW V4 FAMILIES WITH NO LOCAL TEMP DISK
Can I resize a VM size that has a local temp disk to a VM size with no
local temp disk?
No. The only combinations allowed for resizing are:
1. VM (with local temp disk) -> VM (with local temp disk); and
2. VM (with no local temp disk) -> VM (with no local temp disk).
NOTE
If an image depends on the resource disk, or if a pagefile or swapfile exists on the local temp disk, the diskless images will not
work; use the ‘with disk’ alternative instead.
Next steps
In this document, you learned more about the most frequent questions related to Azure VMs with no local temp
disk. For more information about these VM sizes, see the following articles:
Specifications for Dv4 and Dsv4-series (General Purpose Workload)
Specifications for Ev4 and Esv4-series (Memory Optimized Workload)
Azure virtual machine sizes naming conventions
11/2/2020 • 2 minutes to read • Edit Online
This page outlines the naming conventions used for Azure VMs. VMs use these naming conventions to denote
varying features and specifications.
VALUE | EXPLANATION
Additive Features | One or more lowercase letters denote additive features, such as:
a = AMD-based processor
d = disk (local temp disk is present); this is for newer Azure
VMs, see Ddv4 and Ddsv4-series
h = hibernation capable
i = isolated size
l = low memory; a lower amount of memory than the
memory intensive size
m = memory intensive; the most amount of memory in a
particular size
t = tiny memory; the smallest amount of memory in a
particular size
r = RDMA capable
s = Premium Storage capable, including possible use of Ultra
SSD (note: some newer sizes without the s attribute can
still support Premium Storage, e.g. M128, M64, etc.)
Example breakdown
[Family] + [Sub-family]* + [# of vCPUs] + [Additive Features] + [Accelerator Type]* + [Version]
Example 1: M416ms_v2
VALUE | EXPLANATION
Family | M
# of vCPUs | 416
Version | v2
Example 2: NV16as_v4
VALUE | EXPLANATION
Family | N
Sub-family | V
# of vCPUs | 16
Version | v4
Example 3: NC4as_T4_v3
VALUE | EXPLANATION
Family | N
Sub-family | C
# of vCPUs | 4
Accelerator Type | T4
Version | v3
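A rough parser for this naming convention can be sketched with a regular expression. The pattern is pieced together from the breakdown and examples above (family letter, optional sub-family letters, vCPU count, lowercase additive features, optional accelerator type, optional version); it is an illustration, not an official grammar, and will not cover every SKU.

```python
import re

# [Family] + [Sub-family]* + [# of vCPUs] + [Additive Features]
#   + [Accelerator Type]* + [Version], per the convention above.
SIZE_PATTERN = re.compile(
    r"^(?:Standard_)?"
    r"(?P<family>[A-Z])"
    r"(?P<subfamily>[A-Z]*)"
    r"(?P<vcpus>\d+)"
    r"(?P<features>[a-z]*)"
    r"(?:_(?P<accelerator>[A-Z0-9]+))?"
    r"(?:_v(?P<version>\d+))?$"
)

def parse_vm_size(name):
    """Split a VM size name into its naming-convention components."""
    match = SIZE_PATTERN.match(name)
    if not match:
        return None
    parts = match.groupdict()
    parts["vcpus"] = int(parts["vcpus"])
    parts["version"] = int(parts["version"]) if parts["version"] else 1
    parts["subfamily"] = parts["subfamily"] or None
    return parts

print(parse_vm_size("M416ms_v2"))
print(parse_vm_size("NC4as_T4_v3"))
```

Running it on the three examples above recovers the same breakdowns: M416ms_v2 has family M, 416 vCPUs, features "ms", version 2; NC4as_T4_v3 adds sub-family C and accelerator type T4.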
Next steps
Learn more about available VM Sizes in Azure.
Previous generations of virtual machine sizes
11/6/2020 • 14 minutes to read • Edit Online
This section provides information on previous generations of virtual machine sizes. These sizes can still
be used, but there are newer generations available.
F-series
F-series is based on the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) processor, which can achieve clock
speeds as high as 3.1 GHz with the Intel Turbo Boost Technology 2.0. This is the same CPU performance
as the Dv2-series of VMs.
F-series VMs are an excellent choice for workloads that demand faster CPUs but do not need as much
memory or temporary storage per vCPU. Workloads such as analytics, gaming servers, web servers, and
batch processing will benefit from the value of the F-series.
ACU: 210 - 250
Premium Storage: Not Supported
Premium Storage caching: Not Supported
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | MAX TEMP STORAGE THROUGHPUT (IOPS / READ MBPS / WRITE MBPS) | MAX DATA DISKS / THROUGHPUT (IOPS) | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
Fs-series 1
The Fs-series provides all the advantages of the F-series, in addition to Premium storage.
ACU: 210 - 250
Premium Storage: Supported
Premium Storage caching: Supported
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT (IOPS / MBPS, CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT (IOPS / MBPS) | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
NVv2-series
Newer size recommendation : NVv3-series
The NVv2-series virtual machines are powered by NVIDIA Tesla M60 GPUs and NVIDIA GRID technology
with Intel Broadwell CPUs. These virtual machines are targeted for GPU accelerated graphics
applications and virtual desktops where customers want to visualize their data, simulate results to view,
work on CAD, or render and stream content. Additionally, these virtual machines can run single precision
workloads such as encoding and rendering. NVv2 virtual machines support Premium Storage and come
with twice the system memory (RAM) compared with their predecessor, the NV-series.
Each GPU in NVv2 instances comes with a GRID license. This license gives you the flexibility to use an NV
instance as a virtual workstation for a single user, or 25 concurrent users can connect to the VM for a
virtual application scenario.
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | GPU | GPU MEMORY (GIB) | MAX DATA DISKS | MAX NICS | VIRTUAL WORKSTATIONS | VIRTUAL APPLICATIONS
Basic A
Newer size recommendation : Av2-series
Premium Storage: Not Supported
Premium Storage caching: Not Supported
The basic tier sizes are primarily for development workloads and other applications that don't require
load balancing, auto-scaling, or memory-intensive virtual machines.
SIZE\NAME | VCPU | MEMORY | NICS (MAX) | MAX TEMPORARY DISK SIZE | MAX. DATA DISKS (1023 GB EACH) | MAX. IOPS (300 PER DISK)
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (HDD) (GIB) | MAX DATA DISKS | MAX DATA DISK THROUGHPUT (IOPS) | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
1 The A0 size is over-subscribed on the physical hardware. For this specific size only, other customer
deployments may impact the performance of your running workload. The relative performance is
outlined below as the expected baseline, subject to an approximate variability of 15 percent.
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (HDD) (GIB) | MAX DATA DISKS | MAX DATA DISK THROUGHPUT (IOPS) | MAX NICS
1 For MPI applications, a dedicated RDMA backend network is enabled by an FDR InfiniBand network, which
delivers ultra-low latency and high bandwidth.
NOTE
The A8 – A11 VMs are planned for retirement on 3/2021. We strongly recommend not creating any new A8 –
A11 VMs. Please migrate any existing A8 – A11 VMs to newer, more powerful high-performance computing VM
sizes such as H, HB, HC, and HBv2, as well as general purpose compute VM sizes such as D, E, and F, for better price-
performance. For more information, see HPC Migration Guide.
D-series
Newer size recommendation : Dav4-series, Dv4-series and Ddv4-series
ACU: 160-250 1
Premium Storage: Not Supported
Premium Storage caching: Not Supported
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | MAX TEMP STORAGE THROUGHPUT (IOPS / READ MBPS / WRITE MBPS) | MAX DATA DISKS / THROUGHPUT (IOPS) | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
Preview: DC-series
Newer size recommendation : DCsv2-series
Premium Storage: Supported
Premium Storage caching: Supported
The DC-series uses the latest generation of the 3.7 GHz Intel Xeon E-2176G processor with SGX technology,
and with Intel Turbo Boost Technology it can go up to 4.7 GHz.
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT (IOPS / MBPS, CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT (IOPS / MBPS) | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
DS-series
Newer size recommendation : Dasv4-series, Dsv4-series and Ddsv4-series
ACU: 160-250 1
Premium Storage: Supported
Premium Storage caching: Supported
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT (IOPS / MBPS, CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT (IOPS / MBPS) | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
1 The maximum disk throughput (IOPS or MBps) possible with a DS-series VM may be limited by the
number, size, and striping of the attached disks. For details, see Design for high performance.
2 The VM family can run on one of the following CPUs: 2.2 GHz Intel Xeon® E5-2660 v2, 2.4 GHz Intel Xeon®
E5-2673 v3 (Haswell), or 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell).
Ls-series
Newer size recommendation : Lsv2-series
The Ls-series offers up to 32 vCPUs, using the Intel® Xeon® processor E5 v3 family. The Ls-series
gets the same CPU performance as the G/GS-Series and comes with 8 GiB of memory per vCPU.
The Ls-series does not support the creation of a local cache to increase the IOPS achievable by durable
data disks. The high throughput and IOPS of the local disk makes Ls-series VMs ideal for NoSQL stores
such as Apache Cassandra and MongoDB which replicate data across multiple VMs to achieve
persistence in the event of the failure of a single VM.
ACU: 180-240
Premium Storage: Supported
Premium Storage caching: Not Supported
The maximum disk throughput possible with Ls-series VMs may be limited by the number, size, and
striping of any attached disks. For details, see Design for high performance.
1 Instance is isolated to hardware dedicated to a single customer.
GS -series
Newer size recommendation : Easv4-series, Esv4-series, Edsv4-series and M-series
ACU: 180 - 240 1
Premium Storage: Supported
Premium Storage caching: Supported
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | MAX DATA DISKS | MAX CACHED AND TEMP STORAGE THROUGHPUT (IOPS / MBPS, CACHE SIZE IN GIB) | MAX UNCACHED DISK THROUGHPUT (IOPS / MBPS) | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
1 The maximum disk throughput (IOPS or MBps) possible with a GS series VM may be limited by the
number, size and striping of the attached disk(s). For details, see Design for high performance.
2 Instance is isolated to hardware dedicated to a single customer.
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | MAX TEMP STORAGE THROUGHPUT (IOPS / READ MBPS / WRITE MBPS) | MAX DATA DISKS / THROUGHPUT (IOPS) | MAX NICS / EXPECTED NETWORK BANDWIDTH (MBPS)
NV -series
Newer size recommendation : NVv3-series and NVv4-series
The NV-series virtual machines are powered by NVIDIA Tesla M60 GPUs and NVIDIA GRID technology
for desktop-accelerated applications and virtual desktops where customers are able to visualize their
data or simulations. Users can visualize their graphics-intensive workflows on the NV instances to
get superior graphics capability, and additionally run single-precision workloads such as encoding and
rendering. NV-series VMs are also powered by Intel Xeon E5-2690 v3 (Haswell) CPUs.
Each GPU in NV instances comes with a GRID license. This license gives you the flexibility to use an NV
instance as a virtual workstation for a single user, or 25 concurrent users can connect to the VM for a
virtual application scenario.
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | GPU | GPU MEMORY (GIB) | MAX DATA DISKS | MAX NICS | VIRTUAL WORKSTATIONS | VIRTUAL APPLICATIONS
Standard_NV6 | 6 | 56 | 340 | 1 | 8 | 24 | 1 | 1 | 25
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | GPU | GPU MEMORY (GIB) | MAX DATA DISKS | MAX NICS
Standard_NC6 | 6 | 56 | 340 | 1 | 12 | 24 | 1
NCv2 series
Newer size recommendation : NC T4 v3-series and NC V100 v3-series
NCv2-series VMs are powered by NVIDIA Tesla P100 GPUs. These GPUs can provide more than 2x the
computational performance of the NC-series. Customers can take advantage of these updated GPUs for
traditional HPC workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo
simulations, and others. In addition to the GPUs, the NCv2-series VMs are also powered by Intel Xeon
E5-2690 v4 (Broadwell) CPUs.
The NC24rs v2 configuration provides a low latency, high-throughput network interface optimized for
tightly coupled parallel computing workloads.
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
For this VM series, the vCPU (core) quota in your subscription is initially set to 0 in each region.
Request a vCPU quota increase for this series in an available region.
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | GPU | GPU MEMORY (GIB) | MAX DATA DISKS | MAX UNCACHED DISK THROUGHPUT (IOPS / MBPS) | MAX NICS
ND series
Newer size recommendation : NDv2-series and NC V100 v3-series
The ND-series virtual machines are a new addition to the GPU family, designed for AI and deep learning
workloads. They offer excellent performance for training and inference. ND instances are powered by
NVIDIA Tesla P40 GPUs and Intel Xeon E5-2690 v4 (Broadwell) CPUs. These instances provide excellent
performance for single-precision floating point operations and for AI workloads utilizing Microsoft Cognitive
Toolkit, TensorFlow, Caffe, and other frameworks. The ND-series also offers a much larger GPU memory
size (24 GB), enabling much larger neural net models to fit. Like the NC-series, the ND-series offers a
configuration with a secondary low-latency, high-throughput network through RDMA and InfiniBand
connectivity so you can run large-scale training jobs spanning many GPUs.
Premium Storage: Supported
Premium Storage caching: Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1 and 2
For this VM series, the vCPU (core) quota per region in your subscription is initially set to 0. Request
a vCPU quota increase for this series in an available region.
SIZE | VCPU | MEMORY (GIB) | TEMP STORAGE (SSD) (GIB) | GPU | GPU MEMORY (GIB) | MAX DATA DISKS | MAX UNCACHED DISK THROUGHPUT (IOPS / MBPS) | MAX NICS
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across
Azure SKUs.
Virtual machine isolation in Azure
11/2/2020 • 2 minutes to read • Edit Online
Azure Compute offers virtual machine sizes that are Isolated to a specific hardware type and dedicated to a single
customer. The Isolated sizes live and operate on a specific hardware generation and will be deprecated when the
hardware generation is retired.
Isolated virtual machine sizes are best suited for workloads that require a high degree of isolation from other
customers’ workloads for reasons that include meeting compliance and regulatory requirements. Utilizing an
isolated size guarantees that your virtual machine will be the only one running on that specific server instance.
Additionally, as the Isolated size VMs are large, customers may choose to subdivide the resources of these VMs by
using Azure support for nested virtual machines.
The current Isolated virtual machine offerings include:
Standard_E64is_v3
Standard_E64i_v3
Standard_M128ms
Standard_GS5
Standard_G5
Standard_F72s_v2
NOTE
Isolated VM sizes have a hardware-limited lifespan. See the FAQ below for details.
1 For details on the Standard_DS15_v2 and Standard_D15_v2 isolation retirement program, see the FAQs.
FAQ
Q: Is the size being retired, or only the "isolation" feature?
A : If the virtual machine size does not have the "i" subscript, only the "isolation" feature will be retired. If isolation
is not needed, there is no action to be taken and the VM will continue to work as expected. Examples include
Standard_DS15_v2, Standard_D15_v2, Standard_M128ms, and so on. If the virtual machine size includes the "i"
subscript, the size itself will be retired.
Q: Is there downtime when my VM lands on non-isolated hardware?
A : If there is no need for isolation, no action is needed and there will be no downtime.
Q: Is there any cost delta for moving to a non-isolated virtual machine?
A : No
Q: When are the other isolated sizes going to retire?
A : We will provide reminders 12 months in advance of the official deprecation of the isolated size.
Q: I'm an Azure Service Fabric Customer relying on the Silver or Gold Durability Tiers. Does this change impact
me?
A : No. The guarantees provided by Service Fabric's Durability Tiers will continue to function even after this change.
If you require physical hardware isolation for other reasons, you may still need to take one of the actions described
above.
Q: What are the milestones for D15_v2 or DS15_v2 isolation retirement?
A:
DATE                ACTION
May 15, 2021        Retire D/DS15i_v2 (all customers except those who bought a 3-year RI of D/DS15_v2 before November 18, 2019)
November 17, 2022   Retire D/DS15i_v2 when 3-year RIs end (for customers who bought a 3-year RI of D/DS15_v2 before November 18, 2019)
Next steps
Customers can also choose to further subdivide the resources of these Isolated virtual machines by using Azure
support for nested virtual machines.
Azure compute unit (ACU)
11/2/2020 • 2 minutes to read • Edit Online
The concept of the Azure Compute Unit (ACU) provides a way of comparing compute (CPU)
performance across Azure SKUs. It helps you easily identify which SKU is most likely to
satisfy your performance needs. ACU is currently standardized on a Small (Standard_A1) VM
being 100, and all other SKUs then represent approximately how much faster that SKU can run a
standard benchmark.
*ACUs use Intel® Turbo technology to increase CPU frequency and provide a performance
increase. The amount of the performance increase can vary based on the VM size, workload, and
other workloads running on the same host.
**ACUs use AMD® Boost technology to increase CPU frequency and provide a performance
increase. The amount of the performance increase can vary based on the VM size, workload, and
other workloads running on the same host.
***Hyper-threaded and capable of running nested virtualization
IMPORTANT
The ACU is only a guideline. The results for your workload may vary.
SKU FAMILY    ACU/vCPU    vCPU:CORE
A0            50          1:1
A1 - A4       100         1:1
A5 - A7       100         1:1
B             Varies      Varies
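Because ACU is normalized to Standard_A1 = 100, relative per-vCPU performance between families is just a ratio of ACU values. The following sketch uses only the values from the table above; the function name is ours, and, as noted below, ACU is only a guideline:

```python
# ACU-per-vCPU values taken from the table above (Standard_A1 = 100 baseline).
ACU_PER_VCPU = {"A0": 50, "A1-A4": 100, "A5-A7": 100}

def relative_speedup(target_sku, baseline_sku):
    """Approximate per-vCPU speedup of target over baseline.

    ACU is a ratio, so a family with twice the ACU runs the
    standard benchmark roughly twice as fast per vCPU.
    """
    return ACU_PER_VCPU[target_sku] / ACU_PER_VCPU[baseline_sku]
```

For example, `relative_speedup("A1-A4", "A0")` yields 2.0: an A1 vCPU runs the standard benchmark roughly twice as fast as an A0 vCPU.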
The following SPECInt benchmark scores show compute performance for select Azure VMs running Windows
Server. Compute benchmark scores are also available for Linux VMs.
NOTE
Av2-series VMs can be deployed on a variety of hardware types and processors (as seen above). Av2-series VMs have CPU
performance and memory configurations best suited for entry level workloads like development and test. The size is throttled
to offer relatively consistent processor performance for the running instance, regardless of the hardware it is deployed on;
however, software that takes advantage of specific newer processor optimizations may see more significant variation across
processor types.
B - Burstable
(Updated 2019-10-23 to 2019-11-03 PBI:5604451)
SIZE   VCPUS   NUMA NODES   CPU   RUNS   AVG BASE RATE   STDDEV
NOTE
B-Series VMs are for workloads with burstable performance requirements. VM instances accumulate credits when using less
than their baseline. When a VM has accumulated credit, it can burst above the baseline, using up to 100% of the vCPU, to meet
short CPU burst requirements. Burst time depends on available credits, which is a function of VM size and time.
SPECint is a fairly long-running test that typically exhausts available burst credits. Therefore the numbers above are closer to
the baseline performance of the VM (although they may reflect some burst time accumulated between runs). For short,
bursty workloads (typical on B-Series), performance will typically be closer to that of the Dsv3-series.
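As a rough illustration of the credit mechanism described in the note above, here is a toy model; the per-minute accounting, baseline fraction, and function name are our assumptions, not Azure's exact algorithm:

```python
def simulate_credits(baseline, usage_by_minute, start_credits=0.0):
    """Toy model of B-series CPU credits.

    Each minute, a VM banks (baseline - usage) credit-minutes when it
    runs below its baseline, and spends (usage - baseline) when it
    bursts above it. Credits never go below zero in this sketch.
    usage values are a fraction of full vCPU utilization, 0..1.
    """
    credits = start_credits
    for usage in usage_by_minute:
        credits += baseline - usage
        credits = max(credits, 0.0)
    return credits
```

For example, an idle VM with a 20% baseline banks about 2 credit-minutes over 10 idle minutes, which it can later spend on a burst; a long benchmark like SPECint drains the bank and settles near baseline performance.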
F - Compute Optimized
SIZE   VCPUS   NUMA NODES   CPU   RUNS   AVG BASE RATE   STDDEV
GS - Storage Optimized
SIZE   VCPUS   NUMA NODES   CPU   RUNS   AVG BASE RATE   STDDEV
G - Compute Optimized
SIZE   VCPUS   NUMA NODES   CPU   RUNS   AVG BASE RATE   STDDEV
Ls - Storage Optimized
SIZE   VCPUS   NUMA NODES   CPU   RUNS   AVG BASE RATE   STDDEV
M - Memory Optimized
SIZE   VCPUS   NUMA NODES   CPU   RUNS   AVG BASE RATE   STDDEV
NC - GPU Enabled
SIZE   VCPUS   NUMA NODES   CPU   RUNS   AVG BASE RATE   STDDEV
NV - GPU Enabled
SIZE   VCPUS   NUMA NODES   CPU   RUNS   AVG BASE RATE   STDDEV
About SPECint
Windows numbers were computed by running SPECint 2006 on Windows Server. SPECint was run using the base
rate option (SPECint_rate2006), with one copy per vCPU. SPECint consists of 12 separate tests, each run three
times, taking the median value from each test and weighting them to form a composite score. Those tests were
then run across multiple VMs to provide the average scores shown.
Next steps
For storage capacities, disk details, and additional considerations for choosing among VM sizes, see Sizes for
virtual machines.
Check vCPU quotas using Azure PowerShell
11/2/2020 • 3 minutes to read • Edit Online
The vCPU quotas for virtual machines and virtual machine scale sets are arranged in two tiers for each
subscription, in each region. The first tier is the Total Regional vCPUs, and the second tier is the various VM size
family cores, such as the D-series vCPUs. Any time a new VM is deployed, the vCPUs for the VM must not exceed the
vCPU quota for the VM size family or the total regional vCPU quota. If either of those quotas is exceeded, the VM
deployment will not be allowed. There is also a quota for the overall number of virtual machines in the region. The
details on each of these quotas can be seen in the Usage + quotas section of the Subscription page in the Azure
portal, or you can query for the values using PowerShell.
NOTE
Quota is calculated based on the total number of cores in use both allocated and deallocated. If you need additional cores,
request a quota increase or delete VMs that are no longer needed.
Check usage
You can use the Get-AzVMUsage cmdlet to check on your quota usage.
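The two-tier deployment rule described above (a new VM's vCPUs must fit within both the size-family quota and the regional quota) can be sketched as follows; the function name, data shape, and numbers are illustrative, not the Get-AzVMUsage output format:

```python
def can_deploy(vm_vcpus, family_usage, family_limit,
               regional_usage, regional_limit):
    """A deployment is allowed only if the new VM's vCPUs fit within
    BOTH the VM size family quota and the total regional vCPU quota."""
    fits_family = family_usage + vm_vcpus <= family_limit
    fits_region = regional_usage + vm_vcpus <= regional_limit
    return fits_family and fits_region
```

For example, with 8 of 10 family vCPUs and 90 of 100 regional vCPUs in use, a 2-vCPU VM deploys, but a 4-vCPU VM is rejected because it would exceed the family quota.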
Reserved VM Instances
Reserved VM Instances that are scoped to a single subscription without VM size flexibility add a new aspect
to the vCPU quotas. These values describe the number of instances of the stated size that must be deployable in the
subscription. They work as a placeholder in the quota system, reserving quota so that reserved VM instances are
always deployable in the subscription. For example, if a specific subscription has 10 Standard_D1 reserved VM
instances, the usage limit for Standard_D1 reserved VM instances will be 10. Azure then ensures that there are
always at least 10 vCPUs available in the Total Regional vCPUs quota for Standard_D1 instances, and at least 10
vCPUs available in the Standard D Family vCPU quota for Standard_D1 instances.
If a quota increase is required to purchase a Single Subscription RI, you can request a quota increase on your
subscription.
Next steps
For more information about billing and quotas, see Azure subscription and service limits, quotas, and constraints.
What are Azure Reservations?
11/2/2020 • 6 minutes to read • Edit Online
Azure Reservations help you save money by committing to one-year or three-year plans for multiple products.
Committing allows you to get a discount on the resources you use. Reservations can significantly reduce your
resource costs by up to 72% from pay-as-you-go prices. Reservations provide a billing discount and don't affect
the runtime state of your resources. After you purchase a reservation, the discount automatically applies to
matching resources.
You can pay for a reservation up front or monthly. The total cost of up-front and monthly reservations is the same
and you don't pay any extra fees when you choose to pay monthly. Monthly payment is available for Azure
reservations, not third-party products.
You can buy a reservation in the Azure portal.
Buying a reservation
You can purchase reservations from the Azure portal, APIs, PowerShell, and CLI.
Go to the Azure portal to make a purchase.
For more information, see Buy a reservation.
How is a reservation billed?
The reservation is charged to the payment method tied to the subscription. The reservation cost is deducted from
your monetary commitment balance, if available. When your monetary commitment balance doesn't cover the
cost of the reservation, you're billed the overage. If you have a subscription from an individual plan with
pay-as-you-go rates, the credit card you have on your account is billed immediately for up-front purchases. Monthly
payments appear on your invoice and your credit card is charged monthly. When you're billed by invoice, you see
the charges on your next invoice.
Next steps
Learn more about Azure Reservations with the following articles:
Manage Azure Reservations
Understand reservation usage for your subscription with pay-as-you-go rates
Understand reservation usage for your Enterprise enrollment
Windows software costs not included with reservations
Azure Reservations in Partner Center Cloud Solution Provider (CSP) program
Learn more about reservations for service plans:
Virtual Machines with Azure Reserved VM Instances
Azure Cosmos DB resources with Azure Cosmos DB reserved capacity
SQL Database compute resources with Azure SQL Database reserved capacity
Azure Cache for Redis resources with Azure Cache for Redis reserved capacity
Learn more about reservations for software plans:
Red Hat software plans from Azure Reservations
SUSE software plans from Azure Reservations
Virtual machine size flexibility with Reserved VM
Instances
11/2/2020 • 2 minutes to read • Edit Online
When you buy a Reserved VM Instance, you can choose to optimize for instance size flexibility or capacity priority.
For more information about setting or changing the optimize setting for reserved VM instances, see Change the
optimize setting for reserved VM instances.
With a reserved virtual machine instance that's optimized for instance size flexibility, the reservation you buy can
apply to the virtual machine (VM) sizes in the same instance size flexibility group. For example, if you buy a
reservation for a VM size that's listed in the DSv2 Series, like Standard_DS5_v2, the reservation discount can apply
to the other four sizes that are listed in that same instance size flexibility group:
Standard_DS1_v2
Standard_DS2_v2
Standard_DS3_v2
Standard_DS4_v2
But that reservation discount doesn't apply to VM sizes that are listed in different instance size flexibility groups,
like SKUs in DSv2 Series High Memory: Standard_DS11_v2, Standard_DS12_v2, and so on.
Within the instance size flexibility group, the number of VMs the reservation discount applies to depends on the
VM size you pick when you buy a reservation. It also depends on the sizes of the VMs that you have running. The
ratio column compares the relative footprint for each VM size in that instance size flexibility group. Use the ratio
value to calculate how the reservation discount applies to the VMs you have running.
Examples
The following examples use the sizes and ratios in the DSv2-series table.
You buy a reserved VM instance with the size Standard_DS4_v2 where the ratio or relative footprint compared to
the other sizes in that series is 8.
Scenario 1: Run eight Standard_DS1_v2 sized VMs with a ratio of 1. Your reservation discount applies to all
eight of those VMs.
Scenario 2: Run two Standard_DS2_v2 sized VMs with a ratio of 2 each. Also run a Standard_DS3_v2 sized VM
with a ratio of 4. The total footprint is 2+2+4=8. So your reservation discount applies to all three of those VMs.
Scenario 3: Run one Standard_DS5_v2 with a ratio of 16. Your reservation discount applies to half that VM's
compute cost.
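The ratio arithmetic behind these scenarios can be sketched as follows; the function name is ours, and the ratios come from the scenarios above:

```python
def reservation_coverage(reserved_footprint, running_ratios):
    """Fraction of the running VMs' total footprint that the
    reservation discount covers. Ratios are the relative footprints
    of each running VM within its instance size flexibility group."""
    total_footprint = sum(running_ratios)
    return min(reserved_footprint / total_footprint, 1.0)
```

With a Standard_DS4_v2 reservation (footprint 8): eight DS1_v2 VMs (ratio 1 each) are fully covered; two DS2_v2 (ratio 2) plus one DS3_v2 (ratio 4) are fully covered (2+2+4=8); and a single DS5_v2 (ratio 16) is covered for half its compute cost.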
The following sections show what sizes are in the same size series group when you buy a reserved VM instance
optimized for instance size flexibility.
Azure Dedicated Host is a service that provides physical servers - able to host one or more virtual machines -
dedicated to one Azure subscription. Dedicated hosts are the same physical servers used in our data centers,
provided as a resource. You can provision dedicated hosts within a region, availability zone, and fault domain.
Then, you can place VMs directly into your provisioned hosts, in whatever configuration best meets your needs.
Benefits
Reserving the entire host provides the following benefits:
Hardware isolation at the physical server level. No other VMs will be placed on your hosts. Dedicated hosts are
deployed in the same data centers and share the same network and underlying storage infrastructure as other,
non-isolated hosts.
Control over maintenance events initiated by the Azure platform. While the majority of maintenance events
have little to no impact on your virtual machines, there are some sensitive workloads where each second of
pause can have an impact. With dedicated hosts, you can opt-in to a maintenance window to reduce the impact
to your service.
With the Azure Hybrid Benefit, you can bring your own licenses for Windows and SQL to Azure for
additional savings. For more information, see Azure Hybrid Benefit.
A host group is a resource that represents a collection of dedicated hosts. You create a host group in a region and
an availability zone, and add hosts to it.
A host is a resource, mapped to a physical server in an Azure data center. The physical server is allocated when
the host is created. A host is created within a host group. A host has a SKU describing which VM sizes can be
created. Each host can host multiple VMs, of different sizes, as long as they are from the same size series.
When creating a VM in Azure, you can select which dedicated host to use. You can also use the option to
automatically place your VMs on existing hosts, within a host group.
When creating a new host group, make sure the setting for automatic VM placement is selected. When creating
your VM, select the host group and let Azure pick the best host for your VM.
Host groups that are enabled for automatic placement do not require all the VMs to be automatically placed. You
will still be able to explicitly pick a host, even when automatic placement is selected for the host group.
Limitations
Known issues and limitations when using automatic VM placement:
You will not be able to apply Azure Hybrid Benefits on your dedicated hosts.
You will not be able to redeploy your VM.
You will not be able to control maintenance for your dedicated hosts.
You will not be able to use Lsv2, NVasv4, NVsv3, Msv2, or M-series VMs with dedicated hosts.
IMPORTANT
Virtual Machine Scale Sets on Dedicated Hosts is currently in public preview. To participate in the preview, complete the
preview onboarding survey at https://aka.ms/vmss-adh-preview. This preview version is provided without a service level
agreement, and it's not recommended for production workloads. Certain features might not be supported or might have
constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
When creating a virtual machine scale set you can specify an existing host group to have all of the VM instances
created on dedicated hosts.
The following requirements apply when creating a virtual machine scale set in a dedicated host group:
Automatic VM placement needs to be enabled.
The availability setting of your host group should match your scale set.
A regional host group (created without specifying an availability zone) should be used for regional scale
sets.
The host group and the scale set must be using the same availability zone.
The fault domain count for the host group level should match the fault domain count for your scale set.
The Azure portal lets you specify max spreading for your scale set, which sets the fault domain count
to 1.
Dedicated hosts should be created first, with sufficient capacity, and the same settings for scale set zones and
fault domains.
The supported VM sizes for your dedicated hosts should match the one used for your scale set.
Not all scale-set orchestration and optimization settings are supported by dedicated hosts. Apply the following
settings to your scale set:
Disable overprovisioning.
Use the ScaleSetVM orchestration mode
Do not use proximity placement groups for co-location
Maintenance control
The infrastructure supporting your virtual machines may occasionally be updated to improve reliability,
performance, security, and to launch new features. The Azure platform tries to minimize the impact of platform
maintenance whenever possible, but customers with maintenance-sensitive workloads can't tolerate even a few
seconds of the VM being frozen or disconnected for maintenance.
Maintenance Control gives customers the option to skip regular platform updates scheduled for their
dedicated hosts and apply them at a time of their choice within a 35-day rolling window.
For more information, see Managing platform updates with Maintenance Control.
Capacity considerations
Once a dedicated host is provisioned, Azure assigns it to a physical server. This guarantees the availability of the
capacity when you need to provision your VM. Azure uses the entire capacity in the region (or zone) to pick a
physical server for your host. It also means that customers can expect to be able to grow their dedicated host
footprint without the concern of running out of space in the cluster.
Quotas
There are two types of quota that are consumed when you deploy a dedicated host.
1. Dedicated host vCPU quota. The default quota limit is 3000 vCPUs, per region.
2. VM size family quota. For example, a Pay-as-you-go subscription may only have a quota of 10 vCPUs
available for the Dsv3 size series, in the East US region. To deploy a Dsv3 dedicated host, you would need to
request a quota increase to at least 64 vCPUs before you can deploy the dedicated host.
To request a quota increase, create a support request in the Azure portal.
Provisioning a dedicated host will consume both dedicated host vCPU and the VM family vCPU quota, but it will
not consume the regional vCPU.
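Continuing the Dsv3 example above, a trivial helper shows the size-family quota shortfall you would need to request; the function name and the second example value are ours:

```python
def quota_increase_needed(family_quota_vcpus, host_vcpus):
    """vCPUs to request on top of the current VM size family quota
    before a dedicated host of the given size can be deployed
    (0 if the existing quota already covers it)."""
    return max(host_vcpus - family_quota_vcpus, 0)
```

With only 10 Dsv3 vCPUs of quota, deploying a 64-vCPU Dsv3 dedicated host requires requesting 54 more; a subscription that already has 100 Dsv3 vCPUs needs no increase.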
Pricing
Users are charged per dedicated host, regardless of how many VMs are deployed. In your monthly statement, you
will see a new billable resource type of hosts. The VMs on a dedicated host will still be shown in your statement,
but will carry a price of 0.
The host price is set based on VM family, type (hardware size), and region. A host price is relative to the largest VM
size supported on the host.
Software licensing, storage and network usage are billed separately from the host and VMs. There is no change to
those billable items.
For more information, see Azure Dedicated Host pricing.
You can also save on costs with a Reserved Instance of Azure Dedicated Hosts.
Host Under Investigation: We're having some issues with the host, which we're looking into. This is a
transitional state required for Azure to try and identify the scope and root cause of the issue.
Virtual machines running on the host may be impacted.
Host Pending Deallocate: Azure can't restore the host to a healthy state and asks you to redeploy your
virtual machines off this host. If autoReplaceOnFailure is enabled, your virtual machines are service
healed to healthy hardware. Otherwise, your virtual machine may be running on a host that is about to fail.
Host Deallocated: All virtual machines have been removed from the host. You are no longer being charged
for this host since the hardware was taken out of rotation.
Next steps
You can deploy a dedicated host using Azure PowerShell, the portal, and Azure CLI.
There is a sample template, found here, that uses both zones and fault domains for maximum resiliency in a
region.
You can also save on costs with a Reserved Instance of Azure Dedicated Hosts.
Maintenance for virtual machines in
Azure
11/2/2020 • 6 minutes to read • Edit Online
Azure periodically updates its platform to improve the reliability, performance, and security
of the host infrastructure for virtual machines. The purpose of these updates ranges from
patching software components in the hosting environment to upgrading networking
components or decommissioning hardware.
Updates rarely affect the hosted VMs. When updates do have an effect, Azure chooses the
least impactful method for updates:
If the update doesn't require a reboot, the VM is paused while the host is updated, or the
VM is live-migrated to an already updated host.
If maintenance requires a reboot, you're notified of the planned maintenance. Azure also
provides a time window in which you can start the maintenance yourself, at a time that
works for you. The self-maintenance window is typically 35 days unless the
maintenance is urgent. Azure is investing in technologies to reduce the number of cases
in which planned platform maintenance requires the VMs to be rebooted. For
instructions on managing planned maintenance, see Handling planned maintenance
notifications using the Azure CLI, PowerShell or portal.
This page describes how Azure performs both types of maintenance. For more information
about unplanned events (outages), see Manage the availability of VMs for Windows or the
corresponding article for Linux.
Within a VM, you can get notifications about upcoming maintenance by using Scheduled
Events for Windows or for Linux.
Next steps
You can use the Azure CLI, Azure PowerShell or the portal to manage planned maintenance.
Introduction to Azure managed disks
11/6/2020 • 10 minutes to read • Edit Online
Azure managed disks are block-level storage volumes that are managed by Azure and used with Azure Virtual
Machines. Managed disks are like a physical disk in an on-premises server, but virtualized. With managed disks,
all you have to do is specify the disk size, the disk type, and provision the disk. Once you provision the disk,
Azure handles the rest.
The available types of disks are ultra disks, premium solid-state drives (SSD), standard SSDs, and standard hard
disk drives (HDD). For information about each individual disk type, see Select a disk type for IaaS VMs.
Security
Private Links
Private Link support for managed disks can be used to import or export a managed disk internal to your
network. Private Links allow you to generate a time bound Shared Access Signature (SAS) URI for unattached
managed disks and snapshots that you can use to export the data to other regions for regional expansion,
disaster recovery, and forensic analysis. You can also use the SAS URI to directly upload a VHD to an empty disk
from on-premises. Now you can leverage Private Links to restrict the export and import of managed disks so
that they can occur only within your Azure virtual network. Private Links allow you to ensure your data travels
only within the secure Microsoft backbone network.
To learn how to enable Private Links for importing or exporting a managed disk, see the CLI or Portal articles.
Encryption
Managed disks offer two different kinds of encryption. The first is Server Side Encryption (SSE), which is
performed by the storage service. The second one is Azure Disk Encryption (ADE), which you can enable on the
OS and data disks for your VMs.
Server-side encryption
Server-side encryption provides encryption-at-rest and safeguards your data to meet your organizational
security and compliance commitments. Server-side encryption is enabled by default for all managed disks,
snapshots, and images, in all the regions where managed disks are available. (Temporary disks, on the other
hand, are not encrypted by server-side encryption unless you enable encryption at host; see Disk Roles:
temporary disks).
You can either allow Azure to manage your keys for you (platform-managed keys), or you can manage the keys
yourself (customer-managed keys). Visit the Server-side encryption of Azure Disk Storage article for details.
Azure Disk Encryption
Azure Disk Encryption allows you to encrypt the OS and Data disks used by an IaaS Virtual Machine. This
encryption includes managed disks. For Windows, the drives are encrypted using industry-standard BitLocker
encryption technology. For Linux, the disks are encrypted using the DM-Crypt technology. The encryption
process is integrated with Azure Key Vault to allow you to control and manage the disk encryption keys. For
more information, see Azure Disk Encryption for Linux VMs or Azure Disk Encryption for Windows VMs.
Disk roles
There are three main disk roles in Azure: the data disk, the OS disk, and the temporary disk. These roles map to
disks that are attached to your virtual machine.
Data disk
A data disk is a managed disk that's attached to a virtual machine to store application data, or other data you
need to keep. Data disks are registered as SCSI drives and are labeled with a letter that you choose. Each data
disk has a maximum capacity of 32,767 gibibytes (GiB). The size of the virtual machine determines how many
data disks you can attach to it and the type of storage you can use to host the disks.
OS disk
Every virtual machine has one attached operating system disk. That OS disk has a pre-installed OS, which was
selected when the VM was created. This disk contains the boot volume.
This disk has a maximum capacity of 4,095 GiB.
Temporary disk
Most VMs contain a temporary disk, which is not a managed disk. The temporary disk provides short-term
storage for applications and processes, and is intended to only store data such as page or swap files. Data on the
temporary disk may be lost during a maintenance event or when you redeploy a VM. During a successful
standard reboot of the VM, data on the temporary disk will persist. For more information about VMs without
temporary disks, see Azure VM sizes with no local temporary disk.
On Azure Linux VMs, the temporary disk is typically /dev/sdb and on Windows VMs the temporary disk is D: by
default. The temporary disk is not encrypted by server side encryption unless you enable encryption at host.
The first provisioning level sets the per-disk IOPS and bandwidth assignment. At the second level, the compute
server host implements SSD provisioning, applying it only to data that is stored on the server's SSD, which
includes disks with caching (ReadWrite and ReadOnly) as well as local and temp disks. Finally, VM network
provisioning takes place at the third level for any I/O that the compute host sends to Azure Storage's backend.
With this scheme, the performance of a VM depends on a variety of factors, from how the VM uses the local SSD,
to the number of disks attached, as well as the performance and caching type of the disks it has attached.
As an example of these limitations, a Standard_DS1v1 VM is prevented from achieving the 5,000 IOPS potential
of a P30 disk, whether it is cached or not, because of limits at the SSD and network levels:
Azure uses a prioritized network channel for disk traffic, which takes precedence over other, lower-priority
network traffic. This helps disks maintain their expected performance during network contention. Similarly,
Azure Storage handles resource contention and other issues in the background with automatic load balancing.
Azure Storage allocates required resources when you create a disk, and applies proactive and reactive balancing
of resources to handle the traffic level. This further ensures disks can sustain their expected IOPS and throughput
targets. You can use the VM-level and disk-level metrics to track performance and set up alerts as needed.
Refer to our design for high performance article to learn best practices for optimizing VM and disk
configurations so that you can achieve your desired performance.
Next steps
If you'd like a video going into more detail on managed disks, check out: Better Azure VM Resiliency with
Managed Disks.
Learn more about the individual disk types Azure offers, which type is a good fit for your needs, and learn about
their performance targets in our article on disk types.
Select a disk type for IaaS VMs
What disk types are available in Azure?
11/2/2020 • 14 minutes to read • Edit Online
Azure managed disks currently offer four disk types, each aimed at specific customer scenarios.
Disk comparison
The following table provides a comparison of ultra disks, premium solid-state drives (SSD), standard SSD, and
standard hard disk drives (HDD) for managed disks to help you decide what to use.
                 ULTRA DISK            PREMIUM SSD   STANDARD SSD   STANDARD HDD
Max disk size    65,536 gibibyte (GiB) 32,767 GiB    32,767 GiB     32,767 GiB
Max throughput   2,000 MB/s            900 MB/s      750 MB/s       500 MB/s
Ultra disk
Azure ultra disks deliver high throughput, high IOPS, and consistent low latency disk storage for Azure IaaS VMs.
Some additional benefits of ultra disks include the ability to dynamically change the performance of the disk,
along with your workloads, without the need to restart your virtual machines (VM). Ultra disks are suited for
data-intensive workloads such as SAP HANA, top tier databases, and transaction-heavy workloads. Ultra disks
can only be used as data disks. We recommend using premium SSDs as OS disks.
Performance
When you provision an ultra disk, you can independently configure the capacity and the performance of the disk.
Ultra disks come in several fixed sizes, ranging from 4 GiB up to 64 TiB, and feature a flexible performance
configuration model that allows you to independently configure IOPS and throughput.
Some key capabilities of ultra disks are:
Disk capacity: Ultra disks capacity ranges from 4 GiB up to 64 TiB.
Disk IOPS: Ultra disks support IOPS limits of 300 IOPS/GiB, up to a maximum of 160 K IOPS per disk. To
achieve the IOPS that you provisioned, ensure that the selected Disk IOPS are less than the VM IOPS limit. The
minimum guaranteed IOPS per disk is 2 IOPS/GiB, with an overall baseline minimum of 100 IOPS. For
example, if you had a 4 GiB ultra disk, you will have a minimum of 100 IOPS, instead of eight IOPS.
Disk throughput: With ultra disks, the throughput limit of a single disk is 256 KiB/s for each provisioned IOPS,
up to a maximum of 2,000 MBps per disk (where MBps = 10^6 bytes per second). The minimum guaranteed
throughput per disk is 4 KiB/s for each provisioned IOPS, with an overall baseline minimum of 1 MBps.
Ultra disks support adjusting the disk performance attributes (IOPS and throughput) at runtime without
detaching the disk from the virtual machine. Once a disk performance resize operation has been issued on a
disk, it can take up to an hour for the change to take effect. There is a limit of four performance resize
operations during a 24-hour window. It is possible for a performance resize operation to fail due to a lack of
performance bandwidth capacity.
Disk size
DISK SIZE (GiB)    IOPS CAP    THROUGHPUT CAP (MBps)
4                  1,200       300
8                  2,400       600
16                 4,800       1,200
32                 9,600       2,000
64                 19,200      2,000
NOTE
If a region in the following list has no ultra disk capable availability zones, then VMs in that region must be deployed
without any infrastructure redundancy options in order to attach an ultra disk.
The following regions currently support ultra disks for single VMs only (availability sets and virtual machine
scale sets are not supported):
Brazil South
Central India
East Asia
Germany West Central
Korea Central
South Central US
US Gov Arizona
US Gov Virginia
West US
Australia Central
* Contact Azure Support to get access to Availability Zones for this region.
Ultra disks:
Are only supported on the following VM series:
ESv3
Easv4
Edsv4
Esv4
DSv3
Dasv4
Ddsv4
Dsv4
FSv2
LSv2
M
Mv2
Not every VM size is available in every supported region with ultra disks.
Are only available as data disks.
Support a 4k physical sector size by default. A 512E sector size is available as a generally available offering,
but you must sign up for it. Most applications are compatible with 4k sector sizes, but some require 512-byte
sector sizes. One example is Oracle Database, which requires release 12.2 or later to support 4k native disks.
Older versions of Oracle DB require a 512-byte sector size.
Can only be created as empty disks.
Don't currently support disk snapshots, VM images, availability sets, Azure Dedicated Hosts, or Azure disk
encryption.
Don't currently support integration with Azure Backup or Azure Site Recovery.
Only support uncached reads and uncached writes.
The current maximum limit for IOPS on GA VMs is 80,000.
Azure ultra disks offer up to 16 TiB per region per subscription by default, but ultra disks support higher capacity
by request. To request an increase in capacity, contact Azure Support.
If you would like to start using ultra disks, see our article on the subject: Using Azure ultra disks.
Premium SSD
Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with
input/output (IO)-intensive workloads. To take advantage of the speed and performance of premium storage
disks, you can migrate existing VM disks to Premium SSDs. Premium SSDs are suitable for mission-critical
production applications. Premium SSDs can only be used with VM series that are premium storage-compatible.
To learn more about individual VM types and sizes in Azure for Windows or Linux, including which sizes are
premium storage-compatible, see Sizes for virtual machines in Azure. From this article, you need to check each
individual VM size article to determine if it is premium storage-compatible.
Disk size
PREMIUM SSD SIZES    DISK SIZE (GiB)    PROVISIONED IOPS PER DISK    PROVISIONED THROUGHPUT PER DISK    MAX BURST DURATION
P1                   4                  120                          25 MB/sec                          30 min
P2                   8                  120                          25 MB/sec                          30 min
P3                   16                 120                          25 MB/sec                          30 min
P4                   32                 120                          25 MB/sec                          30 min
P6                   64                 240                          50 MB/sec                          30 min
P10                  128                500                          100 MB/sec                         30 min
P15                  256                1,100                        125 MB/sec                         30 min
P20                  512                2,300                        150 MB/sec                         30 min
P30                  1,024              5,000                        200 MB/sec                         n/a
P40                  2,048              7,500                        250 MB/sec                         n/a
P50                  4,096              7,500                        250 MB/sec                         n/a
P60                  8,192              16,000                       500 MB/sec                         n/a
P70                  16,384             18,000                       750 MB/sec                         n/a
P80                  32,767             20,000                       900 MB/sec                         n/a
Bursting
Premium SSD sizes smaller than P30 now offer disk bursting and can burst their IOPS per disk up to 3,500 and
their bandwidth up to 170 MB/sec. Bursting is automated and operates based on a credit system. Credits
accumulate automatically in a burst bucket when disk traffic is below the provisioned performance target, and
they are consumed automatically when traffic bursts beyond the target, up to the max burst limit. The max
burst limit defines the ceiling of disk IOPS and bandwidth even if you have burst credits left to consume. Disk
bursting provides better tolerance of unpredictable changes in IO patterns. You can best leverage it for OS disk
boot and applications with spiky traffic.
Disk bursting support is enabled on new deployments of applicable disk sizes by default, with no user action
required. For existing disks of the applicable sizes, you can enable bursting with one of two options: detach and
reattach the disk, or stop and restart the attached VM. All burst-applicable disk sizes start with a full burst
credit bucket when the disk is attached to a virtual machine, and support a maximum duration at the peak burst
limit of 30 minutes. To learn more about how bursting works on Azure disks, see Premium SSD bursting.
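The credit-bucket behavior described above can be sketched as a simplified per-second simulation. This is an illustrative model only; the service's actual accounting granularity is internal to Azure.

```python
def simulate_bursting(io_demand, provisioned_iops, max_burst_iops,
                      burst_seconds=30 * 60):
    """Credit-bucket sketch of disk bursting. io_demand holds one
    requested-IOPS value per second. Credits accrue when demand is
    below the provisioned target and are spent to serve demand above
    it, up to the max burst limit. The bucket starts full (the max
    burst rate sustained for 30 minutes)."""
    bucket = max_burst_iops * burst_seconds  # full bucket on attach
    served = []
    for demand in io_demand:
        if demand <= provisioned_iops:
            # Under the target: serve everything, bank unused credits.
            bucket = min(bucket + (provisioned_iops - demand),
                         max_burst_iops * burst_seconds)
            served.append(demand)
        else:
            # Over the target: spend credits, never exceeding the
            # max burst ceiling or the remaining bucket.
            burst = min(demand, max_burst_iops) - provisioned_iops
            burst = min(burst, bucket)
            bucket -= burst
            served.append(provisioned_iops + burst)
    return served
```

For a P4 disk (120 provisioned IOPS, 3,500 max burst IOPS), a demand spike of 4,000 IOPS is served at 3,500 IOPS while credits last, then falls back to the provisioned 120.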
Transactions
For premium SSDs, each I/O operation less than or equal to 256 KiB of throughput is considered a single I/O
operation. I/O operations larger than 256 KiB of throughput are considered multiple I/Os of size 256 KiB.
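The 256 KiB accounting rule can be sketched as:

```python
import math

KIB = 1024

def billed_io_count(io_size_bytes, unit_kib=256):
    """I/Os of up to 256 KiB count as one operation; larger I/Os
    count as multiple 256 KiB operations (rounded up)."""
    return max(1, math.ceil(io_size_bytes / (unit_kib * KIB)))
```

For example, a 1 MiB write counts as four I/O operations, while a 4 KiB write counts as one.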
Standard SSD
Azure standard SSDs are a cost-effective storage option optimized for workloads that need consistent
performance at lower IOPS levels. Standard SSD offers a good entry level experience for those who wish to move
to the cloud, especially if you experience issues with the variance of workloads running on your HDD solutions on
premises. Compared to standard HDDs, standard SSDs deliver better availability, consistency, reliability, and
latency. Standard SSDs are suitable for Web servers, low IOPS application servers, lightly used enterprise
applications, and Dev/Test workloads. Like standard HDDs, standard SSDs are available on all Azure VMs.
Disk size
STANDARD SSD SIZES    DISK SIZE (GiB)    IOPS PER DISK    THROUGHPUT PER DISK
E1                    4                  Up to 500        Up to 60 MB/sec
E2                    8                  Up to 500        Up to 60 MB/sec
E3                    16                 Up to 500        Up to 60 MB/sec
E4                    32                 Up to 500        Up to 60 MB/sec
E6                    64                 Up to 500        Up to 60 MB/sec
E10                   128                Up to 500        Up to 60 MB/sec
E15                   256                Up to 500        Up to 60 MB/sec
E20                   512                Up to 500        Up to 60 MB/sec
E30                   1,024              Up to 500        Up to 60 MB/sec
E40                   2,048              Up to 500        Up to 60 MB/sec
E50                   4,096              Up to 500        Up to 60 MB/sec
E60                   8,192              Up to 2,000      Up to 400 MB/sec
E70                   16,384             Up to 4,000      Up to 600 MB/sec
E80                   32,767             Up to 6,000      Up to 750 MB/sec
Standard SSDs are designed to provide single-digit millisecond latencies and IOPS and throughput up to the
limits described in the preceding table 99% of the time. Actual IOPS and throughput may sometimes vary
depending on traffic patterns. Standard SSDs provide more consistent performance than HDD disks, with
lower latency.
Transactions
For standard SSDs, each I/O operation less than or equal to 256 KiB of throughput is considered a single I/O
operation. I/O operations larger than 256 KiB of throughput are considered multiple I/Os of size 256 KiB. These
transactions have a billing impact.
Standard HDD
Azure standard HDDs deliver reliable, low-cost disk support for VMs running latency-insensitive workloads. With
standard storage, the data is stored on hard disk drives (HDDs). Latency, IOPS, and Throughput of Standard HDD
disks may vary more widely as compared to SSD-based disks. Standard HDD Disks are designed to deliver write
latencies under 10ms and read latencies under 20ms for most IO operations, however the actual performance
may vary depending on the IO size and workload pattern. When working with VMs, you can use standard HDD
disks for dev/test scenarios and less critical workloads. Standard HDDs are available in all Azure regions and can
be used with all Azure VMs.
Disk size
STANDARD DISK TYPE    DISK SIZE (GiB)    IOPS PER DISK    THROUGHPUT PER DISK
S4                    32                 Up to 500        Up to 60 MB/sec
S6                    64                 Up to 500        Up to 60 MB/sec
S10                   128                Up to 500        Up to 60 MB/sec
S15                   256                Up to 500        Up to 60 MB/sec
S20                   512                Up to 500        Up to 60 MB/sec
S30                   1,024              Up to 500        Up to 60 MB/sec
S40                   2,048              Up to 500        Up to 60 MB/sec
S50                   4,096              Up to 500        Up to 60 MB/sec
S60                   8,192              Up to 1,300      Up to 300 MB/sec
S70                   16,384             Up to 2,000      Up to 500 MB/sec
S80                   32,767             Up to 2,000      Up to 500 MB/sec
Transactions
For Standard HDDs, each IO operation is considered as a single transaction, regardless of the I/O size. These
transactions have a billing impact.
Billing
When using managed disks, the following billing considerations apply:
Disk type
Managed disk size
Snapshots
Outbound data transfers
Number of transactions
Managed disk size: Managed disks are billed on the provisioned size. Azure maps the provisioned size
(rounded up) to the nearest offered disk size. For details of the disk sizes offered, see the previous tables. Each
disk maps to a supported provisioned disk size offering and is billed accordingly. For example, if you provisioned
a 200 GiB Standard SSD, it maps to the disk size offer of E15 (256 GiB). Billing for any provisioned disk is prorated
hourly by using the monthly price for the storage offering. For example, if you provisioned an E10 disk and
deleted it after 20 hours, you're billed for the E10 offering prorated to 20 hours. This is regardless of the amount
of actual data written to the disk.
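The size-mapping and hourly proration described above can be sketched as follows. The monthly price passed in is a placeholder, not a real rate, and 730 hours per month is an assumed average; actual billing granularity may differ.

```python
# Standard SSD tiers from the table above: (tier name, size in GiB).
STANDARD_SSD_TIERS = [("E1", 4), ("E2", 8), ("E3", 16), ("E4", 32),
                      ("E6", 64), ("E10", 128), ("E15", 256), ("E20", 512),
                      ("E30", 1024), ("E40", 2048), ("E50", 4096),
                      ("E60", 8192), ("E70", 16384), ("E80", 32767)]

def billed_tier(provisioned_gib):
    """Round the provisioned size up to the nearest offered disk size."""
    for name, size in STANDARD_SSD_TIERS:
        if provisioned_gib <= size:
            return name
    raise ValueError("exceeds the largest offered disk size")

def prorated_cost(monthly_price, hours_used, hours_in_month=730):
    """Hourly proration of the mapped tier's monthly price."""
    return monthly_price * hours_used / hours_in_month
```

A 200 GiB provisioned disk maps to E15 (256 GiB), and a disk deleted after 20 hours is billed for 20 hours of its tier's monthly price.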
Snapshots: Snapshots are billed based on the size used. For example, if you create a snapshot of a managed disk
with provisioned capacity of 64 GiB and actual used data size of 10 GiB, the snapshot is billed only for the used
data size of 10 GiB.
For more information on snapshots, see the section on snapshots in the managed disk overview.
Outbound data transfers: Outbound data transfers (data going out of Azure data centers) incur billing for
bandwidth usage.
Transactions: You're billed for the number of transactions that you perform on a standard managed disk. For
standard SSDs, each I/O operation less than or equal to 256 KiB of throughput is considered a single I/O
operation. I/O operations larger than 256 KiB of throughput are considered multiple I/Os of size 256 KiB. For
Standard HDDs, each IO operation is considered as a single transaction, regardless of the I/O size.
For detailed information on pricing for Managed Disks, including transaction costs, see Managed Disks Pricing.
Ultra disk VM reservation fee
Azure VMs have the capability to indicate if they are compatible with ultra disks. An ultra disk compatible VM
allocates dedicated bandwidth capacity between the compute VM instance and the block storage scale unit to
optimize the performance and reduce latency. Adding this capability on the VM results in a reservation charge
that is only imposed if you enabled ultra disk capability on the VM without attaching an ultra disk to it. When an
ultra disk is attached to the ultra disk compatible VM, this charge would not be applied. This charge is per vCPU
provisioned on the VM.
NOTE
For constrained core VM sizes, the reservation fee is based on the actual number of vCPUs and not the constrained cores.
For Standard_E32-8s_v3, the reservation fee will be based on 32 cores.
Refer to the Azure Disks pricing page for ultra disk pricing details.
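The constrained-core rule in the note above can be sketched as follows. The name parsing is a simplification for illustration (the leading digits in the size name are taken as the parent's vCPU count, which holds for sizes like Standard_E32-8s_v3); real size names should be checked against the VM sizes documentation, and the per-vCPU fee is a placeholder.

```python
import re

def reservation_fee_vcpus(vm_size):
    """For constrained-core sizes such as Standard_E32-8s_v3, the ultra
    disk reservation fee is based on the parent size's vCPU count (32),
    not the constrained count (8)."""
    match = re.match(r"Standard_[A-Z]+(\d+)", vm_size)
    if not match:
        raise ValueError("unrecognized VM size name")
    return int(match.group(1))
```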
Azure disk reservation
Disk reservation is the option to purchase one year of disk storage in advance at a discount, reducing your total
cost. When purchasing a disk reservation, you select a specific disk SKU in a target region, for example, 10 P30
(1 TiB) premium SSDs in the East US 2 region for a one-year term. The reservation experience is similar to
reserved virtual machine (VM) instances. You can bundle VM and disk reservations to maximize your savings.
For now, Azure disk reservation offers a one-year commitment plan for premium SSD SKUs from P30 (1 TiB) to
P80 (32 TiB) in all production regions. For more details on reserved disk pricing, see the Azure Disks pricing page.
Next steps
See Managed Disks pricing to get started.
Server-side encryption of Azure Disk Storage
11/2/2020 • 9 minutes to read
Server-side encryption (SSE) protects your data and helps you meet your organizational security and compliance
commitments. SSE automatically encrypts your data stored on Azure managed disks (OS and data disks) at rest by
default when persisting it to the cloud.
Data in Azure managed disks is encrypted transparently using 256-bit AES encryption, one of the strongest block
ciphers available, and is FIPS 140-2 compliant. For more information about the cryptographic modules underlying
Azure managed disks, see Cryptography API: Next Generation
Server-side encryption does not impact the performance of managed disks and there is no additional cost.
NOTE
Temporary disks are not managed disks and are not encrypted by SSE, unless you enable encryption at host.
IMPORTANT
Customer-managed keys rely on managed identities for Azure resources, a feature of Azure Active Directory (Azure AD).
When you configure customer-managed keys, a managed identity is automatically assigned to your resources under the
covers. If you subsequently move the subscription, resource group, or managed disk from one Azure AD directory to another,
the managed identity associated with managed disks isn't transferred to the new tenant, so customer-managed keys may no
longer work. For more information, see Transferring a subscription between Azure AD directories.
General purpose: Dv3, Dav4, Dv2, Av2, B, DSv2, Dsv3, DC, DCv2, Dasv4
Upgrading the VM size triggers a validation check of whether the new VM size supports the EncryptionAtHost feature.
Next steps
Enable end-to-end encryption using encryption at host with either the Azure PowerShell module, the Azure CLI,
or the Azure portal.
Enable double encryption at rest for managed disks with either the Azure PowerShell module, the Azure CLI, or
the Azure portal.
Enable customer-managed keys for managed disks with either the Azure PowerShell module, the Azure CLI, or
the Azure portal.
Explore the Azure Resource Manager templates for creating encrypted disks with customer-managed keys
What is Azure Key Vault?
Understand how your reservation discount is applied
to Azure disk storage
11/2/2020 • 2 minutes to read
After you purchase Azure disk reserved capacity, a reservation discount is automatically applied to disk resources
that match the terms of your reservation. The reservation discount applies to disk SKUs only. Disk snapshots are
charged at pay-as-you-go rates.
For more information about Azure disk reservation, see Save costs with Azure disk reservation. For information
about pricing for Azure disk reservation, see Azure Managed Disks pricing.
Discount examples
The following examples show how the Azure disk reservation discount applies depending on your deployment.
Suppose you purchase and reserve 100 P30 disks in the US West 2 region for a one-year term. Each disk has
approximately 1 TiB of storage. Assume the cost of this sample reservation is $140,100. You can choose to pay
either the full amount up front or fixed monthly installments of $11,675 for the next 12 months.
The following scenarios describe what happens if you underuse, overuse, or tier your reserved capacity. For these
examples, assume you've signed up for a monthly reservation-payment plan.
Underusing your capacity
Suppose you deploy only 99 of your 100 reserved Azure premium solid-state drive (SSD) P30 disks for an hour
within the reservation period. The remaining P30 disk isn't applied during that hour. It also doesn't carry over.
Overusing your capacity
Suppose that for an hour within the reservation period, you use 101 premium SSD P30 disks. The reservation
discount applies only to 100 P30 disks. The remaining P30 disk is charged at pay-as-you-go rates for that hour. For
the next hour, if your usage goes down to 100 P30 disks, all usage is covered by the reservation.
Tiering your capacity
Suppose that in a given hour within your reservation period, you want to use a total of 200 P30 premium SSDs.
Also suppose you use only 100 for the first 30 minutes. During this period, your use is fully covered because you
made a reservation for 100 P30 disks. If you then discontinue the use of the first 100 (so that you're using zero)
and then begin to use the other 100 for the remaining 30 minutes, that usage is also covered under your
reservation.
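The per-hour discount logic in the scenarios above can be sketched as:

```python
def hourly_reservation_split(disks_used, disks_reserved=100):
    """For a given hour, return how many disk-hours the reservation
    covers and how many are billed at pay-as-you-go rates. Unused
    reserved capacity doesn't carry over to later hours."""
    covered = min(disks_used, disks_reserved)
    pay_as_you_go = disks_used - covered
    return covered, pay_as_you_go
```

Using 99 disks leaves one reserved disk unused (and not carried over); using 101 disks bills exactly one at pay-as-you-go rates for that hour.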
Need help? Contact us
If you have questions or need help, create a support request.
Next steps
Reduce costs with Azure Disks Reservation
What are Azure Reservations?
Virtual machine and disk performance
11/2/2020 • 11 minutes to read
This article helps clarify disk performance and how it works when you combine Azure Virtual Machines and Azure
disks. It also describes how you can diagnose bottlenecks for your disk IO and the changes you can make to
optimize for performance.
Disk IO capping
Setup:
Standard_D8s_v3
Uncached IOPS: 12,800
E30 OS disk
IOPS: 500
Two E30 data disks, each with:
IOPS: 500
The application running on the virtual machine makes a request that requires 10,000 IOPS. The VM allows all
of them, because the Standard_D8s_v3 virtual machine can execute up to 12,800 IOPS.
The 10,000 IOPS requested are broken down into three requests to the different disks:
1,000 IOPS are requested to the operating system disk.
4,500 IOPS are requested to each data disk.
All attached disks are E30 disks and can only handle 500 IOPS. So, they respond back with 500 IOPS each. The
application's performance is capped by the attached disks, and it can only process 1,500 IOPS. The application could
work at peak performance at 10,000 IOPS if better-performing disks are used, such as Premium SSD P30 disks.
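The capping behavior in this example can be sketched as:

```python
def served_iops(requested_per_disk, disk_caps, vm_uncached_limit):
    """Each disk serves at most its own IOPS cap; the VM serves at
    most its uncached IOPS limit across all attached disks."""
    per_disk = [min(req, cap)
                for req, cap in zip(requested_per_disk, disk_caps)]
    return min(sum(per_disk), vm_uncached_limit)
```

With three E30 disks (500 IOPS each), the 10,000 IOPS request is capped to 1,500; swapping in P30 disks (5,000 IOPS each) lets the full 10,000 IOPS through.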
You can adjust the host caching to match your workload requirements for each disk. You can set your host caching
to be:
Read-only : For workloads that only do read operations
Read/write : For workloads that do a balance of read and write operations
If your workload doesn't follow either of these patterns, we don't recommend that you use host caching.
Let's run through a couple examples of different host cache settings to see how it affects the data flow and
performance. In this first example, we'll look at what happens with IO requests when the host caching setting is set
to Read-only .
Setup:
Standard_D8s_v3
Cached IOPS: 16,000
Uncached IOPS: 12,800
P30 data disk
IOPS: 5,000
Host caching: Read-only
When a read is performed and the desired data is available on the cache, the cache returns the requested data.
There is no need to read from the disk. This read is counted toward the VM's cached limits.
When a read is performed and the desired data is not available on the cache, the read request is relayed to the disk.
Then the disk surfaces it to both the cache and the VM. This read is counted toward both the VM's uncached limit
and the VM's cached limit.
When a write is performed, the write has to be written to both the cache and the disk before it is considered
complete. This write is counted toward the VM's uncached limit and the VM's cached limit.
Next let's look at what happens with IO requests when the host cache setting is set to Read/write .
Setup:
Standard_D8s_v3
Cached IOPS: 16,000
Uncached IOPS: 12,800
P30 data disk
IOPS: 5,000
Host caching: Read/write
A read is handled the same way as a read-only. Writes are the only thing that's different with read/write caching.
When writing with host caching is set to Read/write , the write only needs to be written to the host cache to be
considered complete. The write is then lazily written to the disk as a background process. This means that a write is
counted toward cached IO when it is written to the cache. When it is lazily written to the disk, it counts toward the
uncached IO.
Let’s continue with our Standard_D8s_v3 virtual machine. Except this time, we'll enable host caching on the disks.
Also, now the VM's IOPS limit is 16,000 IOPS. Attached to the VM are three underlying P30 disks that can each
handle 5,000 IOPS.
Setup:
Standard_D8s_v3
Cached IOPS: 16,000
Uncached IOPS: 12,800
P30 OS disk
IOPS: 5,000
Host caching: Read/write
Two P30 data disks, each with:
IOPS: 5,000
Host caching: Read/write
The application uses a Standard_D8s_v3 virtual machine with caching enabled. It makes a request for 15,000 IOPS.
The requests are broken down as 5,000 IOPS to each underlying disk attached. No performance capping occurs.
In this case, the application running on a Standard_D8s_v3 virtual machine makes a request for 25,000 IOPS,
broken down as 5,000 IOPS to each of five attached disks: three disks that use host caching and two disks that
don't.
Since the three disks that use host caching are within the cached limits of 16,000, those requests are successfully
completed. No storage performance capping occurs.
Since the two disks that don't use host caching are within the uncached limits of 12,800, those requests are also
successfully completed. No capping occurs.
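The split between cached and uncached limits in this walkthrough can be sketched as follows (per-disk caps are assumed to have been applied already):

```python
def served_with_caching(cached_requests, uncached_requests,
                        cached_limit=16_000, uncached_limit=12_800):
    """Requests to disks with host caching draw on the VM's cached
    IOPS limit; requests to disks without caching draw on the
    uncached limit. Defaults are the Standard_D8s_v3 limits."""
    cached = min(sum(cached_requests), cached_limit)
    uncached = min(sum(uncached_requests), uncached_limit)
    return cached + uncached
```

Three cached disks at 5,000 IOPS each (15,000 ≤ 16,000) plus two uncached disks at 5,000 IOPS each (10,000 ≤ 12,800) are all served without capping.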
The Standard_D8s_v3 can achieve a total of 28,800 IOPS (the 16,000 cached limit plus the 12,800 uncached
limit). Using the metrics, let's investigate what's going on and identify our storage IO bottleneck. On the left
pane, select Metrics:
Let's first take a look at our VM Cached IOPS Consumed Percentage metric:
This metric tells us that 61% of the 16,000 IOPS allotted to the cached IOPS on the VM is being used. This
percentage means that the storage IO bottleneck isn't with the disks that are cached because it isn't at 100%. Now
let's look at our VM Uncached IOPS Consumed Percentage metric:
This metric is at 100%. It tells us that all of the 12,800 IOPS allotted to the uncached IOPS on the VM are being
used. One way we can remediate this issue is to change the size of our VM to a larger size that can handle the
additional IO. But before we do that, let's look at the attached disk to find out how many IOPS they are seeing.
Check the OS Disk by looking at the OS Disk IOPS Consumed Percentage :
This metric tells us that around 90% of the 5,000 IOPS provisioned for this P30 OS disk is being used. This
percentage means there's no bottleneck at the OS Disk. Now let's check the data disks that are attached to the VM
by looking at the Data Disk IOPS Consumed Percentage :
This metric tells us that the average IOPS consumed percentage across all the attached disks is around 42%. This
percentage is calculated based on the IOPS that are used by the disks and that aren't being served from the host
cache. Let's drill deeper into this metric by applying splitting by the LUN value:
This metric tells us the data disks attached on LUN 3 and 2 are using around 85% of their provisioned IOPS. Here is
a diagram of what the IO looks like from the VM and disks architecture:
Azure premium storage: design for high
performance
11/2/2020 • 35 minutes to read
This article provides guidelines for building high performance applications using Azure
Premium Storage. You can use the instructions provided in this document combined with
performance best practices applicable to technologies used by your application. To
illustrate the guidelines, we have used SQL Server running on Premium Storage as an
example throughout this document.
While we address performance scenarios for the Storage layer in this article, you will need
to optimize the application layer. For example, if you are hosting a SharePoint Farm on
Azure Premium Storage, you can use the SQL Server examples from this article to
optimize the database server. Additionally, optimize the SharePoint Farm's Web server and
Application server to get the most performance.
This article will help answer the following common questions about optimizing application
performance on Azure Premium Storage:
How do you measure your application's performance?
Why are you not seeing the expected high performance?
Which factors influence your application's performance on Premium Storage?
How do these factors influence the performance of your application on Premium Storage?
How can you optimize for IOPS, bandwidth, and latency?
We have provided these guidelines specifically for Premium Storage because workloads
running on Premium Storage are highly performance sensitive. We have provided
examples where appropriate. You can also apply some of these guidelines to applications
running on IaaS VMs with Standard Storage disks.
NOTE
Sometimes, what appears to be a disk performance issue is actually a network bottleneck. In
these situations, you should optimize your network performance.
If you are looking to benchmark your disk, see our articles on benchmarking a disk:
For Linux: Benchmark your application on Azure Disk Storage
For Windows: Benchmarking a disk.
If your VM supports accelerated networking, you should make sure it is enabled. If it is not
enabled, you can enable it on already deployed VMs on both Windows and Linux.
Before you begin, if you are new to Premium Storage, first read the Select an Azure disk
type for IaaS VMs and Scalability targets for premium page blob storage accounts.
IOPS
IOPS, or Input/output Operations Per Second, is the number of requests that your
application is sending to the storage disks in one second. An input/output operation could
be read or write, sequential, or random. Online Transaction Processing (OLTP) applications
like an online retail website need to process many concurrent user requests immediately.
The user requests are insert and update intensive database transactions, which the
application must process quickly. Therefore, OLTP applications require very high IOPS.
Such applications handle millions of small and random IO requests. If you have such an
application, you must design the application infrastructure to optimize for IOPS. In the
later section, Optimizing Application Performance, we discuss in detail all the factors that
you must consider to get high IOPS.
When you attach a premium storage disk to your high scale VM, Azure provisions for you
a guaranteed number of IOPS as per the disk specification. For example, a P50 disk
provisions 7500 IOPS. Each high scale VM size also has a specific IOPS limit that it can
sustain. For example, a Standard GS5 VM has 80,000 IOPS limit.
Throughput
Throughput, or bandwidth is the amount of data that your application is sending to the
storage disks in a specified interval. If your application is performing input/output
operations with large IO unit sizes, it requires high throughput. Data warehouse
applications tend to issue scan intensive operations that access large portions of data at a
time and commonly perform bulk operations. In other words, such applications require
higher throughput. If you have such an application, you must design its infrastructure to
optimize for throughput. In the next section, we discuss in detail the factors you must tune
to achieve this.
When you attach a premium storage disk to a high scale VM, Azure provisions throughput
as per that disk specification. For example, a P50 disk provisions 250 MB per second disk
throughput. Each high scale VM size also has a specific throughput limit that it can
sustain. For example, Standard GS5 VM has a maximum throughput of 2,000 MB per
second.
There is a relation between throughput and IOPS, as shown in the following formula:
Throughput = IOPS × IO size
Therefore, it is important to determine the optimal throughput and IOPS values that your
application requires. As you try to optimize one, the other is also affected. In a later
section, Optimizing Application Performance, we discuss optimizing IOPS and throughput
in more detail.
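This relation between throughput, IOPS, and IO size can be sketched as a pair of helpers:

```python
import math

def throughput_mbps(iops, io_size_bytes):
    """Throughput = IOPS x IO size (MB = 10^6 bytes)."""
    return iops * io_size_bytes / 1e6

def required_iops(target_mbps, io_size_bytes):
    """IOPS needed to sustain a target throughput at a given IO size."""
    return math.ceil(target_mbps * 1e6 / io_size_bytes)
```

For example, sustaining a P50's 250 MB per second at a 64 KiB IO size requires roughly 3,815 IOPS, well below the disk's 7,500 provisioned IOPS.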
Latency
Latency is the time it takes an application to receive a single request, send it to the storage
disks and send the response to the client. This is a critical measure of an application's
performance in addition to IOPS and Throughput. The Latency of a premium storage disk
is the time it takes to retrieve the information for a request and communicate it back to
your application. Premium Storage provides consistent low latencies. Premium Disks are
designed to provide single-digit millisecond latencies for most IO operations. If you enable
ReadOnly host caching on premium storage disks, you can get much lower read latency.
We will discuss Disk Caching in more detail in later section on Optimizing Application
Performance.
When you are optimizing your application to get higher IOPS and Throughput, it will affect
the latency of your application. After tuning the application performance, always evaluate
the latency of the application to avoid unexpected high latency behavior.
The following control plane operations on Managed Disks may involve movement of the
Disk from one Storage location to another. This is orchestrated via background copy of
data that can take several hours to complete, typically less than 24 hours depending on
the amount of data in the disks. During that time your application can experience higher
than usual read latency as some reads can get redirected to the original location and can
take longer to complete. There is no impact on write latency during this period.
Update the storage type.
Detach and attach a disk from one VM to another.
Create a managed disk from a VHD.
Create a managed disk from a snapshot.
Convert unmanaged disks to managed disks.
% Read operations
% Write operations
% Random operations
% Sequential operations
IO request size
Average Throughput
Max. Throughput
Min. Latency
Average Latency
Max. CPU
Average CPU
Max. Memory
Average Memory
Queue Depth
NOTE
You should consider scaling these numbers based on expected future growth of your application.
It is a good idea to plan for growth ahead of time, because it could be harder to change the
infrastructure for improving performance later.
If you have an existing application and want to move to Premium Storage, first build the
checklist above for the existing application. Then, build a prototype of your application on
Premium Storage and design the application based on guidelines described in Optimizing
Application Performance in a later section of this document. The next article describes the
tools you can use to gather the performance measurements.
Counters to measure application performance requirements
The best way to measure performance requirements of your application, is to use
performance-monitoring tools provided by the operating system of the server. You can
use PerfMon for Windows and iostat for Linux. These tools capture counters
corresponding to each measure explained in the above section. You must capture the
values of these counters when your application is running its normal, peak, and off-hours
workloads.
The PerfMon counters are available for processor, memory and, each logical disk and
physical disk of your server. When you use premium storage disks with a VM, the physical
disk counters are for each premium storage disk, and logical disk counters are for each
volume created on the premium storage disks. You must capture the values for the disks
that host your application workload. If there is a one to one mapping between logical and
physical disks, you can refer to physical disk counters; otherwise refer to the logical disk
counters. On Linux, the iostat command generates a CPU and disk utilization report. The
disk utilization report provides statistics per physical device or partition. If you have a
database server with its data and logs on separate disks, collect this data for both disks.
The following table describes counters for disks, processors, and memory:
COUNTER                  DESCRIPTION                                              PERFMON                                  IOSTAT
Disk reads and writes    % of read and write operations performed on the disk     % Disk Read Time, % Disk Write Time      r/s, w/s
PERFORMANCE FACTOR | IOPS | THROUGHPUT | LATENCY
VM size | Use a VM size that offers IOPS greater than your application requirement. | Use a VM size with a Throughput limit greater than your application requirement. | Use a VM size that offers scale limits greater than your application requirement.
Disk size | Use a disk size that offers IOPS greater than your application requirement. | Use a disk size with a Throughput limit greater than your application requirement. | Use a disk size that offers scale limits greater than your application requirement.
VM and Disk Limits | The IOPS limit of the chosen VM size should be greater than the total IOPS driven by the storage disks attached to it. | The Throughput limit of the chosen VM size should be greater than the total Throughput driven by the premium storage disks attached to it. | The scale limits of the chosen VM size must be greater than the total scale limits of the attached premium storage disks.
Stripe Size | Use a smaller stripe size for the random small IO pattern seen in OLTP applications. For example, use a stripe size of 64 KB for a SQL Server OLTP application. | Use a larger stripe size for the sequential large IO pattern seen in Data Warehouse applications. For example, use a 256 KB stripe size for a SQL Server Data Warehouse application. | N/A
Queue Depth | A larger Queue Depth yields higher IOPS. | A larger Queue Depth yields higher Throughput. | A smaller Queue Depth yields lower latencies.
Nature of IO requests
An IO request is a unit of input/output operation that your application will be performing.
Identifying the nature of IO requests, random or sequential, read or write, small or large,
will help you determine the performance requirements of your application. It is important
to understand the nature of IO requests, to make the right decisions when designing your
application infrastructure. IOs must be distributed evenly to achieve the best performance
possible.
IO size is one of the more important factors. The IO size is the size of the input/output
operation request generated by your application. The IO size has a significant impact on
performance especially on the IOPS and Bandwidth that the application is able to achieve.
The following formula shows the relationship between IOPS, IO size, and
Bandwidth/Throughput:

IOPS x IO size = Bandwidth/Throughput
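As a quick sketch of this relationship (the helper name is illustrative; the 1 MB = 1,000 KB convention matches the arithmetic used later in this article):

```python
def throughput_mb_per_sec(iops: float, io_size_kb: float) -> float:
    """Throughput (MB/s) = IOPS x IO size, using 1 MB = 1,000 KB."""
    return iops * io_size_kb / 1000

# 5,000 IOPS at an 8 KB IO size yields 40 MB/s, as in the P30 example.
print(throughput_mb_per_sec(5000, 8))    # 40.0
print(throughput_mb_per_sec(200, 1024))  # 204.8, i.e. roughly 200 MB/s
```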
Some applications allow you to alter their IO size, while some applications do not. For
example, SQL Server determines the optimal IO size itself, and does not provide users with
any knobs to change it. On the other hand, Oracle provides a parameter called
DB_BLOCK_SIZE using which you can configure the I/O request size of the database.
If you are using an application that does not allow you to change the IO size, use the
guidelines in this article to optimize the performance KPI that is most relevant to your
application. For example:
An OLTP application generates millions of small and random IO requests. To handle
these types of IO requests, you must design your application infrastructure to get
higher IOPS.
A data warehousing application generates large and sequential IO requests. To handle
these types of IO requests, you must design your application infrastructure to get
higher Bandwidth or Throughput.
If you are using an application that allows you to change the IO size, use this rule of
thumb for the IO size in addition to the other performance guidelines:
Use a smaller IO size to get higher IOPS. For example, 8 KB for an OLTP application.
Use a larger IO size to get higher Bandwidth/Throughput. For example, 1024 KB for a data
warehouse application.
Here is an example of how you can calculate the IOPS and Throughput/Bandwidth for
your application. Consider an application using a P30 disk. The maximum IOPS and
Throughput/Bandwidth a P30 disk can achieve is 5000 IOPS and 200 MB per second
respectively. Now, if your application requires the maximum IOPS from the P30 disk and
you use a smaller IO size like 8 KB, the resulting Bandwidth you will be able to get is 40
MB per second. However, if your application requires the maximum
Throughput/Bandwidth from P30 disk, and you use a larger IO size like 1024 KB, the
resulting IOPS will be less, 200 IOPS. Therefore, tune the IO size such that it meets both
your application's IOPS and Throughput/Bandwidth requirement. The following table
summarizes the different IO sizes and their corresponding IOPS and Throughput for a P30
disk.
APPLICATION REQUIREMENT | I/O SIZE | IOPS | THROUGHPUT/BANDWIDTH
Maximum IOPS | 8 KB | 5,000 | 40 MB per second
Maximum Throughput | 1,024 KB | 200 | 200 MB per second
Maximum IOPS and Throughput | 40 KB | 5,000 | 200 MB per second
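The tradeoff above can be computed directly. This sketch (function name is illustrative) caps the request rate by whichever P30 limit, IOPS or throughput, is hit first:

```python
def p30_achievable(io_size_kb: float, max_iops: int = 5000, max_mbps: float = 200.0):
    """Return (IOPS, MB/s) achievable for a given IO size on one P30 disk.
    Whichever cap is reached first (IOPS or throughput) limits the other.
    Uses 1 MB = 1,000 KB, matching the article's arithmetic."""
    iops = min(max_iops, max_mbps * 1000 / io_size_kb)
    return iops, iops * io_size_kb / 1000

print(p30_achievable(8))     # (5000, 40.0): IOPS-bound
print(p30_achievable(1024))  # (195.3125, 200.0): throughput-bound
print(p30_achievable(40))    # (5000, 200.0): both limits reached together
```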
To get IOPS and Bandwidth higher than the maximum value of a single premium storage
disk, use multiple premium disks striped together. For example, stripe two P30 disks to get
a combined IOPS of 10,000 IOPS or a combined Throughput of 400 MB per second. As
explained in the next section, you must use a VM size that supports the combined disk
IOPS and Throughput.
NOTE
As you increase either IOPS or Throughput, the other also increases; make sure you do not hit the
throughput or IOPS limits of the disk or VM when increasing either one.
To witness the effects of IO size on application performance, you can run benchmarking
tools on your VM and disks. Create multiple test runs and use different IO size for each run
to see the impact. Refer to the Benchmarking article, linked at the end, for more details.
[Table: premium storage-capable VM sizes, listing CPU cores, memory, VM disk sizes, max data disks, cache size, IOPS, and bandwidth/cache IO limits]
To view a complete list of all available Azure VM sizes, refer to Sizes for virtual machines in
Azure. Choose a VM size that can meet and scale to your desired application
performance requirements. In addition, take the following important considerations into
account when choosing VM sizes.
Scale Limits
The maximum IOPS limits per VM and per disk are different and independent of each
other. Make sure that the application is driving IOPS within the limits of the VM as well as
the premium disks attached to it. Otherwise, application performance will experience
throttling.
As an example, suppose an application requirement is a maximum of 4,000 IOPS. To
achieve this, you provision a P30 disk on a DS1 VM. The P30 disk can deliver up to 5,000
IOPS. However, the DS1 VM is limited to 3,200 IOPS. Consequently, application
performance will be capped by the VM limit at 3,200 IOPS, and performance will be
degraded. To prevent this situation, choose a VM size and disk size that both meet
application requirements.
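A minimal sketch of this throttling behavior (the helper name is illustrative; the DS1, DS13, and P30 limits are the ones quoted in this article):

```python
def effective_iops(vm_iops_limit: int, disk_iops_limits: list) -> int:
    """An application is throttled by whichever cap it hits first:
    the VM-level IOPS limit or the combined limit of the attached disks."""
    return min(vm_iops_limit, sum(disk_iops_limits))

# DS1 (3,200 IOPS) with one P30 (5,000 IOPS): constrained at the VM level.
print(effective_iops(3200, [5000]))  # 3200
# DS13 (25,600 IOPS) with four P30 disks: constrained at the disk level.
print(effective_iops(25600, [5000, 5000, 5000, 5000]))  # 20000
```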
Cost of Operation
In many cases, it is possible that your overall cost of operation using Premium Storage is
lower than using Standard Storage.
For example, consider an application requiring 16,000 IOPS. To achieve this performance,
you will need a Standard_D14 Azure IaaS VM, which can give a maximum IOPS of 16,000
using 32 standard storage 1 TB disks. Each 1-TB standard storage disk can achieve a
maximum of 500 IOPS. The estimated cost of this VM per month will be $1,570. The
monthly cost of 32 standard storage disks will be $1,638. The estimated total monthly cost
will be $3,208.
However, if you hosted the same application on Premium Storage, you will need a smaller
VM size and fewer premium storage disks, thus reducing the overall cost. A
Standard_DS13 VM can meet the 16,000 IOPS requirement using four P30 disks. The
DS13 VM has a maximum IOPS of 25,600 and each P30 disk has a maximum IOPS of
5,000. Overall, this configuration can achieve 5,000 x 4 = 20,000 IOPS. The estimated cost
of this VM per month will be $1,003. The monthly cost of four P30 premium storage disks
will be $544.34. The estimated total monthly cost will be $1,544.
The table below summarizes the cost breakdown of this scenario for Standard and Premium
Storage:

COST | STANDARD STORAGE | PREMIUM STORAGE
Cost of VM per month | $1,570 (Standard_D14) | $1,003 (Standard_DS13)
Cost of Disks per month | $1,638.40 (32 x 1-TB disks) | $544.34 (4 x P30 disks)
Overall cost per month | $3,208 | $1,544
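The arithmetic behind this comparison can be checked with a short sketch (the dollar figures are the estimates quoted above, not current prices):

```python
# Estimated monthly costs from the scenario above.
standard = {"vm": 1570.00, "disks": 1638.40}  # Standard_D14 + 32 x 1-TB standard disks
premium = {"vm": 1003.00, "disks": 544.34}    # Standard_DS13 + 4 x P30 disks

standard_total = sum(standard.values())
premium_total = sum(premium.values())
print(f"Standard: ${standard_total:,.2f}")  # Standard: $3,208.40
print(f"Premium:  ${premium_total:,.2f}")   # Premium:  $1,547.34
print(f"Savings:  ${standard_total - premium_total:,.2f}")
```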
Linux Distros
With Azure Premium Storage, you get the same level of performance for VMs running
Windows and Linux. We support many flavors of Linux distros, and you can see the
complete list here. It is important to note that different distros are better suited for
different types of workloads. You will see different levels of performance depending on the
distro your workload is running on. Test the Linux distros with your application and choose
the one that works best.
When running Linux with Premium Storage, check the latest updates about required
drivers to ensure high performance.
PREMIUM SSD SIZE | DISK SIZE IN GIB | PROVISIONED IOPS PER DISK | PROVISIONED THROUGHPUT PER DISK | MAX BURST IOPS PER DISK | MAX BURST THROUGHPUT PER DISK | MAX BURST DURATION | ELIGIBLE FOR RESERVATION
P1 | 4 | 120 | 25 MB/sec | 3,500 | 170 MB/sec | 30 min | No
P2 | 8 | 120 | 25 MB/sec | 3,500 | 170 MB/sec | 30 min | No
P3 | 16 | 120 | 25 MB/sec | 3,500 | 170 MB/sec | 30 min | No
P4 | 32 | 120 | 25 MB/sec | 3,500 | 170 MB/sec | 30 min | No
P6 | 64 | 240 | 50 MB/sec | 3,500 | 170 MB/sec | 30 min | No
P10 | 128 | 500 | 100 MB/sec | 3,500 | 170 MB/sec | 30 min | No
P15 | 256 | 1,100 | 125 MB/sec | 3,500 | 170 MB/sec | 30 min | No
P20 | 512 | 2,300 | 150 MB/sec | 3,500 | 170 MB/sec | 30 min | No
P30 | 1,024 | 5,000 | 200 MB/sec | N/A | N/A | N/A | Yes, up to one year
P40 | 2,048 | 7,500 | 250 MB/sec | N/A | N/A | N/A | Yes, up to one year
P50 | 4,096 | 7,500 | 250 MB/sec | N/A | N/A | N/A | Yes, up to one year
P60 | 8,192 | 16,000 | 500 MB/sec | N/A | N/A | N/A | Yes, up to one year
P70 | 16,384 | 18,000 | 750 MB/sec | N/A | N/A | N/A | Yes, up to one year
P80 | 32,767 | 20,000 | 900 MB/sec | N/A | N/A | N/A | Yes, up to one year
How many disks you choose depends on the disk size chosen. You could use a single P50
disk or multiple P10 disks to meet your application requirement. Take the considerations
listed below into account when making the choice.
Scale Limits (IOPS and Throughput)
The IOPS and Throughput limits of each Premium disk size are different and independent
of the VM scale limits. Make sure that the total IOPS and Throughput from the disks are
within scale limits of the chosen VM size.
For example, suppose an application requires a maximum of 250 MB/sec of Throughput,
and you are using a DS4 VM with a single P30 disk. The DS4 VM can give up to 256 MB/sec
of Throughput, but a single P30 disk has a Throughput limit of 200 MB/sec.
Consequently, the application will be constrained at 200 MB/sec by the disk limit. To
overcome this limit, provision more than one data disk on the VM, or resize your disks to
P40 or P50.
NOTE
Reads served by the cache are not included in the disk IOPS and Throughput, and hence are not
subject to disk limits. The cache has its own separate IOPS and Throughput limit per VM.
For example, if initially your reads and writes are 60 MB/sec and 40 MB/sec respectively, over
time the cache warms up and serves more and more of the reads from the cache. Then, you can
get higher write Throughput from the disk.
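A toy model of this effect (the 100 MB/s disk limit and the cache hit ratios here are illustrative assumptions, not values from this article):

```python
def disk_write_headroom(disk_limit_mbps: float, read_mbps: float,
                        cache_hit_ratio: float) -> float:
    """Reads served from BlobCache do not count against the disk's
    throughput limit, so a warmer cache leaves more headroom for writes."""
    reads_from_disk = read_mbps * (1 - cache_hit_ratio)
    return disk_limit_mbps - reads_from_disk

# Cold cache: all 60 MB/s of reads hit the disk.
print(disk_write_headroom(100, 60, cache_hit_ratio=0.0))   # 40.0
# Warm cache serving 75% of reads: far more room for writes.
print(disk_write_headroom(100, 60, cache_hit_ratio=0.75))  # 85.0
```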
Number of Disks
Determine the number of disks you will need by assessing application requirements. Each
VM size also has a limit on the number of disks that you can attach to the VM. Typically,
this is twice the number of cores. Ensure that the VM size you choose can support the
number of disks needed.
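These two checks can be sketched together (helper names are illustrative; always verify the actual data-disk limit documented for a specific VM size):

```python
import math

def disks_needed(required_iops: int, iops_per_disk: int) -> int:
    """Premium disks needed to meet an IOPS requirement."""
    return math.ceil(required_iops / iops_per_disk)

def fits_on_vm(num_disks: int, vm_cores: int) -> bool:
    """Data-disk limits are typically about twice the core count."""
    return num_disks <= 2 * vm_cores

n = disks_needed(16000, 5000)  # 16,000 IOPS using P30 disks (5,000 IOPS each)
print(n, fits_on_vm(n, vm_cores=8))  # 4 True
```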
Remember, the Premium Storage disks have higher performance capabilities compared to
Standard Storage disks. Therefore, if you are migrating your application from Azure IaaS
VM using Standard Storage to Premium Storage, you will likely need fewer premium disks
to achieve the same or higher performance for your application.
Disk caching
High Scale VMs that leverage Azure Premium Storage have a multi-tier caching
technology called BlobCache. BlobCache uses a combination of the host RAM and local
SSD for caching. This cache is available for the Premium Storage persistent disks and the
VM local disks. By default, this cache setting is set to Read/Write for OS disks and
ReadOnly for data disks hosted on Premium Storage. With disk caching enabled on the
Premium Storage disks, the high scale VMs can achieve extremely high levels of
performance that exceed the underlying disk performance.
WARNING
Disk Caching is not supported for disks 4 TiB and larger. If multiple disks are attached to your
VM, each disk that is smaller than 4 TiB will support caching.
Changing the cache setting of an Azure disk detaches and re-attaches the target disk. If it is the
operating system disk, the VM is restarted. Stop all applications/services that might be affected
by this disruption before changing the disk cache setting. Not following those recommendations
could lead to data corruption.
To learn more about how BlobCache works, refer to the Inside Azure Premium Storage
blog post.
It is important to enable cache on the right set of disks. Whether you should enable disk
caching on a premium disk depends on the workload pattern that disk will be
handling. The table below shows the default cache settings for OS and data disks:

DISK TYPE | DEFAULT CACHE SETTING
OS disk | ReadWrite
Data disk | ReadOnly

Following are the recommended disk cache settings for data disks.
ReadOnly
By configuring ReadOnly caching on Premium Storage data disks, you can achieve low
read latency and very high read IOPS and Throughput for your application. This is due
to two reasons:
1. Reads performed from cache, which is on the VM memory and local SSD, are much
faster than reads from the data disk, which is on the Azure blob storage.
2. Premium Storage does not count the reads served from cache towards the disk IOPS
and Throughput. Therefore, your application is able to achieve higher total IOPS and
Throughput.
ReadWrite
By default, the OS disks have ReadWrite caching enabled. We have recently added support
for ReadWrite caching on data disks as well. If you are using ReadWrite caching, you must
have a proper way to write the data from cache to persistent disks. For example, SQL
Server handles writing cached data to the persistent storage disks on its own. Using
ReadWrite cache with an application that does not handle persisting the required data can
lead to data loss, if the VM crashes.
None
Currently, None is only supported on data disks. It is not supported on OS disks. If you set
None on an OS disk, Azure overrides this internally and sets it to ReadOnly.
As an example, you can apply these guidelines to SQL Server running on Premium
Storage as follows:
1. Configure "ReadOnly" cache on premium storage disks hosting data files.
a. The fast reads from cache lower the SQL Server query time since data pages are
retrieved much faster from the cache compared to directly from the data disks.
b. Serving reads from cache means there is additional Throughput available from
premium data disks. SQL Server can use this additional Throughput towards retrieving
more data pages and other operations like backup/restore, batch loads, and index
rebuilds.
2. Configure "None" cache on premium storage disks hosting the log files.
a. Log files have primarily write-heavy operations. Therefore, they do not benefit from
the ReadOnly cache.
Disk striping
When a high scale VM is attached with several premium storage persistent disks, the disks
can be striped together to aggregate their IOPs, bandwidth, and storage capacity.
On Windows, you can use Storage Spaces to stripe disks together. You must configure one
column for each disk in a pool. Otherwise, the overall performance of the striped volume
can be lower than expected, due to uneven distribution of traffic across the disks.
Important: Using the Server Manager UI, you can set up to 8 columns for
a striped volume. When attaching more than eight disks, use PowerShell to create the
volume, where you can set the number of columns equal to the number of
disks. For example, if there are 16 disks in a single stripe set, specify 16 columns in the
NumberOfColumns parameter of the New-VirtualDisk PowerShell cmdlet.
On Linux, use the MDADM utility to stripe disks together. For detailed steps on striping
disks on Linux refer to Configure Software RAID on Linux.
Stripe Size
An important configuration in disk striping is the stripe size. The stripe size or block size is
the smallest chunk of data that the application can address on a striped volume. The stripe size
you configure depends on the type of application and its request pattern. If you choose the
wrong stripe size, it could lead to IO misalignment, which leads to degraded performance
of your application.
For example, if an IO request generated by your application is bigger than the disk stripe
size, the storage system writes it across stripe unit boundaries on more than one disk.
When it is time to access that data, it has to seek across more than one stripe unit to
complete the request. The cumulative effect of such behavior can lead to substantial
performance degradation. On the other hand, if the IO request size is smaller than the stripe
size, and if it is random in nature, the IO requests may pile up on the same disk, causing a
bottleneck and ultimately degrading IO performance.
Depending on the type of workload your application is running, choose an appropriate
stripe size. For random small IO requests, use a smaller stripe size. Whereas for large
sequential IO requests use a larger stripe size. Find out the stripe size recommendations
for the application you will be running on Premium Storage. For SQL Server, configure
stripe size of 64 KB for OLTP workloads and 256 KB for data warehousing workloads. See
Performance best practices for SQL Server on Azure VMs to learn more.
NOTE
You can stripe together a maximum of 32 premium storage disks on a DS series VM and 64
premium storage disks on a GS series VM.
Multi-threading
Azure designed the Premium Storage platform to be massively parallel. Therefore, a multi-
threaded application achieves much higher performance than a single-threaded
application. A multi-threaded application splits up its tasks across multiple threads and
increases efficiency of its execution by utilizing the VM and disk resources to the
maximum.
For example, if your application is running on a single core VM using two threads, the CPU
can switch between the two threads to achieve efficiency. While one thread is waiting on a
disk IO to complete, the CPU can switch to the other thread. In this way, two threads can
accomplish more than a single thread would. If the VM has more than one core, it further
decreases running time since each core can execute tasks in parallel.
You may not be able to change the way an off-the-shelf application implements single
threading or multi-threading. For example, SQL Server is capable of handling multi-CPU
and multi-core. However, SQL Server decides under what conditions it will leverage one or
more threads to process a query. It can run queries and build indexes using multi-
threading. For a query that involves joining large tables and sorting data before returning
to the user, SQL Server will likely use multiple threads. However, a user cannot control
whether SQL Server executes a query using a single thread or multiple threads.
There are configuration settings that you can alter to influence this multi-threading or
parallel processing of an application. For example, in the case of SQL Server, it is the
Maximum Degree of Parallelism configuration. This setting, called MAXDOP, allows you to
configure the maximum number of processors SQL Server can use for parallel processing.
You can configure MAXDOP for individual queries or index operations. This is beneficial
when you want to balance the resources of your system for a performance-critical application.
For example, say your application using SQL Server is executing a large query and an
index operation at the same time. Let us assume that you wanted the index operation to be
more performant than the large query. In such a case, you can set the MAXDOP value
of the index operation higher than the MAXDOP value for the query. This way, SQL
Server has a greater number of processors that it can leverage for the index operation
than for the large query. Remember, you do not control the number of threads SQL
Server uses for each operation; you control the maximum number of processors
dedicated to multi-threading.
Learn more about Degrees of Parallelism in SQL Server. Find out such settings that
influence multi-threading in your application and their configurations to optimize
performance.
Queue depth
The queue depth (also called queue length or queue size) is the number of pending IO
requests in the system. The value of the queue depth determines how many IO operations
your application can line up for the storage disks to process. It affects all three
application performance indicators discussed in this article: IOPS, Throughput,
and latency.
Queue Depth and multi-threading are closely related. The Queue Depth value indicates
how much multi-threading the application can achieve. If the Queue Depth is large, the
application can execute more operations concurrently; in other words, more multi-
threading. If the Queue Depth is small, even though the application is multi-threaded, it will
not have enough requests lined up for concurrent execution.
Typically, off-the-shelf applications do not allow you to change the queue depth, because if
set incorrectly it can do more harm than good. Applications set the right value of
queue depth to get optimal performance. However, it is important to understand this
concept so that you can troubleshoot performance issues with your application. You can
also observe the effects of queue depth by running benchmarking tools on your system.
Some applications provide settings to influence the Queue Depth. For example, the
MAXDOP (maximum degree of parallelism) setting in SQL Server, explained in the previous
section. MAXDOP is a way to influence Queue Depth and multi-threading, although it does
not directly change the Queue Depth value of SQL Server.
High queue depth
A high queue depth lines up more operations on the disk. The disk knows the next request
in its queue ahead of time. Consequently, the disk can schedule operations ahead of time
and process them in an optimal sequence. Since the application is sending more requests
to the disk, the disk can process more parallel IOs. Ultimately, the application will be able
to achieve higher IOPS. Since the application is processing more requests, its total
Throughput also increases.
Typically, an application can achieve maximum Throughput with 8-16+ outstanding IOs
per attached disk. If the queue depth is one, the application is not pushing enough IOs to
the system, and it will process a smaller amount of work in a given period; in other words,
less Throughput.
For example, in SQL Server, setting the MAXDOP value for a query to "4" informs SQL
Server that it can use up to four cores to execute the query. SQL Server will determine
what is best queue depth value and the number of cores for the query execution.
Optimal queue depth
A very high queue depth value also has its drawbacks. If the queue depth value is too high,
the application will try to drive very high IOPS. Unless the application has persistent disks
with sufficient provisioned IOPS, this can negatively affect application latencies. The
following formula shows the relationship between IOPS, latency, and queue depth:

Queue Depth = IOPS x Latency (latency in seconds)
Do not configure Queue Depth to an arbitrarily high value, but to an optimal value that
can deliver enough IOPS for the application without affecting latencies. For example, if the
application latency needs to be 1 millisecond, the Queue Depth required to achieve 5,000
IOPS is QD = 5000 x 0.001 = 5.
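The example above is a direct application of this formula; as a sketch:

```python
def required_queue_depth(target_iops: float, latency_sec: float) -> float:
    """Queue Depth = IOPS x latency (Little's law applied to disk IO)."""
    return target_iops * latency_sec

# 5,000 IOPS at 1 ms latency needs a queue depth of about 5.
print(required_queue_depth(5000, 0.001))  # 5.0
```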
Queue Depth for Striped Volume
For a striped volume, maintain a queue depth high enough that every disk individually
reaches its peak queue depth. For example, consider an application that pushes a queue
depth of 2 with four disks in the stripe. The two IO requests will go to two disks and the
remaining two disks will be idle. Therefore, configure the queue depth such that all the
disks can be busy. The formula below shows how to determine the queue depth of striped
volumes:

Queue Depth of striped volume = per-disk queue depth x number of disks in the stripe
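A sketch of the striped-volume calculation (using the rule of thumb of 8-16+ outstanding IOs per disk mentioned earlier):

```python
def striped_queue_depth(per_disk_queue_depth: int, num_disks: int) -> int:
    """Total queue depth needed to keep every disk in the stripe busy."""
    return per_disk_queue_depth * num_disks

# Four striped disks, each needing ~8 outstanding IOs for peak throughput.
print(striped_queue_depth(8, 4))  # 32
```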
Throttling
Azure Premium Storage provisions a specified number of IOPS and amount of Throughput
depending on the VM sizes and disk sizes you choose. Anytime your application tries to
drive IOPS or Throughput above these VM or disk limits, Premium Storage throttles it.
This manifests as degraded performance in your application, which
can mean higher latency, lower Throughput, or lower IOPS. If Premium Storage does not
throttle, your application could completely fail by exceeding what its resources are capable
of achieving. So, to avoid performance issues due to throttling, always provision sufficient
resources for your application. Take into consideration what we discussed in the VM sizes
and Disk sizes sections above. Benchmarking is the best way to figure out what resources
you will need to host your application.
Next steps
If you are looking to benchmark your disk, see our articles on benchmarking a disk:
For Linux: Benchmark your application on Azure Disk Storage
For Windows: Benchmarking a disk.
Learn more about the available disk types:
For Linux: Select a disk type
For Windows: Select a disk type
For SQL Server users, read articles on Performance Best Practices for SQL Server:
Performance Best Practices for SQL Server in Azure Virtual Machines
Azure Premium Storage provides highest performance for SQL Server in Azure VM
Managed disk bursting
11/2/2020 • 6 minutes to read
On Azure, we offer the ability to boost disk storage IOPS and MB/s performance, referred to as bursting, on both
Virtual Machines and disks. Bursting is useful in many scenarios, such as handling unexpected disk traffic or
processing batch jobs. You can leverage VM-level and disk-level bursting together to achieve great baseline and
burst performance on both your VM and disk.
Note that bursting on disks and VMs is independent of one another. If you have a bursting disk, you do
not need a bursting VM to allow your disk to burst, and if you have a bursting VM, you do not need a bursting disk
to allow your VM to burst.
Common scenarios
The following scenarios can benefit greatly from bursting:
Improving boot times – With bursting, your instance boots at a significantly faster rate. For example, the
default OS disk for premium-enabled VMs is the P4 disk, which has a provisioned performance of up to 120 IOPS
and 25 MB/s. With bursting, the P4 can go up to 3,500 IOPS and 170 MB/s, allowing boot times to be up to
six times faster.
Handling batch jobs – Some application workloads are cyclical in nature: they require baseline performance
most of the time and higher performance for short periods. An example is an
accounting program that processes transactions daily, requiring a small amount of disk traffic, and then, at the
end of the month, runs reconciliation reports that need a much higher amount of disk traffic.
Preparedness for traffic spikes – Web servers and their applications can experience traffic surges at any
time. If your web server is backed by VMs or disks using bursting, the servers are better equipped to handle
traffic spikes.
Bursting flow
The bursting credit system applies in the same manner at both the virtual machine level and the disk level. Your
resource, either a VM or a disk, starts with fully stocked credits. These credits allow you to burst for 30
minutes at the maximum burst rate. Bursting credits accumulate whenever your resource runs below its
performance limits: for all IOPS and MB/s used below the limit, you accumulate credits. If your resource has
accrued credits and your workload needs the extra performance, your resource can spend those credits to go
above its performance limit and get the disk IO performance it needs to meet the demand.
It is completely up to you how you use the 30 minutes of bursting; you can use it consecutively or sporadically
throughout the day. A resource is deployed with full credits, and when they are depleted it takes less than a day to
fully restock. You can accumulate and spend bursting credits at your discretion; the 30-minute bucket does not
need to be full again before you can burst.
One thing to note about burst accumulation is that it differs for each resource, since it is based on the unused
IOPS and MB/s below the provisioned amounts. This means that products with higher baseline performance can
accrue bursting credits faster than lower-performing ones. For example, a P1 disk idling with
no activity accrues credits at 120 IOPS per second, whereas an idle P20 disk accrues credits at 2,300 IOPS per
second.
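The credit bucket described above can be approximated with a toy simulation (illustrative only; among other simplifications, it models a single bucket, whereas the platform tracks IOPS and MB/s credits independently):

```python
def simulate_credits(provisioned_iops: int, burst_iops: int,
                     demand_iops: int, seconds: int,
                     start_credits=None) -> float:
    """Toy model of the bursting credit bucket. The bucket holds enough
    credits for 30 minutes at the max burst rate; credits accrue at the
    unused portion of the provisioned rate and are spent when demand
    exceeds it."""
    max_credits = burst_iops * 30 * 60  # 30-minute bucket
    credits = max_credits if start_credits is None else start_credits
    for _ in range(seconds):
        if demand_iops <= provisioned_iops:
            credits = min(max_credits, credits + (provisioned_iops - demand_iops))
        else:
            credits = max(0, credits - (demand_iops - provisioned_iops))
    return credits

# An idle P1 (120 provisioned IOPS) accrues 120 credits per second.
print(simulate_credits(120, 3500, demand_iops=0, seconds=60, start_credits=0))  # 7200
```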
Bursting states
There are three states your resource can be in with bursting enabled:
Accruing – The resource’s IO traffic is using less than the performance target. Accumulating bursting credits for
IOPS and MB/s is done separately for each; your resource can be accruing IOPS credits while spending
MB/s credits, or vice versa.
Bursting – The resource’s traffic is using more than the performance target. The burst traffic will independently
consume credits from IOPS or bandwidth.
Constant – The resource’s traffic is exactly at the performance target.
Examples of bursting
The following examples show how bursting works with various virtual machine and disk combinations. To make the
examples easy to follow, we will focus on MB/s, but the same logic is applied independently to IOPS.
Non-burstable virtual machine with burstable Disks
VM and disk combination:
Standard_D8as_v4
Uncached MB/s: 192
P4 OS Disk
Provisioned MB/s: 25
Max burst MB/s: 170
2 P10 Data Disks
Provisioned MB/s: 25
Max burst MB/s: 170
When the VM boots up, it retrieves data from the OS disk. Since the OS disk is part of a VM that is getting
started, it will be full of bursting credits. These credits allow the OS disk to burst during startup at 170 MB/s,
as seen below:
After the boot up is complete, an application with a non-critical workload is run on the VM. This workload
requires 15 MB/s, spread evenly across all the disks:
Then the application needs to process a batched job that requires 192 MB/s. 2 MB/s are used by the OS Disk and
the rest are evenly split between the data disks:
Burstable virtual machine with non-burstable disks
VM and disk combination:
Standard_L8s_v2
Uncached MB/s: 160
Max burst MB/s: 1,280
P50 OS Disk
Provisioned MB/s: 250
2 P50 Data Disks
Provisioned MB/s: 250
After the initial boot up, an application with a non-critical workload is run on the VM. This workload requires 30
MB/s, spread evenly across all the disks:
Then the application needs to process a batched job that requires 600 MB/s. The Standard_L8s_v2 bursts to meet
this demand, and the requests to the disks are spread evenly across the P50 disks:
Burstable virtual machine with burstable Disks
VM and disk combination:
Standard_L8s_v2
Uncached MB/s: 160
Max burst MB/s: 1,280
P4 OS Disk
Provisioned MB/s: 25
Max burst MB/s: 170
2 P4 Data Disks
Provisioned MB/s: 25
Max burst MB/s: 170
When the VM boots up, it will burst to request its burst limit of 1,280 MB/s from the OS disk and the OS disk will
respond with its burst performance of 170 MB/s:
After the boot up is complete, an application with a non-critical workload is run on the VM. The workload
requires 15 MB/s, spread evenly across all the disks:
Then the application needs to process a batched job that requires 360 MB/s. The Standard_L8s_v2 bursts to meet
this demand. Only 20 MB/s are needed by the OS disk; the remaining 340 MB/s are handled by the bursting P4
data disks:
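The distribution logic these three examples walk through can be sketched as a simple model (the helper below is hypothetical, not an Azure API): demand is capped by the VM-level limit, the OS disk takes what it needs, and the remainder is split evenly across the data disks up to each disk's burst limit.

```python
def distribute(demand_mbps, vm_limit_mbps, os_need_mbps, data_disk_limits):
    """Split an I/O demand across an OS disk and data disks,
    capped by the VM-level limit (simplified model)."""
    total = min(demand_mbps, vm_limit_mbps)          # VM-level cap
    os_share = min(os_need_mbps, total)              # OS disk takes its need
    per_data = (total - os_share) / len(data_disk_limits)
    data_shares = [min(per_data, limit) for limit in data_disk_limits]
    return os_share, data_shares

# Last example: a 360 MB/s batched job on a Standard_L8s_v2 (burst limit
# 1,280 MB/s) with two bursting P4 data disks (170 MB/s burst each).
os_mbps, data_mbps = distribute(360, 1280, 20, [170, 170])
print(os_mbps, data_mbps)  # 20 [170.0, 170.0]
```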
Scalability and performance targets for VM disks on
Windows
11/2/2020 • 5 minutes to read
You can attach a number of data disks to an Azure virtual machine. Based on the scalability and performance
targets for a VM's data disks, you can determine the number and type of disk that you need to meet your
performance and capacity requirements.
IMPORTANT
For optimal performance, limit the number of highly utilized disks attached to the virtual machine to avoid possible throttling.
If all attached disks aren't highly utilized at the same time, the virtual machine can support a larger number of disks.
For Standard storage accounts: A Standard storage account has a maximum total request rate of 20,000 IOPS.
The total IOPS across all of your virtual machine disks in a Standard storage account should not exceed this limit.
You can roughly calculate the number of highly utilized disks supported by a single Standard storage account
based on the request rate limit. For example, for a Basic tier VM, the maximum number of highly utilized disks
is about 66, which is 20,000/300 IOPS per disk. The maximum number of highly utilized disks for a Standard tier
VM is about 40, which is 20,000/500 IOPS per disk.
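These disk-count estimates follow directly from dividing the account limit by the per-disk IOPS; the sketch below just restates the article's arithmetic:

```python
STANDARD_ACCOUNT_IOPS_LIMIT = 20_000  # max total request rate per account

def max_highly_utilized_disks(iops_per_disk):
    """How many fully utilized disks fit under a Standard storage
    account's total request-rate limit."""
    return STANDARD_ACCOUNT_IOPS_LIMIT // iops_per_disk

print(max_highly_utilized_disks(300))  # Basic tier VM disks: 66
print(max_highly_utilized_disks(500))  # Standard tier VM disks: 40
```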
For Premium storage accounts: A Premium storage account has a maximum total throughput rate of 50 Gbps.
The total throughput across all of your VM disks should not exceed this limit.
See Windows VM sizes for additional details.
| Standard disk type | S4 | S6 | S10 | S15 | S20 | S30 | S40 | S50 | S60 | S70 | S80 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Disk size in GiB | 32 | 64 | 128 | 256 | 512 | 1,024 | 2,048 | 4,096 | 8,192 | 16,384 | 32,767 |
| IOPS per disk | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 1,300 | Up to 2,000 | Up to 2,000 |
| Throughput per disk | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 300 MB/sec | Up to 500 MB/sec | Up to 500 MB/sec |
| Standard SSD sizes | E1 | E2 | E3 | E4 | E6 | E10 | E15 | E20 | E30 | E40 | E50 | E60 | E70 | E80 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Disk size in GiB | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1,024 | 2,048 | 4,096 | 8,192 | 16,384 | 32,767 |
| IOPS per disk | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 500 | Up to 2,000 | Up to 4,000 | Up to 6,000 |
| Throughput per disk | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 60 MB/sec | Up to 400 MB/sec | Up to 600 MB/sec | Up to 750 MB/sec |
| Premium SSD sizes | P1 | P2 | P3 | P4 | P6 | P10 | P15 | P20 | P30 | P40 | P50 | P60 | P70 | P80 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Disk size in GiB | 4 | 8 | 16 | 32 | 64 | 128 | 256 | 512 | 1,024 | 2,048 | 4,096 | 8,192 | 16,384 | 32,767 |
| Provisioned IOPS per disk | 120 | 120 | 120 | 120 | 240 | 500 | 1,100 | 2,300 | 5,000 | 7,500 | 7,500 | 16,000 | 18,000 | 20,000 |
| Provisioned throughput per disk | 25 MB/sec | 25 MB/sec | 25 MB/sec | 25 MB/sec | 50 MB/sec | 100 MB/sec | 125 MB/sec | 150 MB/sec | 200 MB/sec | 250 MB/sec | 250 MB/sec | 500 MB/sec | 750 MB/sec | 900 MB/sec |
| Max burst duration | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | 30 min | | | | | | |
| Premium storage disk type | P10 | P20 | P30 | P40 | P50 |
|---|---|---|---|---|---|
| Disk size | 128 GiB | 512 GiB | 1,024 GiB (1 TB) | 2,048 GiB (2 TB) | 4,095 GiB (4 TB) |
| Maximum throughput per disk | 100 MB/sec | 150 MB/sec | 200 MB/sec | 250 MB/sec | 250 MB/sec |
| Maximum number of disks per storage account | 280 | 70 | 35 | 17 | 8 |
See also
Azure subscription and service limits, quotas, and constraints
Backup and disaster recovery for Azure IaaS disks
11/2/2020 • 21 minutes to read
This article explains how to plan for backup and disaster recovery (DR) of IaaS virtual machines (VMs) and disks in
Azure. This document covers both managed and unmanaged disks.
First, we cover the built-in fault tolerance capabilities in the Azure platform that help guard against local failures.
We then discuss the disaster scenarios not fully covered by the built-in capabilities. We also show several examples
of workload scenarios where different backup and DR considerations can apply. We then review possible solutions
for the DR of IaaS disks.
Introduction
The Azure platform uses various methods for redundancy and fault tolerance to help protect customers from
localized hardware failures. Local failures can include problems with an Azure Storage server machine that stores
part of the data for a virtual disk or failures of an SSD or HDD on that server. Such isolated hardware component
failures can happen during normal operations.
The Azure platform is designed to be resilient to these failures. Major disasters can result in failures or the
inaccessibility of many storage servers or even a whole datacenter. Although your VMs and disks are normally
protected from localized failures, additional steps are necessary to protect your workload from region-wide
catastrophic failures, such as a major disaster, that can affect your VM and disks.
In addition to the possibility of platform failures, problems with a customer application or data can occur. For
example, a new version of your application might inadvertently make a change to the data that causes it to break.
In that case, you might want to revert the application and the data to a prior version that contains the last known
good state. This requires maintaining regular backups.
For regional disaster recovery, you must back up your IaaS VM disks to a different region.
Before we look at backup and DR options, let’s recap a few methods available for handling localized failures.
Azure IaaS resiliency
Resiliency refers to the tolerance for normal failures that occur in hardware components. Resiliency is the ability to
recover from failures and continue to function. It's not about avoiding failures, but responding to failures in a way
that avoids downtime or data loss. The goal of resiliency is to return the application to a fully functioning state
following a failure. Azure virtual machines and disks are designed to be resilient to common hardware faults. Let's
look at how the Azure IaaS platform provides this resiliency.
A virtual machine consists mainly of two parts: a compute server and the persistent disks. Both affect the fault
tolerance of a virtual machine.
If the Azure compute host server that houses your VM experiences a hardware failure, which is rare, Azure is
designed to automatically restore the VM on another server. In this scenario, your VM reboots and comes back up
after some time. Azure automatically detects such hardware failures and executes recoveries to help ensure the
customer VM is available as soon as possible.
Regarding IaaS disks, the durability of data is critical for a persistent storage platform. Azure customers have
important business applications running on IaaS, and they depend on the persistence of the data. Azure designs
protection for these IaaS disks, with three redundant copies of the data that is stored locally. These copies provide
for high durability against local failures. If one of the hardware components that holds your disk fails, your VM is
not affected, because there are two additional copies to support disk requests. It works fine, even if two different
hardware components that support a disk fail at the same time (which is rare).
To ensure that you always maintain three replicas, Azure Storage automatically spawns a new copy of the data in
the background if one of the three copies becomes unavailable. Therefore, it should not be necessary to use RAID
with Azure disks for fault tolerance. A simple RAID 0 configuration should be sufficient for striping the disks, if
necessary, to create larger volumes.
Because of this architecture, Azure has consistently delivered enterprise-grade durability for IaaS disks, with an
industry-leading zero percent annualized failure rate.
Localized hardware faults on the compute host or in the Storage platform can sometimes result in temporary
unavailability of the VM, which is covered by the Azure SLA for VM availability. Azure also provides an
industry-leading SLA for single VM instances that use Azure premium SSDs.
To safeguard application workloads from downtime due to the temporary unavailability of a disk or VM, customers
can use availability sets. Two or more virtual machines in an availability set provide redundancy for the application.
Azure then creates these VMs and disks in separate fault domains with different power, network, and server
components.
Because of these separate fault domains, localized hardware failures typically do not affect multiple VMs in the set
at the same time. Having separate fault domains provides high availability for your application. It's considered a
good practice to use availability sets when high availability is required. The next section covers the disaster
recovery aspect.
Backup and disaster recovery
Disaster recovery is the ability to recover from rare, but major, incidents. These incidents include non-transient,
wide-scale failures, such as service disruption that affects an entire region. Disaster recovery includes data backup
and archiving, and might include manual intervention, such as restoring a database from a backup.
The Azure platform’s built-in protection against localized failures might not fully protect the VMs/disks if a major
disaster causes large-scale outages. These large-scale outages include catastrophic events, such as if a datacenter is
hit by a hurricane, earthquake, fire, or if there is a large-scale hardware unit failure. In addition, you might
encounter failures due to application or data issues.
To help protect your IaaS workloads from outages, you should plan for redundancy and have backups to enable
recovery. For disaster recovery, you should back up in a different geographic location away from the primary site.
This approach helps ensure your backup is not affected by the same event that originally affected the VM or disks.
For more information, see Disaster recovery for Azure applications.
Your DR considerations might include the following aspects:
High availability: The ability of the application to continue running in a healthy state, without significant
downtime. A healthy state means that the application is responsive, and users can connect to the
application and interact with it. Certain mission-critical applications and databases might be required to
always be available, even when there are failures in the platform. For these workloads, you might need to
plan redundancy for the application, as well as the data.
Data durability: In some cases, the main consideration is ensuring that the data is preserved if a disaster
happens. Therefore, you might need a backup of your data in a different site. For such workloads, you might
not need full redundancy for the application, but only a regular backup of the disks.
Unmanaged locally redundant storage disks: replicated locally (locally redundant storage); use Azure Backup for
DR.
High availability is best met by using managed disks in an availability set along with Azure Backup. If you use
unmanaged disks, you can still use Azure Backup for DR. If you are unable to use Azure Backup, then taking
consistent snapshots, as described in a later section, is an alternative solution for backup and DR.
Your choices for high availability, backup, and DR at application or infrastructure levels can be represented as
follows:
NOTE
Only having the disks in a geo-redundant storage or read-access geo-redundant storage account does not protect the VM
from disasters. You must also create coordinated snapshots or use Azure Backup. This is required to recover a VM to a
consistent state.
If you use locally redundant storage, you must copy the snapshots to a different storage account immediately after
creating the snapshot. The copy target might be a locally redundant storage account in a different region, resulting
in the copy being in a remote region. You can also copy the snapshot to a read-access geo-redundant storage
account in the same region. In this case, the snapshot is lazily replicated to the remote secondary region. Your
backup is protected from disasters at the primary site after the copying and replication is complete.
To copy your incremental snapshots for DR efficiently, review the instructions in Back up Azure unmanaged VM
disks with incremental snapshots.
Other options
SQL Server
SQL Server running in a VM has its own built-in capabilities to back up your SQL Server database to Azure Blob
storage or a file share. If the storage account is geo-redundant storage or read-access geo-redundant storage, you
can access those backups in the storage account’s secondary datacenter in the event of a disaster, with the same
restrictions as previously discussed. For more information, see Back up and restore for SQL Server in Azure virtual
machines. In addition to backup and restore, SQL Server AlwaysOn availability groups can maintain secondary
replicas of databases. This ability greatly reduces the disaster recovery time.
Other considerations
This article has discussed how to back up or take snapshots of your VMs and their disks to support disaster
recovery and how to use those backups or snapshots to recover your data. With the Azure Resource Manager
model, many people use templates to create their VMs and other infrastructures in Azure. You can use a template
to create a VM that has the same configuration every time. If you use custom images for creating your VMs, you
must also make sure that your images are protected by using a read-access geo-redundant storage account to
store them.
Consequently, your backup process can be a combination of two things:
Back up the data (disks).
Back up the configuration (templates and custom images).
Depending on the backup option you choose, you might have to handle the backup of both the data and the
configuration, or the backup service might handle all of that for you.
Next steps
See Back up Azure unmanaged Virtual Machine disks with incremental snapshots.
Azure shared disks
11/2/2020 • 8 minutes to read
Azure shared disks is a new feature for Azure managed disks that allows you to attach a managed disk to multiple
virtual machines (VMs) simultaneously. Attaching a managed disk to multiple VMs allows you to either deploy new
or migrate existing clustered applications to Azure.
How it works
VMs in the cluster can read or write to their attached disk based on the reservation chosen by the clustered
application using SCSI Persistent Reservations (SCSI PR). SCSI PR is an industry standard leveraged by applications
running on Storage Area Network (SAN) on-premises. Enabling SCSI PR on a managed disk allows you to migrate
these applications to Azure as-is.
Shared managed disks offer shared block storage that can be accessed from multiple VMs. These disks are
exposed as logical unit numbers (LUNs). LUNs are then presented to an initiator (VM) from a target (disk). These
LUNs look like direct-attached storage (DAS) or a local drive to the VM.
Shared managed disks do not natively offer a fully managed file system that can be accessed using SMB/NFS. You
need to use a cluster manager, like Windows Server Failover Cluster (WSFC) or Pacemaker, that handles cluster
node communication and write locking.
Limitations
Enabling shared disks is only available for a subset of disk types. Currently, only ultra disks and premium SSDs
can enable shared disks. Each managed disk that has shared disks enabled is subject to the following limitations,
organized by disk type:
Ultra disks
Ultra disks have their own separate list of limitations, unrelated to shared disks. For ultra disk limitations, refer to
Using Azure ultra disks.
When sharing ultra disks, they have the following additional limitations:
Currently limited to Azure Resource Manager or SDK support.
Only basic disks can be used with some versions of Windows Server Failover Cluster, for details see Failover
clustering hardware requirements and storage options.
Shared ultra disks are available in all regions that support ultra disks by default, and do not require you to sign up
for access to use them.
Premium SSDs
Currently limited to Azure Resource Manager or SDK support.
Can only be enabled on data disks, not OS disks.
ReadOnly host caching is not available for premium SSDs with maxShares>1 .
Disk bursting is not available for premium SSDs with maxShares>1 .
When using Availability sets and virtual machine scale sets with Azure shared disks, storage fault domain
alignment with virtual machine fault domain is not enforced for the shared data disk.
When using proximity placement groups (PPG), all virtual machines sharing a disk must be part of the same
PPG.
Only basic disks can be used with some versions of Windows Server Failover Cluster, for details see Failover
clustering hardware requirements and storage options.
Azure Backup and Azure Site Recovery support is not yet available.
Regional availability
Shared premium SSDs are available in all regions that managed disks are available.
Operating system requirements
Shared disks support several operating systems. See the Windows or Linux sections for the supported operating
systems.
Disk sizes
For now, only ultra disks and premium SSDs can enable shared disks. Different disk sizes may have a different
maxShares limit, which you cannot exceed when setting the maxShares value. For premium SSDs, only disk sizes
P15 and greater support sharing.
For each disk, you can define a maxShares value that represents the maximum number of nodes that can
simultaneously share the disk. For example, if you plan to set up a 2-node failover cluster, you would set
maxShares=2 . The maximum value is an upper bound. Nodes can join or leave the cluster (mount or unmount the
disk) as long as the number of nodes is lower than the specified maxShares value.
NOTE
The maxShares value can only be set or edited when the disk is detached from all nodes.
For example, P15 and P20 disks have a maxShares limit of 2.
The IOPS and bandwidth limits for a disk are not affected by the maxShares value. For example, the max IOPS of a
P15 disk is 1100 whether maxShares = 1 or maxShares > 1.
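A minimal sketch of the maxShares semantics (a hypothetical model, not part of any Azure SDK): mounts succeed only while the node count stays below the maxShares value.

```python
class SharedDisk:
    """Toy model of an Azure shared disk's maxShares behavior."""
    def __init__(self, max_shares):
        self.max_shares = max_shares
        self.mounted_nodes = set()

    def mount(self, node):
        # A new node can attach only if a share slot is free.
        if len(self.mounted_nodes) >= self.max_shares:
            raise RuntimeError("maxShares limit reached")
        self.mounted_nodes.add(node)

    def unmount(self, node):
        self.mounted_nodes.discard(node)

# A 2-node failover cluster on a disk provisioned with maxShares=2.
disk = SharedDisk(max_shares=2)
disk.mount("node1")
disk.mount("node2")
try:
    disk.mount("node3")  # a third mount is rejected
except RuntimeError as e:
    print(e)
```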
Ultra disk ranges
The minimum maxShares value is 1, while the maximum maxShares value is 5. There are no size restrictions on
ultra disks, any size ultra disk can use any value for maxShares , up to and including the maximum value.
Sample workloads
Windows
Azure shared disks are supported on Windows Server 2008 and newer. Most Windows-based clustering builds on
WSFC, which handles all core infrastructure for cluster node communication, allowing your applications to take
advantage of parallel access patterns. WSFC enables both CSV and non-CSV-based options depending on your
version of Windows Server. For details, refer to Create a failover cluster.
Some popular applications running on WSFC include:
Create an FCI with Azure shared disks (SQL Server on Azure VMs)
Scale-out File Server (SoFS) [template](https://aka.ms/azure-shared-disk-sofs-template)
SAP ASCS/SCS [template](https://aka.ms/azure-shared-disk-sapacs-template)
File Server for General Use (IW workload)
Remote Desktop Server User Profile Disk (RDS UPD)
Linux
Azure shared disks are supported on:
SUSE SLE for SAP and SUSE SLE HA 15 SP1 and above
Ubuntu 18.04 and above
RHEL developer preview on any RHEL 8 version
Oracle Enterprise Linux
Linux clusters can leverage cluster managers such as Pacemaker. Pacemaker builds on Corosync, enabling cluster
communications for applications deployed in highly available environments. Some common clustered filesystems
include ocfs2 and gfs2. You can use SCSI Persistent Reservation (SCSI PR) and/or STONITH Block Device (SBD)
based clustering models for arbitrating access to the disk. When using SCSI PR, you can manipulate reservations
and registrations using utilities such as fence_scsi and sg_persist.
The performance throttles of a shared disk are defined by the following attributes:

| Attribute | Description |
|---|---|
| DiskIOPSReadWrite | The total number of IOPS allowed across all VMs mounting the shared disk with write access. |
| DiskMBpsReadWrite | The total throughput (MB/s) allowed across all VMs mounting the shared disk with write access. |
| DiskIOPSReadOnly* | The total number of IOPS allowed across all VMs mounting the shared disk as ReadOnly . |
| DiskMBpsReadOnly* | The total throughput (MB/s) allowed across all VMs mounting the shared disk as ReadOnly . |
The following is an example of a 2-node WSFC using clustered shared volumes. With this configuration, both VMs
have simultaneous write-access to the disk, which results in the ReadWrite throttle being split across the two VMs
and the ReadOnly throttle not being used.
Two node cluster without clustered shared volumes
The following is an example of a 2-node WSFC that isn't using clustered shared volumes. With this configuration,
only one VM has write-access to the disk. This results in the ReadWrite throttle being used exclusively for the
primary VM and the ReadOnly throttle only being used by the secondary.
Four node Linux cluster
The following is an example of a 4-node Linux cluster with a single writer and three scale-out readers. With this
configuration, only one VM has write-access to the disk. This results in the ReadWrite throttle being used
exclusively for the primary VM and the ReadOnly throttle being split by the secondary VMs.
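The throttle splitting in these examples can be sketched with a simple model. The even split and the numeric limits below are illustrative assumptions, not documented behavior for every configuration:

```python
def per_vm_budgets(readwrite_limit, readonly_limit, writers, readers):
    """Simplified split of a shared disk's throttles between writer
    and reader VMs (assumes an even split within each group)."""
    rw = readwrite_limit / writers if writers else 0.0
    ro = readonly_limit / readers if readers else 0.0
    return rw, ro

# 4-node Linux cluster: one writer and three scale-out readers on a disk
# with hypothetical limits of 10,000 ReadWrite and 3,000 ReadOnly IOPS.
print(per_vm_budgets(10_000, 3_000, writers=1, readers=3))  # (10000.0, 1000.0)
```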
Ultra pricing
Ultra shared disks are priced based on provisioned capacity, total provisioned IOPS (diskIOPSReadWrite +
diskIOPSReadOnly) and total provisioned Throughput MBps (diskMBpsReadWrite + diskMBpsReadOnly). There is
no extra charge for each additional VM mount. For example, an ultra shared disk with the following configuration
(diskSizeGB: 1024, DiskIOPSReadWrite: 10000, DiskMBpsReadWrite: 600, DiskIOPSReadOnly: 100,
DiskMBpsReadOnly: 1) is charged with 1024 GiB, 10100 IOPS, and 601 MBps regardless of whether it is mounted
to two VMs or five VMs.
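The billed quantities in that example follow from summing the ReadWrite and ReadOnly provisioning; a quick check:

```python
def ultra_shared_disk_billing(size_gib, iops_rw, mbps_rw, iops_ro, mbps_ro):
    """Billable capacity, total IOPS, and total throughput for an ultra
    shared disk; the number of VM mounts does not change the bill."""
    return size_gib, iops_rw + iops_ro, mbps_rw + mbps_ro

# Configuration from the example above.
print(ultra_shared_disk_billing(1024, 10_000, 600, 100, 1))  # (1024, 10100, 601)
```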
Next steps
If you're interested in enabling and using shared disks for your managed disks, proceed to our article Enable shared
disk.
Ephemeral OS disks for Azure VMs
11/2/2020 • 7 minutes to read
Ephemeral OS disks are created on the local virtual machine (VM) storage and not saved to the remote Azure
Storage. Ephemeral OS disks work well for stateless workloads, where applications are tolerant of individual VM
failures, but are more affected by VM deployment time or reimaging the individual VM instances. With Ephemeral
OS disk, you get lower read/write latency to the OS disk and faster VM reimage.
The key features of ephemeral disks are:
Ideal for stateless applications.
They can be used with both Marketplace and custom images.
Ability to fast reset or reimage VMs and scale set instances to the original boot state.
Lower latency, similar to a temporary disk.
Ephemeral OS disks are free: you incur no storage cost for the OS disk.
They are available in all Azure regions.
Ephemeral OS Disk is supported by Shared Image Gallery.
Key differences between persistent and ephemeral OS disks:
| | Persistent OS disk | Ephemeral OS disk |
|---|---|---|
| Size limit for OS disk | 2 TiB | Cache size for the VM size or 2 TiB, whichever is smaller. For the cache size in GiB, see DS, ES, M, FS, and GS |
| Disk type support | Managed and unmanaged OS disk | Managed OS disk only |
| Data persistence | Data written to the OS disk is stored in Azure Storage | Data written to the OS disk is stored on the local VM storage and is not persisted to Azure Storage |
| Stop-deallocated state | VMs and scale set instances can be stop-deallocated and restarted from the stop-deallocated state | VMs and scale set instances cannot be stop-deallocated |
| OS disk resize | Supported during VM creation and after the VM is stop-deallocated | Supported during VM creation only |
| Resizing to a new VM size | OS disk data is preserved | Data on the OS disk is deleted, and the OS is re-provisioned |
| Page file placement | For Windows, the page file is stored on the resource disk | For Windows, the page file is stored on the OS disk |
Size requirements
You can deploy VM and instance images up to the size of the VM cache. For example, standard Windows Server
images from the Marketplace are about 127 GiB, which means you need a VM size with a cache larger than
127 GiB. The Standard_DS2_v2 has a cache size of 86 GiB, which is not large enough, while the Standard_DS3_v2
has a cache size of 172 GiB, which is. The Standard_DS3_v2 is therefore the smallest size in the DSv2 series that
you can use with this image. Basic Linux images in the Marketplace and Windows Server images that are denoted
by [smallsize] tend to be around 30 GiB and can use most of the available VM sizes.
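The cache-size check described above can be sketched as follows (the cache sizes are the ones quoted in this article; the helper function is hypothetical):

```python
# Cache sizes in GiB for two DSv2-series sizes, as quoted in this article.
VM_CACHE_GIB = {"Standard_DS2_v2": 86, "Standard_DS3_v2": 172}

def supports_ephemeral_image(vm_size, image_gib):
    """An ephemeral OS disk must fit within the VM size's cache."""
    return VM_CACHE_GIB[vm_size] >= image_gib

print(supports_ephemeral_image("Standard_DS2_v2", 127))  # False
print(supports_ephemeral_image("Standard_DS3_v2", 127))  # True
```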
Ephemeral disks also require that the VM size supports Premium storage. The sizes usually (but not always) have
an s in the name, like DSv2 and EsV3. See Azure VM sizes for details about which sizes support Premium
storage.
PowerShell
To use an ephemeral disk for a PowerShell VM deployment, use Set-AzVMOSDisk in your VM configuration. Set the
-DiffDiskSetting to Local and -Caching to ReadOnly .
For scale set deployments, use the Set-AzVmssStorageProfile cmdlet in your configuration. Set the
-DiffDiskSetting to Local and -Caching to ReadOnly .
CLI
To use an ephemeral disk for a CLI VM deployment, set the --ephemeral-os-disk parameter in az vm create to
true and the --os-disk-caching parameter to ReadOnly .
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image UbuntuLTS \
--ephemeral-os-disk true \
--os-disk-caching ReadOnly \
--admin-username azureuser \
--generate-ssh-keys
For scale sets, use the same --ephemeral-os-disk true parameter with az vmss create and set the
--os-disk-caching parameter to ReadOnly .
Portal
In the Azure portal, you can choose to use ephemeral disks when deploying a VM by opening the Advanced
section of the Disks tab. For Use ephemeral OS disk select Yes .
If the option for using an ephemeral disk is greyed out, you might have selected a VM size that does not have a
cache size larger than the OS image or that doesn't support Premium storage. Go back to the Basics page and try
choosing another VM size.
You can also create scale sets with ephemeral OS disks using the portal. Just make sure you select a VM size with
a large enough cache size and then, in Use ephemeral OS disk , select Yes .
Scale set template deployment
The process to create a scale set that uses an ephemeral OS disk is to add the diffDiskSettings property to the
Microsoft.Compute/virtualMachineScaleSets/virtualMachineProfile resource type in the template. Also, the caching
policy must be set to ReadOnly for the ephemeral OS disk.
{
"type": "Microsoft.Compute/virtualMachineScaleSets",
"name": "myScaleSet",
"location": "East US 2",
"apiVersion": "2018-06-01",
"sku": {
"name": "Standard_DS2_v2",
"capacity": "2"
},
"properties": {
"upgradePolicy": {
"mode": "Automatic"
},
"virtualMachineProfile": {
"storageProfile": {
"osDisk": {
"diffDiskSettings": {
"option": "Local"
},
"caching": "ReadOnly",
"createOption": "FromImage"
},
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "16.04-LTS",
"version": "latest"
}
},
"osProfile": {
"computerNamePrefix": "myvmss",
"adminUsername": "azureuser",
"adminPassword": "P@ssw0rd!"
}
}
}
}
VM template deployment
You can deploy a VM with an ephemeral OS disk using a template. The process to create a VM that uses ephemeral
OS disks is to add the diffDiskSettings property to the Microsoft.Compute/virtualMachines resource type in the
template. Also, the caching policy must be set to ReadOnly for the ephemeral OS disk.
{
"type": "Microsoft.Compute/virtualMachines",
"name": "myVirtualMachine",
"location": "East US 2",
"apiVersion": "2018-06-01",
"properties": {
"storageProfile": {
"osDisk": {
"diffDiskSettings": {
"option": "Local"
},
"caching": "ReadOnly",
"createOption": "FromImage"
},
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2016-Datacenter-smalldisk",
"version": "latest"
},
"hardwareProfile": {
"vmSize": "Standard_DS2_v2"
}
},
"osProfile": {
"computerNamePrefix": "myvirtualmachine",
"adminUsername": "azureuser",
"adminPassword": "P@ssw0rd!"
}
}
}
You can reimage a VM that uses an ephemeral OS disk through the REST API:
POST https://management.azure.com/subscriptions/{sub-id}/resourceGroups/{rgName}/providers/Microsoft.Compute/virtualMachines/{vmName}/reimage?api-version=2018-06-01
You can use PowerShell to list the VM sizes that support ephemeral OS disks by checking the
EphemeralOSDiskSupported capability (this assumes $vmSizes holds the output of a VM size capability query):
foreach($vmSize in $vmSizes)
{
foreach($capability in $vmSize.capabilities)
{
if($capability.Name -eq 'EphemeralOSDiskSupported' -and $capability.Value -eq 'true')
{
$vmSize
}
}
}
Q: Can the ephemeral OS disk be applied to existing VMs and scale sets?
A: No, ephemeral OS disk can only be used during VM and scale set creation.
Q: Can you mix ephemeral and normal OS disks in a scale set?
A: No, you can't have a mix of ephemeral and persistent OS disk instances within the same scale set.
Q: Can the ephemeral OS disk be created using PowerShell or CLI?
A: Yes, you can create VMs with an ephemeral OS disk using REST, templates, PowerShell, and CLI.
Q: What features are not supported with ephemeral OS disk?
A: Ephemeral disks do not support:
Capturing VM images
Disk snapshots
Azure Disk Encryption
Azure Backup
Azure Site Recovery
OS Disk Swap
Next steps
You can create a VM with an ephemeral OS disk using the Azure CLI.
Virtual networks and virtual machines in Azure
11/2/2020 • 14 minutes to read
When you create an Azure virtual machine (VM), you must create a virtual network (VNet) or use an existing VNet.
You also need to decide how your VMs are intended to be accessed on the VNet. It is important to plan before
creating resources and make sure that you understand the limits of networking resources.
In the following figure, VMs are represented as web servers and database servers. Each set of VMs is assigned to
a separate subnet in the VNet.
You can create a VNet before you create a VM, or you can create the VNet as you create a VM. You create these
resources to support communication with a VM:
Network interfaces
IP addresses
Virtual network and subnets
In addition to those basic resources, you should also consider these optional resources:
Network security groups
Load balancers
Network interfaces
A network interface (NIC) is the interconnection between a VM and a virtual network (VNet). A VM must have at
least one NIC, but it can have more than one, depending on the size of the VM you create. To learn how many
NICs each VM size supports, see VM sizes.
You can create a VM with multiple NICs, and add or remove NICs through the lifecycle of a VM. Multiple NICs allow
a VM to connect to different subnets and send or receive traffic over the most appropriate interface. VMs with any
number of network interfaces can exist in the same availability set, up to the number supported by the VM size.
Each NIC attached to a VM must exist in the same location and subscription as the VM. Each NIC must be
connected to a VNet that exists in the same Azure location and subscription as the NIC. You can change the subnet
a VM is connected to after it's created, but you cannot change the VNet. Each NIC attached to a VM is assigned a
MAC address that doesn't change until the VM is deleted.
This table lists the methods that you can use to create a network interface.
| Method | Description |
|---|---|
| Azure CLI | To provide the identifier of the public IP address that you previously created, use az network nic create with the --public-ip-address parameter. |
IP addresses
You can assign these types of IP addresses to a NIC in Azure:
Public IP addresses - Used to communicate inbound and outbound (without network address translation
(NAT)) with the Internet and other Azure resources not connected to a VNet. Assigning a public IP address to a
NIC is optional. Public IP addresses have a nominal charge, and there's a maximum number that can be used per
subscription.
Private IP addresses - Used for communication within a VNet, your on-premises network, and the Internet
(with NAT). You must assign at least one private IP address to a VM. To learn more about NAT in Azure, read
Understanding outbound connections in Azure.
You can assign public IP addresses to VMs or internet-facing load balancers. You can assign private IP addresses to
VMs and internal load balancers. You assign IP addresses to a VM using a network interface.
There are two methods by which an IP address is allocated to a resource: dynamic or static. The default allocation method is dynamic, where an IP address is not allocated when the resource is created. Instead, the IP address is allocated when you create a VM or start a stopped VM, and it is released when you stop or delete the VM.
To ensure the IP address for the VM remains the same, you can set the allocation method explicitly to static. In this
case, an IP address is assigned immediately. It is released only when you delete the VM or change its allocation
method to dynamic.
This table lists the methods that you can use to create an IP address.
METHOD | DESCRIPTION
Azure portal | By default, public IP addresses are dynamic, and the address associated with them may change when the VM is stopped or deleted. To guarantee that the VM always uses the same public IP address, create a static public IP address. By default, the portal assigns a dynamic private IP address to a NIC when creating a VM. You can change this IP address to static after the VM is created.
Azure CLI | Use az network public-ip create with the --allocation-method parameter set to Dynamic or Static.
After you create a public IP address, you can associate it with a VM by assigning it to a NIC.
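As a sketch of that association with the Azure CLI (resource names are placeholders; ipconfig1 is the default IP configuration name the CLI creates on a NIC):

```shell
# Create a static public IP address, then associate it with an
# existing NIC's IP configuration.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --allocation-method Static

az network nic ip-config update \
  --resource-group myResourceGroup \
  --nic-name myNic \
  --name ipconfig1 \
  --public-ip-address myPublicIP
```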
Virtual network and subnets
This table lists the methods that you can use to create a virtual network and subnet.
METHOD | DESCRIPTION
Azure portal | If you let Azure create a VNet when you create a VM, the name is a combination of the resource group name that contains the VNet and -vnet. The address space is 10.0.0.0/24, the required subnet name is default, and the subnet address range is 10.0.0.0/24.
Azure CLI | The subnet and the VNet are created at the same time. Provide a --subnet-name parameter to az network vnet create with the subnet name.
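For example, a minimal sketch of creating a VNet and subnet in one step with the Azure CLI (names and address ranges are placeholders):

```shell
# Create a VNet with a 10.0.0.0/16 address space and a default
# subnet of 10.0.0.0/24. Resource names are placeholders.
az network vnet create \
  --resource-group myResourceGroup \
  --name myVNet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name default \
  --subnet-prefix 10.0.0.0/24
```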
Network security groups
This table lists the methods that you can use to create a network security group (NSG).
METHOD | DESCRIPTION
Azure CLI | Use az network nsg create to initially create the NSG. Use az network nsg rule create to add rules to the NSG. Use az network vnet subnet update to add the NSG to the subnet.
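The three CLI steps from the table can be sketched together as follows; the NSG, rule, and subnet names are hypothetical, and the rule shown (allowing inbound SSH on port 22) is only an illustration:

```shell
# Create an NSG, add an inbound allow rule for SSH, and attach
# the NSG to an existing subnet. Names are placeholders.
az network nsg create \
  --resource-group myResourceGroup \
  --name myNsg

az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowSSH \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 22

az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name default \
  --network-security-group myNsg
```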
Load balancers
Azure Load Balancer delivers high availability and network performance to your applications. A load balancer can
be configured to balance incoming Internet traffic to VMs or balance traffic between VMs in a VNet. A load balancer
can also balance traffic between on-premises computers and VMs in a cross-premises network, or forward external
traffic to a specific VM.
The load balancer maps incoming and outgoing traffic between the public IP address and port on the load balancer
and the private IP address and port of the VM.
When you create a load balancer, you must also consider these configuration elements:
Front-end IP configuration – A load balancer can include one or more front-end IP addresses. These IP addresses serve as the ingress point for traffic.
Back-end address pool – The IP addresses, associated with VM NICs, to which load is distributed.
Port forwarding – Defines how inbound traffic flowing through the front-end IP is distributed to the back-end IPs by using inbound NAT rules.
Load balancer rules – Map a given front-end IP and port combination to a set of back-end IP address and port combinations. A single load balancer can have multiple load-balancing rules. Each rule is a combination of a front-end IP and port and a back-end IP and port associated with VMs.
Probes – Monitor the health of VMs. When a probe fails to respond, the load balancer stops sending new connections to the unhealthy VM. Existing connections are not affected, and new connections are sent to healthy VMs.
Outbound rules – An outbound rule configures outbound network address translation (NAT) for all VMs or instances identified by the back-end pool of your Standard Load Balancer to be translated to the front end.
This table lists the methods that you can use to create an internet-facing load balancer.
METHOD | DESCRIPTION
Azure portal | You can load balance internet traffic to VMs using the Azure portal.
Azure PowerShell | Use New-AzLoadBalancerFrontendIpConfig with the -PublicIpAddress parameter to provide the identifier of the public IP address that you previously created. Use New-AzLoadBalancerBackendAddressPoolConfig to create the configuration of the back-end address pool. Use New-AzLoadBalancerInboundNatRuleConfig to create inbound NAT rules associated with the front-end IP configuration that you created. Use New-AzLoadBalancerProbeConfig to create the probes that you need. Use New-AzLoadBalancerRuleConfig to create the load balancer configuration. Use New-AzLoadBalancer to create the load balancer.
Azure CLI | Use az network lb create to create the initial load balancer configuration. Use az network lb frontend-ip create to add the public IP address that you previously created. Use az network lb address-pool create to add the configuration of the back-end address pool. Use az network lb inbound-nat-rule create to add NAT rules. Use az network lb rule create to add the load balancer rules. Use az network lb probe create to add the probes.
Template | Use 2 VMs in a Load Balancer and configure NAT rules on the LB as a guide for deploying a load balancer using a template.
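A condensed sketch of the CLI sequence for an internet-facing load balancer follows. All names are placeholders, and the front end, pool, probe, and rule are created for plain HTTP on port 80 purely as an illustration:

```shell
# Create a load balancer with a public front end and back-end pool,
# then add a TCP health probe and a load-balancing rule for port 80.
az network lb create \
  --resource-group myResourceGroup \
  --name myLoadBalancer \
  --public-ip-address myPublicIP \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool

az network lb probe create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myHealthProbe \
  --protocol Tcp \
  --port 80

az network lb rule create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myHTTPRule \
  --protocol Tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool \
  --probe-name myHealthProbe
```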
This table lists the methods that you can use to create an internal load balancer.
METHOD | DESCRIPTION
Azure portal | You can balance internal traffic load with a load balancer in the Azure portal.
Azure CLI | Use the az network lb create command to create the initial load balancer configuration. To define the private IP address, use az network lb frontend-ip create with the --private-ip-address parameter. Use az network lb address-pool create to add the configuration of the back-end address pool. Use az network lb inbound-nat-rule create to add NAT rules. Use az network lb rule create to add the load balancer rules. Use az network lb probe create to add the probes.
Template | Use 2 VMs in a Load Balancer and configure NAT rules on the LB as a guide for deploying a load balancer using a template.
VMs
VMs in the same VNet can connect to each other using private IP addresses, even if they are in different subnets, without the need to configure a gateway or use public IP addresses. To put VMs into a VNet, you create the VNet, and then as you create each VM, you assign it to the VNet and a subnet. VMs acquire their network settings during deployment or startup.
VMs are assigned an IP address when they are deployed. If you deploy multiple VMs into a VNet or subnet, they
are assigned IP addresses as they boot up. You can also allocate a static IP to a VM. If you allocate a static IP, you
should consider using a specific subnet to avoid accidentally reusing a static IP for another VM.
If you create a VM and later want to migrate it into a VNet, it is not a simple configuration change. You must
redeploy the VM into the VNet. The easiest way to redeploy is to delete the VM, but not any disks attached to it, and
then re-create the VM using the original disks in the VNet.
This table lists the methods that you can use to create a VM in a VNet.
METHOD | DESCRIPTION
Azure portal | Uses the default network settings that were previously mentioned to create a VM with a single NIC. To create a VM with multiple NICs, you must use a different method.
Azure CLI | Create and connect a VM to a VNet, subnet, and NIC that you build as individual steps.
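A minimal sketch of creating a VM in an existing VNet and subnet with the CLI; the CLI creates the NIC for you. The resource names and image are placeholder assumptions:

```shell
# Create a Linux VM attached to an existing VNet and subnet.
# Names and the image alias are placeholders.
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image UbuntuLTS \
  --vnet-name myVNet \
  --subnet default \
  --generate-ssh-keys
```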
Next steps
For VM-specific steps on how to manage Azure virtual networks for VMs, see the Windows or Linux tutorials.
There are also tutorials on how to load balance VMs and create highly available applications for Windows or Linux.
Learn how to configure user-defined routes and IP forwarding.
Learn how to configure VNet to VNet connections.
Learn how to Troubleshoot routes.
Learn more about Virtual machine network bandwidth.
What are virtual machine scale sets?
11/2/2020 • 4 minutes to read
Azure virtual machine scale sets let you create and manage a group of load balanced VMs. The number of VM
instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide
high availability to your applications, and allow you to centrally manage, configure, and update a large number of
VMs. With virtual machine scale sets, you can build large-scale services for areas such as compute, big data, and
container workloads.
This table outlines the benefits of scale sets compared with manually managing multiple VM instances.
SCENARIO | MANUAL GROUP OF VMS | VIRTUAL MACHINE SCALE SET
Add additional VM instances | Manual process to create, configure, and ensure compliance | Automatically create from central configuration
Traffic balancing and distribution | Manual process to create and configure Azure load balancer or Application Gateway | Can automatically create and integrate with Azure load balancer or Application Gateway
High availability and redundancy | Manually create Availability Set or distribute and track VMs across Availability Zones | Automatic distribution of VM instances across Availability Zones or Availability Sets
Scaling of VMs | Manual monitoring and Azure Automation | Autoscale based on host metrics, in-guest metrics, Application Insights, or schedule
There is no additional cost for scale sets themselves. You only pay for the underlying compute resources, such as the VM instances, load balancer, or Managed Disk storage. The management and automation features, such as autoscale and redundancy, incur no additional charges over the use of VMs.
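The description above can be sketched with the CLI: create a scale set, then attach an autoscale profile. Names, the image alias, and the 2-to-5 instance range are placeholder assumptions:

```shell
# Create a scale set with two instances, then add an autoscale
# setting that keeps the instance count between 2 and 5.
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --image UbuntuLTS \
  --instance-count 2 \
  --generate-ssh-keys

az monitor autoscale create \
  --resource-group myResourceGroup \
  --resource myScaleSet \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name myAutoscaleSetting \
  --min-count 2 \
  --max-count 5 \
  --count 2
```

Scale-out and scale-in rules based on host metrics can then be added to the autoscale setting.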
Next steps
To get started, create your first virtual machine scale set in the Azure portal.
Create a scale set in the Azure portal
Use infrastructure automation tools with virtual machines in Azure
11/2/2020 • 7 minutes to read
To create and manage Azure virtual machines (VMs) in a consistent manner at scale, some form of automation is
typically desired. There are many tools and solutions that allow you to automate the complete Azure infrastructure
deployment and management lifecycle. This article introduces some of the infrastructure automation tools that you
can use in Azure. These tools commonly fit into one of the following approaches:
Automate the configuration of VMs
Tools include Ansible, Chef, Puppet, and Azure Resource Manager templates.
Tools specific to VM customization include cloud-init for Linux VMs, PowerShell Desired State
Configuration (DSC), and the Azure Custom Script Extension for all Azure VMs.
Automate infrastructure management
Tools include Packer to automate custom VM image builds, and Terraform to automate the infrastructure
build process.
Azure Automation can perform actions across your Azure and on-premises infrastructure.
Automate application deployment and delivery
Examples include Azure DevOps Services and Jenkins.
Ansible
Ansible is an automation engine for configuration management, VM creation, or application deployment. Ansible
uses an agent-less model, typically with SSH keys, to authenticate and manage target machines. Configuration tasks
are defined in playbooks, with a number of Ansible modules available to carry out specific tasks. For more
information, see How Ansible works.
Learn how to:
Install and configure Ansible on Linux for use with Azure.
Create a Linux virtual machine.
Manage a Linux virtual machine.
Chef
Chef is an automation platform that helps define how your infrastructure is configured, deployed, and managed.
Additional components include Chef Habitat, for application lifecycle automation rather than the infrastructure, and Chef InSpec, which helps automate compliance with security and policy requirements. Chef Clients are installed
on target machines, with one or more central Chef Servers that store and manage the configurations. For more
information, see An Overview of Chef.
Learn how to:
Deploy Chef Automate from the Azure Marketplace.
Install Chef on Windows and create Azure VMs.
Puppet
Puppet is an enterprise-ready automation platform that handles the application delivery and deployment process.
Agents are installed on target machines to allow Puppet Master to run manifests that define the desired
configuration of the Azure infrastructure and VMs. Puppet can integrate with other solutions such as Jenkins and
GitHub for an improved devops workflow. For more information, see How Puppet works.
Learn how to:
Deploy Puppet from the Azure Marketplace.
Cloud-init
Cloud-init is a widely used approach to customize a Linux VM as it boots for the first time. You can use cloud-init to
install packages and write files, or to configure users and security. Because cloud-init is called during the initial boot
process, there are no additional steps or required agents to apply your configuration. For more information on how
to properly format your #cloud-config files, see the cloud-init documentation site. #cloud-config files are plain-text files; when one is passed to Azure as custom data, it is base64-encoded.
Cloud-init also works across distributions. For example, you don't use apt-get install or yum install to install a
package. Instead you can define a list of packages to install. Cloud-init automatically uses the native package
management tool for the distro you select.
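For example, a minimal #cloud-config file that upgrades packages, installs a package through the distro's native package manager, and writes a file might look like the following sketch (the package name and file content are illustrative):

```yaml
#cloud-config
package_upgrade: true
packages:
  - nginx
write_files:
  - path: /etc/motd
    content: |
      This VM was configured by cloud-init.
runcmd:
  - systemctl restart nginx
```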
We are actively working with our endorsed Linux distro partners in order to have cloud-init enabled images
available in the Azure Marketplace. These images make your cloud-init deployments and configurations work
seamlessly with VMs and virtual machine scale sets. Learn more details about cloud-init on Azure:
Cloud-init support for Linux virtual machines in Azure
Try a tutorial on automated VM configuration using cloud-init.
PowerShell DSC
PowerShell Desired State Configuration (DSC) is a management platform to define the configuration of target
machines. DSC can also be used on Linux through the Open Management Infrastructure (OMI) server.
DSC configurations define what to install on a machine and how to configure the host. A Local Configuration
Manager (LCM) engine runs on each target node that processes requested actions based on pushed configurations.
A pull server is a web service that runs on a central host to store the DSC configurations and associated resources.
The pull server communicates with the LCM engine on each target host to provide the required configurations and
report on compliance.
Learn how to:
Create a basic DSC configuration.
Configure a DSC pull server.
Use DSC for Linux.
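As an illustration of the "what to install and how to configure" idea, a basic DSC configuration that ensures IIS is present might look like this sketch (the configuration name and node are hypothetical):

```powershell
Configuration WebServer {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Ensure the Web-Server (IIS) Windows feature is installed.
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

# Compile the configuration into a .mof document for the LCM to apply.
WebServer
```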
Packer
Packer automates the build process when you create a custom VM image in Azure. You use Packer to define the OS
and run post-configuration scripts that customize the VM for your specific needs. Once configured, the VM is then
captured as a Managed Disk image. Packer automates the process to create the source VM, network and storage
resources, run configuration scripts, and then create the VM image.
Learn how to:
Use Packer to create a Linux VM image in Azure.
Use Packer to create a Windows VM image in Azure.
Terraform
Terraform is an automation tool that allows you to define and create an entire Azure infrastructure with a single
template format language - the HashiCorp Configuration Language (HCL). With Terraform, you define templates
that automate the process to create network, storage, and VM resources for a given application solution. You can
use your existing Terraform templates for other platforms with Azure to ensure consistency and simplify the
infrastructure deployment without needing to convert to an Azure Resource Manager template.
Learn how to:
Install and configure Terraform with Azure.
Create an Azure infrastructure with Terraform.
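For example, a small HCL sketch that defines a resource group and a virtual network; the names, location, and provider block are illustrative assumptions:

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "myResourceGroup"
  location = "eastus"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "myVNet"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}
```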
Azure Automation
Azure Automation uses runbooks to process a set of tasks on the VMs you target. Azure Automation is used to
manage existing VMs rather than to create an infrastructure. Azure Automation can run across both Linux and
Windows VMs, as well as on-premises virtual or physical machines with a hybrid runbook worker. Runbooks can be
stored in a source control repository, such as GitHub. These runbooks can then run manually or on a defined
schedule.
Azure Automation also provides a Desired State Configuration (DSC) service that allows you to create definitions
for how a given set of VMs should be configured. DSC then ensures that the required configuration is applied and
the VM stays consistent. Azure Automation DSC runs on both Windows and Linux machines.
Learn how to:
Create a PowerShell runbook.
Use Hybrid Runbook Worker to manage on-premises resources.
Use Azure Automation DSC.
Next steps
There are many different options to use infrastructure automation tools in Azure. You have the freedom to use the
solution that best fits your needs and environment. To get started and try some of the tools built-in to Azure, see
how to automate the customization of a Linux or Windows VM.
Secure and use policies on virtual machines in Azure
11/2/2020 • 5 minutes to read
It's important to keep your virtual machine (VM) secure for the applications that you run. Securing your VMs can
include one or more Azure services and features that cover secure access to your VMs and secure storage of your
data. This article provides information that enables you to keep your VM and applications secure.
Antimalware
The modern threat landscape for cloud environments is dynamic, increasing the pressure to maintain effective
protection in order to meet compliance and security requirements. Microsoft Antimalware for Azure is a free real-
time protection capability that helps identify and remove viruses, spyware, and other malicious software. Alerts can
be configured to notify you when known malicious or unwanted software attempts to install itself or run on your
VM. It is not supported on VMs running Linux or Windows Server 2008.
Encryption
Two encryption methods are offered for managed disks: encryption at the OS level, which is Azure Disk Encryption, and encryption at the platform level, which is server-side encryption.
Server-side encryption
Azure managed disks automatically encrypt your data by default when persisting it to the cloud. Server-side
encryption protects your data and helps you meet your organizational security and compliance commitments. Data
in Azure managed disks is encrypted transparently using 256-bit AES encryption, one of the strongest block
ciphers available, and is FIPS 140-2 compliant.
Encryption does not impact the performance of managed disks. There is no additional cost for the encryption.
You can rely on platform-managed keys for the encryption of your managed disk, or you can manage encryption
using your own keys. If you choose to manage encryption with your own keys, you can specify a customer-
managed key to use for encrypting and decrypting all data in managed disks.
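A sketch of the customer-managed key flow with the Azure CLI follows. It assumes a key vault and key already exist; all names, the key URL, and the disk size are placeholders, and in practice you must also grant the disk encryption set's identity access to the key vault:

```shell
# Create a disk encryption set that wraps a customer-managed key in
# Key Vault, then create a managed disk encrypted with that key.
az disk-encryption-set create \
  --resource-group myResourceGroup \
  --name myDiskEncryptionSet \
  --key-url "https://mykeyvault.vault.azure.net/keys/mykey/<version>" \
  --source-vault myKeyVault

az disk create \
  --resource-group myResourceGroup \
  --name myDisk \
  --size-gb 128 \
  --encryption-type EncryptionAtRestWithCustomerKey \
  --disk-encryption-set myDiskEncryptionSet
```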
To learn more about server-side encryption, refer to either the articles for Windows or Linux.
Azure Disk Encryption
For enhanced Windows VM and Linux VM security and compliance, virtual disks in Azure can be encrypted. Virtual
disks on Windows VMs are encrypted at rest using BitLocker. Virtual disks on Linux VMs are encrypted at rest using
dm-crypt.
There is no charge for encrypting virtual disks in Azure. Cryptographic keys are stored in Azure Key Vault using
software-protection, or you can import or generate your keys in Hardware Security Modules (HSMs) certified to
FIPS 140-2 level 2 standards. These cryptographic keys are used to encrypt and decrypt virtual disks attached to
your VM. You retain control of these cryptographic keys and can audit their use. An Azure Active Directory service
principal provides a secure mechanism for issuing these cryptographic keys as VMs are powered on and off.
Policies
Azure policies can be used to define the desired behavior for your organization's Windows VMs and Linux VMs. By
using policies, an organization can enforce various conventions and rules throughout the enterprise. Enforcement
of the desired behavior can help mitigate risk while contributing to the success of the organization.
Next steps
Walk through the steps to monitor virtual machine security by using Azure Security Center for Linux or
Windows.
Azure Disk Encryption for Windows VMs
11/6/2020 • 4 minutes to read
Azure Disk Encryption helps protect and safeguard your data to meet your organizational security and
compliance commitments. It uses the BitLocker feature of Windows to provide volume encryption for the OS and
data disks of Azure virtual machines (VMs), and is integrated with Azure Key Vault to help you control and
manage the disk encryption keys and secrets.
If you use Azure Security Center, you're alerted if you have VMs that aren't encrypted. The alerts show as High
Severity and the recommendation is to encrypt these VMs.
WARNING
If you have previously used Azure Disk Encryption with Azure AD to encrypt a VM, you must continue to use this option
to encrypt your VM. See Azure Disk Encryption with Azure AD (previous release) for details.
Certain recommendations might increase data, network, or compute resource usage, resulting in additional license or
subscription costs. You must have a valid active Azure subscription to create resources in Azure in the supported
regions.
You can learn the fundamentals of Azure Disk Encryption for Windows in just a few minutes with the Create and
encrypt a Windows VM with Azure CLI quickstart or the Create and encrypt a Windows VM with Azure
PowerShell quickstart.
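The core of that CLI quickstart can be sketched in two commands. Resource names are placeholders (key vault names must also be globally unique), and the resource group and VM are assumed to already exist:

```shell
# Create a key vault that allows disk encryption, then enable
# Azure Disk Encryption on an existing Windows VM.
az keyvault create \
  --resource-group myResourceGroup \
  --name myKeyVault \
  --enabled-for-disk-encryption

az vm encryption enable \
  --resource-group myResourceGroup \
  --name myVM \
  --disk-encryption-keyvault myKeyVault

# Check the encryption status of the VM's disks.
az vm encryption show --resource-group myResourceGroup --name myVM
```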
NOTE
Windows Server 2008 R2 requires the .NET Framework 4.5 to be installed for encryption; install it from Windows Update
with the optional update Microsoft .NET Framework 4.5.2 for Windows Server 2008 R2 x64-based systems (KB2901983).
Windows Server 2012 R2 Core and Windows Server 2016 Core require the bdehdcfg component to be installed on the VM for encryption.
Networking requirements
To enable Azure Disk Encryption, the VMs must meet the following network endpoint configuration
requirements:
To get a token to connect to your key vault, the Windows VM must be able to connect to an Azure Active
Directory endpoint, login.microsoftonline.com.
To write the encryption keys to your key vault, the Windows VM must be able to connect to the key vault
endpoint.
The Windows VM must be able to connect to an Azure storage endpoint that hosts the Azure extension
repository and an Azure storage account that hosts the VHD files.
If your security policy limits access from Azure VMs to the Internet, you can resolve the preceding URIs and configure a specific rule to allow outbound connectivity to the IPs. For more information, see Azure Key Vault
behind a firewall.
Terminology
The following table defines some of the common terms used in Azure disk encryption documentation:
TERMINOLOGY | DEFINITION
Azure Key Vault | Key Vault is a cryptographic key management service that's based on Federal Information Processing Standards (FIPS) validated hardware security modules. These standards help to safeguard your cryptographic keys and sensitive secrets. For more information, see the Azure Key Vault documentation and Creating and configuring a key vault for Azure Disk Encryption.
Azure CLI | The Azure CLI is optimized for managing and administering Azure resources from the command line.
Key encryption key (KEK) | The asymmetric key (RSA 2048) that you can use to protect or wrap the secret. You can provide a hardware security module (HSM)-protected key or a software-protected key. For more information, see the Azure Key Vault documentation and Creating and configuring a key vault for Azure Disk Encryption.
Next steps
Quickstart - Create and encrypt a Windows VM with Azure CLI
Quickstart - Create and encrypt a Windows VM with Azure Powershell
Azure Disk Encryption scenarios on Windows VMs
Azure Disk Encryption prerequisites CLI script
Azure Disk Encryption prerequisites PowerShell script
Creating and configuring a key vault for Azure Disk Encryption
Azure Policy Regulatory Compliance controls for Azure Virtual Machines
11/2/2020 • 48 minutes to read
Regulatory Compliance in Azure Policy provides Microsoft-created and -managed initiative definitions, known as built-ins, for the compliance domains and security controls related to different compliance standards. This page lists the compliance domains and security controls for Azure Virtual Machines. You can assign the built-ins for a security control individually to help make your Azure resources compliant with the specific standard.
The title of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the Policy
Version column to view the source on the Azure Policy GitHub repo.
IMPORTANT
Each control below is associated with one or more Azure Policy definitions. These policies may help you assess compliance
with the control; however, there often is not a one-to-one or complete match between a control and one or more policies. As
such, Compliant in Azure Policy refers only to the policies themselves; this doesn't ensure you're fully compliant with all
requirements of a control. In addition, the compliance standard includes controls that aren't addressed by any Azure Policy
definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your overall compliance status. The
associations between controls and Azure Policy Regulatory Compliance definitions for these compliance standards may
change over time.
DOMAIN | CONTROL ID | CONTROL TITLE | POLICY (AZURE PORTAL) | POLICY VERSION (GITHUB)
Network Security | 1.11 | Use automated tools to monitor network resource configurations and detect changes | Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs | 1.0.0
Logging and Monitoring | 2.4 | Collect security logs from operating systems | The Log Analytics agent should be installed on Virtual Machine Scale Sets | 1.0.0
Logging and Monitoring | 2.4 | Collect security logs from operating systems | The Log Analytics agent should be installed on virtual machines | 1.0.0
Inventory and Asset Management | 6.8 | Use only approved applications | Adaptive application controls for defining safe applications should be enabled on your machines | 2.0.0
Inventory and Asset Management | 6.9 | Use only approved Azure services | Virtual machines should be migrated to new Azure Resource Manager resources | 1.0.0
Virtual Machines | 7.1 | Ensure that 'OS disk' are encrypted | Disk encryption should be applied on virtual machines | 2.0.0
Virtual Machines | 7.5 | Ensure that the latest OS Patches for all Virtual Machines are applied | System updates should be installed on your machines | 2.0.0
NIST SP 800-171 R2
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see Azure
Policy Regulatory Compliance - NIST SP 800-171 R2. For more information about this compliance standard, see
NIST SP 800-171 R2.
DOMAIN | CONTROL ID | CONTROL TITLE | POLICY (AZURE PORTAL) | POLICY VERSION (GITHUB)
Access Control | 3.1.1 | Limit system access to authorized users, processes acting on behalf of authorized users, and devices (including other systems). | Audit Linux machines that allow remote connections from accounts without passwords | 1.0.0
Access Control | 3.1.1 | Limit system access to authorized users, processes acting on behalf of authorized users, and devices (including other systems). | Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs | 1.0.0
Access Control | 3.1.12 | Monitor and control remote access sessions. | Audit Linux machines that allow remote connections from accounts without passwords | 1.0.0
Access Control | 3.1.12 | Monitor and control remote access sessions. | Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs | 1.0.0
Audit and Accountability | 3.3.1 | Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. | [Preview]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted | 1.0.0-preview
Audit and Accountability | 3.3.1 | Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. | Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted | 1.0.1
Audit and Accountability | 3.3.1 | Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. | Audit Log Analytics workspace for VM - Report Mismatch | 1.0.1
Audit and Accountability | 3.3.1 | Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. | The Log Analytics agent should be installed on Virtual Machine Scale Sets | 1.0.0
Audit and Accountability | 3.3.1 | Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. | The Log Analytics agent should be installed on virtual machines | 1.0.0
Audit and Accountability | 3.3.2 | Ensure that the actions of individual system users can be uniquely traced to those users, so they can be held accountable for their actions. | [Preview]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted | 1.0.0-preview
Audit and Accountability | 3.3.2 | Ensure that the actions of individual system users can be uniquely traced to those users, so they can be held accountable for their actions. | Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted | 1.0.1
Audit and Accountability | 3.3.2 | Ensure that the actions of individual system users can be uniquely traced to those users, so they can be held accountable for their actions. | Audit Log Analytics workspace for VM - Report Mismatch | 1.0.1
Audit and Accountability | 3.3.2 | Ensure that the actions of individual system users can be uniquely traced to those users, so they can be held accountable for their actions. | The Log Analytics agent should be installed on Virtual Machine Scale Sets | 1.0.0
Audit and Accountability | 3.3.2 | Ensure that the actions of individual system users can be uniquely traced to those users, so they can be held accountable for their actions. | The Log Analytics agent should be installed on virtual machines | 1.0.0
Identification and Authentication | 3.5.10 | Store and transmit only cryptographically-protected passwords. | Audit Linux machines that do not have the passwd file permissions set to 0644 | 1.0.0
Identification and Authentication | 3.5.10 | Store and transmit only cryptographically-protected passwords. | Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs | 1.0.0
System and Communications Protection | 3.13.1 | Monitor, control, and protect communications (i.e., information transmitted or received by organizational systems) at the external boundaries and key internal boundaries of organizational systems. | All network ports should be restricted on network security groups associated to your virtual machine | 2.0.1
System and Communications Protection | 3.13.1 | Monitor, control, and protect communications (i.e., information transmitted or received by organizational systems) at the external boundaries and key internal boundaries of organizational systems. | Audit Windows web servers that are not using secure communication protocols | 1.0.0
NIST SP 800-53 R4
To review how the available Azure Policy built-ins for all Azure services map to this compliance standard, see Azure
Policy Regulatory Compliance - NIST SP 800-53 R4. For more information about this compliance standard, see
NIST SP 800-53 R4.
| DOMAIN | CONTROL ID | CONTROL TITLE | POLICY (AZURE PORTAL) | POLICY VERSION (GITHUB) |
|---|---|---|---|---|
| Access Control | AC-17 (1) | Remote Access \| Automated Monitoring / Control | Audit Linux machines that allow remote connections from accounts without passwords | 1.0.0 |
| Access Control | AC-17 (1) | Remote Access \| Automated Monitoring / Control | Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs | 1.0.0 |
| Access Control | AC-17 (1) | Remote Access \| Automated Monitoring / Control | Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs | 1.0.0 |
| Audit and Accountability | AU-3 (2) | Content of Audit Records \| Centralized Management of Planned Audit Record Content | [Preview]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted | 1.0.0-preview |
| Audit and Accountability | AU-3 (2) | Content of Audit Records \| Centralized Management of Planned Audit Record Content | Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted | 1.0.1 |
| Audit and Accountability | AU-3 (2) | Content of Audit Records \| Centralized Management of Planned Audit Record Content | Audit Log Analytics workspace for VM - Report Mismatch | 1.0.1 |
| Audit and Accountability | AU-6 (4) | Audit Review, Analysis, and Reporting \| Central Review and Analysis | [Preview]: Audit Log Analytics Agent Deployment - VM Image (OS) unlisted | 1.0.0-preview |
| Audit and Accountability | AU-6 (4) | Audit Review, Analysis, and Reporting \| Central Review and Analysis | Audit Log Analytics agent deployment in virtual machine scale sets - VM Image (OS) unlisted | 1.0.1 |
| Audit and Accountability | AU-6 (4) | Audit Review, Analysis, and Reporting \| Central Review and Analysis | Audit Log Analytics workspace for VM - Report Mismatch | 1.0.1 |
Next steps
Learn more about Azure Policy Regulatory Compliance.
See the built-ins on the Azure Policy GitHub repo.
Azure security baseline for Windows Virtual Machines
11/2/2020 • 37 minutes to read
The Azure Security Baseline for Windows Virtual Machines contains recommendations that will help you improve
the security posture of your deployment.
The baseline for this service is drawn from the Azure Security Benchmark version 1.0, which provides
recommendations on how you can secure your cloud solutions on Azure with our best practices guidance.
For more information, see Azure Security Baselines overview.
Network security
For more information, see Security control: Network security.
1.1: Protect Azure resources within virtual networks
Guidance : When you create an Azure virtual machine (VM), you must create a new virtual network (VNet) or use an
existing VNet, and configure the VM with a subnet. Ensure that all deployed subnets have a Network Security Group
applied with network access controls specific to your application's trusted ports and sources.
Alternatively, if you have a specific use case for a centralized firewall, Azure Firewall can also be used to meet those
requirements.
Virtual networks and virtual machines in Azure
How to create a Virtual Network
How to create an NSG with a Security Config
How to deploy and configure Azure Firewall
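As a sketch of this setup with the Azure CLI (all resource names, address ranges, and the allowed port below are placeholder assumptions, not values from this baseline):

```shell
# Create a virtual network with a subnet, then lock the subnet down with a
# network security group (NSG) that only admits the application's traffic.
az network vnet create \
  --resource-group MyResourceGroup \
  --name MyVnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name MyAppSubnet \
  --subnet-prefix 10.0.1.0/24

az network nsg create --resource-group MyResourceGroup --name MyAppNsg

# Allow only HTTPS from the Internet; other inbound traffic is blocked
# by the NSG's default deny rules.
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyAppNsg \
  --name AllowHttpsInbound \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-port-ranges 443

# Associate the NSG with the subnet so it applies to every VM placed there.
az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVnet \
  --name MyAppSubnet \
  --network-security-group MyAppNsg
```

These commands require an authenticated Azure CLI session against your own subscription.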
Azure Security Center monitoring : Yes
Responsibility : Customer
1.2: Monitor and log the configuration and traffic of virtual networks, subnets, and network interfaces
Guidance : Use Azure Security Center to identify and follow network protection recommendations that help secure
your Azure Virtual Machine (VM) resources. Enable NSG flow logs and send the logs to a storage account for traffic
auditing, and review the VMs' traffic for unusual activity.
How to Enable NSG Flow Logs
Understand Network Security provided by Azure Security Center
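A minimal flow-log sketch with the Azure CLI, assuming the NSG and storage account names below are placeholders for resources you already have:

```shell
# Enable NSG flow logs via Network Watcher, sending flow records to a
# storage account and retaining them for 90 days for traffic auditing.
az network watcher flow-log configure \
  --resource-group MyResourceGroup \
  --nsg MyAppNsg \
  --enabled true \
  --storage-account mystorageaccount \
  --retention 90
```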
Azure Security Center monitoring : Yes
Responsibility : Customer
1.3: Protect critical web applications
Guidance : If using your virtual machine (VM) to host web applications, use a network security group (NSG) on the
VM's subnet to limit what network traffic, ports and protocols are allowed to communicate. Follow a least
privileged network approach when configuring your NSGs to only allow required traffic to your application.
You can also deploy Azure Web Application Firewall (WAF) in front of critical web applications for additional
inspection of incoming traffic. Enable Diagnostic Setting for WAF and ingest logs into a Storage Account, Event Hub,
or Log Analytics Workspace.
Create an application gateway with a Web Application Firewall using the Azure portal
Information on Network Security Groups
Azure Security Center monitoring : Yes
Responsibility : Customer
1.4: Deny communications with known-malicious IP addresses
Guidance : Enable Distributed Denial of Service (DDoS) Standard protection on the Virtual Networks to guard
against DDoS attacks. Using Azure Security Center Integrated Threat Intelligence, you can monitor communications
with known malicious IP addresses. Configure Azure Firewall appropriately on each of your Virtual Network
segments, with Threat Intelligence enabled and configured to "Alert and deny" for malicious network traffic.
You can use Azure Security Center's Just In Time Network access to limit exposure of Windows Virtual Machines to
the approved IP addresses for a limited period. Also, use Azure Security Center Adaptive Network Hardening to
recommend NSG configurations that limit ports and source IPs based on actual traffic and threat intelligence.
How to configure DDoS protection
How to deploy Azure Firewall
Understand Azure Security Center Integrated Threat Intelligence
Understand Azure Security Center Adaptive Network Hardening
Understand Azure Security Center Just In Time Network Access Control
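The threat-intelligence "Alert and deny" setting can be applied at firewall creation time; a sketch (the firewall name and resource group are placeholders, and the `azure-firewall` CLI extension is assumed):

```shell
# The Azure Firewall commands live in a CLI extension.
az extension add --name azure-firewall

# Create a firewall whose threat-intelligence-based filtering both alerts
# on and denies traffic to/from known-malicious IPs and domains.
az network firewall create \
  --resource-group MyResourceGroup \
  --name MyFirewall \
  --threat-intel-mode Deny
```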
Azure Security Center monitoring : Yes
Responsibility : Customer
1.5: Record network packets
Guidance : You can record NSG flow logs into a storage account to generate flow records for your Azure Virtual
Machines. When investigating anomalous activity, you could enable Network Watcher packet capture so that
network traffic can be reviewed for unusual and unexpected activity.
How to Enable NSG Flow Logs
How to enable Network Watcher
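When investigating a specific VM, a packet capture can be started and stopped from the CLI; a sketch with placeholder names and region:

```shell
# Start a Network Watcher packet capture on the VM under investigation;
# the capture file is written to the given storage account.
az network watcher packet-capture create \
  --resource-group MyResourceGroup \
  --vm MyVM \
  --name MyCapture \
  --storage-account mystorageaccount

# Stop the capture once enough traffic has been collected.
az network watcher packet-capture stop \
  --location eastus \
  --name MyCapture
```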
Azure Security Center monitoring : Yes
Responsibility : Customer
1.6: Deploy network-based intrusion detection/intrusion prevention systems (IDS/IPS)
Guidance : By combining packet captures provided by Network Watcher and an open-source IDS tool, you can
perform network intrusion detection for a wide range of threats. Also, you can deploy Azure Firewall on the Virtual
Network segments as appropriate, with Threat Intelligence enabled and configured to "Alert and deny" for
malicious network traffic.
Perform network intrusion detection with Network Watcher and open-source tools
How to deploy Azure Firewall
How to configure alerts with Azure Firewall
Azure Security Center monitoring : Yes
Responsibility : Customer
1.7: Manage traffic to web applications
Guidance : You may deploy Azure Application Gateway for web applications with HTTPS/SSL enabled for trusted
certificates. With Azure Application Gateway, you direct your application web traffic to specific resources by
assigning listeners to ports, creating rules, and adding resources to a backend pool like Windows Virtual machines.
How to deploy Application Gateway
How to configure Application Gateway to use HTTPS
Understand layer 7 load balancing with Azure web application gateways
Azure Security Center monitoring : Not Available
Responsibility : Customer
1.8: Minimize complexity and administrative overhead of network security rules
Guidance : Use Virtual Network Service Tags to define network access controls on Network Security Groups or
Azure Firewall configured for your Azure Virtual machines. You can use service tags in place of specific IP addresses
when creating security rules. By specifying the service tag name (e.g., ApiManagement) in the appropriate source or
destination field of a rule, you can allow or deny the traffic for the corresponding service. Microsoft manages the
address prefixes encompassed by the service tag and automatically updates the service tag as addresses change.
Understanding and using Service Tags
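A service-tag rule sketch (the NSG name, the AzureKeyVault destination, and the rule's business justification are placeholder assumptions; the same `--description` field also satisfies the documentation guidance in control 1.10):

```shell
# Allow outbound traffic only to Azure Key Vault by naming its service tag
# instead of maintaining IP address lists that Microsoft updates for you.
az network nsg rule create \
  --resource-group MyResourceGroup \
  --nsg-name MyAppNsg \
  --name AllowKeyVaultOutbound \
  --priority 110 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureKeyVault \
  --destination-port-ranges 443 \
  --description "App retrieves secrets from Key Vault; review quarterly"
```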
Azure Security Center monitoring : Not Available
Responsibility : Customer
1.9: Maintain standard security configurations for network devices
Guidance : Define and implement standard security configurations for Azure Virtual Machines (VM) using Azure
Policy. You may also use Azure Blueprints to simplify large-scale Azure VM deployments by packaging key
environment artifacts, such as Azure Resource Manager templates, role assignments, and Azure Policy assignments,
in a single blueprint definition. You can apply the blueprint to subscriptions, and enable resource management
through blueprint versioning.
How to configure and manage Azure Policy
Azure Policy samples for networking
How to create an Azure Blueprint
Azure Security Center monitoring : Not Applicable
Responsibility : Customer
1.10: Document traffic configuration rules
Guidance : You may use tags for network security groups (NSG) and other resources related to network security
and traffic flow configured for your Windows Virtual machines. For individual NSG rules, use the "Description" field
to specify business need and/or duration for any rules that allow traffic to/from a network.
How to create and use tags
How to create a Virtual Network
How to create an NSG with a Security Config
Azure Security Center monitoring : Not Applicable
Responsibility : Customer
1.11: Use automated tools to monitor network resource configurations and detect changes
Guidance : Use the Azure Activity Log to monitor changes to network resource configurations related to your
virtual machines. Create alerts within Azure Monitor that will trigger when changes to critical network settings or
resources take place.
Use Azure Policy to validate (and/or remediate) configurations for network resource related to Windows Virtual
Machines.
How to view and retrieve Azure Activity Log events
How to create alerts in Azure Monitor
How to configure and manage Azure Policy
Azure Policy samples for networking
Azure Security Center monitoring : Not Available
Responsibility : Customer
Data protection
For more information, see Security control: Data protection.
4.1: Maintain an inventory of sensitive Information
Guidance : Use Tags to assist in tracking Azure virtual machines that store or process sensitive information.
How to create and use Tags
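A tagging sketch with the Azure CLI; the tag names and values below are placeholder conventions, not a prescribed taxonomy:

```shell
# Tag a VM so inventory queries can find machines that hold sensitive data.
az vm update \
  --resource-group MyResourceGroup \
  --name MyVM \
  --set tags.dataClassification=Confidential tags.environment=Production

# List every VM in the subscription carrying the classification tag.
az vm list --query "[?tags.dataClassification=='Confidential'].name" -o tsv
```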
Azure Security Center monitoring : Yes
Responsibility : Customer
4.2: Isolate systems storing or processing sensitive information
Guidance : Implement separate subscriptions and/or management groups for development, test, and production.
Resources should be separated by virtual network/subnet, tagged appropriately, and secured within a network
security group (NSG) or by an Azure Firewall. For Virtual Machines storing or processing sensitive data, implement
policy and procedure(s) to turn them off when not in use.
How to create additional Azure subscriptions
How to create Management Groups
How to create and use Tags
How to create a Virtual Network
How to create an NSG with a Security Config
How to deploy Azure Firewall
How to configure alert or alert and deny with Azure Firewall
Azure Security Center monitoring : Not Applicable
Responsibility : Customer
4.3: Monitor and block unauthorized transfer of sensitive information
Guidance : Implement third-party solution on network perimeters that monitors for unauthorized transfer of
sensitive information and blocks such transfers while alerting information security professionals.
For the underlying platform which is managed by Microsoft, Microsoft treats all customer content as sensitive to
guard against customer data loss and exposure. To ensure customer data within Azure remains secure, Microsoft
has implemented and maintains a suite of robust data protection controls and capabilities.
Understand customer data protection in Azure
Azure Security Center monitoring : Not Available
Responsibility : Customer
4.4: Encrypt all sensitive information in transit
Guidance : Data in transit to, from, and between Virtual Machines (VMs) that are running Windows is encrypted in a
number of ways, depending on the nature of the connection, such as when connecting to a VM in an RDP or SSH
session.
Microsoft uses the Transport Layer Security (TLS) protocol to protect data when it's traveling between the cloud
services and customers.
In-transit encryption in VMs
Azure Security Center monitoring : Not Available
Responsibility : Shared
4.5: Use an active discovery tool to identify sensitive data
Guidance : Use a third-party active discovery tool to identify all sensitive information stored, processed, or
transmitted by the organization's technology systems, including those located onsite or at a remote service
provider and update the organization's sensitive information inventory.
Azure Security Center monitoring : Not Available
Responsibility : Customer
4.6: Use Azure RBAC to control access to resources
Guidance : Using Azure role-based access control (Azure RBAC), you can segregate duties within your team and
grant only the amount of access to users on your VM that they need to perform their jobs. Instead of giving
everybody unrestricted permissions on the VM, you can allow only certain actions. You can configure access control
for the VM in the Azure portal, using the Azure CLI, or Azure PowerShell.
Azure RBAC
Azure built-in roles
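A least-privilege assignment sketch; the user, subscription ID, and resource names are placeholders:

```shell
# Grant a user only the built-in Virtual Machine Contributor role, scoped
# to a single VM, instead of broad subscription-level rights.
az role assignment create \
  --assignee someuser@contoso.com \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/MyVM"
```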
Azure Security Center monitoring : Not Available
Responsibility : Customer
4.7: Use host-based data loss prevention to enforce access control
Guidance : Implement a third-party tool, such as an automated host-based Data Loss Prevention solution, to
enforce access controls to mitigate the risk of data breaches.
Azure Security Center monitoring : Not Available
Responsibility : Customer
4.8: Encrypt sensitive information at rest
Guidance : Virtual disks on Windows Virtual Machines (VM) are encrypted at rest using either Server-side
encryption or Azure disk encryption (ADE). Azure Disk Encryption leverages the BitLocker feature of Windows to
encrypt managed disks with customer-managed keys within the guest VM. Server-side encryption with customer-
managed keys improves on ADE by enabling you to use any OS types and images for your VMs by encrypting data
in the Storage service.
Server side encryption of Azure-managed disks
Azure Disk Encryption for Windows VMs
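An Azure Disk Encryption sketch for a Windows VM (key vault and VM names are placeholders; the vault must be in the same region as the VM):

```shell
# Create a Key Vault that is allowed to hold disk-encryption secrets.
az keyvault create \
  --resource-group MyResourceGroup \
  --name MyDiskEncryptionKV \
  --enabled-for-disk-encryption true

# Enable BitLocker-based encryption on the VM's OS and data disks.
az vm encryption enable \
  --resource-group MyResourceGroup \
  --name MyVM \
  --disk-encryption-keyvault MyDiskEncryptionKV \
  --volume-type ALL

# Verify that the disks report as encrypted.
az vm encryption show --resource-group MyResourceGroup --name MyVM
```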
Azure Security Center monitoring : Yes
Responsibility : Shared
4.9: Log and alert on changes to critical Azure resources
Guidance : Use Azure Monitor with the Azure Activity Log to create alerts for when changes take place to Virtual
machines and related resources.
How to create alerts for Azure Activity Log events
Azure Storage analytics logging
Azure Security Center monitoring : Not Available
Responsibility : Customer
Vulnerability management
For more information, see Security control: Vulnerability management.
5.1: Run automated vulnerability scanning tools
Guidance : Follow recommendations from Azure Security Center on performing vulnerability assessments on your
Azure Virtual Machines. Use Azure Security recommended or third-party solution for performing vulnerability
assessments for your virtual machines.
How to implement Azure Security Center vulnerability assessment recommendations
Azure Security Center monitoring : Yes
Responsibility : Customer
5.2: Deploy automated operating system patch management solution
Guidance : Use the Azure Update Management solution to manage updates and patches for your virtual machines.
Update Management relies on the locally configured update repository to patch supported Windows systems. Tools
like System Center Updates Publisher (Updates Publisher) allow you to publish custom updates into Windows
Server Update Services (WSUS). This scenario allows Update Management to patch machines that use
Configuration Manager as their update repository with third-party software.
Update Management solution in Azure
Manage updates and patches for your VMs
Azure Security Center monitoring : Yes
Responsibility : Customer
5.3: Deploy automated patch management solution for third-party software titles
Guidance : You may use a third-party patch management solution. You can use the Azure Update Management
solution to manage updates and patches for your virtual machines. Update Management relies on the locally
configured update repository to patch supported Windows systems. Tools like System Center Updates Publisher
(Updates Publisher) allow you to publish custom updates into Windows Server Update Services (WSUS). This
scenario allows Update Management to patch machines that use Configuration Manager as their update repository
with third-party software.
Update Management solution in Azure
Manage updates and patches for your VMs
Azure Security Center monitoring : Not Available
Responsibility : Customer
5.4: Compare back-to-back vulnerability scans
Guidance : Export scan results at consistent intervals and compare the results to verify that vulnerabilities have
been remediated. When using the vulnerability management solution recommended by Azure Security Center, you
may pivot into the selected solution's portal to view historical scan data.
Azure Security Center monitoring : Not Applicable
Responsibility : Customer
5.5: Use a risk-rating process to prioritize the remediation of discovered vulnerabilities
Guidance : Use the default risk ratings (Secure Score) provided by Azure Security Center.
Understand Azure Security Center Secure Score
Azure Security Center monitoring : Yes
Responsibility : Customer
Secure configuration
For more information, see Security control: Secure configuration.
7.1: Establish secure configurations for all Azure resources
Guidance : Use Azure Policy or Azure Security Center to maintain security configurations for all Azure Resources.
Also, Azure Resource Manager has the ability to export the template in JavaScript Object Notation (JSON), which
should be reviewed to ensure that the configurations meet or exceed the security requirements for your company.
How to configure and manage Azure Policy
Information on how to download the VM template
Azure Security Center monitoring : Not Available
Responsibility : Customer
7.2: Establish secure operating system configurations
Guidance : Use Azure Security Center recommendation [Remediate Vulnerabilities in Security Configurations on
your Virtual Machines] to maintain security configurations on all compute resources.
How to monitor Azure Security Center recommendations
How to remediate Azure Security Center recommendations
Azure Security Center monitoring : Not Available
Responsibility : Customer
7.3: Maintain secure Azure resource configurations
Guidance : Use Azure Resource Manager templates and Azure Policy to securely configure Azure resources
associated with the Virtual machines. Azure Resource Manager templates are JSON-based files used to deploy
virtual machines along with related Azure resources; custom templates will need to be maintained. Microsoft
performs the maintenance on the base templates. Use Azure Policy [deny] and [deploy if not exist] effects to enforce
secure settings across your Azure resources.
Information on creating Azure Resource Manager templates
How to configure and manage Azure Policy
Understanding Azure Policy Effects
Azure Security Center monitoring : Not Available
Responsibility : Customer
7.4: Maintain secure operating system configurations
Guidance : There are several options for maintaining a secure configuration for Azure Virtual Machines (VMs) for
deployment:
1- Azure Resource Manager templates: These are JSON-based files used to deploy a VM from the Azure portal, and
custom templates will need to be maintained. Microsoft performs the maintenance on the base templates.
2- Custom Virtual hard disk (VHD): In some circumstances, custom VHD files may be required, such as when
dealing with complex environments that cannot be managed through other means.
3- Azure Automation State Configuration: Once the base OS is deployed, this can be used for more granular control
of the settings, and enforced through the automation framework.
For most scenarios, the Microsoft base VM templates combined with the Azure Automation Desired State
Configuration can assist in meeting and maintaining the security requirements.
Information on how to download the VM template
Information on creating ARM templates
How to upload a custom VM VHD to Azure
Azure Security Center monitoring : Yes
Responsibility : Customer
7.5: Securely store configuration of Azure resources
Guidance : Use Azure Repos to securely store and manage your code like custom Azure policies, Azure Resource
Manager templates, Desired State Configuration scripts etc. To access the resources you manage in Azure DevOps,
such as your code, builds, and work tracking, you must have permissions for those specific resources. Most
permissions are granted through built-in security groups as described in Permissions and access. You can grant or
deny permissions to specific users, built-in security groups, or groups defined in Azure Active Directory (Azure AD)
if integrated with Azure DevOps, or Active Directory if integrated with TFS.
How to store code in Azure DevOps
About permissions and groups in Azure DevOps
Azure Security Center monitoring : Not Applicable
Responsibility : Customer
7.6: Securely store custom operating system images
Guidance : If using custom images (e.g. Virtual Hard Disk), use Azure role-based access control (Azure RBAC) to
ensure only authorized users may access the images.
Understand Azure RBAC
How to configure Azure RBAC
Azure Security Center monitoring : Not Available
Responsibility : Customer
7.7: Deploy configuration management tools for Azure resources
Guidance : Leverage Azure Policy to alert, audit, and enforce system configurations for your virtual machines.
Additionally, develop a process and pipeline for managing policy exceptions.
How to configure and manage Azure Policy
Azure Security Center monitoring : Not Applicable
Responsibility : Customer
7.8: Deploy configuration management tools for operating systems
Guidance : Azure Automation State Configuration is a configuration management service for Desired State
Configuration (DSC) nodes in any cloud or on-premises datacenter. It enables scalability across thousands of
machines quickly and easily from a central, secure location. You can easily onboard machines, assign them
declarative configurations, and view reports showing each machine's compliance to the desired state you specified.
Onboarding machines for management by Azure Automation State Configuration
Azure Security Center monitoring : Not Applicable
Responsibility : Customer
7.9: Implement automated configuration monitoring for Azure resources
Guidance : Leverage Azure Security Center to perform baseline scans for your Azure Virtual machines. Additional
methods for automated configuration include using Azure Automation State Configuration.
How to remediate recommendations in Azure Security Center
Getting started with Azure Automation State Configuration
Azure Security Center monitoring : Not Available
Responsibility : Customer
7.10: Implement automated configuration monitoring for operating systems
Guidance : Azure Automation State Configuration is a configuration management service for Desired State
Configuration (DSC) nodes in any cloud or on-premises datacenter. It enables scalability across thousands of
machines quickly and easily from a central, secure location. You can easily onboard machines, assign them
declarative configurations, and view reports showing each machine's compliance to the desired state you specified.
Onboarding machines for management by Azure Automation State Configuration
Azure Security Center monitoring : Not Available
Responsibility : Customer
7.11: Manage Azure secrets securely
Guidance : Use Managed Service Identity in conjunction with Azure Key Vault to simplify and secure secret
management for your cloud applications.
How to integrate with Azure-Managed Identities
How to create a Key Vault
How to authenticate to Key Vault
How to assign a Key Vault access policy
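A sketch combining the two (VM and vault names are placeholders; the access policy is an illustration of read-only secret access):

```shell
# Give the VM a system-assigned managed identity, capturing its principal ID.
principalId=$(az vm identity assign \
  --resource-group MyResourceGroup \
  --name MyVM \
  --query systemAssignedIdentity -o tsv)

# Let that identity read secrets from Key Vault -- no credentials end up
# stored in application code or configuration files.
az keyvault set-policy \
  --name MyKeyVault \
  --object-id "$principalId" \
  --secret-permissions get list
```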
Azure Security Center monitoring : Yes
Responsibility : Customer
7.12: Manage identities securely and automatically
Guidance : Use Managed Identities to provide Azure services with an automatically managed identity in Azure AD.
Managed Identities allows you to authenticate to any service that supports Azure AD authentication, including Key
Vault, without any credentials in your code.
How to configure Managed Identities
Azure Security Center monitoring : Not Available
Responsibility : Customer
7.13: Eliminate unintended credential exposure
Guidance : Implement Credential Scanner to identify credentials within code. Credential Scanner will also
encourage moving discovered credentials to more secure locations such as Azure Key Vault.
How to setup Credential Scanner
Azure Security Center monitoring : Not Applicable
Responsibility : Customer
Malware defense
For more information, see Security control: Malware defense.
8.1: Use centrally-managed anti-malware software
Guidance : Use Microsoft Antimalware for Azure Windows Virtual machines to continuously monitor and defend
your resources.
How to configure Microsoft Antimalware for Cloud Services and Virtual Machines
Azure Security Center monitoring : Yes
Responsibility : Customer
8.2: Pre-scan files to be uploaded to non-compute Azure resources
Guidance : Not applicable to Azure Virtual machines as it's a compute resource.
Azure Security Center monitoring : Not Applicable
Responsibility : Not applicable
8.3: Ensure anti-malware software and signatures are updated
Guidance : When deployed, Microsoft Antimalware for Azure will automatically install the latest signature,
platform, and engine updates by default. Follow recommendations in Azure Security Center: "Compute & Apps" to
ensure all endpoints are up to date with the latest signatures. The Windows OS can be further protected with
additional security to limit the risk of virus or malware-based attacks with the Microsoft Defender Advanced Threat
Protection service that integrates with Azure Security Center.
How to deploy Microsoft Antimalware for Azure Cloud Services and Virtual Machines
Microsoft Defender Advanced Threat Protection
Azure Security Center monitoring : Yes
Responsibility : Customer
Data recovery
For more information, see Security control: Data recovery.
9.1: Ensure regular automated back-ups
Guidance : Enable Azure Backup and configure the Azure Virtual machines (VM), as well as the desired frequency
and retention period for automatic backups.
An overview of Azure VM backup
Back up an Azure VM from the VM settings
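A backup-enablement sketch; the vault name, region, and policy name below are placeholder assumptions (DefaultPolicy is the policy created with a new vault):

```shell
# Create a Recovery Services vault, then enable scheduled backups for a VM
# using the vault's default backup policy.
az backup vault create \
  --resource-group MyResourceGroup \
  --name MyBackupVault \
  --location eastus

az backup protection enable-for-vm \
  --resource-group MyResourceGroup \
  --vault-name MyBackupVault \
  --vm MyVM \
  --policy-name DefaultPolicy
```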
Azure Security Center monitoring : Yes
Responsibility : Customer
9.2: Perform complete system backups and backup any customer-managed keys
Guidance : Create snapshots of your Azure virtual machines or the managed disks attached to those instances
using PowerShell or REST APIs. Back up any customer-managed keys within Azure Key Vault.
Enable Azure Backup and target Azure Virtual Machines (VM), as well as the desired frequency and retention
periods. This includes complete system state backup. If you are using Azure disk encryption, Azure VM backup
automatically handles the backup of customer-managed keys.
Backup on Azure VMs that use encryption
An overview of Azure VM backup
How to backup key vault keys in Azure
Azure Security Center monitoring : Yes
Responsibility : Customer
9.3: Validate all backups including customer-managed keys
Guidance : Ensure the ability to periodically perform data restoration of content within Azure Backup. If necessary,
test restoring the content to an isolated virtual network or subscription. Also test restoration of backed-up
customer-managed keys.
If you are using Azure disk encryption, you can restore the Azure VM with the disk encryption keys.
Backup on Azure VMs that use encryption
How to recover files from Azure Virtual Machine backup
How to restore key vault keys in Azure
How to back up and restore an encrypted VM
Azure Security Center monitoring : Not Applicable
Responsibility : Customer
9.4: Ensure protection of backups and customer-managed keys
Guidance : When you back up Azure-managed disks with Azure Backup, VMs are encrypted at rest with Storage
Service Encryption (SSE). Azure Backup can also back up Azure VMs that are encrypted by using Azure Disk
Encryption. Azure Disk Encryption integrates with BitLocker encryption keys (BEKs), which are safeguarded in a key
vault as secrets. Azure Disk Encryption also integrates with Azure Key Vault key encryption keys (KEKs). Enable Soft-
Delete in Key Vault to protect keys against accidental or malicious deletion.
Soft delete for VMs
Azure Key Vault soft-delete overview
Azure Security Center monitoring : Yes
Responsibility : Customer
Incident response
For more information, see Security control: Incident response.
10.1: Create an incident response guide
Guidance : Build out an incident response guide for your organization. Ensure that there are written incident
response plans that define all roles of personnel as well as phases of incident handling/management from
detection to post-incident review.
Guidance on building your own security incident response process
Microsoft Security Response Center's Anatomy of an Incident
Customer may also leverage NIST's Computer Security Incident Handling Guide to aid in the creation of their
own incident response plan
Azure Security Center monitoring : Not Applicable
Responsibility : Customer
10.2: Create an incident scoring and prioritization procedure
Guidance : Security Center assigns a severity to each alert to help you prioritize which alerts should be investigated
first. The severity is based on how confident Security Center is in the finding or the analytic used to issue the alert
as well as the confidence level that there was malicious intent behind the activity that led to the alert.
Additionally, clearly mark subscriptions (for example, production and non-production) using tags and create a naming system to
clearly identify and categorize Azure resources, especially those processing sensitive data. It is your responsibility to
prioritize the remediation of alerts based on the criticality of the Azure resources and environment where the
incident occurred.
Security alerts in Azure Security Center
Use tags to organize your Azure resources
Azure Security Center monitoring : Yes
Responsibility : Customer
10.3: Test security response procedures
Guidance : Conduct exercises to test your systems’ incident response capabilities on a regular cadence to help
protect your Azure resources. Identify weak points and gaps and revise plan as needed.
Refer to NIST's publication: Guide to Test, Training, and Exercise Programs for IT Plans and Capabilities
Azure Security Center monitoring : Not Applicable
Responsibility : Customer
10.4: Provide security incident contact details and configure alert notifications for security incidents
Guidance : Security incident contact information will be used by Microsoft to contact you if the Microsoft Security
Response Center (MSRC) discovers that your data has been accessed by an unlawful or unauthorized party. Review
incidents after the fact to ensure that issues are resolved.
How to set the Azure Security Center Security Contact
Azure Security Center monitoring : Yes
Responsibility : Customer
10.5: Incorporate security alerts into your incident response system
Guidance : Export your Azure Security Center alerts and recommendations using the Continuous Export feature to
help identify risks to Azure resources. Continuous Export allows you to export alerts and recommendations either
manually or in an ongoing, continuous fashion. You may use the Azure Security Center data connector to stream
the alerts to Azure Sentinel.
How to configure continuous export
How to stream alerts into Azure Sentinel
Azure Security Center monitoring : Not Available
Responsibility : Customer
10.6: Automate the response to security alerts
Guidance : Use the Workflow Automation feature in Azure Security Center to automatically trigger responses via
"Logic Apps" on security alerts and recommendations to protect your Azure resources.
How to configure Workflow Automation and Logic Apps
Azure Security Center monitoring : Not Available
Responsibility : Customer
Next steps
See the Azure security benchmark
Learn more about Azure security baselines
What is Azure Bastion?
11/2/2020 • 7 minutes to read • Edit Online
Azure Bastion is a service you deploy that lets you connect to a virtual machine using your browser and the Azure
portal. The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual
network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly from the Azure
portal over TLS. When you connect via Azure Bastion, your virtual machines do not need a public IP address, agent,
or special client software.
Bastion provides secure RDP and SSH connectivity to all of the VMs in the virtual network in which it is
provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside
world, while still providing secure access using RDP/SSH.
Architecture
Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine. Once you
provision an Azure Bastion service in your virtual network, the RDP/SSH experience is available to all your VMs in
the same virtual network.
RDP and SSH are some of the fundamental means through which you can connect to your workloads running in
Azure. Exposing RDP/SSH ports over the Internet isn't desirable and is seen as a significant threat surface. This is
often due to protocol vulnerabilities. To contain this threat surface, you can deploy bastion hosts (also known as
jump-servers) at the public side of your perimeter network. Bastion host servers are designed and configured to
withstand attacks. Bastion servers also provide RDP and SSH connectivity to the workloads sitting behind the
bastion, as well as further inside the network.
This figure shows the architecture of an Azure Bastion deployment. In this diagram:
The Bastion host is deployed in the virtual network.
The user connects to the Azure portal using any HTML5 browser.
The user selects the virtual machine to connect to.
With a single click, the RDP/SSH session opens in the browser.
No public IP is required on the Azure VM.
Key features
The following features are available:
RDP and SSH directly in Azure portal: You can get to the RDP and SSH session directly in the Azure
portal using a single-click, seamless experience.
Remote Session over TLS and firewall traversal for RDP/SSH: Azure Bastion uses an HTML5 based web
client that is automatically streamed to your local device, so that you get your RDP/SSH session over TLS on
port 443 enabling you to traverse corporate firewalls securely.
No Public IP required on the Azure VM: Azure Bastion opens the RDP/SSH connection to your Azure
virtual machine using private IP on your VM. You don't need a public IP on your virtual machine.
No hassle of managing NSGs: Azure Bastion is a fully managed platform PaaS service from Azure that is
hardened internally to provide you secure RDP/SSH connectivity. You don't need to apply any NSGs on Azure
Bastion subnet. Because Azure Bastion connects to your virtual machines over private IP, you can configure your
NSGs to allow RDP/SSH from Azure Bastion only. This removes the hassle of managing NSGs each time you
need to securely connect to your virtual machines.
Protection against port scanning: Because you do not need to expose your virtual machines to the public
Internet, your VMs are protected against port scanning by rogue and malicious users located outside your
virtual network.
Protect against zero-day exploits. Hardening in one place only: Azure Bastion is a fully platform-
managed PaaS service. Because it sits at the perimeter of your virtual network, you don’t need to worry about
hardening each of the virtual machines in your virtual network. The Azure platform protects against zero-day
exploits by keeping the Azure Bastion hardened and always up to date for you.
What's new?
Subscribe to the RSS feed and view the latest Azure Bastion feature updates on the Azure Updates page.
FAQ
Which regions are available?
NOTE
We are working hard to add additional regions. When a region is added, we will add it to this list.
Americas
Brazil South
Canada Central
Central US
East US
East US 2
North Central US
South Central US
West Central US
West US
West US 2
Europe
France Central
North Europe
Norway East
Norway West
Switzerland North
UK South
UK West
West Europe
Asia Pacific
Australia Central 2
Australia East
Australia Southeast
East Asia
Japan East
Japan West
Korea Central
Korea South
Southeast Asia
Central India
West India
Middle East and Africa
South Africa North
UAE Central
Azure Government
US DoD Central
US DoD East
US Gov Arizona
US Gov Texas
US Gov Virginia
Azure China
China East 2
China North 2
Do I need a public IP on my virtual machine to connect via Azure Bastion?
No. When you connect to a VM using Azure Bastion, you don't need a public IP on the Azure virtual machine that
you are connecting to. The Bastion service will open the RDP/SSH session/connection to your virtual machine over
the private IP of your virtual machine, within your virtual network.
Is IPv6 supported?
At this time, IPv6 is not supported. Azure Bastion supports IPv4 only.
Do I need an RDP or SSH client?
No. You don't need an RDP or SSH client to access RDP/SSH on your Azure virtual machine in the Azure portal.
Use the Azure portal to get RDP/SSH access to your virtual machine directly in the browser.
Do I need an agent running in the Azure virtual machine?
No. You don't need to install an agent or any software on your browser or your Azure virtual machine. The Bastion
service is agentless and doesn't require any additional software for RDP/SSH.
How many concurrent RDP and SSH sessions does each Azure Bastion support?
Both RDP and SSH are usage-based protocols. High usage of sessions will cause the bastion host to support a
lower total number of sessions. The numbers below assume normal day-to-day workflows.
Resource | Limit
*May vary due to other on-going RDP sessions or other on-going SSH sessions.
**May vary if there are existing RDP connections or usage from other on-going SSH sessions.
What features are supported in an RDP session?
At this time, only text copy/paste is supported. Features, such as file copy, are not supported. Feel free to share your
feedback about new features on the Azure Bastion Feedback page.
Does Bastion hardening work with AADJ VM extension-joined VMs?
This feature doesn't work with AADJ VM extension-joined machines using Azure AD users. For more information,
see Windows Azure VMs and Azure AD.
Which browsers are supported?
Use the Microsoft Edge browser or Google Chrome on Windows. For Apple Mac, use the Google Chrome browser.
Microsoft Edge Chromium is also supported on both Windows and Mac.
Where does Azure Bastion store customer data?
Azure Bastion doesn't move or store customer data out of the region it is deployed in.
Are any roles required to access a virtual machine?
In order to make a connection, the following roles are required:
Reader role on the virtual machine
Reader role on the NIC with private IP of the virtual machine
Reader role on the Azure Bastion resource
What is the pricing?
For more information, see the pricing page.
Does Azure Bastion require an RDS CAL for administrative purposes on Azure-hosted VMs?
No, access to Windows Server VMs by Azure Bastion does not require an RDS CAL when used solely for
administrative purposes.
Which keyboard layouts are supported during the Bastion remote session?
Azure Bastion currently supports en-us-qwerty keyboard layout inside the VM. Support for other locales for
keyboard layout is work in progress.
Is user-defined routing (UDR) supported on an Azure Bastion subnet?
No. UDR is not supported on an Azure Bastion subnet.
For scenarios that include both Azure Bastion and Azure Firewall/Network Virtual Appliance (NVA) in the same
virtual network, you don’t need to force traffic from an Azure Bastion subnet to Azure Firewall because the
communication between Azure Bastion and your VMs is private. For more information, see Accessing VMs behind
Azure Firewall with Bastion.
Why do I get "Your session has expired" error message before the Bastion session starts?
A session should be initiated only from the Azure portal. Sign in to the Azure portal and begin your session again.
If you go to the URL directly from another browser session or tab, this error is expected. It helps ensure that your
session is more secure and that the session can be accessed only through the Azure portal.
How do I handle deployment failures?
Review any error messages and raise a support request in the Azure portal as needed. Deployment failures may
result from Azure subscription limits, quotas, and constraints. Specifically, customers may encounter a limit on the
number of public IP addresses allowed per subscription that causes the Azure Bastion deployment to fail.
How do I incorporate Azure Bastion in my Disaster Recovery plan?
Azure Bastion is deployed within VNets or peered VNets, and is associated to an Azure region. You are responsible
for deploying Azure Bastion to a Disaster Recovery (DR) site VNet. In the event of an Azure region failure, perform
a failover operation for your VMs to the DR region. Then, use the Azure Bastion host that's deployed in the DR
region to connect to the VMs that are now deployed there.
Next steps
Tutorial: Create an Azure Bastion host and connect to a Windows VM.
Learn about some of the other key networking capabilities of Azure.
Virtual machines lifecycle and states
11/2/2020 • 3 minutes to read • Edit Online
Azure Virtual Machines (VMs) go through different states that can be categorized into provisioning and power
states. The purpose of this article is to describe these states and specifically highlight when customers are billed for
instance usage.
Power states
The power state represents the last known state of the VM.
The following table provides a description of each instance state and indicates whether it is billed for instance usage or
not.
State
Description
Instance usage billed
Starting
VM is starting up.
"statuses": [
{
"code": "PowerState/starting",
"level": "Info",
"displayStatus": "VM starting"
}
]
Not billed
Running
Normal working state for a VM
"statuses": [
{
"code": "PowerState/running",
"level": "Info",
"displayStatus": "VM running"
}
]
Billed
Stopping
This is a transitional state. When completed, it will show as Stopped.
"statuses": [
{
"code": "PowerState/stopping",
"level": "Info",
"displayStatus": "VM stopping"
}
]
Billed
Stopped
The VM has been shut down from within the guest OS or using the PowerOff APIs.
Hardware is still allocated to the VM and it remains on the host.
"statuses": [
{
"code": "PowerState/stopped",
"level": "Info",
"displayStatus": "VM stopped"
}
]
Billed *
Deallocating
Transitional state. When completed, the VM will show as Deallocated.
"statuses": [
{
"code": "PowerState/deallocating",
"level": "Info",
"displayStatus": "VM deallocating"
}
]
Not billed *
Deallocated
The VM has been stopped successfully and removed from the host.
"statuses": [
{
"code": "PowerState/deallocated",
"level": "Info",
"displayStatus": "VM deallocated"
}
]
Not billed
* Some Azure resources, such as Disks and Networking, incur charges. Software licenses on the instance do not
incur charges.
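The power state is carried in the instance view's statuses array shown above. A minimal sketch of reading it from that JSON (Python; the function name is illustrative and not part of any Azure SDK):

```python
def power_state(statuses):
    # The instance view "statuses" array mixes provisioning and power entries;
    # the power state is the entry whose code starts with "PowerState/".
    for status in statuses:
        if status.get("code", "").startswith("PowerState/"):
            return status["displayStatus"]
    return None

# Example using the "running" entry from the table above.
statuses = [
    {"code": "ProvisioningState/succeeded", "level": "Info",
     "displayStatus": "Provisioning succeeded"},
    {"code": "PowerState/running", "level": "Info",
     "displayStatus": "VM running"},
]
print(power_state(statuses))  # VM running
```

The same approach works for any statuses array returned by the instance view API.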
Provisioning states
A provisioning state is the status of a user-initiated, control-plane operation on the VM. These states are separate
from the power state of a VM.
Create – VM creation.
Update – updates the model for an existing VM. Some non-model changes to a VM, such as Start/Restart, also
fall under Update.
Delete – VM deletion.
Deallocate – the VM is stopped and removed from the host. Deallocating a VM is considered an
update, so it will display provisioning states related to updating.
Here are the transitional operation states after the platform has accepted a user-initiated action:
State
Description
Creating
"statuses": [
{
"code": "ProvisioningState/creating",
"level": "Info",
"displayStatus": "Creating"
}
]
Updating
"statuses": [
{
"code": "ProvisioningState/updating",
"level": "Info",
"displayStatus": "Updating"
}
]
Deleting
"statuses": [
{
"code": "ProvisioningState/deleting",
"level": "Info",
"displayStatus": "Deleting"
}
]
OS provisioning states
If a VM is created with an OS image and not with a specialized image, then the following substates can be observed:
OSProvisioningInprogress
The VM is running, and installation of guest OS is in progress.
"statuses": [
{
"code": "ProvisioningState/creating/OSProvisioningInprogress",
"level": "Info",
"displayStatus": "OS Provisioning In progress"
}
]
OSProvisioningComplete
Short-lived state. The VM quickly transitions to Success unless any extensions need to be installed. Installing
extensions can take time.
"statuses": [
{
"code": "ProvisioningState/creating/OSProvisioningComplete",
"level": "Info",
"displayStatus": "OS Provisioning Complete"
}
]
Note: OS provisioning can transition to Failed if there is an OS failure or the OS doesn't install in time. Customers
will be billed for the VM deployed on the infrastructure.
Once the operation is complete, the VM will transition into one of the following states:
Succeeded – the user-initiated actions have completed.
"statuses": [
{
"code": "ProvisioningState/succeeded",
"level": "Info",
"displayStatus": "Provisioning succeeded",
"time": "time"
}
]
Failed – represents a failed operation. Refer to the error codes to get more information and possible
solutions.
"statuses": [
{
"code": "ProvisioningState/failed/InternalOperationError",
"level": "Error",
"displayStatus": "Provisioning failed",
"message": "Operation abandoned due to internal error. Please try again later.",
"time": "time"
}
]
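Provisioning state codes are slash-delimited: the second segment is the state and, for failures, a third segment carries the error code. A small sketch of splitting them (Python; the helper name is illustrative):

```python
def parse_provisioning_code(code):
    # "ProvisioningState/succeeded"                     -> ("succeeded", None)
    # "ProvisioningState/failed/InternalOperationError" -> ("failed", "InternalOperationError")
    parts = code.split("/")
    state = parts[1] if len(parts) > 1 else None
    error = parts[2] if len(parts) > 2 else None
    return state, error

print(parse_provisioning_code("ProvisioningState/failed/InternalOperationError"))
# ('failed', 'InternalOperationError')
```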
VM instance view
The instance view API provides VM running-state information. For more information, see the Virtual Machines -
Instance View API documentation.
Azure Resources explorer provides a simple UI for viewing the VM running state: Resource Explorer.
Provisioning states are visible on VM properties and instance view. Power states are available in instance view of
VM.
To retrieve the power state of all the VMs in your subscription, use the Virtual Machines - List All API with
parameter statusOnly set to true.
Next steps
To learn more about monitoring your VM, see Monitor virtual machines in Azure.
PUT calls for creation or updates on compute
resources
11/2/2020 • 2 minutes to read • Edit Online
Microsoft.Compute resources do not support the conventional definition of HTTP PUT semantics. Instead, these
resources use PATCH semantics for both the PUT and PATCH verbs.
Create operations apply default values when appropriate. However, resource updates done through PUT or
PATCH do not apply any default properties. Update operations apply strict PATCH semantics.
For example, the disk caching property of a virtual machine defaults to ReadWrite if the resource is an OS disk.
"storageProfile": {
"osDisk": {
"name": "myVMosdisk",
"image": {
"uri": "http://{existing-storage-account-name}.blob.core.windows.net/{existing-container-
name}/{existing-generalized-os-image-blob-name}.vhd"
},
"osType": "Windows",
"createOption": "FromImage",
"caching": "ReadWrite",
"vhd": {
"uri": "http://{existing-storage-account-name}.blob.core.windows.net/{existing-container-
name}/myDisk.vhd"
}
}
},
However, in update operations, when a property is left out or a null value is passed, it remains unchanged and
no default values are applied.
This is important when sending update operations to a resource with the intention of removing an association. If
that resource is a Microsoft.Compute resource, the corresponding property you want to remove needs to be
explicitly called out and assigned a value. To achieve this, users can pass an empty string (""). This will
instruct the platform to remove that association.
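The merge behavior described above can be modeled roughly as follows (Python; this is an illustrative model of the semantics, not the service's actual implementation):

```python
def apply_update(model, update):
    """Merge `update` into `model` with strict PATCH semantics:
    omitted or null properties stay unchanged, and an empty string
    clears (removes) an association. Nested objects merge recursively."""
    result = dict(model)
    for key, value in update.items():
        if value is None:
            continue               # null: property unchanged
        if value == "":
            result.pop(key, None)  # empty string: remove association
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = apply_update(result[key], value)
        else:
            result[key] = value
    return result

vm = {"properties": {"proximityPlacementGroup": "ppg1",
                     "platformFaultDomainCount": 2}}
patched = apply_update(vm, {"properties": {"proximityPlacementGroup": ""}})
print(patched)  # the group association is removed; the fault domain count is kept
```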
IMPORTANT
There is no support for "patching" an array element. Instead, the client has to do a PUT or PATCH request with the entire
contents of the updated array. For example, to detach a data disk from a VM, do a GET request to get the current VM model,
remove the disk to be detached from properties.storageProfile.dataDisks , and do a PUT request with the updated VM
entity.
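For example, detaching a data disk means rewriting the whole dataDisks array rather than patching one element. A sketch of the client-side step between the GET and the PUT (Python; the model shape is abbreviated for illustration):

```python
import copy

def detach_data_disk(vm_model, disk_name):
    """Return a copy of the VM model with the named data disk removed,
    suitable for sending back in a PUT with the entire updated array."""
    vm = copy.deepcopy(vm_model)
    disks = vm["properties"]["storageProfile"]["dataDisks"]
    vm["properties"]["storageProfile"]["dataDisks"] = [
        d for d in disks if d["name"] != disk_name
    ]
    return vm

vm = {"properties": {"storageProfile": {"dataDisks": [
    {"name": "data0", "lun": 0}, {"name": "data1", "lun": 1}]}}}
updated = detach_data_disk(vm, "data1")
print([d["name"] for d in updated["properties"]["storageProfile"]["dataDisks"]])
# ['data0']
```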
Examples
Correct payload to remove a Proximity Placement Groups association
{ "location": "westus", "properties": { "platformFaultDomainCount": 2, "platformUpdateDomainCount": 20,
"proximityPlacementGroup": "" } }
Next Steps
Learn more about Create or Update calls for Virtual Machines and Virtual Machine Scale Sets
Monitoring Azure virtual machines with Azure
Monitor
11/2/2020 • 14 minutes to read • Edit Online
This article describes how to use Azure Monitor to collect and analyze monitoring data from Azure virtual
machines to maintain their health. Virtual machines can be monitored for availability and performance with Azure
Monitor like any other Azure resource, but they're unique among Azure resources because you also need to monitor
the guest operating system and the workloads that run in it.
NOTE
This article provides a complete overview of the concepts and options for monitoring virtual machines in Azure Monitor. To
start monitoring your virtual machines quickly without focusing on the underlying concepts, see Quickstart: Monitor an
Azure virtual machine with Azure Monitor.
Monitoring data
Virtual machines in Azure generate logs and metrics as shown in the following diagram.
Virtual machine host
Virtual machines in Azure generate the following data for the virtual machine host, the same as other Azure
resources, as described in Monitoring data.
Platform metrics - Numerical values that are automatically collected at regular intervals and describe some
aspect of a resource at a particular time. Platform metrics are collected for the virtual machine host, but you
require the diagnostics extension to collect metrics for the guest operating system.
Activity log - Provides insight into the operations on each Azure resource in the subscription from the outside
(the management plane). For a virtual machine, this includes such information as when it was started and any
configuration changes.
Guest operating system
To collect data from the guest operating system of a virtual machine, you require an agent, which runs locally on
each virtual machine and sends data to Azure Monitor. Multiple agents are available for Azure Monitor with each
collecting different data and writing data to different locations. Get a detailed comparison of the different agents at
Overview of the Azure Monitor agents.
Log Analytics agent - Available for virtual machines in Azure, other cloud environments, and on-premises.
Collects data to Azure Monitor Logs. Supports Azure Monitor for VMs and monitoring solutions. This is the
same agent used for System Center Operations Manager.
Dependency agent - Collects data about the processes running on the virtual machine and their dependencies.
Relies on the Log Analytics agent to transmit data into Azure and supports Azure Monitor for VMs, Service
Map, and Wire Data 2.0 solutions.
Azure Diagnostic extension - Available for Azure virtual machines only. Can collect data to multiple
locations but primarily used to collect guest performance data into Azure Monitor Metrics for Windows virtual
machines.
Telegraf agent - Collects performance data from Linux VMs into Azure Monitor Metrics.
Configuration requirements
To enable all features of Azure Monitor for monitoring a virtual machine, you need to collect monitoring data from
the virtual machine host and guest operating system to both Azure Monitor Metrics and Azure Monitor Logs. The
following table lists the configuration that must be performed to enable this collection. You may choose to not
perform all of these steps depending on your particular requirements.
Enable Azure Monitor for VMs: installs the Log Analytics agent and the Dependency agent, collects guest
performance data to Logs, and collects process and dependency details to Logs. Enables performance charts and
workbooks, log queries, and log alerts for guest performance data, plus the dependency map.
Install the diagnostics extension and telegraf agent: collects guest performance data to Metrics. Enables metrics
explorer and metric alerts for the guest.
Configure Log Analytics workspace: collects events from the guest. Enables log queries and log alerts for guest
events.
Create diagnostic setting for virtual machine: collects platform metrics and the Activity log to Logs. Enables log
queries and log alerts for host metrics, and log queries for the Activity log.
Select Advanced Settings from the workspace menu and then Data to configure data sources. For Windows
agents, select Windows Event Logs and add common event logs such as System and Application. For Linux
agents, select Syslog and add common facilities such as kern and daemon. See Agent data sources in Azure
Monitor for a list of the data sources available and details on configuring them.
NOTE
You can configure performance counters to be collected from the workspace configuration, but this may be redundant with
the performance counters collected by Azure Monitor for VMs. Azure Monitor for VMs collects the most common set of
counters at a frequency of once per minute. Only configure performance counters to be collected by the workspace if you
want to collect counters not already collected by Azure Monitor for VMs or if you have existing queries using performance
data.
See Install and configure Telegraf for details on configuring the Telegraf agents on Linux virtual machines. The
Diagnostic setting menu option is available for Linux, but it will only allow you to send data to Azure storage.
Collect platform metrics and Activity log
You can view the platform metrics and Activity log collected for each virtual machine host in the Azure portal.
Collect this data into the same Log Analytics workspace as Azure Monitor for VMs to analyze it with the other
monitoring data collected for the virtual machine. This collection is configured with a diagnostic setting. Collect the
Activity log with a diagnostic setting for the subscription.
Collect platform metrics with a diagnostic setting for the virtual machine. Unlike other Azure resources, you cannot
create a diagnostic setting for a virtual machine in the Azure portal but must use another method. The following
examples show how to collect metrics for a virtual machine using both PowerShell and CLI.
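Collecting these metrics can be sketched in PowerShell as follows, assuming the Az.Monitor module; the resource IDs are placeholders and the exact parameter set may differ by module version:

```powershell
$vmId = "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM"
$workspaceId = "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"

# Create a diagnostic setting that sends the VM host's platform metrics
# to the Log Analytics workspace.
Set-AzDiagnosticSetting `
    -Name "collect-host-metrics" `
    -ResourceId $vmId `
    -WorkspaceId $workspaceId `
    -Enabled $true `
    -MetricCategory AllMetrics
```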
Overview: displays platform metrics for the virtual machine host. Click on a graph to work with this data in
metrics explorer.
Activity log: Activity log entries filtered for the current virtual machine.
Insights: opens Azure Monitor for VMs with the map for the current virtual machine selected.
Metrics: opens metrics explorer with the scope set to the current virtual machine.
Diagnostic settings: enable and configure the diagnostics extension for the current virtual machine.
Advisor recommendations: recommendations for the current virtual machine from Azure Advisor.
Logs: opens Log Analytics with the scope set to the current virtual machine.
Virtual Machine Host: host metrics automatically collected for all Azure virtual machines (detailed list of metrics
at Microsoft.Compute/virtualMachines). Collected automatically with no configuration required.
Guest (classic): limited set of guest operating system and application performance data. Available in metrics
explorer but not in other Azure Monitor features such as metric alerts. Requires the diagnostic extension; data is
read from Azure storage.
Virtual Machine Guest: guest operating system and application performance data available to all Azure Monitor
features using metrics. For Windows, requires the diagnostic extension with the Azure Monitor sink enabled. For
Linux, requires the Telegraf agent.
Analyzing log data
Azure virtual machines will collect the following data to Azure Monitor Logs.
Azure Monitor for VMs enables the collection of a predetermined set of performance counters that are written to
the InsightsMetrics table. This is the same table used by Azure Monitor for Containers.
Data sources from the guest operating system: enable the Log Analytics agent and configure data sources. See
the documentation for each data source.
NOTE
Performance data collected by the Log Analytics agent writes to the Perf table, while Azure Monitor for VMs collects it to
the InsightsMetrics table. This is the same data, but the tables have a different structure. If you have existing queries based
on Perf, they will need to be rewritten to use InsightsMetrics.
Alerts
Alerts in Azure Monitor proactively notify you when important conditions are found in your monitoring data and
potentially launch an action such as starting a Logic App or calling a webhook. Alert rules define the logic used to
determine when an alert should be created. Azure Monitor collects the data used by alert rules, but you need to
create rules to define alerting conditions in your Azure subscription.
The following sections describe the types of alert rules and recommendations on when you should use each. This
recommendation is based on the functionality and cost of the alert rule type. For detailed pricing of alerts, see Azure
Monitor pricing.
Activity log alert rules
Activity log alert rules fire when an entry matching particular criteria is created in the activity log. They have no
cost so they should be your first choice if the logic you require is in the activity log.
The target resource for activity log alerts can be a specific virtual machine, all virtual machines in a resource
group, or all virtual machines in a subscription.
For example, create an alert if a critical virtual machine is stopped by selecting Power Off Virtual Machine as
the signal name.
The following log query returns computers in a resource group whose most recent heartbeat is older than 10
minutes, which you can use in a log alert rule to detect unresponsive VMs:
Heartbeat
| where ResourceGroup == "my-resource-group"
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(10m)
To create an alert if an excessive number of failed logons have occurred on any Windows virtual machines in the
subscription, use the following query which returns a record for each failed logon event in the past hour. Use a
threshold set to the number of failed logons that you'll allow.
Event
| where TimeGenerated > ago(1h)
| where EventID == 4625
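As a variant of the query above, the events can also be summarized per computer, so the alert threshold applies to a count per machine rather than to individual records (a sketch; adapt the threshold logic to your alert rule):

```kusto
Event
| where TimeGenerated > ago(1h)
| where EventID == 4625
| summarize FailedLogons = count() by Computer
```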
Next steps
Learn how to analyze data in Azure Monitor logs using log queries.
Learn about alerts using metrics and logs in Azure Monitor.
Azure boot diagnostics
11/2/2020 • 2 minutes to read • Edit Online
Boot diagnostics is a debugging feature for Azure virtual machines (VM) that allows diagnosis of VM boot failures.
Boot diagnostics enables a user to observe the state of their VM as it is booting up by collecting serial log
information and screenshots.
IMPORTANT
Azure customers will not be charged for the storage costs associated with boot diagnostics using a managed storage account
through October 2020.
The boot diagnostics data blobs (which consist of logs and snapshot images) are stored in a managed storage account.
Customers will be charged only on used GiBs by the blobs, not on the disk's provisioned size. The snapshot meters will be
used for billing of the managed storage account. Because the managed accounts are created on either Standard LRS or
Standard ZRS, customers will be charged at $0.05/GB per month for the size of their diagnostic data blobs only. For more
information on this pricing, see Managed disks pricing. Customers will see this charge tied to their VM resource URI.
Next steps
Learn more about the Azure Serial Console and how to use boot diagnostics to troubleshoot virtual machines in
Azure.
Backup and restore options for Linux virtual machines
in Azure
11/2/2020 • 2 minutes to read • Edit Online
You can protect your data by taking backups at regular intervals. There are several backup options available for
VMs, depending on your use-case.
Azure Backup
For backing up Azure VMs running production workloads, use Azure Backup. Azure Backup supports application-
consistent backups for both Windows and Linux VMs. Azure Backup creates recovery points that are stored in geo-
redundant recovery vaults. When you restore from a recovery point, you can restore the whole VM or just specific
files.
For a simple, hands-on introduction to Azure Backup for Azure VMs, see the "Back up Azure virtual machines"
tutorial for Linux or Windows.
For more information on how Azure Backup works, see Plan your VM backup infrastructure in Azure
Managed snapshots
In development and test environments, snapshots provide a quick and simple option for backing up VMs that use
Managed Disks. A managed snapshot is a read-only full copy of a managed disk. Snapshots exist independent of
the source disk and can be used to create new managed disks for rebuilding a VM. They are billed based on the
used portion of the disk. For example, if you create a snapshot of a managed disk with a provisioned capacity of 64
GB and an actual used data size of 10 GB, the snapshot will be billed only for the used data size of 10 GB.
For more information on creating snapshots, see:
Create copy of VHD stored as a Managed Disk using Snapshots in Windows
Create copy of VHD stored as a Managed Disk using Snapshots in Linux
Next steps
You can try out Azure Backup by following the "Back up Windows virtual machines tutorial" for Linux or Windows.
Deploy VMs to dedicated hosts using the Azure
PowerShell
11/2/2020 • 6 minutes to read • Edit Online
This article guides you through how to create an Azure dedicated host to host your virtual machines (VMs).
Make sure that you have installed Azure PowerShell version 2.8.0 or later, and that you are signed in to an Azure
account with Connect-AzAccount .
Limitations
Virtual machine scale sets are not currently supported on dedicated hosts.
The sizes and hardware types available for dedicated hosts vary by region. Refer to the host pricing page to
learn more.
$rgName = "myDHResourceGroup"
$location = "EastUS"
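Before creating a host, you need a resource group and a host group. A minimal PowerShell sketch using the variables above (the fault domain count and zone values are illustrative):

```powershell
# Create the resource group that will contain the host group and hosts.
New-AzResourceGroup -Name $rgName -Location $location

# Create a host group in availability zone 1 with 2 fault domains.
$hostGroup = New-AzHostGroup `
    -Location $location `
    -Name myHostGroup `
    -PlatformFaultDomain 2 `
    -ResourceGroupName $rgName `
    -Zone 1
```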
Add the -SupportAutomaticPlacement true parameter to have your VMs and scale set instances automatically
placed on hosts within a host group. For more information, see Manual vs. automatic placement.
IMPORTANT
Automatic placement is currently in public preview. To participate in the preview, complete the preview onboarding survey at
https://aka.ms/vmss-adh-preview. This preview version is provided without a service level agreement, and it's not
recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
Create a host
Now let's create a dedicated host in the host group. In addition to a name for the host, you are required to provide
the SKU for the host. Host SKU captures the supported VM series as well as the hardware generation for your
dedicated host.
For more information about the host SKUs and pricing, see Azure Dedicated Host pricing.
If you set a fault domain count for your host group, you will be asked to specify the fault domain for your host. In
this example, we set the fault domain for the host to 1.
$dHost = New-AzHost `
-HostGroupName $hostGroup.Name `
-Location $location -Name myHost `
-ResourceGroupName $rgName `
-Sku DSv3-Type1 `
-AutoReplaceOnFailure 1 `
-PlatformFaultDomain 1
Create a VM
Create a virtual machine on the dedicated host.
If you specified an availability zone when creating your host group, you are required to use the same zone when
creating the virtual machine. For this example, because our host group is in zone 1, we need to create the VM in
zone 1.
$cred = Get-Credential
New-AzVM `
-Credential $cred `
-ResourceGroupName $rgName `
-Location $location `
-Name myVM `
-HostId $dhost.Id `
-Image Win2016Datacenter `
-Zone 1 `
-Size Standard_D4s_v3
WARNING
If you create a virtual machine on a host which does not have enough resources, the virtual machine will be created in a
FAILED state.
Get-AzHost `
-ResourceGroupName $rgName `
-Name myHost `
-HostGroupName $hostGroup.Name `
-InstanceView
When you deploy a scale set, you specify the host group.
New-AzVmss `
-ResourceGroupName "myResourceGroup" `
-Location "EastUS" `
-VMScaleSetName "myDHScaleSet" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-PublicIpAddressName "myPublicIPAddress" `
-LoadBalancerName "myLoadBalancer" `
-UpgradePolicyMode "Automatic" `
-HostGroupId $hostGroup.Id
If you want to manually choose which host to deploy the scale set to, add --host and the name of the host.
Add an existing VM
You can add an existing VM to a dedicated host, but the VM must first be Stop\Deallocated. Before you move a VM
to a dedicated host, make sure that the VM configuration is supported:
The VM size must be in the same size family as the dedicated host. For example, if your dedicated host is DSv3,
then the VM size could be Standard_D4s_v3, but it could not be a Standard_A4_v2.
The VM needs to be located in same region as the dedicated host.
The VM can't be part of a proximity placement group. Remove the VM from the proximity placement group
before moving it to a dedicated host. For more information, see Move a VM out of a proximity placement group
The VM can't be in an availability set.
If the VM is in an availability zone, it must be the same availability zone as the host group. The availability zone
settings for the VM and the host group must match.
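The eligibility rules above can be sketched as a pre-flight check. This is a hypothetical helper with made-up field names, not part of the Az module; the real checks are enforced by Azure when you update the VM:

```python
def can_move_to_dedicated_host(vm: dict, host: dict) -> list:
    """Return the reasons a VM can't be placed on a host (empty list = eligible)."""
    problems = []
    # Size family must match the host's SKU family (a DSv3 host takes
    # Standard_D*s_v3 sizes, not Standard_A4_v2).
    if vm["size_family"] != host["size_family"]:
        problems.append("size family mismatch")
    if vm["region"] != host["region"]:
        problems.append("different region")
    if vm.get("proximity_placement_group"):
        problems.append("VM is in a proximity placement group")
    if vm.get("availability_set"):
        problems.append("VM is in an availability set")
    # Zone settings must match, including the case where neither is zonal.
    if vm.get("zone") != host.get("zone"):
        problems.append("availability zone mismatch")
    return problems

vm = {"size_family": "DSv3", "region": "eastus", "zone": "1"}
host = {"size_family": "DSv3", "region": "eastus", "zone": "1"}
print(can_move_to_dedicated_host(vm, host))  # []
```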
Replace the values of the variables with your own information.
$vmRGName = "movetohost"
$vmName = "myVMtoHost"
$dhRGName = "myDHResourceGroup"
$dhGroupName = "myHostGroup"
$dhName = "myHost"
$myDH = Get-AzHost `
-HostGroupName $dhGroupName `
-ResourceGroupName $dhRGName `
-Name $dhName
$myVM = Get-AzVM `
-ResourceGroupName $vmRGName `
-Name $vmName
$myVM.Host.Id = $myDH.Id
Stop-AzVM `
-ResourceGroupName $vmRGName `
-Name $vmName -Force
Update-AzVM `
-ResourceGroupName $vmRGName `
-VM $myVM -Debug
Start-AzVM `
-ResourceGroupName $vmRGName `
-Name $vmName
Clean up
You are being charged for your dedicated hosts even when no virtual machines are deployed. You should delete
any hosts you are currently not using to save costs.
You can only delete a host when there are no longer any virtual machines using it. Delete the VMs using Remove-
AzVM.
After deleting the VMs, you can delete the host using Remove-AzHost.
Once you have deleted all of your hosts, you may delete the host group using Remove-AzHostGroup.
You can also delete the entire resource group in a single command using Remove-AzResourceGroup. This will
delete all resources created in the group, including all of the VMs, hosts and host groups.
Next steps
There is a sample template, found here, that uses both zones and fault domains for maximum resiliency in a
region.
You can also deploy dedicated hosts using the Azure portal.
Deploy VMs and scale sets to dedicated hosts using
the portal
11/2/2020 • 6 minutes to read
This article guides you through how to create an Azure dedicated host to host your virtual machines (VMs).
IMPORTANT
This article also covers Automatic placement of VMs and scale set instances. Automatic placement is currently in public
preview. To participate in the preview, complete the preview onboarding survey at https://aka.ms/vmss-adh-preview. To access
the preview feature in the Azure portal, you must use this URL: https://aka.ms/vmssadh. This preview version is provided
without a service level agreement, and it's not recommended for production workloads. Certain features might not be
supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure
Previews.
Limitations
The sizes and hardware types available for dedicated hosts vary by region. Refer to the host pricing page to
learn more.
Create a VM
1. Choose Create a resource in the upper left corner of the Azure portal.
2. In the search box above the list of Azure Marketplace resources, search for and select the image you want to use,
then choose Create.
3. In the Basics tab, under Project details, make sure the correct subscription is selected and then select
myDedicatedHostsRG as the Resource group.
4. Under Instance details, type myVM for the Virtual machine name and choose East US for your Location.
5. In Availability options, select Availability zone, then select 1 from the drop-down.
6. For the size, select Change size. In the list of available sizes, choose one from the Esv3 series, like Standard
E2s v3. You may need to clear the filter in order to see all of the available sizes.
7. Complete the rest of the fields on the Basics tab as needed.
8. At the top of the page, select the Advanced tab and in the Host section, select myHostGroup for Host group
and myHost for the Host.
9. Leave the remaining defaults and then select the Review + create button at the bottom of the page.
10. When you see the message that validation has passed, select Create.
It will take a few minutes for your VM to be deployed.
Create a scale set (preview)
IMPORTANT
Virtual Machine Scale Sets on Dedicated Hosts is currently in public preview.
To participate in the preview, complete the preview onboarding survey at https://aka.ms/vmss-adh-preview.
To access the preview feature in the Azure portal, you must use this URL: https://aka.ms/vmssadh.
This preview version is provided without a service level agreement, and it's not recommended for production workloads.
Certain features might not be supported or might have constrained capabilities.
For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
When you deploy a scale set, you specify the host group.
1. Search for Scale set and select Virtual machine scale sets from the list.
2. Select Add to create a new scale set.
3. Complete the fields on the Basics tab as you usually would, but make sure you select a VM size that is from the
series you chose for your dedicated host, like Standard E2s v3.
4. On the Advanced tab, for Spreading algorithm select Max spreading.
5. In Host group, select the host group from the drop-down. If you recently created the group, it might take a
minute to get added to the list.
Add an existing VM
You can add an existing VM to a dedicated host, but the VM must first be Stop\Deallocated. Before you move a VM
to a dedicated host, make sure that the VM configuration is supported:
The VM size must be in the same size family as the dedicated host. For example, if your dedicated host is DSv3,
then the VM size could be Standard_D4s_v3, but it could not be a Standard_A4_v2.
The VM needs to be located in same region as the dedicated host.
The VM can't be part of a proximity placement group. Remove the VM from the proximity placement group
before moving it to a dedicated host. For more information, see Move a VM out of a proximity placement group
The VM can't be in an availability set.
If the VM is in an availability zone, it must be the same availability zone as the host group. The availability zone
settings for the VM and the host group must match.
Move the VM to a dedicated host using the portal.
1. Open the page for the VM.
2. Select Stop to stop\deallocate the VM.
3. Select Configuration from the left menu.
4. Select a host group and a host from the drop-down menus.
5. When you are done, select Save at the top of the page.
6. After the VM has been added to the host, select Overview from the left menu.
7. At the top of the page, select Start to restart the VM.
Next steps
For more information, see the Dedicated hosts overview.
There is a sample template, found here, that uses both zones and fault domains for maximum resiliency in a
region.
You can also deploy a dedicated host using the Azure CLI or PowerShell.
Deploy Spot VMs using the Azure portal
11/2/2020 • 2 minutes to read
Using Spot VMs allows you to take advantage of our unused capacity at a significant cost savings. At any point in
time when Azure needs the capacity back, the Azure infrastructure will evict Spot VMs. Therefore, Spot VMs are
great for workloads that can handle interruptions like batch processing jobs, dev/test environments, large compute
workloads, and more.
Pricing for Spot VMs is variable, based on region and SKU. For more information, see VM pricing for Linux and
Windows. For more information about setting the max price, see Spot VMs - Pricing.
You have the option to set a max price you are willing to pay, per hour, for the VM. The max price for a Spot VM can be
set in US dollars (USD), using up to 5 decimal places. For example, the value 0.05701 would be a max price of
$0.05701 USD per hour. If you set the max price to be -1 , the VM won't be evicted based on price. The price for
the VM will be the current price for Spot or the price for a standard VM, whichever is less, as long as there is
capacity and quota available.
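The max price rules can be summarized in a short validation sketch (a hypothetical client-side check; Azure performs the authoritative validation service-side):

```python
from decimal import Decimal

def is_valid_max_price(price: str) -> bool:
    """-1 (never price-evicted) or a positive amount with at most 5 decimal places."""
    value = Decimal(price)
    if value == -1:
        return True
    decimal_places = -value.as_tuple().exponent  # digits after the decimal point
    return value > 0 and decimal_places <= 5

print(is_valid_max_price("-1"))        # True
print(is_valid_max_price("0.05701"))   # True
print(is_valid_max_price("0.123456"))  # False: six decimal places
```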
When the VM is evicted, you have the option to either delete the VM and the underlying disk or deallocate the VM
so that it can be restarted later.
Create the VM
When you are deploying a VM, you can choose to use an Azure spot instance.
On the Basics tab, in the Instance details section, No is the default for using an Azure spot instance.
If you select Yes , the section expands and you can choose your eviction type and eviction policy.
You can also compare the pricing and eviction rates with other similar regions by selecting View pricing history
and compare prices in nearby regions.
In this example, the Canada Central region is less expensive and has a lower eviction rate than the East US region.
You can change the region by selecting the choice that works the best for you and then selecting OK .
Simulate an eviction
You can simulate an eviction of a Spot VM to test how well your application will respond to a sudden eviction.
Replace the following with your information:
subscriptionId
resourceGroupName
vmName
POST
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Micro
soft.Compute/virtualMachines/{vmName}/simulateEviction?api-version=2020-06-01
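For example, the request URI can be assembled from those three values like this (a sketch; obtaining the Authorization bearer token for the POST is not shown):

```python
def simulate_eviction_url(subscription_id: str, resource_group: str, vm_name: str) -> str:
    # Build the simulateEviction request URI from the three placeholders above.
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Compute/virtualMachines"
        f"/{vm_name}/simulateEviction?api-version=2020-06-01"
    )

# POST this URL with an "Authorization: Bearer <token>" header to schedule
# the simulated eviction.
print(simulate_eviction_url("1111-2222", "mySpotRG", "mySpotVM"))
```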
Next steps
You can also create Spot VMs using PowerShell, CLI, or a template.
Deploy Spot VMs using Azure PowerShell
11/2/2020 • 2 minutes to read
Using Spot VMs allows you to take advantage of our unused capacity at a significant cost savings. At any point in
time when Azure needs the capacity back, the Azure infrastructure will evict Spot VMs. Therefore, Spot VMs are
great for workloads that can handle interruptions like batch processing jobs, dev/test environments, large compute
workloads, and more.
Pricing for Spot VMs is variable, based on region and SKU. For more information, see VM pricing for Linux and
Windows. For more information about setting the max price, see Spot VMs - Pricing.
You have the option to set a max price you are willing to pay, per hour, for the VM. The max price for a Spot VM can be
set in US dollars (USD), using up to 5 decimal places. For example, the value 0.98765 would be a max price of
$0.98765 USD per hour. If you set the max price to be -1 , the VM won't be evicted based on price. The price for
the VM will be the current price for Spot or the price for a standard VM, whichever is less, as long as there is
capacity and quota available.
Create the VM
Create a Spot VM using New-AzVmConfig to create the configuration. Include -Priority Spot and set -MaxPrice
to either:
-1 so the VM is not evicted based on price.
a dollar amount, up to 5 decimal places. For example, -MaxPrice .98765 means that the VM will be deallocated once the
price for a Spot VM goes above $.98765 per hour.
This example creates a Spot VM that will not be deallocated based on pricing (only when Azure needs the capacity
back). The eviction policy is set to deallocate the VM, so that it can be restarted at a later time. If you want to delete
the VM and the underlying disk when the VM is evicted, set -EvictionPolicy to Delete in New-AzVMConfig.
$resourceGroup = "mySpotRG"
$location = "eastus"
$vmName = "mySpotVM"
$cred = Get-Credential -Message "Enter a username and password for the virtual machine."
New-AzResourceGroup -Name $resourceGroup -Location $location
$subnetConfig = New-AzVirtualNetworkSubnetConfig `
-Name mySubnet -AddressPrefix 192.168.1.0/24
$vnet = New-AzVirtualNetwork -ResourceGroupName $resourceGroup `
-Location $location -Name MYvNET -AddressPrefix 192.168.0.0/16 `
-Subnet $subnetConfig
$pip = New-AzPublicIpAddress -ResourceGroupName $resourceGroup -Location $location `
-Name "mypublicdns$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleRDP -Protocol Tcp `
-Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
-DestinationPortRange 3389 -Access Allow
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName $resourceGroup -Location $location `
-Name myNetworkSecurityGroup -SecurityRules $nsgRuleRDP
$nic = New-AzNetworkInterface -Name myNic -ResourceGroupName $resourceGroup -Location $location `
-SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id
$vmConfig = New-AzVMConfig -VMName $vmName -VMSize Standard_D1 -Priority "Spot" -MaxPrice -1 -EvictionPolicy Deallocate | `
Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred | `
Set-AzVMSourceImage -PublisherName MicrosoftWindowsServer -Offer WindowsServer -Skus 2016-Datacenter -Version latest | `
Add-AzVMNetworkInterface -Id $nic.Id
After the VM is created, you can query to see the max price for all of the VMs in the resource group.
Simulate an eviction
You can simulate an eviction of a Spot VM to test how well your application will respond to a sudden eviction.
Replace the following with your information:
subscriptionId
resourceGroupName
vmName
POST
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Micro
soft.Compute/virtualMachines/{vmName}/simulateEviction?api-version=2020-06-01
Next steps
You can also create a Spot VM using the Azure CLI, portal or a template.
Query current pricing information using the Azure retail prices API for information about Spot pricing. The
meterName and skuName will both contain Spot .
Using Spot VMs allows you to take advantage of our unused capacity at a significant cost savings. At any point in
time when Azure needs the capacity back, the Azure infrastructure will evict Spot VMs. Therefore, Spot VMs are
great for workloads that can handle interruptions like batch processing jobs, dev/test environments, large
compute workloads, and more.
Pricing for Spot VMs is variable, based on region and SKU. For more information, see VM pricing for Linux and
Windows.
You have the option to set a max price you are willing to pay, per hour, for the VM. The max price for a Spot VM can be
set in US dollars (USD), using up to 5 decimal places. For example, the value 0.98765 would be a max price of
$0.98765 USD per hour. If you set the max price to be -1 , the VM won't be evicted based on price. The price for
the VM will be the current price for Spot or the price for a standard VM, whichever is less, as long as there is
capacity and quota available. For more information about setting the max price, see Spot VMs - Pricing.
Use a template
For Spot template deployments, use "apiVersion": "2019-03-01" or later. Add the priority , evictionPolicy , and
billingProfile properties to your template:
"priority": "Spot",
"evictionPolicy": "Deallocate",
"billingProfile": {
"maxPrice": -1
}
Here is a sample template with the added properties for a Spot VM. Replace the resource names with your own
and <password> with a password for the local administrator account on the VM.
{
"$schema": "http://schema.management.azure.com/schemas/2019-03-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
},
"variables": {
"vnetId": "/subscriptions/ec9fcd04-e188-48b9-abfc-
abcd515f1836/resourceGroups/spotVM/providers/Microsoft.Network/virtualNetworks/spotVM",
"subnetName": "default",
"networkInterfaceName": "spotVMNIC",
"publicIpAddressName": "spotVM-ip",
"publicIpAddressType": "Dynamic",
"publicIpAddressSku": "Basic",
"virtualMachineName": "spotVM",
"osDiskType": "Premium_LRS",
"virtualMachineSize": "Standard_D2s_v3",
"adminUsername": "azureuser",
"adminPassword": "<password>",
"diagnosticsStorageAccountName": "diagstoragespot2019",
"diagnosticsStorageAccountId": "Microsoft.Storage/storageAccounts/diagstoragespot2019",
"diagnosticsStorageAccountType": "Standard_LRS",
"diagnosticsStorageAccountKind": "Storage",
"subnetRef": "[concat(variables('vnetId'), '/subnets/', variables('subnetName'))]"
"subnetRef": "[concat(variables('vnetId'), '/subnets/', variables('subnetName'))]"
},
"resources": [
{
"name": "spotVM",
"type": "Microsoft.Network/networkInterfaces",
"apiVersion": "2019-03-01",
"location": "eastus",
"dependsOn": [
"[concat('Microsoft.Network/publicIpAddresses/', variables('publicIpAddressName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"subnet": {
"id": "[variables('subnetRef')]"
},
"privateIPAllocationMethod": "Dynamic",
"publicIpAddress": {
"id": "[resourceId(resourceGroup().name,
'Microsoft.Network/publicIpAddresses', variables('publicIpAddressName'))]"
}
}
}
]
}
},
{
"name": "[variables('publicIpAddressName')]",
"type": "Microsoft.Network/publicIpAddresses",
"apiVersion": "2019-02-01",
"location": "eastus",
"properties": {
"publicIpAllocationMethod": "[variables('publicIpAddressType')]"
},
"sku": {
"name": "[variables('publicIpAddressSku')]"
}
},
{
"name": "[variables('virtualMachineName')]",
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2019-03-01",
"location": "eastus",
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', variables('networkInterfaceName'))]",
"[concat('Microsoft.Storage/storageAccounts/', variables('diagnosticsStorageAccountName'))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[variables('virtualMachineSize')]"
},
"storageProfile": {
"osDisk": {
"createOption": "fromImage",
"managedDisk": {
"storageAccountType": "[variables('osDiskType')]"
}
},
"imageReference": {
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "18.04-LTS",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaces": [
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces',
variables('networkInterfaceName'))]"
}
]
},
"osProfile": {
"computerName": "[variables('virtualMachineName')]",
"adminUsername": "[variables('adminUsername')]",
"adminPassword": "[variables('adminPassword')]"
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[concat('https://', variables('diagnosticsStorageAccountName'),
'.blob.core.windows.net/')]"
}
},
"priority": "Spot",
"evictionPolicy": "Deallocate",
"billingProfile": {
"maxPrice": -1
}
}
},
{
"name": "[variables('diagnosticsStorageAccountName')]",
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2019-04-01",
"location": "eastus",
"properties": {},
"kind": "[variables('diagnosticsStorageAccountKind')]",
"sku": {
"name": "[variables('diagnosticsStorageAccountType')]"
}
}
],
"outputs": {
"adminUsername": {
"type": "string",
"value": "[variables('adminUsername')]"
}
}
}
Simulate an eviction
You can simulate an eviction of a Spot VM to test how well your application will respond to a sudden eviction.
Replace the following with your information:
subscriptionId
resourceGroupName
vmName
POST
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Micro
soft.Compute/virtualMachines/{vmName}/simulateEviction?api-version=2020-06-01
Next steps
You can also create a Spot VM using Azure PowerShell or the Azure CLI.
Query current pricing information using the Azure retail prices API for information about Spot pricing. The
meterName and skuName will both contain Spot .
Here are some possible error codes you could receive when using Spot VMs and scale sets.

SkuNotAvailable
Message: The requested tier for resource '<resource>' is currently not available in location '<location>' for subscription '<subscriptionID>'. Please try another tier or deploy to a different location.
Description: There is not enough Azure Spot capacity in this location to create your VM or scale set instance.

EvictionPolicyCanBeSetOnlyOnAzureSpotVirtualMachines
Message: Eviction policy can be set only on Azure Spot Virtual Machines.
Description: This VM is not a Spot VM, so you can't set the eviction policy.

AzureSpotVMNotSupportedInAvailabilitySet
Message: Azure Spot Virtual Machine is not supported in Availability Set.
Description: You need to choose to either use a Spot VM or use a VM in an availability set; you can't choose both.

AzureSpotFeatureNotEnabledForSubscription
Message: Subscription not enabled with Azure Spot feature.
Description: Use a subscription that supports Spot VMs.

VMPriorityCannotBeApplied
Message: The specified priority value '{0}' cannot be applied to the Virtual Machine '{1}' since no priority was specified when the Virtual Machine was created.
Description: Specify the priority when the VM is created.

SpotPriceGreaterThanProvidedMaxPrice
Message: Unable to perform operation '{0}' since the provided max price '{1} USD' is lower than the current spot price '{2} USD' for Azure Spot VM size '{3}'.
Description: Select a higher max price. For more information, see pricing information for Linux or Windows.

MaxPriceValueInvalid
Message: Invalid max price value. The only supported values for max price are -1 or a decimal greater than zero. Max price value of -1 indicates the Azure Spot virtual machine will not be evicted for price reasons.
Description: Enter a valid max price. For more information, see pricing for Linux or Windows.

MaxPriceChangeNotAllowedForAllocatedVMs
Message: Max price change is not allowed when the VM '{0}' is currently allocated. Please deallocate and try again.
Description: Stop\Deallocate the VM so that you can change the max price.

MaxPriceChangeNotAllowed
Message: Max price change is not allowed.
Description: You cannot change the max price for this VM.

AzureSpotIsNotSupportedForThisAPIVersion
Message: Azure Spot is not supported for this API version.
Description: The API version needs to be 2019-03-01.

AzureSpotIsNotSupportedForThisVMSize
Message: Azure Spot is not supported for this VM size {0}.
Description: Select another VM size. For more information, see Spot Virtual Machines.

MaxPriceIsSupportedOnlyForAzureSpotVirtualMachines
Message: Max price is supported only for Azure Spot Virtual Machines.
Description: For more information, see Spot Virtual Machines.

MoveResourcesWithAzureSpotVMNotSupported
Message: The Move resources request contains an Azure Spot virtual machine. This is currently not supported. Please check the error details for virtual machine Ids.
Description: You cannot move Spot VMs.

MoveResourcesWithAzureSpotVmssNotSupported
Message: The Move resources request contains an Azure Spot virtual machine scale set. This is currently not supported. Please check the error details for virtual machine scale set Ids.
Description: You cannot move Spot scale set instances.

AzureSpotVMNotSupportedInVmssWithVMOrchestrationMode
Message: Azure Spot Virtual Machine is not supported in Virtual Machine Scale Set with VM Orchestration mode.
Description: Set the orchestration mode to virtual machine scale set in order to use Spot instances.
This article explains how to use the Azure Resource Manager REST API with Azure Resource Manager templates
(ARM templates) to deploy your resources to Azure.
You can either include your template in the request body or link to a file. When using a file, it can be a local file or
an external file that is available through a URI. When your template is in a storage account, you can restrict access
to the template and provide a shared access signature (SAS) token during deployment.
Deployment scope
You can target your deployment to a resource group, Azure subscription, management group, or tenant. Depending
on the scope of the deployment, you use different commands.
To deploy to a resource group , use Deployments - Create. The request is sent to:
PUT
https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers
/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-06-01
To deploy to a subscription , use Deployments - Create At Subscription Scope. The request is sent to:
PUT
https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Resources/deployments/{d
eploymentName}?api-version=2020-06-01
For more information about subscription level deployments, see Create resource groups and resources at
the subscription level.
To deploy to a management group , use Deployments - Create At Management Group Scope. The request
is sent to:
PUT
https://management.azure.com/providers/Microsoft.Management/managementGroups/{groupId}/providers/Microso
ft.Resources/deployments/{deploymentName}?api-version=2020-06-01
For more information about management group level deployments, see Create resources at the
management group level.
To deploy to a tenant , use Deployments - Create Or Update At Tenant Scope. The request is sent to:
PUT https://management.azure.com/providers/Microsoft.Resources/deployments/{deploymentName}?api-
version=2020-06-01
For more information about tenant level deployments, see Create resources at the tenant level.
The examples in this article use resource group deployments.
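The four endpoints differ only in the URL prefix before /providers/Microsoft.Resources/deployments/. A sketch that assembles the request URI for each scope (deployment_url is a hypothetical helper, not part of any Azure SDK):

```python
def deployment_url(name: str, scope: str, subscription: str = "",
                   resource_group: str = "", group_id: str = "") -> str:
    """Build the PUT request URI for each deployment scope."""
    prefixes = {
        "resourceGroup": f"/subscriptions/{subscription}/resourcegroups/{resource_group}",
        "subscription": f"/subscriptions/{subscription}",
        "managementGroup": f"/providers/Microsoft.Management/managementGroups/{group_id}",
        "tenant": "",  # tenant scope has no prefix
    }
    return ("https://management.azure.com" + prefixes[scope] +
            f"/providers/Microsoft.Resources/deployments/{name}?api-version=2020-06-01")

print(deployment_url("myDeployment", "tenant"))
```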
Deploy with the REST API
1. Set common parameters and headers, including authentication tokens.
2. If you're deploying to a resource group that doesn't exist, create the resource group. Provide your
subscription ID, the name of the new resource group, and location that you need for your solution. For more
information, see Create a resource group.
PUT
https://management.azure.com/subscriptions/<YourSubscriptionId>/resourcegroups/<YourResourceGroupName>?
api-version=2020-06-01
{
"location": "West US",
"tags": {
"tagname1": "tagvalue1"
}
}
3. Before deploying your template, you can preview the changes the template will make to your environment.
Use the what-if operation to verify that the template makes the changes that you expect. What-if also
validates the template for errors.
4. To deploy a template, provide your subscription ID, the name of the resource group, and the name of the
deployment in the request URI.
PUT
https://management.azure.com/subscriptions/<YourSubscriptionId>/resourcegroups/<YourResourceGroupName>/p
roviders/Microsoft.Resources/deployments/<YourDeploymentName>?api-version=2019-10-01
In the request body, provide a link to your template and parameter file. For more information about the
parameter file, see Create Resource Manager parameter file.
Notice the mode is set to Incremental . To run a complete deployment, set mode to Complete . Be careful
when using the complete mode as you can inadvertently delete resources that aren't in your template.
{
"properties": {
"templateLink": {
"uri": "http://mystorageaccount.blob.core.windows.net/templates/template.json",
"contentVersion": "1.0.0.0"
},
"parametersLink": {
"uri": "http://mystorageaccount.blob.core.windows.net/templates/parameters.json",
"contentVersion": "1.0.0.0"
},
"mode": "Incremental"
}
}
If you want to log response content, request content, or both, include debugSetting in the request.
{
"properties": {
"templateLink": {
"uri": "http://mystorageaccount.blob.core.windows.net/templates/template.json",
"contentVersion": "1.0.0.0"
},
"parametersLink": {
"uri": "http://mystorageaccount.blob.core.windows.net/templates/parameters.json",
"contentVersion": "1.0.0.0"
},
"mode": "Incremental",
"debugSetting": {
"detailLevel": "requestContent, responseContent"
}
}
}
You can set up your storage account to use a shared access signature (SAS) token. For more information, see
Delegating Access with a Shared Access Signature.
If you need to provide a sensitive value for a parameter (such as a password), add that value to a key vault and
reference the key vault during deployment. For more information, see Pass secure values during deployment.
5. Instead of linking to files for the template and parameters, you can include them in the request body. The
following example shows the request body with the template and parameter inline:
{
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"storageAccountType": {
"type": "string",
"defaultValue": "Standard_LRS",
"allowedValues": [
"Standard_LRS",
"Standard_GRS",
"Standard_ZRS",
"Premium_LRS"
],
"metadata": {
"description": "Storage Account type"
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
}
},
"variables": {
"storageAccountName": "[concat(uniquestring(resourceGroup().id), 'standardsa')]"
},
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2018-02-01",
"name": "[variables('storageAccountName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[parameters('storageAccountType')]"
},
"kind": "StorageV2",
"properties": {}
}
],
"outputs": {
"storageAccountName": {
"type": "string",
"value": "[variables('storageAccountName')]"
}
}
},
"parameters": {
"location": {
"value": "eastus2"
}
}
}
}
To check the status of the template deployment, send a GET request:
GET
https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers
/Microsoft.Resources/deployments/{deploymentName}?api-version=2019-10-01
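The request body from step 4 can be assembled programmatically before you send the PUT. A minimal sketch (deployment_body is a hypothetical helper; authentication and sending the request are not shown):

```python
import json

def deployment_body(template_uri: str, parameters_uri: str, mode: str = "Incremental") -> str:
    """JSON body for a linked-template deployment PUT request."""
    return json.dumps({
        "properties": {
            "templateLink": {"uri": template_uri, "contentVersion": "1.0.0.0"},
            "parametersLink": {"uri": parameters_uri, "contentVersion": "1.0.0.0"},
            "mode": mode,  # "Complete" can delete resources not in the template
        }
    })

body = deployment_body(
    "https://mystorageaccount.blob.core.windows.net/templates/template.json",
    "https://mystorageaccount.blob.core.windows.net/templates/parameters.json",
)
print(body)
```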
Deployment name
You can give your deployment a name such as ExampleDeployment .
Every time you run a deployment, an entry is added to the resource group's deployment history with the
deployment name. If you run another deployment and give it the same name, the earlier entry is replaced with the
current deployment. If you want to maintain unique entries in the deployment history, give each deployment a
unique name.
To create a unique name, you can assign a random number. Or, add a date value.
If you run concurrent deployments to the same resource group with the same deployment name, only the last
deployment is completed. Any deployments with the same name that haven't finished are replaced by the last
deployment. For example, if you run a deployment named newStorage that deploys a storage account named
storage1, and at the same time run another deployment named newStorage that deploys a storage account
named storage2, you deploy only one storage account. The resulting storage account is named storage2.
However, if you run a deployment named newStorage that deploys a storage account named storage1, and
immediately after it completes you run another deployment named newStorage that deploys a storage account
named storage2, then you have two storage accounts. One is named storage1 and the other is named storage2,
but you only have one entry in the deployment history.
When you specify a unique name for each deployment, you can run them concurrently without conflict. If you run a
deployment named newStorage1 that deploys a storage account named storage1, and at the same time run
another deployment named newStorage2 that deploys a storage account named storage2, then you have two
storage accounts and two entries in the deployment history.
To avoid conflicts with concurrent deployments and to ensure unique entries in the deployment history, give each
deployment a unique name.
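The history behavior described above can be modeled as a map keyed by deployment name: reusing a name overwrites the earlier entry, while distinct names accumulate (a conceptual sketch, not ARM's actual implementation):

```python
# Deployment history keyed by deployment name: deploying again with
# the same name replaces the earlier history entry.
history = {}

def record_deployment(history, deployment_name, storage_account):
    history[deployment_name] = storage_account

record_deployment(history, "newStorage", "storage1")
record_deployment(history, "newStorage", "storage2")   # same name: entry replaced
record_deployment(history, "newStorage2", "storage3")  # new name: entry added
print(sorted(history.items()))
```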
Next steps
To roll back to a successful deployment when you get an error, see Rollback on error to successful deployment.
To specify how to handle resources that exist in the resource group but aren't defined in the template, see Azure
Resource Manager deployment modes.
To learn about handling asynchronous REST operations, see Track asynchronous Azure operations.
To learn more about templates, see Understand the structure and syntax of ARM templates.
Create a VM from a VHD by using the Azure portal
11/2/2020 • 3 minutes to read
Copy a disk
Create a snapshot and then create a disk from the snapshot. This strategy allows you to keep the original VHD as a
fallback:
1. From the Azure portal, on the left menu, select All services.
2. In the All services search box, enter disks and then select Disks to display the list of available disks.
3. Select the disk that you would like to use. The Disk page for that disk appears.
4. From the menu at the top, select Create snapshot.
5. Enter a Name for the snapshot.
6. Choose a Resource group for the snapshot. You can use either an existing resource group or create a new
one.
7. For Account type, choose either Standard (HDD) or Premium (SSD) storage.
8. When you're done, select Create to create the snapshot.
9. After the snapshot has been created, select Create a resource in the left menu.
10. In the search box, enter managed disk and then select Managed Disks from the list.
11. On the Managed Disks page, select Create.
12. Enter a Name for the disk.
13. Choose a Resource group for the disk. You can use either an existing resource group or create a new one. This
selection will also be used as the resource group where you create the VM from the disk.
14. For Account type, choose either Standard (HDD) or Premium (SSD) storage.
15. In Source type, ensure Snapshot is selected.
16. In the Source snapshot drop-down, select the snapshot you want to use.
17. Make any other adjustments as needed and then select Create to create the disk.
Next steps
You can also use PowerShell to upload a VHD to Azure and create a specialized VM.
Create a Windows VM from a specialized disk by
using PowerShell
11/2/2020 • 6 minutes to read
Create a new VM by attaching a specialized managed disk as the OS disk. A specialized disk is a copy of a virtual
hard disk (VHD) from an existing VM that contains the user accounts, applications, and other state data from your
original VM.
When you use a specialized VHD to create a new VM, the new VM retains the computer name of the original VM.
Other computer-specific information is also kept and, in some cases, this duplicate information could cause issues.
When copying a VM, be aware of what types of computer-specific information your applications rely on.
You have several options:
Use an existing managed disk. This option is useful if you have a VM that isn't working correctly. You can delete
the VM and then reuse the managed disk to create a new VM.
Upload a VHD
Copy an existing Azure VM by using snapshots
You can also use the Azure portal to create a new VM from a specialized VHD.
This article shows you how to use managed disks. If you have a legacy deployment that requires using a storage
account, see Create a VM from a specialized VHD in a storage account.
We recommend that you limit the number of concurrent deployments to 20 VMs from a single VHD or snapshot.
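One way to respect that limit when you need many VMs from one image is to process them in groups of at most 20 (a sketch of the batching logic only; it does not call Azure):

```python
# Split a list of VM names into consecutive batches of at most `size`,
# so each batch stays within the recommended concurrency limit.
def batches(items, size=20):
    return [items[i:i + size] for i in range(0, len(items), size)]

vm_names = [f"myVM{n}" for n in range(45)]
groups = batches(vm_names)
print([len(g) for g in groups])  # [20, 20, 5]
```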
$resourceGroupName = 'myResourceGroup'
$osDiskName = 'myOsDisk'
$osDisk = Get-AzDisk `
-ResourceGroupName $resourceGroupName `
-DiskName $osDiskName
You can now attach this disk as the OS disk to a new VM.
$resourceGroupName = 'myResourceGroup'
$vmName = 'myVM'
$location = 'westus'
$snapshotName = 'mySnapshot'
$snapshotConfig = New-AzSnapshotConfig `
-SourceUri $osDisk.Id `
-OsType Windows `
-CreateOption Copy `
-Location $location
$snapShot = New-AzSnapshot `
-Snapshot $snapshotConfig `
-SnapshotName $snapshotName `
-ResourceGroupName $resourceGroupName
To use this snapshot to create a VM that needs to be high-performing, add the parameter
-AccountType Premium_LRS to the New-AzSnapshotConfig command. This parameter creates the snapshot so that
it's stored as a Premium Managed Disk. Premium Managed Disks are more expensive than Standard, so be sure
you need Premium before using this parameter.
Create a new disk from the snapshot
Create a managed disk from the snapshot by using New-AzDisk. This example uses myOSDisk for the disk name.
Create a new resource group for the new VM.
$destinationResourceGroup = 'myDestinationResourceGroup'
New-AzResourceGroup -Location $location `
-Name $destinationResourceGroup
$osDiskName = 'myOsDisk'
$subnetName = 'mySubNet'
$singleSubnet = New-AzVirtualNetworkSubnetConfig `
-Name $subnetName `
-AddressPrefix 10.0.0.0/24
2. Create the virtual network. This example sets the virtual network name to myVnetName, the location to
West US, and the address prefix for the virtual network to 10.0.0.0/16.
$vnetName = "myVnetName"
$vnet = New-AzVirtualNetwork `
-Name $vnetName -ResourceGroupName $destinationResourceGroup `
-Location $location `
-AddressPrefix 10.0.0.0/16 `
-Subnet $singleSubnet
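The subnet prefix 10.0.0.0/24 must sit inside the virtual network's 10.0.0.0/16 address space; Python's standard ipaddress module can verify that kind of nesting (a stand-alone check, separate from the PowerShell above):

```python
import ipaddress

vnet_space = ipaddress.ip_network("10.0.0.0/16")
subnet = ipaddress.ip_network("10.0.0.0/24")

# A subnet is valid only if its range is wholly inside the VNet's space.
print(subnet.subnet_of(vnet_space))  # True
```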
For more information about endpoints and NSG rules, see Opening ports to a VM in Azure by using PowerShell.
Create a public IP address and NIC
To enable communication with the virtual machine in the virtual network, you'll need a public IP address and a
network interface.
1. Create the public IP. In this example, the public IP address name is set to myIP.
$ipName = "myIP"
$pip = New-AzPublicIpAddress `
-Name $ipName -ResourceGroupName $destinationResourceGroup `
-Location $location `
-AllocationMethod Dynamic
2. Create the NIC. In this example, the NIC name is set to myNicName.
$nicName = "myNicName"
$nic = New-AzNetworkInterface -Name $nicName `
-ResourceGroupName $destinationResourceGroup `
-Location $location -SubnetId $vnet.Subnets[0].Id `
-PublicIpAddressId $pip.Id `
-NetworkSecurityGroupId $nsg.Id
$vmName = "myVM"
$vmConfig = New-AzVMConfig -VMName $vmName -VMSize "Standard_A2"
Complete the VM
Create the VM by using New-AzVM with the configurations that we just created.
New-AzVM -ResourceGroupName $destinationResourceGroup -Location $location -VM $vmConfig
Next steps
Sign in to your new virtual machine. For more information, see How to connect and log on to an Azure virtual
machine running Windows.
Create and manage Windows VMs in Azure using
Java
11/2/2020 • 7 minutes to read
An Azure Virtual Machine (VM) needs several supporting Azure resources. This article covers creating, managing,
and deleting VM resources using Java. You learn how to:
Create a Maven project
Add dependencies
Create credentials
Create resources
Perform management tasks
Delete resources
Run the application
It takes about 20 minutes to do these steps.
mkdir java-azure-test
cd java-azure-test
Add dependencies
1. Under the testAzureApp folder, open the pom.xml file and add the build configuration to <project> to
enable the building of your application:
<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<configuration>
<mainClass>com.fabrikam.testAzureApp.App</mainClass>
</configuration>
</plugin>
</plugins>
</build>
2. Add the dependencies that are needed to access the Azure Java SDK.
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure</artifactId>
<version>1.1.0</version>
</dependency>
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-mgmt-compute</artifactId>
<version>1.1.0</version>
</dependency>
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-mgmt-resources</artifactId>
<version>1.1.0</version>
</dependency>
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-mgmt-network</artifactId>
<version>1.1.0</version>
</dependency>
<dependency>
<groupId>com.squareup.okio</groupId>
<artifactId>okio</artifactId>
<version>1.13.0</version>
</dependency>
<dependency>
<groupId>com.nimbusds</groupId>
<artifactId>nimbus-jose-jwt</artifactId>
<version>3.6</version>
</dependency>
<dependency>
<groupId>net.minidev</groupId>
<artifactId>json-smart</artifactId>
<version>1.0.6.3</version>
</dependency>
<dependency>
<groupId>javax.mail</groupId>
<artifactId>mail</artifactId>
<version>1.4.5</version>
</dependency>
Create credentials
Before you start this step, make sure that you have access to an Active Directory service principal. You should also
record the application ID, the authentication key, and the tenant ID that you need in a later step.
Create the authorization file
1. Create a file named azureauth.properties and add these properties to it:
subscription=<subscription-id>
client=<application-id>
key=<authentication-key>
tenant=<tenant-id>
managementURI=https://management.core.windows.net/
baseURL=https://management.azure.com/
authURL=https://login.windows.net/
graphURL=https://graph.microsoft.com/
Replace <subscription-id> with your subscription identifier, <application-id> with the Active Directory
application identifier, <authentication-key> with the application key, and <tenant-id> with the tenant
identifier.
2. Save the file.
3. Set an environment variable named AZURE_AUTH_LOCATION in your shell with the full path to the
authentication file.
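The azureauth.properties file is plain key=value lines, so it is easy to inspect. A minimal parser sketch (the SDK normally reads this file for you via AZURE_AUTH_LOCATION):

```python
# Parse simple key=value lines, skipping blanks and # comments.
# Splitting on the first '=' keeps values such as URLs intact.
def parse_properties(text):
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

sample = """subscription=<subscription-id>
client=<application-id>
managementURI=https://management.core.windows.net/
"""
props = parse_properties(sample)
print(props["managementURI"])
```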
Create the management client
1. Open the App.java file under src\main\java\com\fabrikam and make sure this package statement is at the
top:
package com.fabrikam.testAzureApp;
2. Add these import statements below the package statement:
import com.microsoft.azure.management.Azure;
import com.microsoft.azure.management.compute.AvailabilitySet;
import com.microsoft.azure.management.compute.AvailabilitySetSkuTypes;
import com.microsoft.azure.management.compute.CachingTypes;
import com.microsoft.azure.management.compute.InstanceViewStatus;
import com.microsoft.azure.management.compute.DiskInstanceView;
import com.microsoft.azure.management.compute.VirtualMachine;
import com.microsoft.azure.management.compute.VirtualMachineSizeTypes;
import com.microsoft.azure.management.network.PublicIPAddress;
import com.microsoft.azure.management.network.Network;
import com.microsoft.azure.management.network.NetworkInterface;
import com.microsoft.azure.management.resources.ResourceGroup;
import com.microsoft.azure.management.resources.fluentcore.arm.Region;
import com.microsoft.azure.management.resources.fluentcore.model.Creatable;
import com.microsoft.rest.LogLevel;
import java.io.File;
import java.util.Scanner;
3. To create the Active Directory credentials that you need to make requests, add this code to the main method
of the App class:
try {
final File credFile = new File(System.getenv("AZURE_AUTH_LOCATION"));
Azure azure = Azure.configure()
.withLogLevel(LogLevel.BASIC)
.authenticate(credFile)
.withDefaultSubscription();
} catch (Exception e) {
System.out.println(e.getMessage());
e.printStackTrace();
}
Create resources
Create the resource group
All resources must be contained in a Resource group.
To specify values for the application and create the resource group, add this code to the try block in the main
method:
System.out.println("Creating resource group...");
ResourceGroup resourceGroup = azure.resourceGroups()
.define("myResourceGroup")
.withRegion(Region.US_EAST)
.create();
NOTE
This tutorial creates a virtual machine running a version of the Windows Server operating system. To learn more about
selecting other images, see Navigate and select Azure virtual machine images with Windows PowerShell and the Azure CLI.
If you want to use an existing disk instead of a marketplace image, use this code:
azure.virtualMachines().define("myVM")
    .withRegion(Region.US_EAST)
    .withExistingResourceGroup("myResourceGroup")
    .withExistingPrimaryNetworkInterface(networkInterface)
    .withSpecializedOSDisk(managedDisk, OperatingSystemTypes.WINDOWS)
    .withExistingAvailabilitySet(availabilitySet)
    .withSize(VirtualMachineSizeTypes.STANDARD_DS1)
    .create();
System.out.println("hardwareProfile");
System.out.println(" vmSize: " + vm.size());
System.out.println("storageProfile");
System.out.println(" imageReference");
System.out.println(" publisher: " + vm.storageProfile().imageReference().publisher());
System.out.println(" offer: " + vm.storageProfile().imageReference().offer());
System.out.println(" sku: " + vm.storageProfile().imageReference().sku());
System.out.println(" version: " + vm.storageProfile().imageReference().version());
System.out.println(" osDisk");
System.out.println(" osType: " + vm.storageProfile().osDisk().osType());
System.out.println(" name: " + vm.storageProfile().osDisk().name());
System.out.println(" createOption: " + vm.storageProfile().osDisk().createOption());
System.out.println(" caching: " + vm.storageProfile().osDisk().caching());
System.out.println("osProfile");
System.out.println(" computerName: " + vm.osProfile().computerName());
System.out.println(" adminUserName: " + vm.osProfile().adminUsername());
System.out.println(" provisionVMAgent: " + vm.osProfile().windowsConfiguration().provisionVMAgent());
System.out.println(" enableAutomaticUpdates: " +
vm.osProfile().windowsConfiguration().enableAutomaticUpdates());
System.out.println("networkProfile");
System.out.println(" networkInterface: " + vm.primaryNetworkInterfaceId());
System.out.println("vmAgent");
System.out.println(" vmAgentVersion: " + vm.instanceView().vmAgent().vmAgentVersion());
System.out.println(" statuses");
for(InstanceViewStatus status : vm.instanceView().vmAgent().statuses()) {
System.out.println(" code: " + status.code());
System.out.println(" displayStatus: " + status.displayStatus());
System.out.println(" message: " + status.message());
System.out.println(" time: " + status.time());
}
System.out.println("disks");
for(DiskInstanceView disk : vm.instanceView().disks()) {
System.out.println(" name: " + disk.name());
System.out.println(" statuses");
for(InstanceViewStatus status : disk.statuses()) {
System.out.println(" code: " + status.code());
System.out.println(" displayStatus: " + status.displayStatus());
System.out.println(" time: " + status.time());
}
}
System.out.println("VM general status");
System.out.println(" provisioningStatus: " + vm.provisioningState());
System.out.println(" id: " + vm.id());
System.out.println(" name: " + vm.name());
System.out.println(" type: " + vm.type());
System.out.println("VM instance status");
for(InstanceViewStatus status : vm.instanceView().statuses()) {
System.out.println(" code: " + status.code());
System.out.println(" displayStatus: " + status.displayStatus());
}
System.out.println("Press enter to continue...");
input.nextLine();
Stop the VM
You can stop a virtual machine and keep all its settings, but continue to be charged for it, or you can stop a virtual
machine and deallocate it. When a virtual machine is deallocated, all resources associated with it are also
deallocated and billing ends for it.
To stop the virtual machine without deallocating it, add this code to the try block in the main method:
System.out.println("Stopping vm...");
vm.powerOff();
System.out.println("Press enter to continue...");
input.nextLine();
If you want to deallocate the virtual machine, change the powerOff call to this code:
vm.deallocate();
Start the VM
To start the virtual machine, add this code to the try block in the main method:
System.out.println("Starting vm...");
vm.start();
System.out.println("Press enter to continue...");
input.nextLine();
Resize the VM
Many aspects of deployment should be considered when deciding on a size for your virtual machine. For more
information, see VM sizes.
To change the size of the virtual machine, add this code to the try block in the main method:
System.out.println("Resizing vm...");
vm.update()
.withSize(VirtualMachineSizeTypes.STANDARD_DS2)
.apply();
System.out.println("Press enter to continue...");
input.nextLine();
Delete resources
Because you are charged for resources used in Azure, it is always good practice to delete resources that are no
longer needed. If you want to delete the virtual machines and all the supporting resources, all you have to do is
delete the resource group.
1. To delete the resource group, add this code to the try block in the main method:
System.out.println("Deleting resources...");
azure.resourceGroups().deleteByName("myResourceGroup");
2. Before you press Enter to start deleting resources, you could take a few minutes to verify the creation of the
resources in the Azure portal. Click the deployment status to see information about the deployment.
Next steps
Learn more about using the Azure libraries for Java.
Create and manage Windows VMs in Azure using
Python
11/2/2020 • 10 minutes to read
An Azure Virtual Machine (VM) needs several supporting Azure resources. This article covers creating, managing,
and deleting VM resources using Python. You learn how to:
Create a Visual Studio project
Install packages
Create credentials
Create resources
Perform management tasks
Delete resources
Run the application
It takes about 20 minutes to do these steps.
Install packages
1. In Solution Explorer, under myPythonProject, right-click Python Environments, and then select Add virtual
environment.
2. On the Add Virtual Environment screen, accept the default name of env, make sure that Python 3.6 (64-bit) is
selected for the base interpreter, and then click Create.
3. Right-click the env environment that you created, click Install Python Package, enter azure in the search box,
and then press Enter.
You should see in the output window that the azure packages were successfully installed.
Create credentials
Before you start this step, make sure that you have an Active Directory service principal. You should also record the
application ID, the authentication key, and the tenant ID that you need in a later step.
1. Open myPythonProject.py file that was created, and then add this code to enable your application to run:
if __name__ == "__main__":
2. To import the code that is needed, add these statements to the top of the .py file:
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.compute.models import DiskCreateOption
3. Next in the .py file, add variables after the import statements to specify common values used in the code:
SUBSCRIPTION_ID = 'subscription-id'
GROUP_NAME = 'myResourceGroup'
LOCATION = 'westus'
VM_NAME = 'myVM'
4. To create the credentials that you need to make requests, add this function after the variables in the .py file:
def get_credentials():
credentials = ServicePrincipalCredentials(
client_id = 'application-id',
secret = 'authentication-key',
tenant = 'tenant-id'
)
return credentials
Replace application-id , authentication-key , and tenant-id with the values that you previously collected
when you created your Azure Active Directory service principal.
5. To call the function that you previously added, add this code under the if statement at the end of the .py file:
credentials = get_credentials()
Create resources
Initialize management clients
Management clients are needed to create and manage resources using the Python SDK in Azure. To create the
management clients, add this code under the if statement at the end of the .py file:
resource_group_client = ResourceManagementClient(
credentials,
SUBSCRIPTION_ID
)
network_client = NetworkManagementClient(
credentials,
SUBSCRIPTION_ID
)
compute_client = ComputeManagementClient(
credentials,
SUBSCRIPTION_ID
)
1. To create the resource group, add this function after the variables in the .py file:
def create_resource_group(resource_group_client):
resource_group_params = { 'location':LOCATION }
resource_group_result = resource_group_client.resource_groups.create_or_update(
GROUP_NAME,
resource_group_params
)
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
create_resource_group(resource_group_client)
input('Resource group created. Press enter to continue...')
Availability sets make it easier for you to maintain the virtual machines used by your application.
1. To create an availability set, add this function after the variables in the .py file:
def create_availability_set(compute_client):
avset_params = {
'location': LOCATION,
'sku': { 'name': 'Aligned' },
'platform_fault_domain_count': 3
}
availability_set_result = compute_client.availability_sets.create_or_update(
GROUP_NAME,
'myAVSet',
avset_params
)
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
create_availability_set(compute_client)
print("------------------------------------------------------")
input('Availability set created. Press enter to continue...')
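With 'platform_fault_domain_count': 3 above, VMs in the set are spread across three fault domains; round-robin assignment is a convenient way to picture the effect (an illustration only, not Azure's actual placement algorithm):

```python
# Assign VMs to fault domains round-robin, mimicking how an
# availability set spreads instances across domains conceptually.
def assign_fault_domains(vm_names, fault_domain_count=3):
    return {name: i % fault_domain_count for i, name in enumerate(vm_names)}

placement = assign_fault_domains(["vm0", "vm1", "vm2", "vm3"])
print(placement)  # {'vm0': 0, 'vm1': 1, 'vm2': 2, 'vm3': 0}
```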
1. To create the public IP address for the VM, add this function after the variables in the .py file:
def create_public_ip_address(network_client):
public_ip_addess_params = {
'location': LOCATION,
'public_ip_allocation_method': 'Dynamic'
}
creation_result = network_client.public_ip_addresses.create_or_update(
GROUP_NAME,
'myIPAddress',
public_ip_addess_params
)
return creation_result.result()
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
creation_result = create_public_ip_address(network_client)
print("------------------------------------------------------")
print(creation_result)
input('Press enter to continue...')
1. To create the virtual network, add this function after the variables in the .py file:
def create_vnet(network_client):
vnet_params = {
'location': LOCATION,
'address_space': {
'address_prefixes': ['10.0.0.0/16']
}
}
creation_result = network_client.virtual_networks.create_or_update(
GROUP_NAME,
'myVNet',
vnet_params
)
return creation_result.result()
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
creation_result = create_vnet(network_client)
print("------------------------------------------------------")
print(creation_result)
input('Press enter to continue...')
3. To add a subnet to the virtual network, add this function after the variables in the .py file:
def create_subnet(network_client):
subnet_params = {
'address_prefix': '10.0.0.0/24'
}
creation_result = network_client.subnets.create_or_update(
GROUP_NAME,
'myVNet',
'mySubnet',
subnet_params
)
return creation_result.result()
4. To call the function that you previously added, add this code under the if statement at the end of the .py file:
creation_result = create_subnet(network_client)
print("------------------------------------------------------")
print(creation_result)
input('Press enter to continue...')
return creation_result.result()
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
creation_result = create_nic(network_client)
print("------------------------------------------------------")
print(creation_result)
input('Press enter to continue...')
Now that you created all the supporting resources, you can create a virtual machine.
1. To create the virtual machine, add this function after the variables in the .py file:
def create_vm(network_client, compute_client):
nic = network_client.network_interfaces.get(
GROUP_NAME,
'myNic'
)
avset = compute_client.availability_sets.get(
GROUP_NAME,
'myAVSet'
)
vm_parameters = {
'location': LOCATION,
'os_profile': {
'computer_name': VM_NAME,
'admin_username': 'azureuser',
'admin_password': 'Azure12345678'
},
'hardware_profile': {
'vm_size': 'Standard_DS1'
},
'storage_profile': {
'image_reference': {
'publisher': 'MicrosoftWindowsServer',
'offer': 'WindowsServer',
'sku': '2012-R2-Datacenter',
'version': 'latest'
}
},
'network_profile': {
'network_interfaces': [{
'id': nic.id
}]
},
'availability_set': {
'id': avset.id
}
}
creation_result = compute_client.virtual_machines.create_or_update(
GROUP_NAME,
VM_NAME,
vm_parameters
)
return creation_result.result()
NOTE
This tutorial creates a virtual machine running a version of the Windows Server operating system. To learn more
about selecting other images, see Navigate and select Azure virtual machine images with Windows PowerShell and
the Azure CLI.
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
creation_result = create_vm(network_client, compute_client)
print("------------------------------------------------------")
print(creation_result)
input('Press enter to continue...')
1. To get information about the VM that you created, add this function after the variables in the .py file:
def get_vm(compute_client):
vm = compute_client.virtual_machines.get(GROUP_NAME, VM_NAME, expand='instanceView')
print("hardwareProfile")
print(" vmSize: ", vm.hardware_profile.vm_size)
print("\nstorageProfile")
print(" imageReference")
print(" publisher: ", vm.storage_profile.image_reference.publisher)
print(" offer: ", vm.storage_profile.image_reference.offer)
print(" sku: ", vm.storage_profile.image_reference.sku)
print(" version: ", vm.storage_profile.image_reference.version)
print(" osDisk")
print(" osType: ", vm.storage_profile.os_disk.os_type.value)
print(" name: ", vm.storage_profile.os_disk.name)
print(" createOption: ", vm.storage_profile.os_disk.create_option.value)
print(" caching: ", vm.storage_profile.os_disk.caching.value)
print("\nosProfile")
print(" computerName: ", vm.os_profile.computer_name)
print(" adminUsername: ", vm.os_profile.admin_username)
print(" provisionVMAgent: {0}".format(vm.os_profile.windows_configuration.provision_vm_agent))
print(" enableAutomaticUpdates: {0}".format(vm.os_profile.windows_configuration.enable_automatic_updates))
print("\nnetworkProfile")
for nic in vm.network_profile.network_interfaces:
print(" networkInterface id: ", nic.id)
print("\nvmAgent")
print(" vmAgentVersion", vm.instance_view.vm_agent.vm_agent_version)
print(" statuses")
for stat in vm.instance_view.vm_agent.statuses:
print(" code: ", stat.code)
print(" displayStatus: ", stat.display_status)
print(" message: ", stat.message)
print(" time: ", stat.time)
print("\ndisks")
for disk in vm.instance_view.disks:
print(" name: ", disk.name)
print(" statuses")
for stat in disk.statuses:
print(" code: ", stat.code)
print(" displayStatus: ", stat.display_status)
print(" time: ", stat.time)
print("\nVM general status")
print(" provisioningStatus: ", vm.provisioning_state)
print(" id: ", vm.id)
print(" name: ", vm.name)
print(" type: ", vm.type)
print(" location: ", vm.location)
print("\nVM instance status")
for stat in vm.instance_view.statuses:
print(" code: ", stat.code)
print(" displayStatus: ", stat.display_status)
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
get_vm(compute_client)
print("------------------------------------------------------")
input('Press enter to continue...')
Stop the VM
You can stop a virtual machine and keep all its settings, but continue to be charged for it, or you can stop a virtual
machine and deallocate it. When a virtual machine is deallocated, all resources associated with it are also
deallocated and billing ends for it.
1. To stop the virtual machine without deallocating it, add this function after the variables in the .py file:
def stop_vm(compute_client):
compute_client.virtual_machines.power_off(GROUP_NAME, VM_NAME)
If you want to deallocate the virtual machine, change the power_off call to this code:
compute_client.virtual_machines.deallocate(GROUP_NAME, VM_NAME)
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
stop_vm(compute_client)
input('Press enter to continue...')
Start the VM
1. To start the virtual machine, add this function after the variables in the .py file:
def start_vm(compute_client):
compute_client.virtual_machines.start(GROUP_NAME, VM_NAME)
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
start_vm(compute_client)
input('Press enter to continue...')
Resize the VM
Many aspects of deployment should be considered when deciding on a size for your virtual machine. For more
information, see VM sizes.
1. To change the size of the virtual machine, add this function after the variables in the .py file:
def update_vm(compute_client):
vm = compute_client.virtual_machines.get(GROUP_NAME, VM_NAME)
vm.hardware_profile.vm_size = 'Standard_DS3'
update_result = compute_client.virtual_machines.create_or_update(
GROUP_NAME,
VM_NAME,
vm
)
return update_result.result()
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
update_result = update_vm(compute_client)
print("------------------------------------------------------")
print(update_result)
input('Press enter to continue...')
def add_datadisk(compute_client):
disk_creation = compute_client.disks.create_or_update(
GROUP_NAME,
'myDataDisk1',
{
'location': LOCATION,
'disk_size_gb': 1,
'creation_data': {
'create_option': DiskCreateOption.empty
}
}
)
data_disk = disk_creation.result()
vm = compute_client.virtual_machines.get(GROUP_NAME, VM_NAME)
add_result = vm.storage_profile.data_disks.append({
'lun': 1,
'name': 'myDataDisk1',
'create_option': DiskCreateOption.attach,
'managed_disk': {
'id': data_disk.id
}
})
add_result = compute_client.virtual_machines.create_or_update(
GROUP_NAME,
VM_NAME,
vm)
return add_result.result()
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
add_result = add_datadisk(compute_client)
print("------------------------------------------------------")
print(add_result)
input('Press enter to continue...')
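Each data disk attached to a VM needs a LUN that is unique on that VM (the disk above uses LUN 1). Choosing the next free LUN can be sketched like this (a hypothetical helper, not part of the Azure SDK):

```python
# Return the smallest LUN not already taken by the VM's data disks.
def next_free_lun(data_disks):
    used = {disk["lun"] for disk in data_disks}
    lun = 0
    while lun in used:
        lun += 1
    return lun

existing = [{"lun": 1, "name": "myDataDisk1"}]
print(next_free_lun(existing))  # 0
```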
Delete resources
Because you are charged for resources used in Azure, it's always a good practice to delete resources that are no
longer needed. If you want to delete the virtual machines and all the supporting resources, all you have to do is
delete the resource group.
1. To delete the resource group and all resources, add this function after the variables in the .py file:
def delete_resources(resource_group_client):
resource_group_client.resource_groups.delete(GROUP_NAME)
2. To call the function that you previously added, add this code under the if statement at the end of the .py file:
delete_resources(resource_group_client)
3. Save myPythonProject.py.
Next steps
If there were issues with the deployment, a next step would be to look at Troubleshooting resource group
deployments with Azure portal.
Learn more about the Azure Python Library
Create a Windows virtual machine from a Resource
Manager template
11/2/2020 • 4 minutes to read
Learn how to create a Windows virtual machine by using an Azure Resource Manager template and Azure
PowerShell from Azure Cloud Shell. The template used in this article deploys a single virtual machine running
Windows Server in a new virtual network with a single subnet. For creating a Linux virtual machine, see How to
create a Linux virtual machine with Azure Resource Manager templates.
An alternative is to deploy the template from the Azure portal. To open the template in the portal, select the
Deploy to Azure button.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminUsername": {
"type": "string",
"metadata": {
"description": "Username for the Virtual Machine."
}
},
"adminPassword": {
"type": "securestring",
"minLength": 12,
"metadata": {
"description": "Password for the Virtual Machine."
}
},
"dnsLabelPrefix": {
"type": "string",
"defaultValue": "[toLower(concat(parameters('vmName'),'-', uniqueString(resourceGroup().id, parameters('vmName'))))]",
"metadata": {
"description": "Unique DNS Name for the Public IP used to access the Virtual Machine."
}
},
"publicIpName": {
"type": "string",
"defaultValue": "myPublicIP",
"metadata": {
"description": "Name for the Public IP used to access the Virtual Machine."
}
},
"publicIPAllocationMethod": {
"type": "string",
"defaultValue": "Dynamic",
"allowedValues": [
"Dynamic",
"Static"
],
"metadata": {
"description": "Allocation method for the Public IP used to access the Virtual Machine."
}
},
"publicIpSku": {
"type": "string",
"defaultValue": "Basic",
"allowedValues": [
"Basic",
"Standard"
],
"metadata": {
"description": "SKU for the Public IP used to access the Virtual Machine."
}
},
"OSVersion": {
"type": "string",
"defaultValue": "2019-Datacenter",
"allowedValues": [
"2008-R2-SP1",
"2012-Datacenter",
"2012-R2-Datacenter",
"2016-Nano-Server",
"2016-Datacenter-with-Containers",
"2016-Datacenter",
"2019-Datacenter",
"2019-Datacenter-Core",
"2019-Datacenter-Core-smalldisk",
"2019-Datacenter-Core-with-Containers",
"2019-Datacenter-Core-with-Containers-smalldisk",
"2019-Datacenter-smalldisk",
"2019-Datacenter-with-Containers",
"2019-Datacenter-with-Containers-smalldisk"
],
"metadata": {
"description": "The Windows version for the VM. This will pick a fully patched image of this given Windows version."
}
},
"vmSize": {
"type": "string",
"defaultValue": "Standard_D2_v3",
"metadata": {
"description": "Size of the virtual machine."
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
},
"vmName": {
"type": "string",
"defaultValue": "simple-vm",
"metadata": {
"description": "Name of the virtual machine."
}
}
},
"variables": {
"storageAccountName": "[concat('bootdiags', uniquestring(resourceGroup().id))]",
"nicName": "myVMNic",
"addressPrefix": "10.0.0.0/16",
"subnetName": "Subnet",
"subnetPrefix": "10.0.0.0/24",
"virtualNetworkName": "MyVNET",
"subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', variables('virtualNetworkName'), variables('subnetName'))]",
"networkSecurityGroupName": "default-NSG"
},
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2019-06-01",
"name": "[variables('storageAccountName')]",
"location": "[parameters('location')]",
"sku": {
"name": "Standard_LRS"
},
"kind": "Storage",
"properties": {}
},
{
"type": "Microsoft.Network/publicIPAddresses",
"apiVersion": "2020-06-01",
"name": "[parameters('publicIPName')]",
"location": "[parameters('location')]",
"sku": {
"name": "[parameters('publicIpSku')]"
},
"properties": {
"publicIPAllocationMethod": "[parameters('publicIPAllocationMethod')]",
"dnsSettings": {
"domainNameLabel": "[parameters('dnsLabelPrefix')]"
}
}
},
{
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2020-06-01",
"name": "[variables('networkSecurityGroupName')]",
"location": "[parameters('location')]",
"properties": {
"securityRules": [
{
"name": "default-allow-3389",
"properties": {
"priority": 1000,
"access": "Allow",
"direction": "Inbound",
"destinationPortRange": "3389",
"protocol": "Tcp",
"sourcePortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*"
}
}
]
}
},
{
"type": "Microsoft.Network/virtualNetworks",
"apiVersion": "2020-06-01",
"name": "[variables('virtualNetworkName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
],
"properties": {
"addressSpace": {
"addressPrefixes": [
"[variables('addressPrefix')]"
]
},
"subnets": [
{
"name": "[variables('subnetName')]",
"properties": {
"addressPrefix": "[variables('subnetPrefix')]",
"networkSecurityGroup": {
"id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
}
}
}
]
}
},
{
"type": "Microsoft.Network/networkInterfaces",
"apiVersion": "2020-06-01",
"name": "[variables('nicName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPName'))]",
"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"privateIPAllocationMethod": "Dynamic",
"publicIPAddress": {
"id": "[resourceId('Microsoft.Network/publicIPAddresses', parameters('publicIPName'))]"
},
"subnet": {
"id": "[variables('subnetRef')]"
}
}
}
]
}
},
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2020-06-01",
"name": "[parameters('vmName')]",
"location": "[parameters('location')]",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "[parameters('vmSize')]"
},
"osProfile": {
"computerName": "[parameters('vmName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]"
},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "[parameters('OSVersion')]",
"version": "latest"
},
"osDisk": {
"createOption": "FromImage",
"managedDisk": {
"storageAccountType": "StandardSSD_LRS"
}
},
"dataDisks": [
{
"diskSizeGB": 1023,
"lun": 0,
"createOption": "Empty"
}
]
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))).primaryEndpoints.blob]"
}
}
}
}
],
"outputs": {
"hostname": {
"type": "string",
"value": "[reference(parameters('publicIPName')).dnsSettings.fqdn]"
}
}
}
To run the PowerShell script, select Try it to open the Azure Cloud Shell. To paste the script, right-click the shell, and then select Paste :
If you choose to install and use PowerShell locally instead of from the Azure Cloud Shell, this tutorial requires
the Azure PowerShell module. Run Get-Module -ListAvailable Az to find the version. If you need to upgrade, see
Install Azure PowerShell module. If you're running PowerShell locally, you also need to run Connect-AzAccount to
create a connection with Azure.
In the previous example, you specified a template stored in GitHub. You can also download or create a template
and specify the local path with the -TemplateFile parameter.
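Before deploying a local template file, it can help to confirm that the file parses as JSON and to inspect which parameters it expects. The snippet below is an illustrative sketch; the inline template string is a trimmed stand-in for a downloaded azuredeploy.json.

```python
import json

# Trimmed stand-in for a local ARM template file (e.g. azuredeploy.json).
template_text = """
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": {"type": "string"},
    "adminPassword": {"type": "securestring", "minLength": 12}
  },
  "resources": []
}
"""

# A parse failure here would surface malformed JSON before any deployment attempt.
template = json.loads(template_text)
param_names = sorted(template.get("parameters", {}))
print(param_names)  # ['adminPassword', 'adminUsername']
```

For a real file, replace the inline string with `json.load(open("azuredeploy.json"))` and supply any parameters without defaults at deployment time.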
Here are some additional resources:
To learn how to develop Resource Manager templates, see Azure Resource Manager documentation.
To see the Azure virtual machine schemas, see Azure template reference.
To see more virtual machine template samples, see Azure Quickstart templates.
Next Steps
If there were issues with the deployment, you might take a look at Troubleshoot common Azure deployment
errors with Azure Resource Manager.
Learn how to create and manage a virtual machine in Create and manage Windows VMs with the Azure
PowerShell module.
To learn more about creating templates, view the JSON syntax and properties for the resources types you
deployed:
Microsoft.Network/publicIPAddresses
Microsoft.Network/virtualNetworks
Microsoft.Network/networkInterfaces
Microsoft.Compute/virtualMachines
How to connect and sign on to an Azure virtual
machine running Windows
11/2/2020 • 2 minutes to read
You'll use the Connect button in the Azure portal to start a Remote Desktop (RDP) session from a Windows
desktop. First you connect to the virtual machine, and then you sign on.
To connect to a Windows VM from a Mac, you will need to install an RDP client for Mac such as Microsoft Remote
Desktop.
6. In the Windows Security window, select More choices and then Use a different account . Enter the
credentials for an account on the virtual machine and then select OK .
Local account : This is usually the local account user name and password that you specified when you
created the virtual machine. In this case, the domain is the name of the virtual machine and it is entered as
vmname\username.
Domain joined VM : If the VM belongs to a domain, enter the user name in the format Domain\Username.
The account also needs to either be in the Administrators group or have been granted remote access
privileges to the VM.
Domain controller : If the VM is a domain controller, enter the user name and password of a domain
administrator account for that domain.
7. Select Yes to verify the identity of the virtual machine and finish logging on.
TIP
If the Connect button in the portal is grayed-out and you are not connected to Azure via an Express Route or Site-
to-Site VPN connection, you will need to create and assign your VM a public IP address before you can use RDP. For
more information, see Public IP addresses in Azure.
This example will immediately launch the RDP connection, taking you through similar prompts as above.
You may also save the RDP file for future use.
Next steps
If you have difficulty connecting, see Troubleshoot Remote Desktop connections.
Generate and store SSH keys in the Azure portal
11/2/2020 • 3 minutes to read
If you frequently use the portal to deploy Linux VMs, you can make using SSH keys simpler by creating them
directly in the portal, or uploading them from your computer.
You can create an SSH key pair when you first create a VM and reuse it for other VMs. Or, you can create SSH keys
separately, so that you have a set of keys stored in Azure to fit your organization's needs.
If you have existing keys and you want to simplify using them in the portal, you can upload them and store them in
Azure for reuse.
For more detailed information about creating and using SSH keys with Linux VMs, see Use SSH keys to connect to
Linux VMs.
4. In Resource group select Create new to create a new resource group to store your keys. Type a name for
your resource group and select OK .
5. In Region select a region to store your keys. You can use the keys in any region; this is just the region where
they will be stored.
6. Type a name for your key in Key pair name .
7. In SSH public key source , select Generate new key pair .
8. When you are done, select Review + create .
9. After it passes validation, select Create .
10. You will then get a pop-up window; select Download private key and create resource . This will
download the SSH key as a .pem file.
11. Once the .pem file is downloaded, you might want to move it somewhere on your computer where it is easy
to point to from your SSH client.
Connect to the VM
On your local computer, open a PowerShell prompt and type:
List keys
SSH keys created in the portal are stored as resources, so you can filter your resources view to see all of them.
1. In the portal, select All resources .
2. In the filters, select Type , unselect the Select all option to clear the list.
3. Type SSH in the filter and select SSH key .
Get the public key
If you need your public key, you can easily copy it from the portal page for the key. Just list your keys (using the
process in the last section) then select a key from the list. The page for your key will open and you can click the
Copy to clipboard icon next to the key to copy it.
Next steps
To learn more about using SSH keys with Azure VMs, see Use SSH keys to connect to Linux VMs.
How to use SSH keys with Windows on Azure
11/2/2020 • 4 minutes to read
This article is for Windows users who want to create and use secure shell (SSH) keys to connect to Linux virtual
machines (VMs) in Azure. You can also generate and store SSH keys in the Azure portal to use when creating VMs
in the portal.
To use SSH keys from a Linux or macOS client, see the quick steps. For a more detailed overview of SSH, see
Detailed steps: Create and manage SSH keys for authentication to a Linux VM in Azure.
SSH clients
Recent versions of Windows 10 include OpenSSH client commands to create and use SSH keys and make SSH
connections from PowerShell or a command prompt. This is the easiest way to create an SSH connection to your
Linux VM from a Windows computer.
You can also use Bash in the Azure Cloud Shell to connect to your VM. You can use Cloud Shell in a web browser,
from the Azure portal, or as a terminal in Visual Studio Code using the Azure Account extension.
You can also install the Windows Subsystem for Linux to connect to your VM over SSH and use other native Linux
tools within a Bash shell.
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image UbuntuLTS \
--admin-username azureuser \
--ssh-key-value ~/.ssh/id_rsa.pub
With PowerShell, use New-AzVM and add the SSH key to the VM configuration. For an example, see
Quickstart: Create a Linux virtual machine in Azure with PowerShell.
If you do a lot of deployments using the portal, you might want to upload your public key to Azure, where it can
be easily selected when creating a VM from the portal. For more information, see Upload an SSH key.
Connect to your VM
With the public key deployed on your Azure VM, and the private key on your local system, SSH to your VM using
the IP address or DNS name of your VM. Replace azureuser and 10.111.12.123 in the following command with the
administrator user name, the IP address (or fully qualified domain name), and the path to your private key:
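The substitution described above can be made concrete. This sketch builds the ssh invocation from the article's placeholder values; the user name, address, and key path are the placeholders from the text, not real endpoints.

```python
# Placeholders from the article; replace with your own values.
admin_user = "azureuser"
vm_address = "10.111.12.123"   # IP address or fully qualified domain name
key_path = "~/.ssh/id_rsa"     # path to your private key

# -i points ssh at the private key matching the public key deployed on the VM.
ssh_command = f"ssh -i {key_path} {admin_user}@{vm_address}"
print(ssh_command)  # ssh -i ~/.ssh/id_rsa azureuser@10.111.12.123
```

Run the printed command from a shell (or OpenSSH-enabled PowerShell prompt) to open the session.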
If you configured a passphrase when you created your key pair, enter the passphrase when prompted.
If the VM is using the just-in-time access policy, you need to request access before you can connect to the VM. For
more information about the just-in-time policy, see Manage virtual machine access using the just in time policy.
Next steps
For information about SSH keys in the Azure portal, see Generate and store SSH keys in the Azure portal to
use when creating VMs in the portal.
For detailed steps, options, and advanced examples of working with SSH keys, see Detailed steps to create
SSH key pairs.
You can also use PowerShell in Azure Cloud Shell to generate SSH keys and make SSH connections to Linux
VMs. See the PowerShell quickstart.
If you have difficulty using SSH to connect to your Linux VMs, see Troubleshoot SSH connections to an
Azure Linux VM.
Azure Hybrid Benefit for Windows Server
11/2/2020 • 5 minutes to read
For customers with Software Assurance, Azure Hybrid Benefit for Windows Server allows you to use your on-
premises Windows Server licenses and run Windows virtual machines on Azure at a reduced cost. You can use
Azure Hybrid Benefit for Windows Server to deploy new virtual machines with Windows OS. This article goes over
the steps on how to deploy new VMs with Azure Hybrid Benefit for Windows Server and how you can update
existing running VMs. For more information about Azure Hybrid Benefit for Windows Server licensing and cost
savings, see the Azure Hybrid Benefit for Windows Server licensing page.
Each 2-processor license or each set of 16-core licenses is entitled to two instances of up to 8 cores, or one
instance of up to 16 cores. The Azure Hybrid Benefit for Standard Edition licenses can only be used once, either on-
premises or in Azure. Datacenter Edition benefits allow for simultaneous usage both on-premises and in Azure.
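The entitlement rule can be expressed as simple arithmetic. The helper below is an illustrative sketch of that rule for a single licensing tier, not an official calculator; consult the Azure Hybrid Benefit licensing page for authoritative guidance.

```python
def instances_covered(license_sets: int, cores_per_instance: int) -> int:
    """VM instances covered by `license_sets` 2-processor licenses or
    16-core license sets, for a given instance core count."""
    if cores_per_instance <= 8:
        return 2 * license_sets   # each set covers two instances of up to 8 cores
    if cores_per_instance <= 16:
        return license_sets       # or one instance of up to 16 cores
    return 0                      # a single set does not cover a larger instance

print(instances_covered(3, 8))    # 6
print(instances_covered(3, 16))   # 3
```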
Using Azure Hybrid Benefit for Windows Server with any VMs running Windows Server OS is now supported in
all regions, including VMs with additional software such as SQL Server or third-party marketplace software.
Classic VMs
For classic VMs, only deploying a new VM from on-premises custom images is supported. To take advantage of the
capabilities supported in this article, you must first migrate classic VMs to the Resource Manager model.
IMPORTANT
Classic VMs will be retired on March 1, 2023.
If you use IaaS resources from ASM, please complete your migration by March 1, 2023. We encourage you to make the
switch sooner to take advantage of the many feature enhancements in Azure Resource Manager.
For more information, see Migrate your IaaS resources to Azure Resource Manager by March 1, 2023.
CLI
az vm create \
--resource-group myResourceGroup \
--name myVM \
--location eastus \
--license-type Windows_Server
Template
Within your Resource Manager templates, an additional parameter licenseType must be specified. You can read
more about authoring Azure Resource Manager templates.
"properties": {
"licenseType": "Windows_Server",
"hardwareProfile": {
"vmSize": "[variables('vmSize')]"
}
NOTE
Changing the license type on the VM does not cause the system to reboot or cause a service interruption. It is simply an
update to a metadata flag.
Portal
From the portal VM blade, you can update the VM to use Azure Hybrid Benefit by selecting the Configuration option and
toggling the Azure hybrid benefit option.
PowerShell
Convert existing Windows Server VMs to Azure Hybrid Benefit for Windows Server
CLI
Convert existing Windows Server VMs to Azure Hybrid Benefit for Windows Server
Output:
Type : Microsoft.Compute/virtualMachines
Location : westus
LicenseType : Windows_Server
This output contrasts with the following VM deployed without Azure Hybrid Benefit for Windows Server licensing:
Type : Microsoft.Compute/virtualMachines
Location : westus
LicenseType :
CLI
NOTE
Changing the license type on the VM does not cause the system to reboot or cause a service interruption. It is a metadata
licensing flag only.
List all VMs with Azure Hybrid Benefit for Windows Server in a
subscription
To see and count all virtual machines deployed with Azure Hybrid Benefit for Windows Server, you can run the
following command from your subscription:
Portal
From the Virtual Machine or Virtual machine scale sets resource blade, you can view a list of all your VMs and
their licensing type by configuring the table columns to include Azure Hybrid Benefit. The VM setting can be in the
Enabled, Not enabled, or Not supported state.
PowerShell
$vms = Get-AzVM
$vms | ?{$_.LicenseType -like "Windows_Server"} | select ResourceGroupName, Name, LicenseType
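The same filter can be expressed in Python over VM records, for example after parsing the JSON output of `az vm list`. The records below are hypothetical sample data, not real output.

```python
# Hypothetical VM records, shaped like entries from `az vm list` JSON output.
vms = [
    {"name": "vm1", "resourceGroup": "rg1", "licenseType": "Windows_Server"},
    {"name": "vm2", "resourceGroup": "rg1", "licenseType": None},
    {"name": "vm3", "resourceGroup": "rg2", "licenseType": "Windows_Server"},
]

# Keep only VMs deployed with the Azure Hybrid Benefit license flag.
ahb_vms = [vm["name"] for vm in vms if vm.get("licenseType") == "Windows_Server"]
print(ahb_vms)  # ['vm1', 'vm3']
```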
CLI
Deploy a Virtual Machine Scale Set with Azure Hybrid Benefit for
Windows Server
Within your virtual machine scale set Resource Manager templates, an additional parameter licenseType must be
specified within your VirtualMachineProfile property. You can do this during create or update for your scale set
through ARM template, PowerShell, Azure CLI or REST.
The following example uses ARM template with a Windows Server 2016 Datacenter image:
"virtualMachineProfile": {
"storageProfile": {
"osDisk": {
"createOption": "FromImage"
},
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "2016-Datacenter",
"version": "latest"
}
},
"licenseType": "Windows_Server",
"osProfile": {
"computerNamePrefix": "[parameters('vmssName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]"
}
You can also learn more about how to Modify a virtual machine scale set for more ways to update your scale set.
Next steps
Read more about How to save money with the Azure Hybrid Benefit
Read more about Frequently asked questions for Azure Hybrid Benefit
Learn more about Azure Hybrid Benefit for Windows Server licensing detailed guidance
Learn more about Azure Hybrid Benefit for Windows Server and Azure Site Recovery make migrating
applications to Azure even more cost-effective
Learn more about Windows 10 on Azure with Multitenant Hosting Right
Learn more about Using Resource Manager templates
How to deploy Windows 10 on Azure with
Multitenant Hosting Rights
11/2/2020 • 3 minutes to read
For customers with Windows 10 Enterprise E3/E5 per user or Windows Virtual Desktop Access per user (User
Subscription Licenses or Add-on User Subscription Licenses), Multitenant Hosting Rights for Windows 10 allows
you to bring your Windows 10 Licenses to the cloud and run Windows 10 Virtual Machines on Azure without
paying for another license. For more information, please see Multitenant Hosting for Windows 10.
NOTE
This article shows you how to implement the licensing benefit for Windows 10 Pro Desktop images on Azure Marketplace.
For Windows 7, 8.1, 10 Enterprise (x64) images on Azure Marketplace for MSDN Subscriptions, please refer to Windows
client in Azure for dev/test scenarios
For Windows Server licensing benefits, please refer to Azure Hybrid use benefits for Windows Server images.
OS | PUBLISHER NAME | OFFER | SKU
The following PowerShell snippet marks all administrator accounts as active, including the built-in
administrator. This example is useful if the built-in administrator user name is unknown.
$adminAccount = Get-WmiObject Win32_UserAccount -filter "LocalAccount=True" | ? {$_.SID -Like "S-1-5-21-*-500"}
if($adminAccount.Disabled)
{
$adminAccount.Disabled = $false
$adminAccount.Put()
}
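The reason the SID filter above identifies the built-in administrator: local account SIDs have the form S-1-5-21-&lt;machine identifier&gt;-&lt;RID&gt;, and the built-in administrator always has relative identifier (RID) 500, regardless of whether the account has been renamed. A sketch of that matching logic:

```python
def is_builtin_admin(sid: str) -> bool:
    """True for a local-account SID whose RID is 500 (built-in administrator)."""
    return sid.startswith("S-1-5-21-") and sid.endswith("-500")

# Example SIDs (machine identifier portion is made up for illustration).
print(is_builtin_admin("S-1-5-21-1004336348-1177238915-682003330-500"))   # True
print(is_builtin_admin("S-1-5-21-1004336348-1177238915-682003330-1001"))  # False
```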
Deploy using Azure Resource Manager Template Deployment
Within your Resource Manager templates, an additional parameter for licenseType can be specified. You can read
more about authoring Azure Resource Manager templates. Once you have your VHD uploaded to Azure, edit your
Resource Manager template to include the license type as part of the compute provider and deploy your template as normal:
"properties": {
"licenseType": "Windows_Client",
"hardwareProfile": {
"vmSize": "[variables('vmSize')]"
}
Deploy via PowerShell
When deploying your Windows 10 VM via PowerShell, you have an additional parameter for -LicenseType . Once
you have your VHD uploaded to Azure, you create a VM using New-AzVM and specify the licensing type as follows:
New-AzVM -ResourceGroupName "myResourceGroup" -Location "West US" -VM $vm -LicenseType "Windows_Client"
The output is similar to the following example for Windows 10 with correct license type:
Type : Microsoft.Compute/virtualMachines
Location : westus
LicenseType : Windows_Client
This output contrasts with the following VM deployed without Azure Hybrid Use Benefit licensing, such as a VM
deployed straight from the Azure Gallery:
Type : Microsoft.Compute/virtualMachines
Location : westus
LicenseType :
Next Steps
Learn more about Configuring VDA for Windows 10
Learn more about Multitenant Hosting for Windows 10
Migrate your IaaS resources to Azure Resource
Manager by March 1, 2023
11/2/2020 • 3 minutes to read
In 2014, we launched infrastructure as a service (IaaS) on Azure Resource Manager. We've been enhancing
capabilities ever since. Because Azure Resource Manager now has full IaaS capabilities and other advancements,
we deprecated the management of IaaS virtual machines (VMs) through Azure Service Manager (ASM) on
February 28, 2020. This functionality will be fully retired on March 1, 2023.
Today, about 90 percent of the IaaS VMs are using Azure Resource Manager. If you use IaaS resources through
ASM, start planning your migration now. Complete it by March 1, 2023, to take advantage of Azure Resource
Manager.
VMs created using the classic deployment model will follow the Modern Lifecycle Policy for retirement.
IMPORTANT
Today, about 90% of IaaS VMs are using Azure Resource Manager. As of February 28, 2020, classic VMs have been
deprecated and will be fully retired on March 1, 2023. Learn more about this deprecation and how it affects you.
This article describes how to migrate infrastructure as a service (IaaS) resources from the Classic to Resource
Manager deployment models and details how to connect resources from the two deployment models that
coexist in your subscription by using virtual network site-to-site gateways. You can read more about Azure
Resource Manager features and benefits.
SERVICE | CONFIGURATION
Azure AD Domain Services | Virtual networks that contain Azure AD Domain Services
Supported scopes of migration
There are four different ways to complete migration of compute, network, and storage resources:
Migration of virtual machines (NOT in a virtual network)
Migration of virtual machines (in a virtual network)
Migration of storage accounts
Migration of unattached resources
Migration of virtual machines (NOT in a virtual network)
In the Resource Manager deployment model, security is enforced for your applications by default. All VMs need
to be in a virtual network in the Resource Manager model. The Azure platform restarts (Stop, Deallocate, and
Start) the VMs as part of the migration. You have two options for the virtual networks that the virtual
machines will be migrated to:
You can request the platform to create a new virtual network and migrate the virtual machine into the new
virtual network.
You can migrate the virtual machine into an existing virtual network in Resource Manager.
NOTE
In this migration scope, both the management-plane operations and the data-plane operations may not be allowed for a
period of time during the migration.
NOTE
In this migration scope, the management plane may not be allowed for a period of time during the migration. For certain
configurations as described earlier, data-plane downtime occurs.
The following screenshots show how to upgrade a Classic storage account to an Azure Resource Manager
storage account using Azure portal:
1. Sign in to the Azure portal.
2. Navigate to your storage account.
3. In the Settings section, click Migrate to ARM .
4. Click on Validate to determine migration feasibility.
5. If validation passes, click on Prepare to create a migrated storage account.
6. Type yes to confirm migration and click Commit to finish the migration.
Migration of unattached resources
Storage accounts with no associated disks or virtual machine data may be migrated independently.
Network Security Groups, Route Tables & Reserved IPs that are not attached to any Virtual Machines and Virtual
Networks can also be migrated independently.
Compute | Unassociated virtual machine disks | The VHD blobs behind these disks will get migrated when the storage account is migrated.
Compute | Virtual machine images | The VHD blobs behind these disks will get migrated when the storage account is migrated.
Network | Virtual networks using VNet peering | Migrate the virtual network to Resource Manager, then peer. Learn more about VNet peering.
Unsupported configurations
The following configurations are not currently supported.
SERVICE | CONFIGURATION | RECOMMENDATION
Resource Manager | Role-Based Access Control (RBAC) for classic resources | Because the URI of the resources is modified after migration, it is recommended that you plan the RBAC policy updates that need to happen after migration.
Compute | Virtual machines that belong to a virtual network but don't have an explicit subnet assigned | You can optionally delete the VM.
Compute | Virtual machines that have alerts, autoscale policies | The migration goes through and these settings are dropped. It is highly recommended that you evaluate your environment before you do the migration. Alternatively, you can reconfigure the alert settings after migration is complete.
Compute | Boot diagnostics with Premium storage | Disable the boot diagnostics feature for the VMs before continuing with migration. You can re-enable boot diagnostics in the Resource Manager stack after the migration is complete. Additionally, blobs that are being used for screenshot and serial logs should be deleted so you are no longer charged for those blobs.
Compute | Cloud services that contain more than one availability set or multiple availability sets | This is currently not supported. Please move the virtual machines to the same availability set before migrating.
Compute | VM with Azure Site Recovery extension | These extensions are installed on a virtual machine configured with the Azure Site Recovery service. While the migration of storage used with Site Recovery will work, current replication will be impacted. You need to disable and enable VM replication after storage migration.
Network | Virtual networks that contain virtual machines and web/worker roles | This is currently not supported. Please move the web/worker roles to their own virtual network before migrating. Once the classic virtual network is migrated, the migrated Azure Resource Manager virtual network can be peered with the classic virtual network to achieve a similar configuration as before.
Network | Classic ExpressRoute circuits | This is currently not supported. These circuits need to be migrated to Azure Resource Manager before beginning IaaS migration. To learn more, see Moving ExpressRoute circuits from the classic to the Resource Manager deployment model.
Azure App Service | Virtual networks that contain App Service environments | This is currently not supported.
Azure HDInsight | Virtual networks that contain HDInsight services | This is currently not supported.
Microsoft Dynamics Lifecycle Services | Virtual networks that contain virtual machines that are managed by Dynamics Lifecycle Services | This is currently not supported.
Azure API Management | Virtual networks that contain Azure API Management deployments | This is currently not supported. To migrate the IaaS VNET, change the VNET of the API Management deployment, which is a no-downtime operation.
Next steps
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
VPN Gateway classic to Resource Manager migration
Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager
deployment model
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Technical deep dive on platform-supported
migration from classic to Azure Resource Manager
11/2/2020 • 14 minutes to read
IMPORTANT
Today, about 90% of IaaS VMs are using Azure Resource Manager. As of February 28, 2020, classic VMs have been
deprecated and will be fully retired on March 1, 2023. Learn more about this deprecation and how it affects you.
Let's take a deep-dive on migrating from the Azure classic deployment model to the Azure Resource Manager
deployment model. We look at resources at a resource and feature level to help you understand how the Azure
platform migrates resources between the two deployment models. For more information, please read the service
announcement article:
For Linux: Platform-supported migration of IaaS resources from classic to Azure Resource Manager.
For Windows: Platform-supported migration of IaaS resources from classic to Azure Resource Manager.
NOTE
The operations described in the following sections are all idempotent. If you have a problem other than an unsupported
feature or a configuration error, retry the prepare, abort, or commit operation. Azure tries the action again.
Validate
The validate operation is the first step in the migration process. The goal of this step is to analyze the state of the
resources you want to migrate in the classic deployment model. The operation evaluates whether the resources
are capable of migration (success or failure).
You select the virtual network or a cloud service (if it’s not in a virtual network) that you want to validate for
migration. If the resource is not capable of migration, Azure lists the reasons why.
Checks not done in the validate operation
The validate operation only analyzes the state of the resources in the classic deployment model. It can check for all
failures and unsupported scenarios due to various configurations in the classic deployment model. It is not
possible to check for all issues that the Azure Resource Manager stack might impose on the resources during
migration. These issues are only checked when the resources undergo transformation in the next step of
migration (the prepare operation). The following table lists all the issues not checked in the validate operation:
NETWORKING CHECKS NOT IN THE VALIDATE OPERATION
Azure Resource Manager quota checks for networking resources. For example: static public IP, dynamic public IPs, load balancer,
network security groups, route tables, and network interfaces.
All load balancer rules are valid across deployment and the virtual network.
Conflicting private IPs between stop-deallocated VMs in the same virtual network.
Prepare
The prepare operation is the second step in the migration process. The goal of this step is to simulate the
transformation of the IaaS resources from the classic deployment model to Resource Manager resources. Further,
the prepare operation presents this side-by-side for you to visualize.
NOTE
Your resources in the classic deployment model are not modified during this step. It's a safe step to run if you're trying out
migration.
You select the virtual network or the cloud service (if it’s not in a virtual network) that you want to prepare for
migration.
If the resource is not capable of migration, Azure stops the migration process and lists the reason why the
prepare operation failed.
If the resource is capable of migration, Azure locks down the management-plane operations for the resources
under migration. For example, you are not able to add a data disk to a VM under migration.
Azure then starts the migration of metadata from the classic deployment model to Resource Manager for the
migrating resources.
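With the same classic PowerShell module, the prepare step is a sketch of the same cmdlet with the -Prepare switch (the virtual network name is a placeholder):

```powershell
# Copies metadata into the Resource Manager stack; the classic side is not modified
Move-AzureVirtualNetwork -Prepare -VirtualNetworkName "myVnet"
```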
After the prepare operation is complete, you have the option of visualizing the resources in both the classic
deployment model and Resource Manager. For every cloud service in the classic deployment model, the Azure
platform creates a resource group name that has the pattern <cloud-service-name>-Migrated.
NOTE
It is not possible to select the name of a resource group created for migrated resources (that is, "-Migrated"). After
migration is complete, however, you can use the move feature of Azure Resource Manager to move resources to any
resource group you want. For more information, see Move resources to new resource group or subscription.
The following two screenshots show the result after a successful prepare operation. The first one shows a
resource group that contains the original cloud service. The second one shows the new "-Migrated" resource
group that contains the equivalent Azure Resource Manager resources.
Here is a behind-the-scenes look at your resources after the completion of the prepare phase. Note that the
resource in the data plane is the same. It's represented in both the management plane (classic deployment model)
and the control plane (Resource Manager).
NOTE
VMs that are not in a virtual network in the classic deployment model are stopped and deallocated in this phase of
migration.
Commit
After you validate and review the prepared resources, you can commit the migration. Resources no longer appear
in the classic deployment model, and are available only in the Resource Manager deployment model. The migrated
resources can be managed only in the Azure portal.
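In the classic PowerShell module, commit (and its abort counterpart, if you decide not to proceed) looks like the following sketch for a virtual network; the name is a placeholder:

```powershell
# Roll back to the classic deployment model if you are not ready
Move-AzureVirtualNetwork -Abort -VirtualNetworkName "myVnet"

# Or commit: after this, the resources exist only in Resource Manager
Move-AzureVirtualNetwork -Commit -VirtualNetworkName "myVnet"
```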
NOTE
This is an idempotent operation. If it fails, retry the operation. If it continues to fail, create a support ticket or post a
question on Microsoft Q&A.
Migration flowchart
Here is a flowchart that shows how to proceed with migration:
| CLASSIC REPRESENTATION | RESOURCE MANAGER REPRESENTATION | NOTES |
| --- | --- | --- |
| Disk resources attached to VM | Implicit disks attached to VM | Disks are not modeled as top-level resources in the Resource Manager deployment model. They are migrated as implicit disks under the VM. Only disks that are attached to a VM are currently supported. Resource Manager VMs can now use storage accounts in the classic deployment model, which allows the disks to be easily migrated without any updates. |
| Virtual machine certificates | Certificates in Azure Key Vault | If a cloud service contains service certificates, the migration creates a new Azure key vault per cloud service, and moves the certificates into the key vault. The VMs are updated to reference the certificates from the key vault. |
| Load-balanced endpoint set | Load balancer | In the classic deployment model, the platform assigned an implicit load balancer for every cloud service. During migration, a new load-balancer resource is created, and the load-balanced endpoint set becomes load-balancer rules. |
| Inbound NAT rules | Inbound NAT rules | Input endpoints defined on the VM are converted to inbound network address translation rules under the load balancer during the migration. |
| VIP address | Public IP address with DNS name | The virtual IP address becomes a public IP address, and is associated with the load balancer. A virtual IP can only be migrated if there is an input endpoint assigned to it. |
| Virtual network | Virtual network | The virtual network is migrated, with all its properties, to the Resource Manager deployment model. A new resource group is created with the name <virtual-network-name>-migrated. |
| Reserved IPs | Public IP address with static allocation method | Reserved IPs associated with the load balancer are migrated, along with the migration of the cloud service or the virtual machine. Unassociated reserved IPs can be migrated using Move-AzureReservedIP. |
| Public IP address per VM | Public IP address with dynamic allocation method | The public IP address associated with the VM is converted as a public IP address resource, with the allocation method set to static. |
| IP forwarding property on a VM's network configuration | IP forwarding property on the NIC | The IP forwarding property on a VM is converted to a property on the network interface during the migration. |
| Load balancer with multiple IPs | Load balancer with multiple public IP resources | Every public IP associated with the load balancer is converted to a public IP resource, and associated with the load balancer after migration. |
| Internal DNS names on the VM | Internal DNS names on the NIC | During migration, the internal DNS suffixes for the VMs are migrated to a read-only property named "InternalDomainNameSuffix" on the NIC. The suffix remains unchanged after migration, and VM resolution should continue to work as previously. |
| Virtual network gateway | Virtual network gateway | Virtual network gateway properties are migrated unchanged. The VIP associated with the gateway does not change either. |
| Local network site | Local network gateway | Local network site properties are migrated unchanged to a new resource called a local network gateway. This represents on-premises address prefixes and the remote gateway IP. |
Next steps
For Linux:
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
For Windows:
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
VPN Gateway classic to Resource Manager migration
Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager
deployment model
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Planning for migration of IaaS resources from classic
to Azure Resource Manager in Windows
11/2/2020 • 14 minutes to read
IMPORTANT
Today, about 90% of IaaS VMs are using Azure Resource Manager. As of February 28, 2020, classic VMs have been
deprecated and will be fully retired on March 1, 2023. Learn more about this deprecation and how it affects you.
While Azure Resource Manager offers many amazing features, it is critical to plan out your migration journey to
make sure things go smoothly. Spending time on planning will ensure that you do not encounter issues while
executing migration activities.
There are four general phases of the migration journey: Plan, Lab Test, Migration, and Beyond Migration.
Plan
Technical considerations and tradeoffs
Depending on your technical requirements, size, geographies, and operational practices, you might want to
consider:
1. Why is Azure Resource Manager desired for your organization? What are the business reasons for a
migration?
2. What are the technical reasons for Azure Resource Manager? What (if any) additional Azure services would
you like to leverage?
3. Which application (or sets of virtual machines) is included in the migration?
4. Which scenarios are supported with the migration API? Review the unsupported features and configurations.
5. Will your operational teams now support applications/VMs in both Classic and Azure Resource Manager?
6. How (if at all) does Azure Resource Manager change your VM deployment, management, monitoring, and
reporting processes? Do your deployment scripts need to be updated?
7. What is the communications plan to alert stakeholders (end users, application owners, and infrastructure
owners)?
8. Depending on the complexity of the environment, should there be a maintenance period where the application
is unavailable to end users and to application owners? If so, for how long?
9. What is the training plan to ensure stakeholders are knowledgeable and proficient in Azure Resource
Manager?
10. What is the program management or project management plan for the migration?
11. What are the timelines for the Azure Resource Manager migration and other related technology road maps?
Are they optimally aligned?
Patterns of success
Successful customers have detailed plans where the preceding questions are discussed, documented, and
governed. Ensure the migration plans are broadly communicated to sponsors and stakeholders. Equip yourself
with knowledge about your migration options; we highly recommend reading through the migration document set
below.
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Pitfalls to avoid
Failure to plan. The technology steps of this migration are proven and the outcome is predictable.
Assumption that the platform supported migration API will account for all scenarios. Read the unsupported
features and configurations to understand what scenarios are supported.
Not planning potential application outage for end users. Plan enough buffer to adequately warn end users of
potentially unavailable application time.
Lab Test
Replicate your environment and do a test migration
NOTE
Exact replication of your existing environment is executed by using a community-contributed tool, which is not officially
supported by Microsoft Support. Therefore, it is an optional step, but it is the best way to find issues without touching
your production environments. If using a community-contributed tool is not an option, then read about the
Validate/Prepare/Abort dry run recommendation below.
Conducting a lab test of your exact scenario (compute, networking, and storage) is the best way to ensure a
smooth migration. This requires:
A wholly separate lab or an existing non-production environment to test. We recommend a wholly
separate lab that can be migrated repeatedly and destructively modified. Scripts to collect/hydrate
metadata from the real subscriptions are listed below.
It's a good idea to create the lab in a separate subscription. The reason is that the lab will be torn down
repeatedly, and having a separate, isolated subscription reduces the chance that something real will get
accidentally deleted.
This can be accomplished by using the AsmMetadataParser tool. Read more about this tool here
Patterns of success
The following were issues discovered in many of the larger migrations. This is not an exhaustive list, and you
should refer to the unsupported features and configurations for more detail. You may or may not encounter these
technical issues, but if you do, solving them before attempting migration will ensure a smoother experience.
Do a Validate/Prepare/Abort Dry Run - This is perhaps the most important step to ensure Classic to
Azure Resource Manager migration success. The migration API has three main steps: Validate, Prepare, and
Commit. Validate reads the state of your classic environment and returns a result of all issues. However,
because some issues might exist in the Azure Resource Manager stack, Validate will not catch everything.
The next step in the migration process, Prepare, helps expose those issues. Prepare moves the metadata
from Classic to Azure Resource Manager, but does not commit the move, and does not remove or change
anything on the Classic side. The dry run involves preparing the migration, then aborting (not
committing) the migration prepare. The goal of the validate/prepare/abort dry run is to see all of the
metadata in the Azure Resource Manager stack, examine it (programmatically or in the portal), verify that
everything migrates correctly, and work through technical issues. It also gives you a sense of migration
duration so you can plan for downtime accordingly. A validate/prepare/abort does not cause any user
downtime; therefore, it is non-disruptive to application usage.
The items below will need to be solved before the dry run, but a dry run test will also safely flush out
these preparation steps if they are missed. During enterprise migration, we've found the dry run to be a
safe and invaluable way to ensure migration readiness.
When prepare is running, the control plane (Azure management operations) is locked for the
whole virtual network, so no changes can be made to VM metadata during validate/prepare/abort.
Otherwise, any application function (RD, VM usage, and so on) is unaffected. Users of the VMs will not
know that the dry run is being executed.
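The dry run described above can be sketched with the classic PowerShell module as follows; the virtual network name is a placeholder:

```powershell
$vnetName = "myVnet"   # placeholder name

# 1. Validate: read the classic state and report blocking issues
$validate = Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName
$validate.ValidationMessages

# 2. Prepare: copy metadata into the Resource Manager stack (classic side untouched)
Move-AzureVirtualNetwork -Prepare -VirtualNetworkName $vnetName

# 3. Examine the "-Migrated" resource group in the portal or programmatically,
#    then abort instead of committing
Move-AzureVirtualNetwork -Abort -VirtualNetworkName $vnetName
```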
ExpressRoute circuits and VPN - Currently, ExpressRoute gateways with authorization links cannot be
migrated without downtime. For the workaround, see Migrate ExpressRoute circuits and associated virtual
networks from the classic to the Resource Manager deployment model.
VM Extensions - Virtual machine extensions are potentially one of the biggest roadblocks to migrating
running VMs. Remediation of VM extensions could take upwards of 1-2 days, so plan accordingly. A
working Azure agent is needed to report back the VM extension status of running VMs. If the status comes
back as bad for a running VM, this halts migration. The agent itself does not need to be in working order
to enable migration, but if extensions exist on the VM, then both a working agent AND outbound internet
connectivity (with DNS) are needed for migration to move forward.
If connectivity to a DNS server is lost during migration, all VM extensions except BGInfo version 1.*
need to first be removed from every VM before migration prepare, and subsequently re-added back to
the VM after Azure Resource Manager migration. This is only for VMs that are running. If the VMs
are stop-deallocated, VM extensions do not need to be removed.
NOTE
Many extensions like Azure diagnostics and security center monitoring will reinstall themselves after migration, so
removing them is not a problem.
In addition, make sure Network Security Groups are not restricting outbound internet access. This
can happen with some Network Security Groups configurations. Outbound internet access (and
DNS) is needed for VM Extensions to be migrated to Azure Resource Manager.
Two versions of the BGInfo extension exist and are called versions 1 and 2.
If the VM is using the BGInfo version 1 extension, you can leave this extension as is. The
migration API skips this extension. The BGInfo extension can be added after migration.
If the VM is using the JSON-based BGInfo version 2 extension, the VM was created using the
Azure portal. The migration API includes this extension in the migration to Azure Resource
Manager, provided the agent is working and has outbound internet access (and DNS).
Remediation Option 1. If you know your VMs will not have outbound internet access, a working
DNS service, and working Azure agents on the VMs, then uninstall all VM extensions as part of the
migration before Prepare, then reinstall the VM extensions after migration.
Remediation Option 2. If VM extensions are too big of a hurdle, another option is to
shut down/deallocate all VMs before migration. Migrate the deallocated VMs, then restart them on
the Azure Resource Manager side. The benefit here is that VM extensions will migrate. The downside
is that all public-facing virtual IPs will be lost (this may be a non-starter), and obviously the VMs will
shut down, causing a much greater impact on working applications.
NOTE
If an Azure Security Center policy is configured against the running VMs being migrated, the security policy
needs to be stopped before removing extensions, otherwise the security monitoring extension will be
reinstalled automatically on the VM after removing it.
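As a hedged sketch of the extension remediation, the classic PowerShell module can list and remove extensions on a running VM. The service, VM, and extension names below are placeholders; re-add any removed extension after migration completes:

```powershell
# List the extensions installed on a classic VM
$vm = Get-AzureVM -ServiceName "myService" -Name "myVM"
$vm | Get-AzureVMExtension | Format-Table ExtensionName, Publisher, Version

# Remove one extension before Prepare (placeholder extension name/publisher),
# then push the updated configuration to the VM
$vm | Remove-AzureVMExtension -ExtensionName "CustomScriptExtension" `
    -Publisher "Microsoft.Compute" | Update-AzureVM
```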
Availability Sets - For a virtual network (vNet) to be migrated to Azure Resource Manager, the VMs
contained in the Classic deployment (that is, the cloud service) must all be in one availability set, or none of
them can be in an availability set. Having more than one availability set in the cloud service is not compatible with
Azure Resource Manager and halts migration. Additionally, there cannot be some VMs in an availability
set and some VMs not in an availability set. To resolve this, you need to remediate or reshuffle your
cloud service. Plan accordingly, as this might be time consuming.
Web/Worker Role Deployments - Cloud Services containing web and worker roles cannot migrate to
Azure Resource Manager. To migrate the contents of your web and worker roles, you will need to migrate
the code itself to newer PaaS App Services (this discussion is beyond the scope of this document). If you
want to leave the web/worker roles as is but migrate classic VMs to the Resource Manager deployment
model, the web/worker roles must first be removed from the virtual network before migration can start. A
typical solution is to just move web/worker role instances to a separate Classic virtual network that is also
linked to an ExpressRoute circuit. For this redeploy approach, create a new Classic virtual network,
move/redeploy the web/worker roles to that new virtual network, and then delete the deployments from
the virtual network being migrated. No code changes are required. The new Virtual Network Peering capability
can be used to peer together the classic virtual network containing the web/worker roles and other virtual
networks in the same Azure region, such as the virtual network being migrated (after virtual network
migration is completed, as peered virtual networks cannot be migrated), hence providing the
same capabilities with no performance loss and no latency/bandwidth penalties. Given the addition of
Virtual Network Peering, web/worker role deployments can now easily be mitigated and not block the
migration to Azure Resource Manager.
Azure Resource Manager Quotas - Azure regions have separate quotas/limits for both Classic and
Azure Resource Manager. Even though in a migration scenario new hardware isn't being consumed (we're
swapping existing VMs from Classic to Azure Resource Manager), Azure Resource Manager quotas still
need to be in place with enough capacity before migration can start. Listed below are the major limits
we've seen cause problems. Open a quota support ticket to raise the limits.
NOTE
These limits need to be raised in the same region as your current environment to be migrated.
Network Interfaces
Load Balancers
Public IPs
Static Public IPs
Cores
Network Security Groups
Route Tables
You can check your current Azure Resource Manager quotas using the following commands with the
latest version of Azure PowerShell.
Compute (Cores, Availability Sets)
Network (Virtual Networks, Static Public IPs, Public IPs, Network Security Groups, Network
Interfaces, Load Balancers, Route Tables)
Get-AzUsage /subscriptions/<subscription-id>/providers/Microsoft.Network/locations/<azure-region> -ApiVersion 2016-03-30 | Format-Table
Get-AzStorageUsage
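For the compute quotas, a similar check can be sketched with the Az PowerShell module; the region below is a placeholder:

```powershell
# Current cores and availability-set usage against the Resource Manager quota
Get-AzVMUsage -Location "westus" | Format-Table Name, CurrentValue, Limit
```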
Azure Resource Manager API throttling limits - If you have a large enough environment (for example, more than 400
VMs in a virtual network), you might hit the default API throttling limits for writes (currently 1,200 writes per hour) in
Azure Resource Manager. Before starting migration, raise a support ticket to increase this limit
for your subscription.
Provisioning Timed Out VM Status - If any VM has the status provisioning timed out, this needs to
be resolved before migration. The only way to do this is with downtime, by deprovisioning and reprovisioning the
VM (delete it, keep the disk, and re-create the VM).
RoleStateUnknown VM Status - If migration halts due to a role state unknown error message, inspect
the VM using the portal and ensure it is running. This error typically goes away on its own (no
remediation required) after a few minutes; it is a transient state often seen during Virtual Machine
start, stop, and restart operations. Recommended practice: retry migration after a few minutes.
Fabric Cluster does not exist - In some cases, certain VMs cannot be migrated for various odd reasons.
One of these known cases is if the VM was recently created (within the last week or so) and happened to
land on an Azure cluster that is not yet equipped for Azure Resource Manager workloads. You will get an error
that says fabric cluster does not exist, and the VM cannot be migrated. Waiting a couple of days will
usually resolve this particular problem, as the cluster will soon get Azure Resource Manager enabled.
However, one immediate workaround is to stop-deallocate the VM, then continue forward with migration,
and start the VM back up in Azure Resource Manager after migrating.
Pitfalls to avoid
Do not take shortcuts and omit the validate/prepare/abort dry run migrations.
Most, if not all, of your potential issues will surface during the validate/prepare/abort steps.
Migration
Technical considerations and tradeoffs
Now you are ready because you have worked through the known issues with your environment.
For the real migrations, you might want to consider:
1. Plan and schedule the virtual network (smallest unit of migration) with increasing priority. Do the simple
virtual networks first, and progress with the more complicated virtual networks.
2. Most customers will have non-production and production environments. Schedule production last.
3. (OPTIONAL) Schedule a maintenance downtime with plenty of buffer in case unexpected issues arise.
4. Communicate with and align with your support teams in case issues arise.
Patterns of success
The technical guidance from the Lab Test section should be considered and mitigated prior to a real migration.
With adequate testing, the migration is actually a non-event. For production environments, it might be helpful to
have additional support, such as a trusted Microsoft partner or Microsoft Premier services.
Pitfalls to avoid
Not fully testing may cause issues and delay in the migration.
Beyond Migration
Technical considerations and tradeoffs
Now that you are in Azure Resource Manager, maximize the platform. Read the overview of Azure Resource
Manager to find out about additional benefits.
Things to consider:
Bundling the migration with other activities. Most customers opt for an application maintenance window. If so,
you might want to use this downtime to enable other Azure Resource Manager capabilities like encryption and
migration to Managed Disks.
Revisit the technical and business reasons for Azure Resource Manager; enable the additional services
available only on Azure Resource Manager that apply to your environment.
Modernize your environment with PaaS services.
Patterns of success
Be purposeful on what services you now want to enable in Azure Resource Manager. Many customers find the
below compelling for their Azure environments:
Azure role-based access control (Azure RBAC).
Azure Resource Manager templates for easier and more controlled deployment.
Tags.
Activity Control
Azure Policies
Pitfalls to avoid
Remember why you started this Classic to Azure Resource Manager migration journey. What were the original
business reasons? Did you achieve the business reason?
Next steps
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
VPN Gateway classic to Resource Manager migration
Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager
deployment model
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Migrate IaaS resources from classic to Azure
Resource Manager by using PowerShell
11/2/2020 • 10 minutes to read
IMPORTANT
Today, about 90% of IaaS VMs are using Azure Resource Manager. As of February 28, 2020, classic VMs have been
deprecated and will be fully retired on March 1, 2023. Learn more about this deprecation and how it affects you.
These steps show you how to use Azure PowerShell commands to migrate infrastructure as a service (IaaS)
resources from the classic deployment model to the Azure Resource Manager deployment model.
If you want, you can also migrate resources by using the Azure CLI.
For background on supported migration scenarios, see Platform-supported migration of IaaS resources from
classic to Azure Resource Manager.
For detailed guidance and a migration walkthrough, see Technical deep dive on platform-supported migration
from classic to Azure Resource Manager.
Review the most common migration errors.
Here's a flowchart to identify the order in which steps need to be executed during a migration process.
IMPORTANT
Application gateways aren't currently supported for migration from classic to Resource Manager. To migrate a virtual
network with an application gateway, remove the gateway before you run a Prepare operation to move the network. After
you complete the migration, reconnect the gateway in Azure Resource Manager.
Azure ExpressRoute gateways that connect to ExpressRoute circuits in another subscription can't be migrated automatically.
In such cases, remove the ExpressRoute gateway, migrate the virtual network, and re-create the gateway. For more
information, see Migrate ExpressRoute circuits and associated virtual networks from the classic to the Resource Manager
deployment model.
Connect-AzAccount
Set your Azure subscription for the current session. This example sets the default subscription name to My
Azure Subscription. Replace the example subscription name with your own.
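A sketch of this command with the Az module; the subscription name is an example:

```powershell
# Make the named subscription the default for the current session
Select-AzSubscription -SubscriptionName "My Azure Subscription"
```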
Register with the migration resource provider by using the following command:
Wait five minutes for the registration to finish. Check the status of the approval by using the following command:
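A sketch of the registration and status-check commands with the Az module; the provider namespace for classic migration is Microsoft.ClassicInfrastructureMigrate:

```powershell
# Register the classic-migration resource provider
Register-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate

# Check that RegistrationState has reached "Registered"
Get-AzResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate
```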
Add-AzureAccount
Set your Azure subscription for the current session. This example sets the default subscription to My Azure
Subscription. Replace the example subscription name with your own.
Step 5.1: Option 1 - Migrate virtual machines in a cloud service (not in a virtual network)
Get the list of cloud services by using the following command. Then pick the cloud service that you want to
migrate. If the VMs in the cloud service are in a virtual network or if they have web or worker roles, the
command returns an error message.
Get-AzureService | ft Servicename
Get the deployment name for the cloud service. In this example, the service name is My Service. Replace the
example service name with your own service name.
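A sketch of these commands with the classic PowerShell module; the service name is an example:

```powershell
$serviceName = "My Service"
# Look up the deployment that belongs to the cloud service
$deployment = Get-AzureDeployment -ServiceName $serviceName
$deploymentName = $deployment.DeploymentName
```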
Prepare the virtual machines in the cloud service for migration. You have two options to choose from.
Option 1: Migrate the VMs to a platform-created virtual network.
First, validate that you can migrate the cloud service by using the following commands:
The following command displays any warnings and errors that block migration. If validation is successful,
you can move on to the Prepare step.
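A sketch of the validate and prepare commands for this option, using the classic PowerShell module and the $serviceName/$deploymentName variables from earlier:

```powershell
$validate = Move-AzureService -Validate -ServiceName $serviceName `
    -DeploymentName $deploymentName -CreateNewVirtualNetwork
$validate.ValidationMessages

# If validation passed, prepare the VMs for migration
Move-AzureService -Prepare -ServiceName $serviceName `
    -DeploymentName $deploymentName -CreateNewVirtualNetwork
```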
Option 2: Migrate to an existing virtual network in the Resource Manager deployment model.
This example sets the resource group name to myResourceGroup, the virtual network name to
myVirtualNetwork, and the subnet name to mySubNet. Replace the names in the example with the
names of your own resources.
$existingVnetRGName = "myResourceGroup"
$vnetName = "myVirtualNetwork"
$subnetName = "mySubNet"
First, validate that you can migrate the virtual network by using the following command:
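A sketch of the validate and prepare commands for this option, using the classic PowerShell module and the variables defined above:

```powershell
$validate = Move-AzureService -Validate -ServiceName $serviceName `
    -DeploymentName $deploymentName -UseExistingVirtualNetwork `
    -VirtualNetworkResourceGroupName $existingVnetRGName `
    -VirtualNetworkName $vnetName -SubnetName $subnetName
$validate.ValidationMessages

# If validation passed, prepare the migration into the existing virtual network
Move-AzureService -Prepare -ServiceName $serviceName `
    -DeploymentName $deploymentName -UseExistingVirtualNetwork `
    -VirtualNetworkResourceGroupName $existingVnetRGName `
    -VirtualNetworkName $vnetName -SubnetName $subnetName
```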
After the Prepare operation succeeds with either of the preceding options, query the migration state of the VMs.
Ensure that they're in the Prepared state.
This example sets the VM name to myVM . Replace the example name with your own VM name.
$vmName = "myVM"
$vm = Get-AzureVM -ServiceName $serviceName -Name $vmName
$vm.VM.MigrationState
Check the configuration for the prepared resources by using either PowerShell or the Azure portal. If you're not
ready for migration and you want to go back to the old state, use the following command:
If the prepared configuration looks good, you can move forward and commit the resources by using the
following command:
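A sketch of the abort and commit commands for the cloud service, using the classic PowerShell module and the variables from earlier:

```powershell
# Go back to the old state if needed
Move-AzureService -Abort -ServiceName $serviceName -DeploymentName $deploymentName

# Or commit the migration
Move-AzureService -Commit -ServiceName $serviceName -DeploymentName $deploymentName
```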
NOTE
Migrate a single virtual machine created using the classic deployment model by creating a new Resource Manager virtual
machine with Managed Disks by using the VHD (OS and data) files of the virtual machine.
NOTE
The virtual network name might be different from what is shown in the new portal. The new Azure portal displays the
name as [vnet-name] , but the actual virtual network name is of type Group [resource-group-name] [vnet-name] .
Before you start the migration, look up the actual virtual network name by using the command
Get-AzureVnetSite | Select -Property Name or view it in the old Azure portal.
This example sets the virtual network name to myVnet . Replace the example virtual network name with your
own.
$vnetName = "myVnet"
NOTE
If the virtual network contains web or worker roles, or VMs with unsupported configurations, you get a validation error
message.
First, validate that you can migrate the virtual network by using the following command:
The following command displays any warnings and errors that block migration. If validation is successful, you
can proceed with the following Prepare step:
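A sketch of the validate and prepare commands for the virtual network, using the classic PowerShell module and the $vnetName variable from earlier:

```powershell
$validate = Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName
$validate.ValidationMessages

# If validation passed, prepare the virtual network for migration
Move-AzureVirtualNetwork -Prepare -VirtualNetworkName $vnetName
```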
Check the configuration for the prepared virtual machines by using either Azure PowerShell or the Azure portal.
If you're not ready for migration and you want to go back to the old state, use the following command:
If the prepared configuration looks good, you can move forward and commit the resources by using the
following command:
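A sketch of the abort and commit commands for the virtual network, using the classic PowerShell module:

```powershell
# Go back to the old state if needed
Move-AzureVirtualNetwork -Abort -VirtualNetworkName $vnetName

# Or commit the migration
Move-AzureVirtualNetwork -Commit -VirtualNetworkName $vnetName
```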
NOTE
If your storage account has no associated disks or VM data, you can skip directly to the "Validate storage accounts and
start migration" section.
Prerequisite checks if you migrated any VMs or your storage account has disk resources:
Migrate virtual machines whose disks are stored in the storage account.
The following command returns RoleName and DiskName properties of all the VM disks in the
storage account. RoleName is the name of the virtual machine to which a disk is attached. If this
command returns disks, then ensure that virtual machines to which these disks are attached are
migrated before you migrate the storage account.
$storageAccountName = 'yourStorageAccountName'
Get-AzureDisk | Where-Object {$_.MediaLink.Host.Contains($storageAccountName)} | Select-Object -ExpandProperty AttachedTo -Property DiskName | Format-List -Property RoleName, DiskName
If the previous command returns disks, delete these disks by using the following command:
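A sketch of the delete command, assuming $diskName holds one of the DiskName values returned above; note that Remove-AzureDisk without the -DeleteVHD switch removes only the classic disk object and leaves the underlying VHD blob in place:

```powershell
# Remove the classic disk object; the VHD blob itself is retained
Remove-AzureDisk -DiskName $diskName
```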
The following command returns all the VM images with data disks stored in the storage account.
Delete all the VM images returned by the previous commands by using this command:
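A sketch of those two commands, assuming the classic Get-AzureVMImage and Remove-AzureVMImage cmdlets and the $storageAccountName variable set earlier:

```powershell
# List VM images whose data disks are stored in the storage account
Get-AzureVMImage | Where-Object {
    $_.DataDiskConfigurations -ne $null -and
    ($_.DataDiskConfigurations | Where-Object { $_.MediaLink -ne $null -and $_.MediaLink.Host.Contains($storageAccountName) }).Count -gt 0
} | Select-Object -Property ImageName, ImageLabel

# Delete each returned image (replace myImageName with an ImageName from the output)
Remove-AzureVMImage -ImageName 'myImageName'
```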
$storageAccountName = "myStorageAccount"
Move-AzureStorageAccount -Validate -StorageAccountName $storageAccountName
$storageAccountName = "myStorageAccount"
Move-AzureStorageAccount -Prepare -StorageAccountName $storageAccountName
Check the configuration for the prepared storage account by using either Azure PowerShell or the Azure
portal. If you're not ready for migration and you want to go back to the old state, use the following
command:
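A sketch of the abort command for the prepared storage account:

```powershell
# Roll back the Prepare operation and return the storage account to the old state
Move-AzureStorageAccount -Abort -StorageAccountName $storageAccountName
```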
If the prepared configuration looks good, you can move forward and commit the resources by using the
following command:
Move-AzureStorageAccount -Commit -StorageAccountName $storageAccountName
Next steps
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Common errors during Classic to Azure Resource
Manager migration
11/2/2020 • 9 minutes to read
IMPORTANT
Today, about 90% of IaaS VMs are using Azure Resource Manager. As of February 28, 2020, classic VMs have been
deprecated and will be fully retired on March 1, 2023. Learn more about this deprecation and how it affects you.
This article catalogs the most common errors and mitigations during the migration of IaaS resources from Azure
classic deployment model to the Azure Resource Manager stack.
NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will
continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM
compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation instructions, see Install
Azure PowerShell.
List of errors
Error string: Internal server error
Mitigation: In some cases, this is a transient error that goes away with a retry. If it continues to persist, contact Azure support as it needs investigation of platform logs.

Error string: Migration is not supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it is a PaaS deployment (Web/Worker).
Mitigation: This happens when a deployment contains a web/worker role. Since migration is only supported for Virtual Machines, remove the web/worker role from the deployment and try migration again.

Error string: Template {template-name} deployment failed. CorrelationId={guid}
Mitigation: In the backend of the migration service, we use Azure Resource Manager templates to create resources in the Azure Resource Manager stack. Since templates are idempotent, you can usually safely retry the migration operation to get past this error. If this error continues to persist, contact Azure support and give them the CorrelationId.

Error string: The virtual network {virtual-network-name} does not exist.
Mitigation: This can happen if you created the Virtual Network in the new Azure portal. The actual Virtual Network name follows the pattern "Group * <VNET name>".

Error string: VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} which is not supported in Azure Resource Manager. It is recommended to uninstall it from the VM before continuing with migration.
Mitigation: XML extensions such as BGInfo 1.* are not supported in Azure Resource Manager. Therefore, these extensions cannot be migrated. If these extensions are left installed on the virtual machine, they are automatically uninstalled before completing the migration.

Error string: VM {vm-name} in HostedService {hosted-service-name} contains Extension VMSnapshot/VMSnapshotLinux, which is currently not supported for Migration. Uninstall it from the VM and add it back using Azure Resource Manager after the Migration is Complete.
Mitigation: This is the scenario where the virtual machine is configured for Azure Backup. Since this is currently an unsupported scenario, follow the workaround at https://aka.ms/vmbackupmigration.

Error string: VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} whose Status is not being reported from the VM. Hence, this VM cannot be migrated. Ensure that the Extension status is being reported or uninstall the extension from the VM and retry migration.

Error string: VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} reporting Handler Status: {handler-status}. Hence, the VM cannot be migrated. Ensure that the Extension handler status being reported is {handler-status} or uninstall it from the VM and retry migration.
Mitigation (for both of the preceding errors): The Azure guest agent and VM extensions need outbound internet access to the VM storage account to populate their status. Common causes of status failure include: a Network Security Group that blocks outbound access to the internet; the VNET has on-premises DNS servers and DNS connectivity is lost. If you continue to see an unsupported status, you can uninstall the extensions to skip this check and move forward with migration.

Error string: Migration is not supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it has multiple Availability Sets.
Mitigation: Currently, only hosted services that have one or fewer Availability Sets can be migrated. To work around this problem, move the additional Availability Sets, and the Virtual Machines in those Availability Sets, to a different hosted service.

Error string: Migration is not supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it has VMs that are not part of the Availability Set even though the HostedService contains one.
Mitigation: The workaround for this scenario is to either move all the virtual machines into a single Availability Set or remove all Virtual Machines from the Availability Set in the hosted service.

Error string: Storage account/HostedService/Virtual Network {virtual-network-name} is in the process of being migrated and hence cannot be changed.
Mitigation: This error happens when the "Prepare" migration operation has been completed on the resource and an operation that would make a change to the resource is triggered. Because of the lock on the management plane after the "Prepare" operation, any changes to the resource are blocked. To unlock the management plane, you can run the "Commit" migration operation to complete the migration, or the "Abort" migration operation to roll back the "Prepare" operation.

Error string: Migration is not allowed for HostedService {hosted-service-name} because it has VM {vm-name} in State: RoleStateUnknown. Migration is allowed only when the VM is in one of the following states - Running, Stopped, Stopped Deallocated.
Mitigation: The VM might be going through a state transition, which usually happens during an update operation on the HostedService such as a reboot or extension installation. It is recommended that you let the update operation complete on the HostedService before trying migration.

Error string: Deployment {deployment-name} in HostedService {hosted-service-name} contains a VM {vm-name} with Data Disk {data-disk-name} whose physical blob size {size-of-the-vhd-blob-backing-the-data-disk} bytes does not match the VM Data Disk logical size {size-of-the-data-disk-specified-in-the-vm-api} bytes. Migration will proceed without specifying a size for the data disk for the Azure Resource Manager VM.
Mitigation: This error happens if you've resized the VHD blob without updating the size in the VM API model. Detailed mitigation steps are outlined below.

Error string: A storage exception occurred while validating data disk {data disk name} with media link {data disk Uri} for VM {VM name} in Cloud Service {Cloud Service name}. Please ensure that the VHD media link is accessible for this virtual machine.
Mitigation: This error can happen if the disks of the VM have been deleted or are no longer accessible. Make sure the disks for the VM exist.

Error string: VM {vm-name} in HostedService {cloud-service-name} contains Disk with MediaLink {vhd-uri} which has blob name {vhd-blob-name} that is not supported in Azure Resource Manager.
Mitigation: This error occurs when the name of the blob contains a "/", which is not currently supported in the Compute Resource Provider.

Error string: Migration is not allowed for Deployment {deployment-name} in HostedService {cloud-service-name} as it is not in the regional scope. Please refer to https://aka.ms/regionalscope for moving this deployment to regional scope.
Mitigation: In 2014, Azure announced that networking resources would move from a cluster-level scope to a regional scope. See https://aka.ms/regionalscope for more details. This error happens when the deployment being migrated has not had an update operation, which automatically moves it to a regional scope. The best workaround is to either add an endpoint to a VM or add a data disk to the VM, and then retry the migration. See How to set up endpoints on a classic Windows virtual machine in Azure or Attach a data disk to a Windows virtual machine created with the classic deployment model.

Error string: Migration is not supported for Virtual Network {vnet-name} because it has non-gateway PaaS deployments.
Mitigation: This error occurs when you have non-gateway PaaS deployments, such as Application Gateway or API Management services, that are connected to the Virtual Network.
Detailed mitigations
VM with Data Disk whose physical blob size bytes does not match the VM Data Disk logical size bytes.
This happens when the data disk's logical size gets out of sync with the actual VHD blob size. You can easily verify this by using the following commands:
Verifying the issue
# Store the VM details in the VM object
$vm = Get-AzureVM -ServiceName $servicename -Name $vmname

# Display the data disk properties on the VM; note LogicalDiskSizeInGB below
$vm.VM.DataVirtualHardDisks
HostCaching : None
DiskLabel :
DiskName : coreosvm-coreosvm-0-201611230636240687
Lun : 0
LogicalDiskSizeInGB : 11
MediaLink : https://contosostorage.blob.core.windows.net/vhds/coreosvm-dd1.vhd
SourceMediaLink :
IOType : Standard
ExtensionData :
# Now get the properties of the blob backing the data disk above
# NOTE the size of the blob is about 15 GB which is different from LogicalDiskSizeInGB above
$blob = Get-AzStorageblob -Blob "coreosvm-dd1.vhd" -Container vhds
$blob
ICloudBlob : Microsoft.WindowsAzure.Storage.Blob.CloudPageBlob
BlobType : PageBlob
Length : 16106127872
ContentType : application/octet-stream
LastModified : 11/23/2016 7:16:22 AM +00:00
SnapshotTime :
ContinuationToken :
Context : Microsoft.WindowsAzure.Commands.Common.Storage.AzureStorageContext
Name : coreosvm-dd1.vhd
# Convert the blob size in bytes to GB into a variable which we'll use later
$newSize = [int]($blob.Length / 1GB)
15
# Store the disk name of the data disk as we'll use this to identify the disk to be updated
$diskName = $vm.VM.DataVirtualHardDisks[0].DiskName
# Now remove the data disk from the VM so that the disk isn't leased by the VM and its size can be updated
$lunToRemove = $vm.VM.DataVirtualHardDisks[0].Lun
Remove-AzureDataDisk -LUN $lunToRemove -VM $vm | Update-AzureVM -Name $vmname -ServiceName $servicename
# Check the disk's recorded size; DiskSizeInGB still shows the stale logical size
Get-AzureDisk -DiskName $diskName
AffinityGroup :
AttachedTo :
IsCorrupted : False
Label :
Location : East US
DiskSizeInGB : 11
MediaLink : https://contosostorage.blob.core.windows.net/vhds/coreosvm-dd1.vhd
DiskName : coreosvm-coreosvm-0-201611230636240687
SourceImageName :
OS :
IOType : Standard
OperationDescription : Get-AzureDisk
OperationId : 0c56a2b7-a325-123b-7043-74c27d5a61fd
OperationStatus : Succeeded
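With the disk detached, update its recorded logical size to match the blob size computed earlier. A sketch using the classic Update-AzureDisk cmdlet:

```powershell
# Update the classic disk's logical size (in GB) to match the actual blob size
Update-AzureDisk -DiskName $diskName -ResizedSizeInGB $newSize -Label $diskName
```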
# Now verify that the "DiskSizeInGB" property of the disk matches the size of the blob
Get-AzureDisk -DiskName $diskName
AffinityGroup :
AttachedTo :
IsCorrupted : False
Label : coreosvm-coreosvm-0-201611230636240687
Location : East US
DiskSizeInGB : 15
MediaLink : https://contosostorage.blob.core.windows.net/vhds/coreosvm-dd1.vhd
DiskName : coreosvm-coreosvm-0-201611230636240687
SourceImageName :
OS :
IOType : Standard
OperationDescription : Get-AzureDisk
OperationId : 1v53bde5-cv56-5621-9078-16b9c8a0bad2
OperationStatus : Succeeded
# Now we'll add the disk back to the VM as a data disk. First we need to get an updated VM object
$vm = Get-AzureVM -ServiceName $servicename -Name $vmname
Add-AzureDataDisk -Import -DiskName $diskName -LUN 0 -VM $vm -HostCaching ReadWrite | Update-AzureVM -Name $vmname -ServiceName $servicename
Next steps
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Community tools to migrate IaaS resources from
classic to Azure Resource Manager
11/2/2020 • 2 minutes to read
IMPORTANT
Today, about 90% of IaaS VMs are using Azure Resource Manager. As of February 28, 2020, classic VMs have been
deprecated and will be fully retired on March 1, 2023. Learn more about this deprecation and how it affects you.
This article catalogs the tools that have been provided by the community to assist with migration of IaaS
resources from classic to the Azure Resource Manager deployment model.
NOTE
These tools are not officially supported by Microsoft Support. Therefore, they are open-sourced on GitHub, and we're happy
to accept PRs for fixes or additional scenarios. To report an issue, use the GitHub issues feature.
Migrating with these tools will cause downtime for your classic Virtual Machine. If you're looking for platform supported
migration, visit
Platform supported migration of IaaS resources from Classic to Azure Resource Manager stack
Technical Deep Dive on Platform supported migration from Classic to Azure Resource Manager
Migrate IaaS resources from Classic to Azure Resource Manager using Azure PowerShell
AsmMetadataParser
This is a collection of helper tools created as part of enterprise migrations from Azure Service Management to
Azure Resource Manager. This tool allows you to replicate your infrastructure into another subscription which can
be used for testing migration and iron out any issues before running the migration on your Production
subscription.
Link to the tool documentation
migAz
migAz is an additional option to migrate a complete set of classic IaaS resources to Azure Resource Manager IaaS
resources. The migration can occur within the same subscription or between different subscriptions and
subscription types (ex: CSP subscriptions).
Link to the tool documentation
Next Steps
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource
Manager
Frequently asked questions about classic to Azure
Resource Manager migration
11/2/2020 • 7 minutes to read
IMPORTANT
Today, about 90% of IaaS VMs are using Azure Resource Manager. As of February 28, 2020, classic VMs have been
deprecated and will be fully retired on March 1, 2023. Learn more about this deprecation and how it affects you.
NOTE
You will continue to be charged the backup instance cost for as long as you retain data. Backup copies are pruned according to the retention range. However, the last backup copy is always kept until you explicitly delete the backup data. It is advised to check the retention range of the virtual machine and trigger "Delete Backup Data" on the protected item in the vault once the retention range is over.
What happens if I run into a quota error while preparing the IaaS
resources for migration?
We recommend that you abort your migration and then log a support request to increase the quotas in the
region where you are migrating the VMs. After the quota request is approved, you can start executing the
migration steps again.
What if I don't like the names of the resources that the platform chose
during migration?
All the resources that you explicitly provide names for in the classic deployment model are retained during
migration. In some cases, new resources are created. For example: a network interface is created for every VM.
We currently don't support the ability to control the names of these new resources created during migration. Log
your votes for this feature on the Azure feedback forum.
Can I migrate ExpressRoute circuits used across subscriptions with
authorization links?
ExpressRoute circuits which use cross-subscription authorization links cannot be migrated automatically without
downtime. We have guidance on how these can be migrated using manual steps. See Migrate ExpressRoute
circuits and associated virtual networks from the classic to the Resource Manager deployment model for steps
and more information.
I got the message "VM is reporting the overall agent status as Not
Ready. Hence, the VM cannot be migrated. Ensure that the VM Agent
is reporting overall agent status as Ready" or "VM contains Extension
whose Status is not being reported from the VM. Hence, this VM
cannot be migrated."
This message is received when the VM does not have outbound connectivity to the internet. The VM agent uses
outbound connectivity to reach the Azure storage account for updating the agent status every five minutes.
Next steps
For Linux:
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
For Windows:
Overview of platform-supported migration of IaaS resources from classic to Azure Resource Manager
Technical deep dive on platform-supported migration from classic to Azure Resource Manager
Planning for migration of IaaS resources from classic to Azure Resource Manager
Use PowerShell to migrate IaaS resources from classic to Azure Resource Manager
Use CLI to migrate IaaS resources from classic to Azure Resource Manager
Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager
Review most common migration errors
Create a shared image gallery with Azure PowerShell
11/2/2020 • 3 minutes to read
A Shared Image Gallery simplifies custom image sharing across your organization. Custom images are like
marketplace images, but you create them yourself. Custom images can be used to bootstrap deployment tasks like
preloading applications, application configurations, and other OS configurations.
The Shared Image Gallery lets you share your custom VM images with others in your organization, within or across
regions, within an AAD tenant. Choose which images you want to share, which regions you want to make them
available in, and who you want to share them with. You can create multiple galleries so that you can logically group
shared images.
The gallery is a top-level resource that provides full role-based access control (RBAC). Images can be versioned,
and you can choose to replicate each image version to a different set of Azure regions. The gallery only works with
Managed Images.
The Shared Image Gallery feature has multiple resource types.
Image definition: Image definitions are created within a gallery and carry information about the image and requirements for using it internally. This includes whether the image is Windows or Linux, release notes, and minimum and maximum memory requirements. It is a definition of a type of image.
Next steps
Create an image from a VM, a managed image, or an image in another gallery.
Azure Image Builder (preview) can help automate image version creation; you can even use it to update and create a new image version from an existing image version.
You can also create Shared Image Gallery resources by using templates. There are several Azure Quickstart Templates
available:
Create a Shared Image Gallery
Create an Image Definition in a Shared Image Gallery
Create an Image Version in a Shared Image Gallery
Create a VM from Image Version
Preview: Create an image from a VM
11/2/2020 • 4 minutes to read
If you have an existing VM that you would like to use to make multiple, identical VMs, you can use that VM to
create an image in a Shared Image Gallery using Azure PowerShell. You can also create an image from a VM using
the Azure CLI.
You can capture an image from both specialized and generalized VMs using Azure PowerShell.
Images in an image gallery have two components, which we will create in this example:
An Image definition carries information about the image and requirements for using it. This includes whether
the image is Windows or Linux, specialized or generalized, release notes, and minimum and maximum memory
requirements. It is a definition of a type of image.
An image version is what is used to create a VM when using a Shared Image Gallery. You can have multiple
versions of an image as needed for your environment. When you create a VM, the image version is used to
create new disks for the VM. Image versions can be used multiple times.
Once you find the right gallery and image definitions, create variables for them to use later. This example gets the
gallery named myGallery in the myResourceGroup resource group.
$gallery = Get-AzGallery `
-Name myGallery `
-ResourceGroupName myResourceGroup
Get the VM
You can see a list of VMs that are available in a resource group using Get-AzVM. Once you know the VM name and
what resource group it is in, you can use Get-AzVM again to get the VM object and store it in a variable to use later.
This example gets a VM named sourceVM from the "myResourceGroup" resource group and assigns it to the
variable $sourceVm.
$sourceVm = Get-AzVM `
-Name sourceVM `
-ResourceGroupName myResourceGroup
Although you can capture an image of a running VM, it is recommended that you stop and deallocate the VM first by using Stop-AzVM:
Stop-AzVM `
-ResourceGroupName $sourceVm.ResourceGroupName `
-Name $sourceVm.Name `
-Force
For more information about the values you can specify for an image definition, see Image definitions.
Create the image definition using New-AzGalleryImageDefinition.
In this example, the image definition is named myImageDefinition, and is for a specialized VM running Windows.
To create a definition for images using Linux, use -OsType Linux .
$imageDefinition = New-AzGalleryImageDefinition `
-GalleryName $gallery.Name `
-ResourceGroupName $gallery.ResourceGroupName `
-Location $gallery.Location `
-Name 'myImageDefinition' `
-OsState specialized `
-OsType Windows `
-Publisher 'myPublisher' `
-Offer 'myOffer' `
-Sku 'mySKU'
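The image version can then be created from the source VM with New-AzGalleryImageVersion, run as a background job via -AsJob so that $job can be used to track progress; the version name 1.0.0 and the target-region list here are illustrative assumptions you should adapt:

```powershell
# Replicate to these regions; adjust names and replica counts as needed
$targetRegions = @(@{Name = 'West Central US'; ReplicaCount = 1})

# Create the image version from the source VM as a background job
$job = New-AzGalleryImageVersion `
   -GalleryImageDefinitionName $imageDefinition.Name `
   -GalleryImageVersionName '1.0.0' `
   -GalleryName $gallery.Name `
   -ResourceGroupName $gallery.ResourceGroupName `
   -Location $gallery.Location `
   -SourceImageId $sourceVm.Id.ToString() `
   -TargetRegion $targetRegions `
   -AsJob
```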
It can take a while to replicate the image to all of the target regions, so we have created a job so we can track the
progress. To see the progress of the job, type $job.State .
$job.State
NOTE
You need to wait for the image version to completely finish being built and replicated before you can use the same managed
image to create another image version.
You can also store your image in Premium storage by adding -StorageAccountType Premium_LRS , or Zone Redundant
Storage by adding -StorageAccountType Standard_ZRS when you create the image version.
Next steps
Once you have verified that your new image version is working correctly, you can create a VM. Create a VM from a
specialized image version or a generalized image version.
For information about how to supply purchase plan information, see Supply Azure Marketplace purchase plan
information when creating images.
Create an image from a Managed Disk or snapshot in
a Shared Image Gallery using PowerShell
11/2/2020 • 5 minutes to read
If you have an existing snapshot or Managed Disk that you would like to migrate into a Shared Image Gallery, you
can create a Shared Image Gallery image directly from the Managed Disk or snapshot. Once you have tested your
new image, you can delete the source Managed Disk or snapshot. You can also create an image from a Managed
Disk or snapshot in a Shared Image Gallery using the Azure CLI.
Images in an image gallery have two components, which we will create in this example:
An Image definition carries information about the image and requirements for using it. This includes whether
the image is Windows or Linux, specialized or generalized, release notes, and minimum and maximum memory
requirements. It is a definition of a type of image.
An image version is what is used to create a VM when using a Shared Image Gallery. You can have multiple
versions of an image as needed for your environment. When you create a VM, the image version is used to
create new disks for the VM. Image versions can be used multiple times.
Once you know the snapshot name and what resource group it is in, you can use Get-AzSnapshot again to get the
snapshot object and store it in a variable to use later. This example gets a snapshot named mySnapshot from the
"myResourceGroup" resource group and assigns it to the variable $source.
$source = Get-AzSnapshot `
-SnapshotName mySnapshot `
-ResourceGroupName myResourceGroup
You can also use a Managed Disk instead of a snapshot. To get a Managed Disk, use Get-AzDisk.
Then get the Managed Disk and assign it to the $source variable.
$source = Get-AzDisk `
   -Name myDisk `
   -ResourceGroupName myResourceGroup
You can use the same cmdlets to get any data disks that you want to include in your image. Assign them to
variables, then use those variables later when you create the image version.
Once you find the right gallery, create a variable to use later. This example gets the gallery named myGallery in the
myResourceGroup resource group.
$gallery = Get-AzGallery `
-Name myGallery `
-ResourceGroupName myResourceGroup
For more information about the values you can specify for an image definition, see Image definitions.
Create the image definition using New-AzGalleryImageDefinition. In this example, the image definition is named
myImageDefinition, and is for a specialized Windows OS. To create a definition for images using a Linux OS, use
-OsType Linux .
$imageDefinition = New-AzGalleryImageDefinition `
-GalleryName $gallery.Name `
-ResourceGroupName $gallery.ResourceGroupName `
-Location $gallery.Location `
-Name 'myImageDefinition' `
-OsState specialized `
-OsType Windows `
-Publisher 'myPublisher' `
-Offer 'myOffer' `
-Sku 'mySKU'
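The image version can then be created from the snapshot or Managed Disk stored in $source by passing it as the OS disk image to New-AzGalleryImageVersion; the version name 1.0.0 and the target-region list here are illustrative assumptions you should adapt:

```powershell
# Replicate to these regions; adjust names and replica counts as needed
$targetRegions = @(@{Name = 'West Central US'; ReplicaCount = 1})

# Create the image version from the snapshot or Managed Disk as a background job
$job = New-AzGalleryImageVersion `
   -GalleryImageDefinitionName $imageDefinition.Name `
   -GalleryImageVersionName '1.0.0' `
   -GalleryName $gallery.Name `
   -ResourceGroupName $gallery.ResourceGroupName `
   -Location $gallery.Location `
   -OSDiskImage @{Source = @{Id = $source.Id}} `
   -TargetRegion $targetRegions `
   -AsJob
```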
It can take a while to replicate the image to all of the target regions, so we have created a job so we can track the
progress. To see the progress of the job, type $job.State .
$job.State
NOTE
You need to wait for the image version to completely finish being built and replicated before you can use the same snapshot
to create another image version.
You can also store your image version in Zone Redundant Storage by adding -StorageAccountType Standard_ZRS when
you create the image version.
Next steps
Once you have verified that replication is complete, you can create a VM from the specialized image.
For information about how to supply purchase plan information, see Supply Azure Marketplace purchase plan
information when creating images.
Migrate from a managed image to a Shared Image
Gallery image
11/2/2020 • 3 minutes to read
If you have an existing managed image that you would like to migrate into a Shared Image Gallery, you can create
a Shared Image Gallery image directly from the managed image. Once you have tested your new image, you can
delete the source managed image. You can also migrate from a managed image to a Shared Image Gallery using
the Azure CLI.
Images in an image gallery have two components, which we will create in this example:
An Image definition carries information about the image and requirements for using it. This includes whether
the image is Windows or Linux, specialized or generalized, release notes, and minimum and maximum memory
requirements. It is a definition of a type of image.
An image version is what is used to create a VM when using a Shared Image Gallery. You can have multiple
versions of an image as needed for your environment. When you create a VM, the image version is used to
create new disks for the VM. Image versions can be used multiple times.
Once you find the right gallery, create a variable to use later. This example gets the gallery named myGallery in the
myResourceGroup resource group.
$gallery = Get-AzGallery `
-Name myGallery `
-ResourceGroupName myResourceGroup
$imageDefinition = New-AzGalleryImageDefinition `
-GalleryName $gallery.Name `
-ResourceGroupName $gallery.ResourceGroupName `
-Location $gallery.Location `
-Name 'myImageDefinition' `
-OsState generalized `
-OsType Windows `
-Publisher 'myPublisher' `
-Offer 'myOffer' `
-Sku 'mySKU'
$managedImage = Get-AzImage `
-ImageName myImage `
-ResourceGroupName myResourceGroup
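The image version can then be created from the managed image with New-AzGalleryImageVersion; the version name 1.0.0 and the target-region list here are illustrative assumptions you should adapt:

```powershell
# Replicate to these regions; adjust names and replica counts as needed
$targetRegions = @(@{Name = 'West Central US'; ReplicaCount = 1})

# Create the image version from the managed image as a background job
$job = New-AzGalleryImageVersion `
   -GalleryImageDefinitionName $imageDefinition.Name `
   -GalleryImageVersionName '1.0.0' `
   -GalleryName $gallery.Name `
   -ResourceGroupName $gallery.ResourceGroupName `
   -Location $gallery.Location `
   -SourceImageId $managedImage.Id `
   -TargetRegion $targetRegions `
   -AsJob
```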
It can take a while to replicate the image to all of the target regions, so we have created a job so we can track the
progress. To see the progress, type $job.State .
$job.State
NOTE
You need to wait for the image version to completely finish being built and replicated before you can use the same managed
image to create another image version.
You can also store your image in Premium storage by adding -StorageAccountType Premium_LRS , or Zone Redundant
Storage by adding -StorageAccountType Standard_ZRS when you create the image version.
When the image version is complete and you no longer need the source managed image, you can delete it by using Remove-AzImage:
Remove-AzImage `
-ImageName $managedImage.Name `
-ResourceGroupName $managedImage.ResourceGroupName
Next steps
Once you have verified that replication is complete, you can create a VM from the generalized image.
For information about how to supply purchase plan information, see Supply Azure Marketplace purchase plan
information when creating images.
Copy an image from another gallery using
PowerShell
11/2/2020 • 3 minutes to read
If you have multiple galleries in your organization, you can create images from images stored in other galleries.
For example, you might have a development and test gallery for creating and testing new images. When they are
ready to be used in production, you can copy them into a production gallery using this example. You can also
create an image from an image in another gallery using the Azure CLI.
To find the source image version, you can list all of the image version resources that are available to copy:
Get-AzResource `
-ResourceType Microsoft.Compute/galleries/images/versions | `
Format-Table -Property Name,ResourceGroupName
Once you have all of the information you need, you can get the ID of the source image version by using Get-AzGalleryImageVersion. In this example, we are getting the 1.0.0 image version of the myImageDefinition definition, in the myGallery source gallery, in the myResourceGroup resource group.
$sourceImgVer = Get-AzGalleryImageVersion `
-GalleryImageDefinitionName myImageDefinition `
-GalleryName myGallery `
-ResourceGroupName myResourceGroup `
-Name 1.0.0
{
"description": null,
"disallowed": null,
"endOfLifeDate": null,
"eula": null,
"hyperVgeneration": "V1",
"id": "/subscriptions/1111abcd-1a23-4b45-67g7-1234567de123/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition",
"identifier": {
"offer": "myOffer",
"publisher": "myPublisher",
"sku": "mySKU"
},
"location": "eastus",
"name": "myImageDefinition",
"osState": "Specialized",
"osType": "Windows",
"privacyStatementUri": null,
"provisioningState": "Succeeded",
"purchasePlan": null,
"recommended": null,
"releaseNoteUri": null,
"resourceGroup": "myGalleryRG",
"tags": null,
"type": "Microsoft.Compute/galleries/images"
}
Create a new image definition, in your destination gallery, using the New-AzGalleryImageDefinition cmdlet and
the information from the output above.
In this example, the image definition is named myDestinationImgDef in the gallery named myDestinationGallery.
$destinationImgDef = New-AzGalleryImageDefinition `
-GalleryName myDestinationGallery `
-ResourceGroupName myDestinationRG `
-Location WestUS `
-Name 'myDestinationImgDef' `
-OsState specialized `
-OsType Windows `
-HyperVGeneration v1 `
-Publisher 'myPublisher' `
-Offer 'myOffer' `
-Sku 'mySKU'
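The copy itself is a new image version created in the destination gallery from the source image version, again as a job. A sketch (the source parameter name, -SourceImageId, varies across Az module versions, so treat it as an assumption):

```powershell
# Create the image version in the destination gallery from the version
# in the source gallery. Run as a job to track replication progress.
$job = New-AzGalleryImageVersion `
   -GalleryImageDefinitionName $destinationImgDef.Name `
   -GalleryImageVersionName $sourceImgVer.Name `
   -GalleryName myDestinationGallery `
   -ResourceGroupName myDestinationRG `
   -Location WestUS `
   -SourceImageId $sourceImgVer.Id `
   -AsJob
```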
It can take a while to replicate the image to all of the target regions, so we have created a job so we can track the
progress. To see the progress of the job, type $job.State .
$job.State
NOTE
You need to wait for the image version to completely finish being built and replicated before you can use the same managed
image to create another image version.
You can also store your image in Premium storage by adding -StorageAccountType Premium_LRS , or Zone Redundant Storage by adding -StorageAccountType Standard_ZRS when you create the image version.
Next steps
Create a VM from a generalized or a specialized image version.
Azure Image Builder (preview) can help automate image version creation; you can even use it to update and create a new image version from an existing image version.
For information about how to supply purchase plan information, see Supply Azure Marketplace purchase plan
information when creating images.
Create a VM using a generalized image
11/2/2020
Create a VM from a generalized image stored in a Shared Image Gallery. If you want to create a VM using a specialized image, see Create a VM from a specialized image.
Once you have a generalized image version, you can create one or more new VMs using the New-AzVM cmdlet.
In this example, we are using the image definition ID to ensure your new VM will use the most recent version of an image. You can also use a specific version by supplying the image version ID to Set-AzVMSourceImage -Id . For example, to use image version 1.0.0:
Set-AzVMSourceImage -Id "/subscriptions/<subscription ID where the gallery is located>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition/versions/1.0.0"
Be aware that using a specific image version means automation could fail if that specific image version isn't available
because it was deleted or removed from the region. We recommend using the image definition ID for creating your
new VM, unless a specific image version is required.
Replace resource names as needed in these examples.
# Get the image. Replace the name of your resource group, gallery, and image definition.
# This will create the VM from the latest image version available.
$imageDefinition = Get-AzGalleryImageDefinition `
-GalleryName myGallery `
-ResourceGroupName myResourceGroup `
-Name myImageDefinition
New-AzVM `
-ResourceGroupName $resourceGroup `
-Location $location `
-Name $vmName `
-Image $imageDefinition.Id `
-Credential $cred
# Get the image. Replace the name of your resource group, gallery, and image definition.
# This will create the VM from the latest image version available.
$imageDefinition = Get-AzGalleryImageDefinition `
-GalleryName myGallery `
-ResourceGroupName myResourceGroup `
-Name myImageDefinition
# Network pieces
$subnetConfig = New-AzVirtualNetworkSubnetConfig `
-Name mySubnet `
-AddressPrefix 192.168.1.0/24
$vnet = New-AzVirtualNetwork `
-ResourceGroupName $resourceGroup `
-Location $location `
-Name MYvNET `
-AddressPrefix 192.168.0.0/16 `
-Subnet $subnetConfig
$pip = New-AzPublicIpAddress `
-ResourceGroupName $resourceGroup `
-Location $location `
-Name "mypublicdns$(Get-Random)" `
-AllocationMethod Static `
-IdleTimeoutInMinutes 4
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig `
-Name myNetworkSecurityGroupRuleRDP `
-Protocol Tcp `
-Direction Inbound `
-Priority 1000 `
-SourceAddressPrefix * `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 3389 `
-Access Allow
$nsg = New-AzNetworkSecurityGroup `
-ResourceGroupName $resourceGroup `
-Location $location `
-Name myNetworkSecurityGroup `
-SecurityRules $nsgRuleRDP
$nic = New-AzNetworkInterface `
-Name myNic `
-ResourceGroupName $resourceGroup `
-Location $location `
-SubnetId $vnet.Subnets[0].Id `
-PublicIpAddressId $pip.Id `
-NetworkSecurityGroupId $nsg.Id
# Create a virtual machine configuration using $imageDefinition.Id to use the latest image version.
$vmConfig = New-AzVMConfig `
-VMName $vmName `
-VMSize Standard_D1_v2 | `
Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred | `
Set-AzVMSourceImage -Id $imageDefinition.Id | `
Add-AzVMNetworkInterface -Id $nic.Id
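To finish, create the VM from the configuration object with New-AzVM. A minimal sketch, assuming $resourceGroup and $location were set earlier as in the quick-create example above:

```powershell
# Create the VM from the configuration built above.
New-AzVM `
   -ResourceGroupName $resourceGroup `
   -Location $location `
   -VM $vmConfig
```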
Next steps
Azure Image Builder (preview) can help automate image version creation; you can even use it to update and create a new image version from an existing image version.
You can also create Shared Image Gallery resource using templates. There are several Azure Quickstart Templates
available:
Create a Shared Image Gallery
Create an Image Definition in a Shared Image Gallery
Create an Image Version in a Shared Image Gallery
Create a VM from Image Version
For more information about Shared Image Galleries, see the Overview. If you run into issues, see Troubleshooting
shared image galleries.
Create a VM using a specialized image
11/2/2020
Create a VM from a specialized image version stored in a Shared Image Gallery. If you want to create a VM using a generalized image version, see Create a VM using a generalized image.
Once you have a specialized image version, you can create one or more new VMs using the New-AzVM cmdlet.
In this example, we are using the image definition ID to ensure your new VM will use the most recent version of an image. You can also use a specific version by supplying the image version ID to Set-AzVMSourceImage -Id . For example, to use image version 1.0.0:
Set-AzVMSourceImage -Id "/subscriptions/<subscription ID where the gallery is located>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition/versions/1.0.0"
Be aware that using a specific image version means automation could fail if that specific image version isn't available
because it was deleted or removed from the region. We recommend using the image definition ID for creating your
new VM, unless a specific image version is required.
Replace resource names as needed in this example.
$resourceGroup = "mySIGSpecializedRG"
$location = "South Central US"
$vmName = "mySpecializedVM"
# Get the image. Replace the name of your resource group, gallery, and image definition.
# This will create the VM from the latest image version available.
$imageDefinition = Get-AzGalleryImageDefinition `
-GalleryName myGallery `
-ResourceGroupName myResourceGroup `
-Name myImageDefinition
$subnetConfig = New-AzVirtualNetworkSubnetConfig `
-Name mySubnet `
-AddressPrefix 192.168.1.0/24
$vnet = New-AzVirtualNetwork `
-ResourceGroupName $resourceGroup `
-Location $location `
-Name MYvNET `
-AddressPrefix 192.168.0.0/16 `
-Subnet $subnetConfig
$pip = New-AzPublicIpAddress `
-ResourceGroupName $resourceGroup `
-Location $location `
-Name "mypublicdns$(Get-Random)" `
-AllocationMethod Static `
-IdleTimeoutInMinutes 4
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig `
-Name myNetworkSecurityGroupRuleRDP `
-Protocol Tcp `
-Direction Inbound `
-Priority 1000 `
-SourceAddressPrefix * `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 3389 -Access Allow
$nsg = New-AzNetworkSecurityGroup `
-ResourceGroupName $resourceGroup `
-Location $location `
-Name myNetworkSecurityGroup `
-SecurityRules $nsgRuleRDP
$nic = New-AzNetworkInterface `
-Name $vmName `
-ResourceGroupName $resourceGroup `
-Location $location `
-SubnetId $vnet.Subnets[0].Id `
-PublicIpAddressId $pip.Id `
-NetworkSecurityGroupId $nsg.Id
# Create a virtual machine configuration using Set-AzVMSourceImage -Id $imageDefinition.Id
# to use the latest available image version.
$vmConfig = New-AzVMConfig `
-VMName $vmName `
-VMSize Standard_D1_v2 | `
Set-AzVMSourceImage -Id $imageDefinition.Id | `
Add-AzVMNetworkInterface -Id $nic.Id
If the image version contains data disks, add them to the configuration with Add-AzVMDataDisk, using the LUN, caching, and size information from the image version:
$lun = $imageVersion.StorageProfile.DataDiskImages.Lun
Add-AzVMDataDisk `
-CreateOption FromImage `
-SourceImageUri $imageVersion.Id `
-Lun $lun `
-Caching $imageVersion.StorageProfile.DataDiskImages.HostCaching `
-DiskSizeInGB $imageVersion.StorageProfile.DataDiskImages.SizeInGB `
-VM $vmConfig
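Then create the specialized VM from the configuration with New-AzVM, using the $resourceGroup and $location variables set at the top of this example:

```powershell
# Create the specialized VM from the configuration object.
New-AzVM `
   -ResourceGroupName $resourceGroup `
   -Location $location `
   -VM $vmConfig
```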
Next steps
Azure Image Builder (preview) can help automate image version creation; you can even use it to update and create a new image version from an existing image version.
You can also create Shared Image Gallery resource using templates. There are several Azure Quickstart Templates
available:
Create a Shared Image Gallery
Create an Image Definition in a Shared Image Gallery
Create an Image Version in a Shared Image Gallery
Create a VM from Image Version
For more information about Shared Image Galleries, see the Overview. If you run into issues, see Troubleshooting
shared image galleries.
List, update, and delete image resources using
PowerShell
5/5/2020
You can manage your shared image gallery resources using Azure PowerShell.
List information
List all galleries by name.
Delete an image version. This example deletes the image version named 1.0.0.
Remove-AzGalleryImageVersion `
-GalleryImageDefinitionName myImageDefinition `
-GalleryName myGallery `
-Name 1.0.0 `
-ResourceGroupName myGalleryRG
Update resources
There are some limitations on what can be updated. The following items can be updated:
Shared image gallery:
Description
Image definition:
Recommended vCPUs
Recommended memory
Description
End of life date
Image version:
Regional replica count
Target regions
Exclusion from latest
End of life date
If you plan on adding replica regions, do not delete the source managed image. The source managed image is
needed for replicating the image version to additional regions.
To update the description of a gallery, use Update-AzGallery.
Update-AzGallery `
-Name $gallery.Name `
-ResourceGroupName $resourceGroup.Name `
-Description "My updated description."
This example shows how to use Update-AzGalleryImageDefinition to update the end-of-life date for our image
definition.
Update-AzGalleryImageDefinition `
-GalleryName $gallery.Name `
-Name $galleryImage.Name `
-ResourceGroupName $resourceGroup.Name `
-EndOfLifeDate 01/01/2030
This example shows how to use Update-AzGalleryImageVersion to exclude this image version from being used as
the latest image.
Update-AzGalleryImageVersion `
-GalleryImageDefinitionName $galleryImage.Name `
-GalleryName $gallery.Name `
-Name $galleryVersion.Name `
-ResourceGroupName $resourceGroup.Name `
-PublishingProfileExcludeFromLatest
This example shows how to use Update-AzGalleryImageVersion to again include this image version when determining the latest image version.
Update-AzGalleryImageVersion `
-GalleryImageDefinitionName $galleryImage.Name `
-GalleryName $gallery.Name `
-Name $galleryVersion.Name `
-ResourceGroupName $resourceGroup.Name `
-PublishingProfileExcludeFromLatest:$false
Clean up resources
When deleting resources, you need to start with the last item in the nested resources: the image version. Once versions are deleted, you can delete the image definition. You can't delete the gallery until all resources beneath it have been deleted.
$resourceGroup = "myResourceGroup"
$gallery = "myGallery"
$imageDefinition = "myImageDefinition"
$imageVersion = "myImageVersion"
Remove-AzGalleryImageVersion `
-GalleryImageDefinitionName $imageDefinition `
-GalleryName $gallery `
-Name $imageVersion `
-ResourceGroupName $resourceGroup
Remove-AzGalleryImageDefinition `
-ResourceGroupName $resourceGroup `
-GalleryName $gallery `
-GalleryImageDefinitionName $imageDefinition
Remove-AzGallery `
-Name $gallery `
-ResourceGroupName $resourceGroup
Next steps
Azure Image Builder (preview) can help automate image version creation; you can even use it to update and create a new image version from an existing image version.
Create a Shared Image Gallery with the Azure CLI
11/2/2020
A Shared Image Gallery simplifies custom image sharing across your organization. Custom images are like
marketplace images, but you create them yourself. Custom images can be used to bootstrap configurations such as
preloading applications, application configurations, and other OS configurations.
The Shared Image Gallery lets you share your custom VM images with others. Choose which images you want to
share, which regions you want to make them available in, and who you want to share them with.
az sig show \
--resource-group myGalleryRG \
--gallery-name myGallery \
--query id
Use the gallery ID as the scope, along with an email address and az role assignment create, to give a user access to the shared image gallery. Replace <email-address> and <gallery ID> with your own information.
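A sketch of the role assignment; the Reader role is an assumption, so substitute whichever role fits your sharing model:

```azurecli
az role assignment create \
   --role "Reader" \
   --assignee <email-address> \
   --scope <gallery ID>
```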
For more information about how to share resources using RBAC, see Manage access using RBAC and Azure CLI.
Next steps
Create an image version from a VM or a managed image using the Azure CLI.
For more information about Shared Image Galleries, see the Overview. If you run into issues, see Troubleshooting
shared image galleries.
Create an image version from a VM in Azure using
the Azure CLI
11/2/2020
If you have an existing VM that you would like to use to make multiple, identical VMs, you can use that VM to
create an image in a Shared Image Gallery using the Azure CLI. You can also create an image from a VM using
Azure PowerShell.
An image version is what you use to create a VM when using a Shared Image Gallery. You can have multiple
versions of an image as needed for your environment. When you use an image version to create a VM, the image
version is used to create disks for the new VM. Image versions can be used multiple times.
Once you know the VM name and what resource group it is in, get the ID of the VM using az vm get-instance-view.
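For example, to capture the VM's ID in a variable (the resource names and variable name are placeholders):

```azurecli
# Get the ID of the source VM for the image version create step.
vmId=$(az vm get-instance-view \
   --resource-group myResourceGroup \
   --name myVM \
   --query id \
   --output tsv)
```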
NOTE
You need to wait for the image version to completely finish being built and replicated before you can use the same managed
image to create another image version.
You can also store your image in Premium storage by adding --storage-account-type premium_lrs , or Zone Redundant
Storage by adding --storage-account-type standard_zrs when you create the image version.
Next steps
Create a VM from the generalized image using the Azure CLI.
For information about how to supply purchase plan information, see Supply Azure Marketplace purchase plan
information when creating images.
Create an image from a Managed Disk or snapshot in
a Shared Image Gallery using the Azure CLI
11/2/2020
If you have an existing snapshot or Managed Disk that you would like to migrate into a Shared Image Gallery, you
can create a Shared Image Gallery image directly from the Managed Disk or snapshot. Once you have tested your
new image, you can delete the source Managed Disk or snapshot. You can also create an image from a Managed
Disk or snapshot in a Shared Image Gallery using the Azure PowerShell.
Images in an image gallery have two components, which we will create in this example:
An Image definition carries information about the image and requirements for using it. This includes whether
the image is Windows or Linux, specialized or generalized, release notes, and minimum and maximum memory
requirements. It is a definition of a type of image.
An image version is what is used to create a VM when using a Shared Image Gallery. You can have multiple
versions of an image as needed for your environment. When you create a VM, the image version is used to
create new disks for the VM. Image versions can be used multiple times.
You can also use a Managed Disk instead of a snapshot. To get a Managed Disk, use az disk list.
Once you have the ID of the snapshot or Managed Disk, assign it to a variable called $source to be used later. You can use the same process to get any data disks that you want to include in your image: assign them to variables, then use those variables later when you create the image version.
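For example, to capture a snapshot ID (the snapshot name here is a placeholder):

```azurecli
# Get the ID of the OS disk snapshot to use as the image source.
source=$(az snapshot show \
   --resource-group myResourceGroup \
   --name myOsSnapshot \
   --query id \
   --output tsv)
```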
For more information about the values you can specify for an image definition, see Image definitions.
Create an image definition in the gallery using az sig image-definition create.
In this example, the image definition is named myImageDefinition, and is for a specialized Linux OS image. To
create a definition for images using a Windows OS, use --os-type Windows .
In this example, the gallery is named myGallery, it is in the myGalleryRG resource group, and the image definition name is myImageDefinition.
resourceGroup=myGalleryRG
gallery=myGallery
imageDef=myImageDefinition
az sig image-definition create \
--resource-group $resourceGroup \
--gallery-name $gallery \
--gallery-image-definition $imageDef \
--publisher myPublisher \
--offer myOffer \
--sku mySKU \
--os-type Linux \
--os-state specialized
If you want to include data disks in the image, you need to include both the --data-snapshot-luns parameter, set to the LUN number, and the --data-snapshots parameter, set to the ID of the data disk or snapshot.
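A sketch of the image version create call with one data disk at LUN 0. $source and $dataDiskSnapshot are the variables described above; the --os-snapshot parameter name is an assumption based on recent CLI versions:

```azurecli
az sig image-version create \
   --resource-group $resourceGroup \
   --gallery-name $gallery \
   --gallery-image-definition $imageDef \
   --gallery-image-version 1.0.0 \
   --os-snapshot $source \
   --data-snapshot-luns 0 \
   --data-snapshots $dataDiskSnapshot \
   --target-regions "southcentralus=1"
```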
NOTE
You need to wait for the image version to completely finish being built and replicated before you can use the same managed
image to create another image version.
You can also store all of your image version replicas in Zone Redundant Storage by adding
--storage-account-type standard_zrs when you create the image version.
Next steps
Create a VM from a specialized image version.
For information about how to supply purchase plan information, see Supply Azure Marketplace purchase plan
information when creating images.
Migrate from a managed image to an image version
using the Azure CLI
11/2/2020
If you have an existing managed image that you would like to migrate into a Shared Image Gallery, you can create
a Shared Image Gallery image directly from the managed image. Once you have tested your new image, you can
delete the source managed image. You can also migrate from a managed image to a Shared Image Gallery using
PowerShell.
Images in an image gallery have two components, which we will create in this example:
An Image definition carries information about the image and requirements for using it. This includes whether
the image is Windows or Linux, specialized or generalized, release notes, and minimum and maximum memory
requirements. It is a definition of a type of image.
An image version is what is used to create a VM when using a Shared Image Gallery. You can have multiple
versions of an image as needed for your environment. When you create a VM, the image version is used to
create new disks for the VM. Image versions can be used multiple times.
Image definition names can be made up of uppercase or lowercase letters, digits, hyphens, and periods.
For more information about the values you can specify for an image definition, see Image definitions.
Create an image definition in the gallery using az sig image-definition create.
In this example, the image definition is named myImageDefinition, and is for a generalized Linux OS image. To
create a definition for images using a Windows OS, use --os-type Windows .
resourceGroup=myGalleryRG
gallery=myGallery
imageDef=myImageDefinition
az sig image-definition create \
--resource-group $resourceGroup \
--gallery-name $gallery \
--gallery-image-definition $imageDef \
--publisher myPublisher \
--offer myOffer \
--sku mySKU \
--os-type Linux \
--os-state generalized
Allowed characters for image version are numbers and periods. Numbers must be within the range of a 32-bit
integer. Format: MajorVersion.MinorVersion.Patch.
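As a quick aside, that version format can be checked locally with a small bash helper (illustrative only, not part of the Azure CLI):

```shell
# Validate a gallery image version string: MajorVersion.MinorVersion.Patch,
# where each component is a number that fits in a 32-bit signed integer.
is_valid_image_version() {
  local v="$1" part
  # Exactly three dot-separated numeric components.
  [[ "$v" =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]] || return 1
  IFS='.' read -r -a parts <<< "$v"
  for part in "${parts[@]}"; do
    # 2147483647 is the 32-bit signed integer maximum; force base 10
    # so leading zeros are not read as octal.
    (( 10#$part <= 2147483647 )) || return 1
  done
}

is_valid_image_version "1.0.0" && echo "1.0.0 is valid"
is_valid_image_version "1.0" || echo "1.0 is rejected"
```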
In this example, the version of our image is 1.0.0 and we are going to create 1 replica in the South Central US
region and 1 replica in the East US 2 region using zone-redundant storage. When choosing target regions for
replication, remember that you also have to include the source region as a target for replication.
Pass the ID of the managed image in the --managed-image parameter.
imageID="/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/images/myImage"
az sig image-version create \
--resource-group $resourceGroup \
--gallery-name $gallery \
--gallery-image-definition $imageDef \
--gallery-image-version 1.0.0 \
--target-regions "southcentralus=1" "eastus2=1=standard_zrs" \
--replica-count 2 \
--managed-image $imageID
NOTE
You need to wait for the image version to completely finish being built and replicated before you can use the same managed
image to create another image version.
You can also store all of your image version replicas in Zone Redundant Storage by adding
--storage-account-type standard_zrs when you create the image version.
Next steps
Create a VM from a generalized image version.
For information about how to supply purchase plan information, see Supply Azure Marketplace purchase plan
information when creating images.
Copy an image from another gallery using the Azure
CLI
11/2/2020
If you have multiple galleries in your organization, you can also create image versions from existing image versions
stored in other galleries. For example, you might have a development and test gallery for creating and testing new
images. When they are ready to be used in production, you can copy them into a production gallery using this
example. You can also create an image from an image in another gallery using Azure PowerShell.
List the image definitions in a gallery, using az sig image-definition list. In this example, we are searching for image
definitions in the gallery named myGallery in the myGalleryRG resource group.
List the versions of an image in a gallery, using az sig image-version list to find the image version that you want to
copy into your new gallery. In this example, we are looking for all of the image versions that are part of the
myImageDefinition image definition.
Once you have all of the information you need, you can get the ID of the source image version using az sig image-version show.
az sig image-version show \
--resource-group myGalleryRG \
--gallery-name myGallery \
--gallery-image-definition myImageDefinition \
--gallery-image-version 1.0.0 \
--query "id" -o tsv
{
"description": null,
"disallowed": null,
"endOfLifeDate": null,
"eula": null,
"hyperVgeneration": "V1",
"id": "/subscriptions/1111abcd-1a23-4b45-67g7-1234567de123/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition",
"identifier": {
"offer": "myOffer",
"publisher": "myPublisher",
"sku": "mySKU"
},
"location": "eastus",
"name": "myImageDefinition",
"osState": "Specialized",
"osType": "Linux",
"privacyStatementUri": null,
"provisioningState": "Succeeded",
"purchasePlan": null,
"recommended": null,
"releaseNoteUri": null,
"resourceGroup": "myGalleryRG",
"tags": null,
"type": "Microsoft.Compute/galleries/images"
}
Create a new image definition, in your new gallery, using the information from the output above.
az sig image-definition create \
--resource-group myNewGalleryRG \
--gallery-name myNewGallery \
--gallery-image-definition myImageDefinition \
--publisher myPublisher \
--offer myOffer \
--sku mySKU \
--os-type Linux \
--hyper-v-generation V1 \
--os-state specialized
NOTE
You need to wait for the image version to completely finish being built and replicated before you can use the same managed
image to create another image version.
You can also store your image in Premium storage by adding --storage-account-type premium_lrs , or Zone Redundant
Storage by adding --storage-account-type standard_zrs when you create the image version.
Next steps
Create a VM from a generalized or a specialized image version.
Also, try Azure Image Builder (preview), which can help automate image version creation. You can even use it to update and create a new image version from an existing image version.
For information about how to supply purchase plan information, see Supply Azure Marketplace purchase plan
information when creating images.
Create a VM using a specialized image version with
the Azure CLI
11/2/2020
Create a VM from a specialized image version stored in a Shared Image Gallery. If you want to create a VM using a generalized image version, see Create a VM from a generalized image version.
Replace resource names as needed in this example.
List the image definitions in a gallery using az sig image-definition list to see the name and ID of the definitions.
resourceGroup=myGalleryRG
gallery=myGallery
az sig image-definition list \
--resource-group $resourceGroup \
--gallery-name $gallery \
--query "[].[name, id]" \
--output tsv
Create the VM using az vm create with the --specialized parameter to indicate that the image is a specialized image.
Use the image definition ID for --image to create the VM from the latest version of the image that is available. You
can also create the VM from a specific version by supplying the image version ID for --image .
In this example, we are creating a VM from the latest version of the myImageDefinition image.
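A sketch of the create call, assuming $imgDef holds the image definition ID from the list step above and that the VM resource group already exists:

```azurecli
az vm create \
   --resource-group myVMResourceGroup \
   --name myVM \
   --image $imgDef \
   --specialized
```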
Next steps
Azure Image Builder (preview) can help automate image version creation; you can even use it to update and create a new image version from an existing image version.
You can also create Shared Image Gallery resource using templates. There are several Azure Quickstart Templates
available:
Create a Shared Image Gallery
Create an Image Definition in a Shared Image Gallery
Create an Image Version in a Shared Image Gallery
Create a VM from Image Version
Create a VM from a generalized image version using
the CLI
11/2/2020
Create a VM from a generalized image version stored in a Shared Image Gallery. If you want to create a VM using a
specialized image, see Create a VM from a specialized image.
resourceGroup=myGalleryRG
gallery=myGallery
az sig image-definition list \
--resource-group $resourceGroup \
--gallery-name $gallery \
--query "[].[name, id]" \
--output tsv
Create the VM
Create a VM using az vm create. To use the latest version of the image, set --image to the ID of the image definition.
Replace resource names as needed in this example.
az vm create \
--resource-group $vmResourceGroup \
--name $vmName \
--image $imgDef \
--admin-username $adminUsername \
--generate-ssh-keys
You can also use a specific version by using the image version ID for the --image parameter. For example, to use image version 1.0.0:
--image "/subscriptions/<subscription ID where the gallery is located>/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition/versions/1.0.0"
Next steps
Azure Image Builder (preview) can help automate image version creation; you can even use it to update and create a new image version from an existing image version.
List, update, and delete image resources
11/2/2020
You can manage your shared image gallery resources using the Azure CLI.
List information
Get the location, status and other information about the available image galleries using az sig list.
List the image definitions in a gallery, including information about OS type and status, using az sig image-
definition list.
List the shared image versions in a gallery, using az sig image-version list.
Update resources
There are some limitations on what can be updated. The following items can be updated:
Shared image gallery:
Description
Image definition:
Recommended vCPUs
Recommended memory
Description
End of life date
Image version:
Regional replica count
Target regions
Exclusion from latest
End of life date
If you plan on adding replica regions, do not delete the source managed image. The source managed image is
needed for replicating the image version to additional regions.
Update the description of a gallery using az sig update.
az sig update \
--gallery-name myGallery \
--resource-group myGalleryRG \
--set description="My updated description."
Update an image version to add a region to replicate to using az sig image-version update. This change will take a
while as the image gets replicated to the new region.
You can also use az sig image-version update to exclude an image version from being used as the latest image, or
to include it in consideration for latest again.
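Hedged sketches of the updates described above. The shim prints each az invocation instead of executing it; the resource names are placeholders, and the publishingProfile property paths are assumptions based on the az CLI generic `--add`/`--set` update syntax:

```shell
# Dry-run shim: print az invocations rather than executing them,
# so the command shapes can be inspected without an Azure login.
az() { printf 'az %s\n' "$*"; }

# Add a replica region to an image version (property path assumed).
az sig image-version update \
  --resource-group myGalleryRG \
  --gallery-name myGallery \
  --gallery-image-definition myImageDefinition \
  --gallery-image-version 1.0.0 \
  --add publishingProfile.targetRegions name=eastus

# Exclude this version from being picked as "latest".
az sig image-version update \
  --resource-group myGalleryRG \
  --gallery-name myGallery \
  --gallery-image-definition myImageDefinition \
  --gallery-image-version 1.0.0 \
  --set publishingProfile.excludeFromLatest=true

# Include it in "latest" consideration again.
az sig image-version update \
  --resource-group myGalleryRG \
  --gallery-name myGallery \
  --gallery-image-definition myImageDefinition \
  --gallery-image-version 1.0.0 \
  --set publishingProfile.excludeFromLatest=false
```

In a real run you would drop the shim function and let the commands call the Azure CLI directly.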
Delete resources
You have to delete resources in reverse order, by deleting the image version first. After you delete all of the image
versions, you can delete the image definition. After you delete all image definitions, you can delete the gallery.
Delete an image version using az sig image-version delete.
az sig image-version delete \
--resource-group myGalleryRG \
--gallery-name myGallery \
--gallery-image-definition myImageDefinition \
--gallery-image-version 1.0.0
az sig delete \
--resource-group myGalleryRG \
--gallery-name myGallery
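Putting the reverse order together, a teardown sketch: the shim prints each az invocation instead of executing it, and the version list is a placeholder you would normally obtain from az sig image-version list:

```shell
# Dry-run shim: print az invocations rather than executing them.
az() { printf 'az %s\n' "$*"; }

resourceGroup=myGalleryRG
gallery=myGallery
definition=myImageDefinition
versions="1.0.0 1.0.1"   # placeholder; normally from: az sig image-version list

# 1. Delete every image version first.
for v in $versions; do
  az sig image-version delete \
    --resource-group $resourceGroup \
    --gallery-name $gallery \
    --gallery-image-definition $definition \
    --gallery-image-version $v
done

# 2. Then delete the image definition.
az sig image-definition delete \
  --resource-group $resourceGroup \
  --gallery-name $gallery \
  --gallery-image-definition $definition

# 3. Finally delete the gallery itself.
az sig delete --resource-group $resourceGroup --gallery-name $gallery
```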
Next steps
Azure Image Builder (preview) can help automate image version creation; you can even use it to update and create
a new image version from an existing image version.
Create an Azure Shared Image Gallery using the
portal
11/2/2020 • 9 minutes to read
A Shared Image Gallery simplifies custom image sharing across your organization. Custom images are like
marketplace images, but you create them yourself. Custom images can be used to bootstrap deployment tasks like
preloading applications, application configurations, and other OS configurations.
The Shared Image Gallery lets you share your custom VM images with others in your organization, within or across
regions, within an AAD tenant. Choose which images you want to share, which regions you want to make them
available in, and who you want to share them with. You can create multiple galleries so that you can logically group
shared images.
The gallery is a top-level resource that provides full role-based access control (RBAC). Images can be versioned, and
you can choose to replicate each image version to a different set of Azure regions. The gallery only works with
Managed Images.
The Shared Image Gallery feature has multiple resource types. We will be using or building these in this article:
Image definition – Image definitions are created within a gallery and carry information about the image and
requirements for using it internally. This includes whether the image is Windows or Linux, release notes, and
minimum and maximum memory requirements. It is a definition of a type of image.
When working through this article, replace the resource group and VM names where needed.
Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources. To do so,
select the resource group for the virtual machine, select Delete , then confirm the name of the resource group to
delete.
If you want to delete individual resources, you need to delete them in reverse order. For example, to delete an
image definition, you need to delete all of the image versions created from that image.
Next steps
You can also create Shared Image Gallery resources using templates. There are several Azure Quickstart Templates
available:
Create a Shared Image Gallery
Create an Image Definition in a Shared Image Gallery
Create an Image Version in a Shared Image Gallery
Create a VM from Image Version
For more information about Shared Image Galleries, see the Overview. If you run into issues, see Troubleshooting
shared image galleries.
Troubleshooting shared image galleries in Azure
11/6/2020 • 20 minutes to read
If you run into issues while performing any operations on shared image galleries, image definitions, and image
versions, run the failing command again in debug mode. Debug mode is activated by passing the --debug switch
with CLI and the -Debug switch with PowerShell. Once you’ve located the error, follow this document to
troubleshoot the errors.
Replication is slow
Use the --expand ReplicationStatus flag to check if the replication to all the specified target regions has been
completed. If not, wait for up to 6 hours for the job to complete. If it fails, trigger the command again to create and
replicate the image version. If there are many target regions the image version is being replicated to, consider
doing the replication in phases.
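A minimal sketch of what "replication in phases" could look like: split the target-region list into small batches and trigger replication one batch at a time. The code below only shows the batching; region names and batch size are examples:

```shell
# Split a long target-region list into phases of at most 3 regions each.
regions=(westus westus2 eastus eastus2 northeurope westeurope uksouth)
phaseSize=3

phase=1
for ((i = 0; i < ${#regions[@]}; i += phaseSize)); do
  # Each batch would be passed to one az sig image-version update call.
  echo "phase $phase: ${regions[*]:i:phaseSize}"
  phase=$((phase + 1))
done
```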
Next steps
Learn more about shared image galleries.
Preview: Use customer-managed keys for encrypting
images
11/6/2020 • 6 minutes to read
Gallery images are stored as managed disks, so they are automatically encrypted using server-side encryption.
Server-side encryption uses 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2
compliant. For more information about the cryptographic modules underlying Azure managed disks, see
Cryptography API: Next Generation
You can rely on platform-managed keys for the encryption of your images, use your own keys, or you can use both
together, for double encryption. If you choose to manage encryption with your own keys, you can specify a
customer-managed key to use for encrypting and decrypting all disks in your images.
Server-side encryption using customer-managed keys uses Azure Key Vault. You can either import your RSA keys
to your Key Vault or generate new RSA keys in Azure Key Vault.
Prerequisites
This article requires that you already have a disk encryption set in each region that you want to replicate your
image to.
To use only a customer-managed key, see Enable customer-managed keys with server-side encryption using the
Azure portal or PowerShell.
To use both platform-managed and customer-managed keys (for double encryption), see Enable double
encryption at rest using the Azure portal or PowerShell.
IMPORTANT
You must use this link https://aka.ms/diskencryptionupdates to access the Azure portal. Double encryption at rest is
not currently visible in the public Azure portal without using the link.
Limitations
There are several limitations when using customer-managed keys for encrypting shared image gallery images:
Encryption key sets must be in the same subscription as your image.
Encryption key sets are regional resources so each region requires a different encryption key set.
You cannot copy or share images that use customer-managed keys.
Once you have used your own keys to encrypt a disk or image, you cannot go back to using platform-
managed keys for encrypting those disks or images.
IMPORTANT
Encryption using customer-managed keys is currently in public preview. This preview version is provided without a service
level agreement, and it's not recommended for production workloads. Certain features might not be supported or might
have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
PowerShell
For the public preview, you first need to register the feature.
It takes a few minutes for the registration to complete. Use Get-AzProviderFeature to check on the status of the
feature registration.
When RegistrationState returns Registered, you can move on to the next step.
Check your provider registration. Make sure it returns Registered.
To specify a disk encryption set for an image version, use New-AzGalleryImageVersion with the
-TargetRegion parameter.
$sourceId = <ID of the image version source>
$osDiskImageEncryption = @{DiskEncryptionSetId='subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Compute/diskEncryptionSets/myDESet'}
$dataDiskImageEncryption1 = @{DiskEncryptionSetId='subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Compute/diskEncryptionSets/myDESet1';Lun=1}
$dataDiskImageEncryption2 = @{DiskEncryptionSetId='subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Compute/diskEncryptionSets/myDESet2';Lun=2}
$dataDiskImageEncryptions = @($dataDiskImageEncryption1,$dataDiskImageEncryption2)
$encryption1 = @{OSDiskImage=$osDiskImageEncryption;DataDiskImages=$dataDiskImageEncryptions}
$eastUS2osDiskImageEncryption = @{DiskEncryptionSetId='subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Compute/diskEncryptionSets/myEastUS2DESet'}
$eastUS2dataDiskImageEncryption1 = @{DiskEncryptionSetId='subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Compute/diskEncryptionSets/myEastUS2DESet1';Lun=1}
$eastUS2dataDiskImageEncryption2 = @{DiskEncryptionSetId='subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Compute/diskEncryptionSets/myEastUS2DESet2';Lun=2}
$eastUS2DataDiskImageEncryptions = @($eastUS2dataDiskImageEncryption1,$eastUS2dataDiskImageEncryption2)
$encryption2 = @{OSDiskImage=$eastUS2osDiskImageEncryption;DataDiskImages=$eastUS2DataDiskImageEncryptions}
Create a VM
You can create a VM from a shared image gallery and use customer-managed keys to encrypt the disks. The syntax
is the same as creating a generalized or specialized VM from an image; you need to use the extended parameter
set and add
Set-AzVMOSDisk -Name $($vmName +"_OSDisk") -DiskEncryptionSetId $diskEncryptionSet.Id -CreateOption FromImage to
the VM configuration.
For data disks, you need to add the -DiskEncryptionSetId $setID parameter when you use Add-AzVMDataDisk.
CLI
For the public preview, you first need to register for the feature. Registration takes approximately 30 minutes.
az feature register --namespace Microsoft.Compute --name SIGEncryption
When this returns "state": "Registered", you can move on to the next step.
Check your registration.
To specify a disk encryption set for an image version, use az sig image-version create with the
--target-region-encryption parameter. The format for --target-region-encryption is a comma-separated list of
keys for encrypting the OS and data disks. It should look like this:
<encryption set for the OS disk>,<LUN number of the data disk>,<encryption set for the data disk>,<LUN number for the second data disk>,<encryption set for the second data disk>
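A minimal sketch of composing that value for one region with an OS disk and two data disks at LUNs 0 and 1 (the encryption set names are placeholders):

```shell
# Build the comma-separated --target-region-encryption value:
# <OS disk set>,<LUN>,<data disk set>,<LUN>,<data disk set>
osDiskSet="myDESet"
lun0Set="myDESet1"
lun1Set="myDESet2"

encryption="${osDiskSet},0,${lun0Set},1,${lun1Set}"
echo "$encryption"
```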
If the source for the OS disk is a managed disk or a VM, use --managed-image to specify the source for the image
version. In this example, the source is a managed image that has an OS disk as well as a data disk at LUN 0. The OS
disk will be encrypted with DiskEncryptionSet1 and the data disk will be encrypted with DiskEncryptionSet2.
If the source for the OS disk is a snapshot, use --os-snapshot to specify the OS disk. If there are data disk
snapshots that should also be part of the image version, add those using --data-snapshot-luns to specify the LUN,
and --data-snapshots to specify the snapshots.
In this example, the sources are disk snapshots. There is an OS disk, and also a data disk at LUN 0. The OS disk will
be encrypted with DiskEncryptionSet1 and the data disk will be encrypted with DiskEncryptionSet2.
az sig image-version create \
-g MyResourceGroup \
--gallery-image-version 1.0.0 \
--location westus \
--target-regions westus=2=standard_lrs eastus \
--target-region-encryption WestUSDiskEncryptionSet1,0,WestUSDiskEncryptionSet2 EastUS2DiskEncryptionSet1,0,EastUS2DiskEncryptionSet2 \
--os-snapshot "/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/snapshots/myOSSnapshot" \
--data-snapshot-luns 0 \
--data-snapshots "/subscriptions/<subscription ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/snapshots/myDDSnapshot" \
--gallery-name MyGallery \
--gallery-image-definition MyImage
Create the VM
You can create a VM from a shared image gallery and use customer-managed keys to encrypt the disks. The syntax
is the same as creating a generalized or specialized VM from an image; you just need to add the
--os-disk-encryption-set parameter with the ID of the encryption set. For data disks, add
--data-disk-encryption-sets with a space-delimited list of the disk encryption sets for the data disks.
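A hedged sketch of the resulting command shape. The shim prints the az invocation instead of executing it, and all names and resource IDs are placeholders:

```shell
# Dry-run shim: print az invocations rather than executing them.
az() { printf 'az %s\n' "$*"; }

# Placeholder disk encryption set and image version IDs.
desId="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Compute/diskEncryptionSets/myDESet"
imageId="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myGalleryRG/providers/Microsoft.Compute/galleries/myGallery/images/myImageDefinition/versions/1.0.0"

az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image "$imageId" \
  --os-disk-encryption-set "$desId" \
  --data-disk-encryption-sets "$desId" "$desId" \
  --generate-ssh-keys
```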
Portal
When you create your image version in the portal, you can use the Encryption tab to apply your disk
encryption sets.
IMPORTANT
To use double encryption, you must use this link https://aka.ms/diskencryptionupdates to access the Azure portal. Double
encryption at rest is not currently visible in the public Azure portal without using the link.
1. In the Create an image version page, select the Encryption tab.
2. In Encryption type, select Encryption at-rest with a customer-managed key or Double encryption
with platform-managed and customer-managed keys.
3. For each disk in the image, select the Disk encryption set to use from the drop-down.
Create the VM
You can create a VM from an image version and use customer-managed keys to encrypt the disks. When you
create the VM in the portal, on the Disks tab, select Encryption at-rest with customer-managed keys or
Double encryption with platform-managed and customer-managed keys for the Encryption type. You
can then select the encryption set from the drop-down.
Next steps
Learn more about server-side disk encryption.
For information about how to supply purchase plan information, see Supply Azure Marketplace purchase plan
information when creating images.
Preview: Azure Image Builder overview
11/2/2020 • 5 minutes to read
Standardized virtual machine (VM) images allow organizations to migrate to the cloud and ensure consistency in
the deployments. Images typically include predefined security and configuration settings and necessary
software. Setting up your own imaging pipeline requires time, infrastructure, and setup, but with Azure VM
Image Builder, you just provide a simple configuration describing your image, submit it to the service, and the
image is built and distributed.
The Azure VM Image Builder (Azure Image Builder) lets you start with a Windows or Linux-based Azure
Marketplace image, existing custom images, or a Red Hat Enterprise Linux (RHEL) ISO and begin to add your own
customizations. Because the Image Builder is built on HashiCorp Packer, you can also import your existing Packer
shell provisioner scripts. You can also specify where you would like your images hosted: in the Azure Shared
Image Gallery, as a managed image, or as a VHD.
IMPORTANT
Azure Image Builder is currently in public preview. This preview version is provided without a service level agreement, and
it's not recommended for production workloads. Certain features might not be supported or might have constrained
capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
Preview features
For the preview, these features are supported:
Creation of golden baseline images that include your minimum security and corporate configurations, and
allow departments to customize them further for their needs.
Patching of existing images: Image Builder allows you to continually patch existing custom images.
Connect image builder to your existing virtual networks, so you can connect to existing configuration servers
(DSC, Chef, Puppet etc.), file shares, or any other routable servers/services.
Integration with the Azure Shared Image Gallery, allows you to distribute, version, and scale images globally,
and gives you an image management system.
Integration with existing image build pipelines: just call Image Builder from your pipeline, or use the simple
Preview Image Builder Azure DevOps Task.
Migrate an existing image customization pipeline to Azure. Use your existing scripts, commands, and
processes to customize images.
Creation of images in VHD format to support Azure Stack.
Regions
The Azure Image Builder Service will be available for preview in these regions. Images can be distributed outside
of these regions.
East US
East US 2
West Central US
West US
West US 2
North Europe
West Europe
OS support
AIB supports these Azure Marketplace base OS images:
Ubuntu 18.04
Ubuntu 16.04
RHEL 7.6, 7.7
CentOS 7.6, 7.7
SLES 12 SP4
SLES 15, SLES 15 SP1
Windows 10 RS5 Enterprise/Enterprise multi-session/Professional
Windows 2016
Windows 2019
RHEL ISOs are no longer supported.
How it works
The Azure Image Builder is a fully managed Azure service that is accessible through an Azure resource provider.
The Azure Image Builder process has three main parts: source, customize, and distribute. These are represented
in a template. The diagram below shows the components, with some of their properties.
Image Builder process
1. Create the Image Template as a .json file. This .json file contains information about the image source,
customizations, and distribution. There are multiple examples in the Azure Image Builder GitHub repository.
2. Submit it to the service. This creates an Image Template artifact in the resource group you specify. In the
background, Image Builder downloads the source image or ISO, and scripts as needed. These are stored in
a separate resource group that is automatically created in your subscription, in the format:
IT_<DestinationResourceGroup>_<TemplateName>.
3. Once the Image Template is created, you can then build the image. In the background Image Builder uses the
template and source files to create a VM (default size: Standard_D1_v2), network, public IP, NSG, and storage
in the IT_<DestinationResourceGroup>_<TemplateName> resource group.
4. As part of the image creation, Image Builder distributes the image according to the template, then deletes the
additional resources in the IT_<DestinationResourceGroup>_<TemplateName> resource group that was
created for the process.
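The staging resource group name in the steps above follows a fixed scheme; a minimal sketch of how it is composed (the group and template names are placeholders):

```shell
# Compose the staging resource group name Image Builder creates:
# IT_<DestinationResourceGroup>_<TemplateName>
destinationResourceGroup="myWinImgBuilderRG"
templateName="helloImageTemplateWin01"

stagingResourceGroup="IT_${destinationResourceGroup}_${templateName}"
echo "$stagingResourceGroup"
```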
Permissions
When you register for Azure Image Builder (AIB), the AIB service is granted permission to create, manage, and
delete a staging resource group (IT_*), and to add the resources to it that are required for the image build. This is
done by an AIB Service Principal Name (SPN) being made available in your subscription during a successful
registration.
To allow Azure VM Image Builder to distribute images to either managed images or to a Shared Image
Gallery, you will need to create an Azure user-assigned identity that has permissions to read and write images. If
you are accessing Azure storage, the identity also needs permissions to read private containers.
First, follow the create Azure user-assigned managed identity documentation to create an
identity.
Once you have the identity, you need to grant it permissions. To do this, you can use an Azure Custom Role
Definition and then assign the user-assigned managed identity to use the Custom Role Definition.
Permissions are explained in more detail here, and the examples show how this is implemented.
NOTE
Previously with AIB, you would use the AIB SPN and grant the SPN permissions to the image resource groups. We are
moving away from this model to allow for future capabilities. From 26 May 2020, Image Builder will not accept
templates that do not have a user-assigned identity; existing templates will need to be resubmitted to the service with a
user identity. The examples here already show how you can create a user-assigned identity and add it to a template.
For more information, please review the documentation on this change and release updates.
Costs
You will incur some compute, networking and storage costs when creating, building and storing images with
Azure Image Builder. These costs are similar to the costs incurred in manually creating custom images. For the
resources, you will be charged at your Azure rates.
During the image creation process, files are downloaded and stored in the
IT_<DestinationResourceGroup>_<TemplateName> resource group, which will incur a small storage cost. If you do
not want to keep these files, delete the Image Template after the image build.
Image Builder creates a VM using a D1v2 VM size, along with the storage and networking needed for the VM. These
resources last for the duration of the build process, and are deleted once Image Builder has finished
creating the image.
Azure Image Builder will distribute the image to your chosen regions, which might incur network egress charges.
Hyper-V generation
Image Builder currently only natively supports creating Hyper-V generation 1 (Gen1) images to the Azure
Shared Image Gallery (SIG) or a managed image. If you want to create Gen2 images, you need to use a
source Gen2 image and distribute to VHD. You will then need to create a managed image from the VHD,
and inject it into the SIG as a Gen2 image.
Next steps
To try out the Azure Image Builder, see the articles for building Linux or Windows images.
Preview: Create a Windows VM with Azure Image
Builder
11/2/2020 • 5 minutes to read
This article shows you how to create a customized Windows image using the Azure VM Image Builder.
The example in this article uses the following customizers:
PowerShell (ScriptUri) - download and run a PowerShell script.
Windows Restart - restarts the VM.
PowerShell (inline) - run a specific command. In this example, it creates a directory on the VM using
mkdir c:\\buildActions .
File - copy a file from GitHub onto the VM. This example copies index.md to c:\buildArtifacts\index.html on
the VM.
buildTimeoutInMinutes - increase the build timeout to allow for longer-running builds; the default is 240 minutes.
vmProfile - specify a vmSize and network properties.
osDiskSizeGB - increase the size of the image OS disk.
identity - provide an identity for Azure Image Builder to use during the build.
We will be using a sample .json template to configure the image. The .json file we are using is here:
helloImageTemplateWin.json.
IMPORTANT
Azure Image Builder is currently in public preview. This preview version is provided without a service level agreement, and it's
not recommended for production workloads. Certain features might not be supported or might have constrained
capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
Set variables
We will be using some pieces of information repeatedly, so we will create some variables to store that information.
Create a variable for your subscription ID. You can get this using az account show | grep id .
# get identity id
imgBuilderCliId=$(az identity show -g $imageResourceGroup -n $identityName | grep "clientId" | cut -c16- | tr -d '",')
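To see what the grep/cut/tr pipeline above does, here is the same pipeline applied to a canned line shaped like the az CLI's default JSON output (the GUID is a placeholder):

```shell
# A canned line mimicking one line of `az identity show` JSON output.
line='  "clientId": "11111111-2222-3333-4444-555555555555",'

# grep selects the clientId line, cut drops the key prefix,
# tr strips the remaining quotes and comma.
clientId=$(echo "$line" | grep "clientId" | cut -c16- | tr -d '",')
echo "$clientId"
```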
curl https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/quickquickstarts/0_Creating_a_Custom_Windows_Managed_Image/helloImageTemplateWin.json -o helloImageTemplateWin.json
You can modify this example, in the terminal using a text editor like vi .
vi helloImageTemplateWin.json
NOTE
For the source image, you must always specify a version; you cannot use latest . If you add or change the resource group
where the image is distributed to, you must make sure the permissions are set on the resource group.
Create the image
Submit the image configuration to the VM Image Builder service
az resource create \
--resource-group $imageResourceGroup \
--properties @helloImageTemplateWin.json \
--is-full-object \
--resource-type Microsoft.VirtualMachineImages/imageTemplates \
-n helloImageTemplateWin01
When complete, this returns a success message to the console and creates an
Image Builder Configuration Template in the $imageResourceGroup . You can see this resource in the resource group
in the Azure portal if you enable 'Show hidden types'.
In the background, Image Builder will also create a staging resource group in your subscription. This resource
group is used for the image build. It will be in this format: IT_<DestinationResourceGroup>_<TemplateName>
NOTE
You must not delete the staging resource group directly. Delete the image template artifact first; this causes the staging
resource group to be deleted.
If the service reports a failure during the image configuration template submission:
Review these troubleshooting steps.
You will need to delete the template, using the following snippet, before you retry submission.
az resource delete \
--resource-group $imageResourceGroup \
--resource-type Microsoft.VirtualMachineImages/imageTemplates \
-n helloImageTemplateWin01
Start the image build:
az resource invoke-action \
--resource-group $imageResourceGroup \
--resource-type Microsoft.VirtualMachineImages/imageTemplates \
-n helloImageTemplateWin01 \
--action Run
Wait until the build is complete. This can take about 15 minutes.
If you encounter any errors, please review these troubleshooting steps.
Create the VM
Create the VM using the image you built. Replace <password> with your own password for the aibuser on the
VM.
az vm create \
--resource-group $imageResourceGroup \
--name aibImgWinVm00 \
--admin-username aibuser \
--admin-password <password> \
--image $imageName \
--location $location
Log in to the VM, and then verify the image customizations by listing the contents of the C: drive:
dir c:\
You should see these two directories created during image customization:
buildActions
buildArtifacts
Clean up
When you are done, delete the resources.
Delete the image builder template
az resource delete \
--resource-group $imageResourceGroup \
--resource-type Microsoft.VirtualMachineImages/imageTemplates \
-n helloImageTemplateWin01
Next steps
To learn more about the components of the .json file used in this article, see Image builder template reference.
Preview: Create a Windows VM with Azure Image
Builder using PowerShell
11/2/2020 • 7 minutes to read
This article demonstrates how you can create a customized Windows image using the Azure VM Image Builder
PowerShell module.
Caution
Azure Image Builder is currently in public preview. This preview version is provided without a service level
agreement. It's not recommended for production workloads. Certain features might not be supported or might
have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
Prerequisites
If you don't have an Azure subscription, create a free account before you begin.
If you choose to use PowerShell locally, this article requires that you install the Az PowerShell module and connect
to your Azure account using the Connect-AzAccount cmdlet. For more information about installing the Az
PowerShell module, see Install Azure PowerShell.
IMPORTANT
While the Az.ImageBuilder and Az.ManagedServiceIdentity PowerShell modules are in preview, you must install them
separately using the Install-Module cmdlet with the AllowPrerelease parameter. Once these PowerShell modules
become generally available, they become part of future Az PowerShell module releases and available natively from within
Azure Cloud Shell.
To start Azure Cloud Shell, select the Cloud Shell button on the menu bar at the upper right in the Azure portal.
To run the code in this article in Azure Cloud Shell:
1. Start Cloud Shell.
2. Select the Copy button on a code block to copy the code.
3. Paste the code into the Cloud Shell session by selecting Ctrl+Shift+V on Windows and Linux, or by
selecting Cmd+Shift+V on macOS.
4. Select Enter to run the code.
If you have multiple Azure subscriptions, choose the appropriate subscription in which the resources should be
billed. Select a specific subscription using the Set-AzContext cmdlet.
Register features
If this is your first time using Azure Image Builder during the preview, register the new
VirtualMachineTemplatePreview feature.
NOTE
The RegistrationState may be in the Registering state for several minutes before changing to Registered. Wait until
the status is Registered before continuing.
Register the following resource providers for use with your Azure subscription if they aren't already registered.
Microsoft.Compute
Microsoft.KeyVault
Microsoft.Storage
Microsoft.VirtualMachineImages
Define variables
You'll be using several pieces of information repeatedly. Create variables to store the information.
# Destination image resource group name
$imageResourceGroup = 'myWinImgBuilderRG'
# Azure region
$location = 'WestUS2'
Create a variable for your Azure subscription ID. To confirm that the subscriptionID variable contains your
subscription ID, you can run the second line in the following example.
$myRoleImageCreationUrl = 'https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json'
$myRoleImageCreationPath = "$env:TEMP\myRoleImageCreation.json"
$RoleAssignParams = @{
ObjectId = $identityNamePrincipalId
RoleDefinitionName = $imageRoleDefName
Scope = "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup"
}
New-AzRoleAssignment @RoleAssignParams
NOTE
If you receive the error: "New-AzRoleDefinition: Role definition limit exceeded. No more role definitions can be created.", see
Troubleshoot Azure RBAC.
$myGalleryName = 'myImageGallery'
$imageDefName = 'winSvrImages'
$GalleryParams = @{
GalleryName = $myGalleryName
ResourceGroupName = $imageResourceGroup
Location = $location
Name = $imageDefName
OsState = 'generalized'
OsType = 'Windows'
Publisher = 'myCo'
Offer = 'Windows'
Sku = 'Win2019'
}
New-AzGalleryImageDefinition @GalleryParams
Create an image
Create an Azure image builder source object. See Find Windows VM images in the Azure Marketplace with Azure
PowerShell for valid parameter values.
$SrcObjParams = @{
SourceTypePlatformImage = $true
Publisher = 'MicrosoftWindowsServer'
Offer = 'WindowsServer'
Sku = '2019-Datacenter'
Version = 'latest'
}
$srcPlatform = New-AzImageBuilderSourceObject @SrcObjParams
$disObjParams = @{
SharedImageDistributor = $true
ArtifactTag = @{tag='dis-share'}
GalleryImageId = "/subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup/providers/Microsoft.Compute/galleries/$myGalleryName/images/$imageDefName"
ReplicationRegion = $location
RunOutputName = $runOutputName
ExcludeFromLatest = $false
}
$disSharedImg = New-AzImageBuilderDistributorObject @disObjParams
$ImgCustomParams = @{
PowerShellCustomizer = $true
CustomizerName = 'settingUpMgmtAgtPath'
RunElevated = $false
Inline = @("mkdir c:\\buildActions", "echo Azure-Image-Builder-Was-Here > c:\\buildActions\\buildActionsOutput.txt")
}
$Customizer = New-AzImageBuilderCustomizerObject @ImgCustomParams
$ImgTemplateParams = @{
ImageTemplateName = $imageTemplateName
ResourceGroupName = $imageResourceGroup
Source = $srcPlatform
Distribute = $disSharedImg
Customize = $Customizer
Location = $location
UserAssignedIdentityId = $identityNameResourceId
}
New-AzImageBuilderTemplate @ImgTemplateParams
When complete, a message is returned and an image builder configuration template is created in the
$imageResourceGroup .
To determine if the template creation process was successful, you can use the following example.
Get-AzImageBuilderTemplate -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup |
Select-Object -Property Name, LastRunStatusRunState, LastRunStatusMessage, ProvisioningState
In the background, image builder also creates a staging resource group in your subscription. This resource group is
used for the image build. It's in the format: IT_<DestinationResourceGroup>_<TemplateName> .
WARNING
Do not delete the staging resource group directly. Delete the image template artifact; this will cause the staging resource
group to be deleted.
If the service reports a failure during the image configuration template submission:
See Troubleshooting Azure VM Image Build (AIB) Failures.
Delete the template using the following example before you retry.
Wait for the image build process to complete. This step could take up to an hour.
If you encounter errors, review Troubleshooting Azure VM Image Build (AIB) Failures.
Create a VM
Store login credentials for the VM in a variable. The password must be complex.
$Cred = Get-Credential
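As a sketch (assuming the run output distributed by the template built earlier in this article; the VM name myWinVM01 is illustrative), you can look up the image's ArtifactId from the run output and pass it to New-AzVM:

```powershell
# Get the distributed image from the template's run output
$ArtifactId = (Get-AzImageBuilderRunOutput -ImageTemplateName $imageTemplateName -ResourceGroupName $imageResourceGroup -RunOutputName $runOutputName).ArtifactId
# Create a VM from the image, using the credentials stored in $Cred
New-AzVM -ResourceGroupName $imageResourceGroup -Image $ArtifactId -Name 'myWinVM01' -Credential $Cred
```

After connecting to the VM, reading c:\buildActions\buildActionsOutput.txt shows the file created during customization.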
You should see output based on the contents of the file created during the image customization process.
Azure-Image-Builder-Was-Here
Clean up resources
If the resources created in this article aren't needed, you can delete them by running the following examples.
Delete the image builder template
The following example deletes the specified resource group and all resources it contains. Any resources outside the scope of this article that exist in the same resource group are also deleted.
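A sketch of the cleanup, assuming $imageResourceGroup still holds the resource group name used in this article:

```powershell
# Delete the resource group and everything in it
Remove-AzResourceGroup -Name $imageResourceGroup -Force
```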
Next steps
To learn more about the components of the .json file used in this article, see Image builder template reference.
Use Azure Image Builder for Windows VMs to allow access to an existing Azure VNET
11/2/2020 • 7 minutes to read
This article shows you how you can use the Azure Image Builder to create a basic customized Windows image that
has access to existing resources on a VNET. The build VM you create is deployed to a new or existing VNET you
specify in your subscription. When you use an existing Azure VNET, the Azure Image Builder service does not
require public network connectivity.
IMPORTANT
Azure Image Builder is currently in public preview. This preview version is provided without a service level agreement, and it's
not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
OPTION    EXAMPLE/LINK
Select the Cloud Shell button on the menu bar at the upper
right in the Azure portal.
# distribution properties object name (runOutput), i.e. this gives you the properties of the managed image on completion
$runOutputName="winSvrSigR01"
# VNET properties (update to match your existing VNET, or leave as-is for demo)
# VNET name
$vnetName="myexistingvnet01"
# subnet name
$subnetName="subnet01"
# VNET resource group name
$vnetRgName="existingVnetRG"
# Existing Subnet NSG Name or the demo will create it
$nsgName="aibdemoNsg"
# NOTE! The VNET must always be in the same region as the Azure Image Builder service region.
$virtualNetwork | Set-AzVirtualNetwork
For more information on Image Builder networking, see Azure Image Builder Service networking options.
$aibRoleNetworkingUrl="https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleNetworking.json"
$aibRoleNetworkingPath = "aibRoleNetworking.json"
$aibRoleImageCreationUrl="https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json"
$aibRoleImageCreationPath = "aibRoleImageCreation.json"
# download configs
Invoke-WebRequest -Uri $templateUrl -OutFile $templateFilePath -UseBasicParsing
# create identity
New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $idenityName
# create role definitions from the example role configurations; this avoids granting Contributor to the SPN
New-AzRoleDefinition -InputFile ./aibRoleImageCreation.json
New-AzRoleDefinition -InputFile ./aibRoleNetworking.json
For more information on permissions, see Configure Azure Image Builder Service permissions using Azure CLI or
Configure Azure Image Builder Service permissions using PowerShell.
# note this will take a minute, as validation is run (security / dependencies etc.)
$managementEp = $currentAzureContext.Environment.ResourceManagerUrl
$urlBuildStatus = [System.String]::Format("{0}subscriptions/{1}/resourceGroups/$imageResourceGroup/providers/Microsoft.VirtualMachineImages/imageTemplates/{2}?api-version=2020-02-14", $managementEp, $currentAzureContext.Subscription.Id, $imageTemplateName)
The image build for this example takes approximately 50 minutes (multiple reboots and Windows Update installs). When you query the status, look for lastRunStatus. The output below shows the build still running; if it had completed successfully, runState would show 'Succeeded'.
"lastRunStatus": {
"startTime": "2019-08-21T00:39:40.61322415Z",
"endTime": "0001-01-01T00:00:00Z",
"runState": "Running",
"runSubState": "Building",
"message": ""
},
$managementEp = $currentAzureContext.Environment.ResourceManagerUrl
$urlRunOutputStatus = [System.String]::Format("{0}subscriptions/{1}/resourceGroups/$imageResourceGroup/providers/Microsoft.VirtualMachineImages/imageTemplates/$imageTemplateName/runOutputs/{2}?api-version=2020-02-14", $managementEp, $currentAzureContext.Subscription.Id, $runOutputName)
Create a VM
Now that the build is finished, you can create a VM from the image. Use the examples from the PowerShell New-AzVM documentation.
Clean Up
Delete Image Template Artifact
# Get ResourceID of the Image Template
$resTemplateId = Get-AzResource -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2020-02-14"
## remove definitions
Remove-AzRoleDefinition -Id $imageRoleDefObjId -Force
Remove-AzRoleDefinition -Id $networkRoleObjId -Force
## delete identity
Remove-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $idenityName -Force
Next steps
Learn more about Azure Shared Image Galleries.
Azure Image Builder Service networking options
11/2/2020 • 3 minutes to read
You can choose to deploy Azure Image Builder with or without an existing VNET.
NOTE
The VNET must be in the same region as the Azure Image Builder Service region.
"VirtualNetworkConfig": {
"name": "",
"subnetName": "",
"resourceGroupName": ""
},
subnetName - Name of the subnet within the specified virtual network. Must be specified if and only if name is specified.
Private Link service requires an IP from the given VNET and subnet. Currently, Azure doesn’t support Network
Policies on these IPs. Hence, network policies need to be disabled on the subnet. For more information, see the
Private Link documentation.
Checklist for using your VNET
1. Allow Azure Load Balancer (ALB) to communicate with the proxy VM in an NSG
AZ CLI example
PowerShell example
2. Disable Private Service Policy on subnet
AZ CLI example
PowerShell example
3. Allow Azure Image Builder to create an ALB and add VMs to the VNET
AZ CLI Example
PowerShell example
4. Allow Azure Image Builder to read/write source images and create images
AZ CLI example
PowerShell example
5. Ensure you are using a VNET in the same region as the Azure Image Builder service region.
Next steps
For more information, see Azure Image Builder overview.
Configure Azure Image Builder Service permissions
using Azure CLI
11/2/2020 • 7 minutes to read
Azure Image Builder Service requires configuration of permissions and privileges prior to building an image. The
following sections detail how to configure possible scenarios using Azure CLI.
IMPORTANT
Azure Image Builder is currently in public preview. This preview version is provided without a service level agreement, and it's
not recommended for production workloads. Certain features might not be supported or might have constrained
capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
OPTION    EXAMPLE/LINK
Select the Cloud Shell button on the menu bar at the upper
right in the Azure portal.
NOTE
Previously, Azure Image Builder used the Azure Image Builder service principal name (SPN) to grant permissions to the
image resource groups. Using the SPN will be deprecated. Use a user-assigned managed identity instead.
The following example shows you how to create an Azure user-assigned managed identity. Replace the
placeholder settings to set your variables.
identityName="aibIdentity"
imageResourceGroup=<Resource group>
az identity create \
--resource-group $imageResourceGroup \
--name $identityName
For more information about Azure user-assigned identities, see the Azure user-assigned managed identity
documentation on how to create an identity.
Microsoft.Compute/images/write
Microsoft.Compute/images/read
Microsoft.Compute/images/delete
Microsoft.Compute/galleries/read
Microsoft.Compute/galleries/images/read
Microsoft.Compute/galleries/images/versions/read
Microsoft.Network/virtualNetworks/read
Microsoft.Network/virtualNetworks/subnets/join/action
# Subscription ID - You can get this using `az account show | grep id` or from the Azure portal.
subscriptionID=<Subscription ID>
# Resource group - For Preview, image builder will only support creating custom images in the same Resource Group as the source managed image.
imageResourceGroup=<Resource group>
identityName="aibIdentity"
# Create a unique role name to avoid clashes in the same Azure Active Directory domain
imageRoleDefName="Azure Image Builder Image Def"$(date +'%s')
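Before granting the role, create the custom role definition from the downloaded JSON (a sketch; it assumes you have already substituted your subscription ID, resource group, and $imageRoleDefName into aibRoleImageCreation.json):

```bash
# create the custom role definition used by the role assignment below
az role definition create --role-definition ./aibRoleImageCreation.json
```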
# Grant the custom role to the user-assigned managed identity for Azure Image Builder.
az role assignment create \
--assignee $imgBuilderCliId \
--role $imageRoleDefName \
--scope /subscriptions/$subscriptionID/resourceGroups/$imageResourceGroup
# Grant the custom role to the user-assigned managed identity for Azure Image Builder.
az role assignment create \
--assignee $imgBuilderCliId \
--role $netRoleDefName \
--scope /subscriptions/$subscriptionID/resourceGroups/$VnetResourceGroup
NOTE
Azure Image Builder only uses the identity at image template submission time. The build VM does not have access to the
identity during image build.
In the Image Builder template, you need to provide the user-assigned managed identity.
"type": "Microsoft.VirtualMachineImages/imageTemplates",
"apiVersion": "2019-05-01-preview",
"location": "<Region>",
..
"identity": {
"type": "UserAssigned",
"userAssignedIdentities": {
"<Image Builder ID>": {}
}
For more information on using a user-assigned managed identity, see Create a Custom Image that uses an Azure user-assigned managed identity to seamlessly access files in Azure Storage. The quickstart walks through how to create and configure the user-assigned managed identity to access a storage account.
Next steps
For more information, see Azure Image Builder overview.
Configure Azure Image Builder Service permissions
using PowerShell
11/2/2020 • 6 minutes to read
Azure Image Builder Service requires configuration of permissions and privileges prior to building an image. The
following sections detail how to configure possible scenarios using PowerShell.
IMPORTANT
Azure Image Builder is currently in public preview. This preview version is provided without a service level agreement, and it's
not recommended for production workloads. Certain features might not be supported or might have constrained
capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
OPTION    EXAMPLE/LINK
Select the Cloud Shell button on the menu bar at the upper
right in the Azure portal.
NOTE
Previously, Azure Image Builder used the Azure Image Builder service principal name (SPN) to grant permissions to the
image resource groups. Using the SPN will be deprecated. Use a user-assigned managed identity instead.
The following example shows you how to create an Azure user-assigned managed identity. Replace the
placeholder settings to set your variables.
$parameters = @{
Name = 'aibIdentity'
ResourceGroupName = '<Resource group>'
}
# create identity
New-AzUserAssignedIdentity @parameters
For more information about Azure user-assigned identities, see the Azure user-assigned managed identity
documentation on how to create an identity.
Microsoft.Compute/images/write
Microsoft.Compute/images/read
Microsoft.Compute/images/delete
Microsoft.Compute/galleries/read
Microsoft.Compute/galleries/images/read
Microsoft.Compute/galleries/images/versions/read
Microsoft.Network/virtualNetworks/read
Microsoft.Network/virtualNetworks/subnets/join/action
# Create a unique role name to avoid clashes in the same Azure Active Directory domain
$timeInt=$(get-date -UFormat "%s")
$imageRoleDefName="Azure Image Builder Image Def"+$timeInt
# Grant the custom role to the user-assigned managed identity for Azure Image Builder.
$parameters = @{
ObjectId = $identityNamePrincipalId
RoleDefinitionName = $imageRoleDefName
Scope = '/subscriptions/' + $sub_id + '/resourceGroups/' + $imageResourceGroup
}
New-AzRoleAssignment @parameters
# Create a unique role name to avoid clashes in the same AAD domain
$timeInt=$(get-date -UFormat "%s")
$networkRoleDefName="Azure Image Builder Network Def"+$timeInt
# Assign the custom role to the user-assigned managed identity for Azure Image Builder
$parameters = @{
ObjectId = $identityNamePrincipalId
RoleDefinitionName = $networkRoleDefName
Scope = '/subscriptions/' + $sub_id + '/resourceGroups/' + $res_group
}
New-AzRoleAssignment @parameters
Next steps
For more information, see Azure Image Builder overview.
Azure Image Builder Service DevOps Task
11/2/2020 • 9 minutes to read
This article shows you how to use an Azure DevOps task to inject build artifacts into a VM image so you can install
and configure your application and OS.
Prerequisites
Install the Stable DevOps Task from Visual Studio Marketplace.
You must have an Azure DevOps account and a build pipeline created.
Register and enable the Image Builder feature requirements in the subscription used by the pipelines:
Az PowerShell
Az CLI
Create a Standard Azure storage account in the source image resource group; you can use a different resource group or storage account. The storage account is used to transfer the build artifacts from the DevOps task to the image.
# Az PowerShell
$timeInt=$(get-date -UFormat "%s")
$storageAccName="aibstorage"+$timeInt
$location="westus"
# create storage account and blob in resource group
New-AzStorageAccount -ResourceGroupName $strResourceGroup -Name $storageAccName -Location $location -SkuName Standard_LRS
# Az CLI
location=westus
scriptStorageAcc=aibstordot$(date +'%s')
# create storage account and blob in resource group
az storage account create -n $scriptStorageAcc -g $strResourceGroup -l $location --sku Standard_LRS
/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/images/<imageName>
Azure Shared Image Gallery - You need to pass in the resourceId of the image version, for example:
/subscriptions/$subscriptionID/resourceGroups/$sigResourceGroup/providers/Microsoft.Compute/galleries/$sigName/images/$imageDefName/versions/<versionNumber>
If you need to get the latest Shared Image Gallery version, you can add an Az PowerShell or Az CLI task beforehand that gets the latest version and sets a DevOps variable. Use the variable in the Az VM Image Builder DevOps task. For more information, see the examples.
(Marketplace) Base Image - There is a drop-down list of popular images; these always use the 'latest' version of the supported OSs. If the base image is not in the list, you can specify the exact image using Publisher:Offer:Sku .
Base Image Version (optional) - You can supply the version of the image you want to use; the default is latest .
Customize
Provisioner
Initially, two customizers are supported - Shell and PowerShell . Only inline is supported. If you want to download
scripts, then you can pass inline commands to do so.
For your OS, select PowerShell or Shell.
Windows Update Task
For Windows only, the task runs Windows Update at the end of the customizations. It handles the required reboots.
The following Windows Update configuration is executed:
"type": "WindowsUpdate",
"searchCriteria": "IsInstalled=0",
"filters": [
"exclude:$_.Title -like '*Preview*'",
"include:$true"
It installs important and recommended Windows Updates that are not preview.
Handling Reboots
Currently, the DevOps task does not support rebooting Windows builds; if you try to reboot with PowerShell code, the build will fail. However, you can use code to reboot Linux builds.
Build Path
The task is designed to be able to inject DevOps Build release artifacts into the image. To make this work, you need
to set up a build pipeline. In the setup of the release pipeline, you must add the repo of the build artifacts.
Select the Build Path button to choose the build folder you want to be placed on the image. The Image Builder task
copies all files and directories within it. When the image is being created, Image Builder deploys the files and
directories into different paths, depending on OS.
IMPORTANT
When adding a repo artifact, you may find the directory is prefixed with an underscore _. The underscore can cause issues
with the inline commands. Use the appropriate quotes in the commands.
Windows - Files exist in C:\ . A directory named buildArtifacts is created which includes the webapp
directory.
Linux - Files exist in /tmp . The webapp directory is created which includes all files and directories. You must
move the files from this directory. Otherwise, they will be deleted since it is in the temporary directory.
Inline customization script
Windows - You can enter PowerShell inline commands separated by commas. If you want to run a script in
your build directory, you can use:
& 'c:\buildArtifacts\webapp\webconfig.ps1'
Linux - On Linux systems, the build artifacts are put into the /tmp directory. However, on many Linux OSs, the /tmp directory contents are deleted on reboot. If you want the artifacts to exist in the image, you must create another directory and copy them over. For example:
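A sketch that copies the artifacts to a directory that survives reboots (the directory names are illustrative; adjust them to your artifact layout):

```bash
# copy build artifacts out of /tmp so they persist in the image
sudo mkdir -p /usr/local/myapp
sudo cp -r /tmp/AppsAndImageBuilderLinux/. /usr/local/myapp/
```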
If you are OK using the /tmp directory, you can use the following code to execute the script.
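A sketch, assuming a hypothetical setup script named webconfig.sh inside the webapp artifact directory:

```bash
# run the deployment script directly from the temporary artifact directory
sudo sh /tmp/AppsAndImageBuilderLinux/webapp/webconfig.sh
```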
NOTE
Image Builder does not automatically remove the build artifacts, it is strongly suggested that you always have code to
remove the build artifacts.
Windows - Image Builder deploys files to the c:\buildArtifacts directory. The directory is persisted, so you must remove it. You can remove it in the script you execute. For example:
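A sketch of such a cleanup step, run at the end of your inline PowerShell customization:

```powershell
# remove the persisted build artifacts so they are not baked into the image
Remove-Item -Path 'C:\buildArtifacts' -Recurse -Force
```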
Linux - The build artifacts are put into the /tmp directory. However, on many Linux OSs, the /tmp directory contents are deleted on reboot. It is suggested that you have code to remove the contents rather than relying on the OS to remove them. For example:
sudo rm -R "/tmp/AppsAndImageBuilderLinux"
NOTE
You need to manually delete the storage account or container after each build.
Distribute
Three distribute types are supported.
Managed Image
ResourceID:
/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/images/<imageName>
Locations
Azure Shared Image Gallery
The Shared Image Gallery must already exist.
ResourceID:
/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<galleryName>/images/<imageDefName>
Regions: list of regions, comma separated. For example, westus, eastus, centralus
VHD
You cannot pass any values to this type; Image Builder emits the VHD to the temporary Image Builder resource group, IT_<DestinationResourceGroup>_<TemplateName> , in the vhds container. When you start the release build, Image Builder emits logs, and when it has finished, it emits the VHD URL.
Optional Settings
VM Size - You can override the VM size from the default of Standard_D1_v2. You might override it to reduce total customization time, or because you want to create images that depend on certain VM sizes, such as GPU or HPC sizes.
How it works
When you create the release, the task creates a container in the storage account, named imagebuilder-vststask. It
zips and uploads your build artifacts and creates a SAS Token for the zip file.
The task uses the properties passed to the task to create the Image Builder Template artifact. The task does the
following:
Downloads the build artifact zip file and any other associated scripts. The files are saved in a storage account in
the temporary Image Builder resource group IT_<DestinationResourceGroup>_<TemplateName> .
Creates a template name prefixed with t_ followed by a 10-digit monotonic integer. The template is saved to the resource group you selected and exists in that resource group for the duration of the build.
Example output:
start reading task parameters...
found build at: /home/vsts/work/r1/a/_ImageBuilding/webapp
end reading parameters
getting storage account details for aibstordot1556933914
created archive /home/vsts/work/_temp/temp_web_package_21475337782320203.zip
Source for image: { type: 'SharedImageVersion',
imageVersionId:
'/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<galleryName>/im
ages/<imageDefName>/versions/<imgVersionNumber>' }
template name: t_1556938436xxx
starting put template...
When the image build starts, the run status is reported in the release logs:
When the image build completes, you see output similar to following text:
FAQ
Can I use an existing image template I have already created, outside of DevOps?
No, not at this time.
Can I specify the image template name?
No. A unique template name is used and then deleted.
The image builder failed. How can I troubleshoot?
If there is a build failure, the DevOps task does not delete the staging resource group. You can access the staging
resource group that contains the build customization log.
You will see an error in the DevOps log for the VM Image Builder task, along with the location of the customization.log.
For more information on troubleshooting, see Troubleshoot Azure Image Builder Service.
After investigating the failure, you can delete the staging resource group. First, delete the Image Template Resource
artifact. The artifact is prefixed with t_ and can be found in the DevOps task build log:
...
Source for image: { type: 'SharedImageVersion',
imageVersionId:
'/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Compute/galleries/<galleryName>/im
ages/<imageDefName>/versions/<imgVersionNumber>' }
...
template name: t_1556938436xxx
...
The Image Template resource artifact is in the resource group specified initially in the task. When you're done troubleshooting, delete the artifact. If you delete it using the Azure portal, select Show Hidden Types within the resource group to view the artifact.
Next steps
For more information, see Azure Image Builder overview.
Preview: Create an Azure Image Builder template
11/2/2020 • 18 minutes to read
Azure Image Builder uses a .json file to pass information into the Image Builder service. In this article, we go over the sections of the .json file so you can build your own. To see examples of full .json files, see the Azure Image Builder GitHub.
This is the basic template format:
{
"type": "Microsoft.VirtualMachineImages/imageTemplates",
"apiVersion": "2020-02-14",
"location": "<region>",
"tags": {
"<name": "<value>",
"<name>": "<value>"
},
"identity":{},
"dependsOn": [],
"properties": {
"buildTimeoutInMinutes": <minutes>,
"vmProfile":
{
"vmSize": "<vmSize>",
"osDiskSizeGB": <sizeInGB>,
"vnetConfig": {
"subnetId":
"/subscriptions/<subscriptionID>/resourceGroups/<vnetRgName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/<subnetName>"
}
},
"source": {},
"customize": {},
"distribute": {}
}
}
"type": "Microsoft.VirtualMachineImages/imageTemplates",
"apiVersion": "2020-02-14",
Location
The location is the region where the custom image will be created. For the Image Builder preview, the following regions are supported:
East US
East US 2
West Central US
West US
West US 2
North Europe
West Europe
"location": "<region>",
vmProfile
By default, Image Builder uses a "Standard_D1_v2" build VM. You can override this; for example, if you want to customize an image for a GPU VM, you need a GPU VM size. This setting is optional.
{
"vmSize": "Standard_D1_v2"
},
osDiskSizeGB
By default, Image Builder does not change the size of the image; it uses the size from the source image. You can only increase the size of the OS disk (Windows and Linux). This setting is optional, and a value of 0 means keeping the same size as the source image. You cannot reduce the OS disk size below the size of the source image.
{
"osDiskSizeGB": 100
},
vnetConfig
If you do not specify any VNET properties, Image Builder creates its own VNET, public IP, and NSG. The public IP is used for the service to communicate with the build VM. However, if you do not want a public IP, or want Image Builder to have access to your existing VNET resources, such as configuration servers (DSC, Chef, Puppet, Ansible) or file shares, you can specify a VNET. For more information, review the networking documentation. This setting is optional.
"vnetConfig": {
"subnetId":
"/subscriptions/<subscriptionID>/resourceGroups/<vnetRgName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/<subnetName>"
}
}
Tags
These are key/value pairs you can specify for the image that's generated.
Depends on (optional)
This optional section can be used to ensure that dependencies are completed before proceeding.
"dependsOn": [],
Identity
Required - For Image Builder to have permissions to read/write images and read scripts from Azure Storage, you must create an Azure user-assigned identity that has permissions to the individual resources. For details on how Image Builder permissions work, and the relevant steps, review the documentation.
"identity": {
"type": "UserAssigned",
"userAssignedIdentities": {
"<imgBuilderId>": {}
}
},
Properties: source
The source section contains information about the source image that will be used by Image Builder. Image Builder currently natively supports creating only Hyper-V generation 1 (Gen1) images to the Azure Shared Image Gallery (SIG) or a managed image. If you want to create Gen2 images, you need to use a source Gen2 image and distribute to VHD. Afterward, you need to create a managed image from the VHD and inject it into the SIG as a Gen2 image.
The API requires a 'SourceType' that defines the source for the image build. Currently there are three types:
PlatformImage - indicates the source image is a Marketplace image.
ManagedImage - use this when starting from a regular managed image.
SharedImageVersion - used when an image version in a Shared Image Gallery is the source.
NOTE
When using existing Windows custom images, you can run the Sysprep command up to 8 times on a single Windows image. For more information, see the Sysprep documentation.
PlatformImage source
Azure Image Builder supports Windows Server and client, and Linux Azure Marketplace images, see here for the full list.
"source": {
"type": "PlatformImage",
"publisher": "Canonical",
"offer": "UbuntuServer",
"sku": "18.04-LTS",
"version": "latest"
},
The properties here are the same as those used to create a VM; you can retrieve them for a Marketplace image using the Azure CLI.
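For example (a sketch; requires an authenticated az session), to list the Canonical UbuntuServer images used above along with their publisher, offer, sku, and version:

```bash
# list available UbuntuServer images from Canonical, with sku and version
az vm image list --publisher Canonical --offer UbuntuServer --all --output table
```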
You can use 'latest' for the version; the version is evaluated when the image build takes place, not when the template is submitted. If you use this functionality with the Shared Image Gallery destination, you can avoid resubmitting the template and rerun the image build at intervals, so your images are recreated from the most recent source images.
Support for Marketplace Plan Information
You can also specify plan information, for example:
"source": {
"type": "PlatformImage",
"publisher": "RedHat",
"offer": "rhel-byos",
"sku": "rhel-lvm75",
"version": "latest",
"planInfo": {
"planName": "rhel-lvm75",
"planProduct": "rhel-byos",
"planPublisher": "redhat"
}
}
ManagedImage source
Sets the source image as an existing managed image of a generalized VHD or VM.
NOTE
The source managed image must be of a supported OS, and the image must be in the same region as your Azure Image Builder template.
"source": {
"type": "ManagedImage",
"imageId":
"/subscriptions/<subscriptionId>/resourceGroups/{destinationResourceGroupName}/providers/Microsoft.Compute/images/<imageName>"
}
The imageId should be the ResourceId of the managed image. Use az image list to list available images.
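For example, an az image list invocation (requires an authenticated az session; the placeholder follows this article's conventions):

```bash
# list managed images in a resource group, including their resource IDs
az image list --resource-group <resourceGroup> --output table
```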
SharedImageVersion source
Sets the source image to an existing image version in a Shared Image Gallery.
NOTE
The source image version must be of a supported OS, and it must be in the same region as your Azure Image Builder template; if not, replicate the image version to the Image Builder template region.
"source": {
"type": "SharedImageVersion",
"imageVersionID": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/p
roviders/Microsoft.Compute/galleries/<sharedImageGalleryName>/images/<imageDefinitionName/versions/<imageVersion>"
}
The imageVersionId should be the ResourceId of the image version. Use az sig image-version list to list image versions.
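For example (requires an authenticated az session; the gallery names are placeholders):

```bash
# list image versions for a Shared Image Gallery image definition
az sig image-version list \
  --resource-group <resourceGroup> \
  --gallery-name <sharedImageGalleryName> \
  --gallery-image-definition <imageDefinitionName> \
  --output table
```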
Properties: buildTimeoutInMinutes
By default, Image Builder runs for 240 minutes. After that, it times out and stops, whether or not the image build is complete. If the timeout is hit, you will see an error similar to this:
[ERROR] Failed while waiting for packerizer: Timeout waiting for microservice to
[ERROR] complete: 'context deadline exceeded'
If you do not specify a buildTimeoutInMinutes value, or set it to 0, the default value is used. You can increase or decrease the value up to the maximum of 960 minutes (16 hours). For Windows, we do not recommend setting this below 60 minutes. If you find you are hitting the timeout, review the logs to see if a customization step is waiting on something like user input.
If you need more time for customizations to complete, set this to what you think you need, with a little overhead. But do not set it too high, because you might have to wait for it to time out before seeing an error.
Properties: customize
Image Builder supports multiple ‘customizers’. Customizers are functions that are used to customize your image, such as running scripts, or rebooting
servers.
When using customize :
You can use multiple customizers, but they must have a unique name .
Customizers execute in the order specified in the template.
If one customizer fails, then the whole customization component will fail and report back an error.
It is strongly advised you test the script thoroughly before using it in a template. Debugging the script on your own VM will be easier.
Do not put sensitive data in the scripts.
The script locations need to be publicly accessible, unless you are using MSI.
"customize": [
{
"type": "Shell",
"name": "<name>",
"scriptUri": "<path to script>",
"sha256Checksum": "<sha256 checksum>"
},
{
"type": "Shell",
"name": "<name>",
"inline": [
"<command to run inline>",
]
}
],
The customize section is an array. Azure Image Builder will run through the customizers in sequential order. Any failure in any customizer will fail the
build process.
NOTE
Inline commands can be viewed in the image template definition and by Microsoft Support when helping with a support case. If you have sensitive information, it
should be moved into scripts in Azure Storage, where access requires authentication.
Shell customizer
The Shell customizer supports running shell scripts; these must be publicly accessible for Image Builder to access them.
"customize": [
{
"type": "Shell",
"name": "<name>",
"scriptUri": "<link to script>",
"sha256Checksum": "<sha256 checksum>"
},
],
"customize": [
{
"type": "Shell",
"name": "<name>",
"inline": "<commands to run>"
},
],
OS Support: Linux
Customize properties:
type – Shell
name - name for tracking the customization
scriptUri - URI to the location of the file
inline - array of shell commands, separated by commas.
sha256Checksum - The SHA-256 checksum of the file. You generate this locally, and Image Builder then computes the checksum and validates it.
To generate the sha256Checksum, run sha256sum <fileName> in a terminal on Mac/Linux.
For commands to run with super-user privileges, they must be prefixed with sudo .
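The checksum step above can be sketched as follows; customize.sh here is a hypothetical stand-in for your own script file:

```shell
# write a stand-in script, then compute the value to place in sha256Checksum
printf 'echo Azure-Image-Builder-Was-Here\n' > /tmp/customize.sh
hash=$(sha256sum /tmp/customize.sh | awk '{print $1}')
echo "$hash"
```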
NOTE
Inline commands are stored as part of the image template definition. You can see them when you dump out the image definition, and they are also visible to Microsoft Support in the event of a support case for troubleshooting purposes. If you have sensitive commands or values, it is strongly recommended that you move them into scripts, and use a user identity to authenticate to Azure Storage.
Windows restart customizer
"customize": [
{
"type": "WindowsRestart",
"restartCommand": "shutdown /r /f /t 0",
"restartCheckCommand": "echo Azure-Image-Builder-Restarted-the-VM > c:\\buildArtifacts\\azureImageBuilderRestart.txt",
"restartTimeout": "5m"
}
],
OS Support: Windows
Customize properties:
type – WindowsRestart
restartCommand - Command to execute the restart (optional). The default is 'shutdown /r /f /t 0 /c \"packer restart\"' .
restartCheckCommand – Command to check if restart succeeded (optional).
restartTimeout - Restart timeout specified as a string of magnitude and unit. For example, 5m (5 minutes) or 2h (2 hours). The default is: '5m'
Linux restart
There is no Linux restart customizer. However, if you are installing drivers or components that require a restart, you can install them and invoke a restart using the Shell customizer. Note that there is a 20-minute SSH timeout to the build VM.
PowerShell customizer
The PowerShell customizer supports running PowerShell scripts and inline commands. The scripts must be publicly accessible for Image Builder to access them.
"customize": [
{
"type": "PowerShell",
"name": "<name>",
"scriptUri": "<path to script>",
"runElevated": "<true or false>",
"sha256Checksum": "<sha256 checksum>"
},
{
"type": "PowerShell",
"name": "<name>",
"inline": [
"<PowerShell syntax to run>"
],
"validExitCodes": "<exit code>",
"runElevated": "<true or false>"
}
],
NOTE
The File customizer is only suitable for small file downloads (less than 20 MB). For larger file downloads, use a script or inline command with code that downloads the files, such as wget or curl on Linux, or Invoke-WebRequest on Windows.
Files in the File customizer can be downloaded from Azure Storage using MSI.
Windows Update Customizer
This customizer is built on the community Windows Update Provisioner for Packer, an open-source project maintained by the Packer community. Microsoft tests and validates the provisioner with the Image Builder service, will support investigating issues with it, and will work to resolve issues; however, the open-source project is not officially supported by Microsoft. For detailed documentation on, and help with, the Windows Update Provisioner, see the project repository.
"customize": [
{
"type": "WindowsUpdate",
"searchCriteria": "IsInstalled=0",
"filters": [
"exclude:$_.Title -like '*Preview*'",
"include:$true"
],
"updateLimit": 20
}
],
OS support: Windows
Customize properties:
type – WindowsUpdate.
searchCriteria - Optional. Defines which types of updates are installed (Recommended, Important, and so on). BrowseOnly=0 and IsInstalled=0
(Recommended) is the default.
filters – Optional, allows you to specify a filter to include or exclude updates.
updateLimit – Optional, defines how many updates can be installed, default 1000.
NOTE
The Windows Update customizer can fail if there are any outstanding Windows restarts or application installations still running. Typically you will see this error in the
customization.log: System.Runtime.InteropServices.COMException (0x80240016): Exception from HRESULT: 0x80240016. We strongly advise you to consider adding
a Windows Restart, and/or allowing applications enough time to complete their installations, using sleep or wait commands
(https://docs.microsoft.com/powershell/module/microsoft.powershell.utility/start-sleep) in the inline commands or scripts, before running
Windows Update.
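One way to apply that advice is to give installers time to finish and force a restart before the Windows Update customizer runs. A sketch of such a customize section (the customizer names and the 60-second wait are illustrative, not from the original):

```json
"customize": [
    {
        "type": "PowerShell",
        "name": "waitForInstallers",
        "inline": [
            "Start-Sleep -Seconds 60"
        ]
    },
    {
        "type": "WindowsRestart",
        "restartTimeout": "5m"
    },
    {
        "type": "WindowsUpdate",
        "searchCriteria": "IsInstalled=0"
    }
],
```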
Generalize
By default, Azure Image Builder will also run ‘deprovision’ code at the end of each image customization phase, to ‘generalize’ the image. Generalizing is a
process where the image is set up so it can be reused to create multiple VMs. For Windows VMs, Azure Image Builder uses Sysprep. For Linux, Azure
Image Builder runs ‘waagent -deprovision’.
The commands Image Builder uses to generalize may not be suitable for every situation, so Azure Image Builder allows you to customize this
command, if needed.
If you are migrating an existing customization that uses different Sysprep/waagent commands, try the Image Builder generic commands first; if VM creation then fails, use your own Sysprep or waagent commands.
If Azure Image Builder creates a Windows custom image successfully, and you create a VM from it, then find that the VM creation fails or does not
complete successfully, you will need to review the Windows Server Sysprep documentation or raise a support request with the Windows Server Sysprep
Customer Services Support team, who can troubleshoot and advise on the correct Sysprep usage.
Default Sysprep command
Properties: distribute
Azure Image Builder supports three distribution targets:
managedImage - managed image.
sharedImage - Shared Image Gallery.
VHD - VHD in a storage account.
You can distribute an image to more than one of these target types in the same configuration.
NOTE
The default AIB Sysprep command does not include "/mode:vm"; however, this may be required when creating images that will have the Hyper-V role installed. If you need
to add this command argument, you must override the Sysprep command.
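The troubleshooting section later notes that Image Builder de-provisions Windows using c:\DeprovisioningScript.ps1. One approach to overriding the Sysprep command, sketched below as an assumption to validate against the template reference, is a PowerShell customizer that overwrites that script with your own Sysprep invocation including /mode:vm:

```json
{
    "type": "PowerShell",
    "name": "overrideSysprep",
    "runElevated": true,
    "inline": [
        "Write-Output '& $env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /quiet /mode:vm /quit' | Out-File -FilePath C:\\DeprovisioningScript.ps1 -Encoding ASCII -Force"
    ]
}
```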
Because you can have more than one target to distribute to, Image Builder maintains a state for every distribution target that can be accessed by
querying the runOutputName . The runOutputName is an object you can query post distribution for information about that distribution. For example, you
can query the location of the VHD, or regions where the image version was replicated to, or SIG Image version created. This is a property of every
distribution target. The runOutputName must be unique for each distribution target. Here is an example that queries a Shared Image Gallery
distribution:
subscriptionID=<subcriptionID>
imageResourceGroup=<resourceGroup of image template>
runOutputName=<runOutputName>
az resource show \
--ids "/subscriptions/$subscriptionID/resourcegroups/$imageResourceGroup/providers/Microsoft.VirtualMachineImages/imageTemplates/ImageTemplateLinuxRHEL77/runOutputs/$runOutputName" \
--api-version=2020-02-14
Output:
{
"id":
"/subscriptions/xxxxxx/resourcegroups/rheltest/providers/Microsoft.VirtualMachineImages/imageTemplates/ImageTemplateLinuxRHEL77/runOutputs/rhel77",
"identity": null,
"kind": null,
"location": null,
"managedBy": null,
"name": "rhel77",
"plan": null,
"properties": {
"artifactId":
"/subscriptions/xxxxxx/resourceGroups/aibDevOpsImg/providers/Microsoft.Compute/galleries/devOpsSIG/images/rhel/versions/0.24105.52755",
"provisioningState": "Succeeded"
},
"resourceGroup": "rheltest",
"sku": null,
"tags": null,
"type": "Microsoft.VirtualMachineImages/imageTemplates/runOutputs"
}
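The artifactId can be pulled out of that output with standard tools (the sample JSON below is abbreviated from the output above; with the Azure CLI you could equally use --query properties.artifactId -o tsv):

```shell
# Sample run output, abbreviated from 'az resource show' above.
runOutput='{
  "name": "rhel77",
  "properties": {
    "artifactId": "/subscriptions/xxxxxx/resourceGroups/aibDevOpsImg/providers/Microsoft.Compute/galleries/devOpsSIG/images/rhel/versions/0.24105.52755",
    "provisioningState": "Succeeded"
  }
}'

# Extract the value of the artifactId property.
artifactId=$(printf '%s\n' "$runOutput" | sed -n 's/.*"artifactId": "\([^"]*\)".*/\1/p')
echo "$artifactId"
```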
Distribute: managedImage
The image output will be a managed image resource.
{
"type":"managedImage",
"imageId": "<resource ID>",
"location": "<region>",
"runOutputName": "<name>",
"artifactTags": {
"<name>": "<value>",
"<name>": "<value>"
}
}
Distribute properties:
type – managedImage
imageId – Resource ID of the destination image, expected format:
/subscriptions/<subscriptionId>/resourceGroups/<destinationResourceGroupName>/providers/Microsoft.Compute/images/<imageName>
location - location of the managed image.
runOutputName – unique name for identifying the distribution.
artifactTags - Optional user-specified key value pair tags.
NOTE
The destination resource group must exist. If you want the image distributed to a different region, it will increase the deployment time.
Distribute: sharedImage
The Azure Shared Image Gallery is an image management service that allows you to manage image region replication, versioning, and sharing of custom
images. Azure Image Builder supports distributing with this service, so you can distribute images to regions supported by Shared Image Galleries.
A Shared Image Gallery is made up of:
Gallery - Container for multiple shared images. A gallery is deployed in one region.
Image definitions - a conceptual grouping for images.
Image versions - this is an image type used for deploying a VM or scale set. Image versions can be replicated to other regions where VMs need to be
deployed.
Before you can distribute to the Image Gallery, you must create a gallery and an image definition, see Shared images.
{
"type": "SharedImage",
"galleryImageId": "<resource ID>",
"runOutputName": "<name>",
"artifactTags": {
"<name>": "<value>",
"<name>": "<value>"
},
"replicationRegions": [
"<region where the gallery is deployed>",
"<region>"
]
}
NOTE
If the image template and the referenced image definition are not in the same location, creating images takes additional time. Image Builder currently does not
have a location parameter for the image version resource; it is taken from the parent image definition. For example, if an image definition is in westus and you
want the image version replicated to eastus, a blob is copied to westus, an image version resource is created from it in westus, and it is then replicated to eastus. To
avoid the additional replication time, ensure the image definition and image template are in the same location.
Distribute: VHD
You can output to a VHD. You can then copy the VHD and use it to publish to Azure Marketplace, or use it with Azure Stack.
{
"type": "VHD",
"runOutputName": "<VHD name>",
"tags": {
"<name>": "<value>",
"<name>": "<value>"
}
}
az resource show \
--ids "/subscriptions/$subscriptionId/resourcegroups/<imageResourceGroup>/providers/Microsoft.VirtualMachineImages/imageTemplates/<imageTemplateName>/runOutputs/<runOutputName>" | grep artifactUri
NOTE
Once the VHD has been created, copy it to a different location as soon as possible. The VHD is stored in a storage account in the temporary resource group created
when the image template is submitted to the Azure Image Builder service. If you delete the image template, you will lose the VHD.
To cancel a running image build, use:
az resource invoke-action \
--resource-group $imageResourceGroup \
--resource-type Microsoft.VirtualMachineImages/imageTemplates \
-n helloImageTemplateLinux01 \
--action Cancel
Next steps
There are sample .json files for different scenarios in the Azure Image Builder GitHub.
Preview: Create a Windows image and distribute it to
a Shared Image Gallery
11/2/2020 • 7 minutes to read • Edit Online
This article shows you how to use Azure Image Builder and Azure PowerShell to create an image
version in a Shared Image Gallery, and then distribute the image globally. You can also do this using the Azure CLI.
We will be using a .json template to configure the image. The .json file we are using is here:
armTemplateWinSIG.json. We will be downloading and editing a local version of the template, so this article is
written for a local PowerShell session.
To distribute the image to a Shared Image Gallery, the template uses sharedImage as the value for the distribute
section of the template.
Azure Image Builder automatically runs sysprep to generalize the image, this is a generic sysprep command, which
you can override if needed.
Be aware how many times you layer customizations. You can run the Sysprep command up to 8 times on a single
Windows image. After running Sysprep 8 times, you must recreate your Windows image. For more information,
see Limits on how many times you can run Sysprep.
IMPORTANT
Azure Image Builder is currently in public preview. This preview version is provided without a service level agreement, and it's
not recommended for production workloads. Certain features might not be supported or might have constrained
capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
If they do not return Registered , use the following to register the providers:
Create variables
We will be using some pieces of information repeatedly, so we will create some variables to store that information.
Replace the values for the variables, like username and vmpassword , with your own information.
# Location
$location="westus"
# Create a resource group for Image Template and Shared Image Gallery
New-AzResourceGroup `
-Name $imageResourceGroup `
-Location $location
# create identity
New-AzUserAssignedIdentity -ResourceGroupName $imageResourceGroup -Name $identityName
$aibRoleImageCreationUrl="https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/solutions/12_Creating_AIB_Security_Roles/aibRoleImageCreation.json"
$aibRoleImageCreationPath = "aibRoleImageCreation.json"
# download config
Invoke-WebRequest -Uri $aibRoleImageCreationUrl -OutFile $aibRoleImageCreationPath -UseBasicParsing
### NOTE: If you see this error: 'New-AzRoleDefinition: Role definition limit exceeded. No more role
definitions can be created.' See this article to resolve:
https://docs.microsoft.com/azure/role-based-access-control/troubleshooting
$templateFilePath = "armTemplateWinSIG.json"
Invoke-WebRequest `
-Uri "https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/quickquickstarts/1_Creating_a_Custom_Win_Shared_Image_Gallery_Image/armTemplateWinSIG.json" `
-OutFile $templateFilePath `
-UseBasicParsing
Invoke-AzResourceAction `
-ResourceName $imageTemplateName `
-ResourceGroupName $imageResourceGroup `
-ResourceType Microsoft.VirtualMachineImages/imageTemplates `
-ApiVersion "2019-05-01-preview" `
-Action Run
Creating the image and replicating it to both regions can take a while. Wait until this part is finished before moving
on to creating a VM.
For information on options for automating getting the image build status, see the Readme for this template on
GitHub.
Create the VM
Create a VM from the image version that was created by Azure Image Builder.
Get the image version you created.
$imageVersion = Get-AzGalleryImageVersion `
-ResourceGroupName $imageResourceGroup `
-GalleryName $sigGalleryName `
-GalleryImageDefinitionName $imageDefName
Create the VM in the second region, to which the image was replicated.
$vmResourceGroup = "myResourceGroup"
$vmName = "myVMfromImage"
# Network pieces
$subnetConfig = New-AzVirtualNetworkSubnetConfig -Name mySubnet -AddressPrefix 192.168.1.0/24
$vnet = New-AzVirtualNetwork -ResourceGroupName $vmResourceGroup -Location $replRegion2 `
-Name MYvNET -AddressPrefix 192.168.0.0/16 -Subnet $subnetConfig
$pip = New-AzPublicIpAddress -ResourceGroupName $vmResourceGroup -Location $replRegion2 `
-Name "mypublicdns$(Get-Random)" -AllocationMethod Static -IdleTimeoutInMinutes 4
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleRDP -Protocol Tcp `
-Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
-DestinationPortRange 3389 -Access Allow
$nsg = New-AzNetworkSecurityGroup -ResourceGroupName $vmResourceGroup -Location $replRegion2 `
-Name myNetworkSecurityGroup -SecurityRules $nsgRuleRDP
$nic = New-AzNetworkInterface -Name myNic -ResourceGroupName $vmResourceGroup -Location $replRegion2 `
-SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id
# Create a virtual machine configuration using $imageVersion.Id to specify the shared image
$vmConfig = New-AzVMConfig -VMName $vmName -VMSize Standard_D1_v2 | `
Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred | `
Set-AzVMSourceImage -Id $imageVersion.Id | `
Add-AzVMNetworkInterface -Id $nic.Id
dir c:\
You should see a directory named buildActions that was created during image customization.
Clean up resources
If you want to now try re-customizing the image version to create a new version of the same image, skip this
step and go on to Use Azure Image Builder to create another image version.
This will delete the image that was created, along with all of the other resource files. Make sure you are finished
with this deployment before deleting the resources.
Delete the image template resource first; otherwise, the staging resource group (IT_) used by AIB will not be cleaned
up.
Get ResourceID of the image template.
remove definitions
delete identity
Next steps
To learn how to update the image version you created, see Use Azure Image Builder to create another image
version.
Preview: Create a new VM image version from an
existing image version using Azure Image Builder in
Windows
11/2/2020 • 4 minutes to read • Edit Online
This article shows you how to take an existing image version in a Shared Image Gallery, update it, and publish it as
a new image version to the gallery.
We will be using a sample .json template to configure the image. The .json file we are using is here:
helloImageTemplateforSIGfromWinSIG.json.
IMPORTANT
Azure Image Builder is currently in public preview. This preview version is provided without a service level agreement, and it's
not recommended for production workloads. Certain features might not be supported or might have constrained
capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
Create a variable for your subscription ID. You can get this using az account show | grep id .
subscriptionID=<Subscription ID>
If you already have your own Shared Image Gallery, and did not follow the previous example, you will need to
assign permissions for Image Builder to access the Resource Group, so it can access the gallery. Please review the
steps in the Create an image and distribute to a Shared Image Gallery example.
az resource create \
--resource-group $sigResourceGroup \
--properties @helloImageTemplateforSIGfromWinSIG.json \
--is-full-object \
--resource-type Microsoft.VirtualMachineImages/imageTemplates \
-n imageTemplateforSIGfromWinSIG01
az resource invoke-action \
--resource-group $sigResourceGroup \
--resource-type Microsoft.VirtualMachineImages/imageTemplates \
-n imageTemplateforSIGfromWinSIG01 \
--action Run
Wait until the image has been built and replicated before moving on to the next step.
Create the VM
az vm create \
--resource-group $sigResourceGroup \
--name aibImgWinVm002 \
--admin-username $username \
--admin-password $vmpassword \
--image
"/subscriptions/$subscriptionID/resourceGroups/$sigResourceGroup/providers/Microsoft.Compute/galleries/$sigNam
e/images/$imageDefName/versions/latest" \
--location $location
dir c:\
Next steps
To learn more about the components of the .json file used in this article, see Image builder template reference.
Create an image and use a user-assigned managed
identity to access files in Azure Storage
11/2/2020 • 5 minutes to read • Edit Online
Azure Image Builder supports using scripts, or copying files, from multiple locations, such as GitHub and Azure
Storage. To use them, they must be externally accessible to Azure Image Builder, though you can protect
Azure Storage blobs using SAS tokens.
This article shows how to create a customized image using the Azure VM Image Builder, where the service will use
a User-assigned Managed Identity to access files in Azure storage for the image customization, without you having
to make the files publicly accessible, or setting up SAS tokens.
In the example below, you will create two resource groups: one will be used for the custom image, and the other
will host an Azure Storage account that contains a script file. This simulates a real-life scenario, where you may
have build artifacts, or image files, in different storage accounts, outside of Image Builder. You will create a user-
assigned identity and grant it read permissions on the script file, but you will not set any public access on that
file. You will then use the Shell customizer to download and run that script from the storage account.
IMPORTANT
Azure Image Builder is currently in public preview. This preview version is provided without a service level agreement, and it's
not recommended for production workloads. Certain features might not be supported or might have constrained
capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
Create a variable for your subscription ID. You can get this using az account show | grep id .
Create the resource groups for both the image and the script storage.
# get the identity client ID
imgBuilderCliId=$(az identity show -g $imageResourceGroup -n $identityName | grep "clientId" | cut -c16- | tr -d '",')
Create the storage and copy the sample script into it from GitHub.
# script container
scriptStorageAccContainer=scriptscont$(date +'%s')
# script url
scriptUrl=https://$scriptStorageAcc.blob.core.windows.net/$scriptStorageAccContainer/customizeScript.sh
Grant the user-assigned identity permission to read the script in the storage account. The --assignee value is the
client ID of the user identity.
az role assignment create \
--assignee $imgBuilderCliId \
--role "Storage Blob Data Reader" \
--scope /subscriptions/$subscriptionID/resourceGroups/$strResourceGroup/providers/Microsoft.Storage/storageAccounts/$scriptStorageAcc/blobServices/default/containers/$scriptStorageAccContainer
curl https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/quickquickstarts/7_Creating_Custom_Image_using_MSI_to_Access_Storage/helloImageTemplateMsi.json -o helloImageTemplateMsi.json
sed -i -e "s/<subscriptionID>/$subscriptionID/g" helloImageTemplateMsi.json
sed -i -e "s/<rgName>/$imageResourceGroup/g" helloImageTemplateMsi.json
sed -i -e "s/<region>/$location/g" helloImageTemplateMsi.json
sed -i -e "s/<imageName>/$imageName/g" helloImageTemplateMsi.json
sed -i -e "s%<scriptUrl>%$scriptUrl%g" helloImageTemplateMsi.json
sed -i -e "s%<imgBuilderId>%$imgBuilderId%g" helloImageTemplateMsi.json
sed -i -e "s%<runOutputName>%$runOutputName%g" helloImageTemplateMsi.json
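After running substitutions like these, it is worth checking that no placeholders were missed before submitting the template. A minimal sketch using a stand-in file (the content, region, and URL below are illustrative):

```shell
# Minimal stand-in for the downloaded template.
cat > helloImageTemplateMsi.json <<'EOF'
{ "location": "<region>", "scriptUri": "<scriptUrl>" }
EOF

location=westus
scriptUrl=https://example.blob.core.windows.net/scripts/customizeScript.sh

# Same substitution style as above: / delimiter for plain values, % for URLs.
sed -i -e "s/<region>/$location/g" helloImageTemplateMsi.json
sed -i -e "s%<scriptUrl>%$scriptUrl%g" helloImageTemplateMsi.json

# Fail loudly if any <placeholder> survived the substitutions.
if grep -q '<[a-zA-Z]*>' helloImageTemplateMsi.json; then
  echo "unsubstituted placeholders remain" >&2
  exit 1
fi
```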
az resource create \
--resource-group $imageResourceGroup \
--properties @helloImageTemplateMsi.json \
--is-full-object \
--resource-type Microsoft.VirtualMachineImages/imageTemplates \
-n helloImageTemplateMsi01
az resource invoke-action \
--resource-group $imageResourceGroup \
--resource-type Microsoft.VirtualMachineImages/imageTemplates \
-n helloImageTemplateMsi01 \
--action Run
Wait for the build to complete. This can take about 15 minutes.
Create a VM
Create a VM from the image.
az vm create \
--resource-group $imageResourceGroup \
--name aibImgVm00 \
--admin-username aibuser \
--image $imageName \
--location $location \
--generate-ssh-keys
After the VM has been created, start an SSH session with the VM.
ssh aibuser@<publicIp>
You should see the image was customized with a Message of the Day as soon as your SSH connection is
established!
*******************************************************
** This VM was built from the: **
** !! AZURE VM IMAGE BUILDER Custom Image !! **
** You have just been Customized :-) **
*******************************************************
Clean up
When you are finished, you can delete the resources if they are no longer needed.
Next steps
If you have any trouble working with Azure Image Builder, see Troubleshooting.
Troubleshoot Azure Image Builder Service
11/2/2020 • 21 minutes to read • Edit Online
This article helps you troubleshoot and resolve common issues you may encounter when using Azure Image
Builder Service.
AIB failures can happen in two areas:
Image Template submission
Image Build
NOTE
For PowerShell, you will need to install the Azure Image Builder PowerShell Modules.
The following sections include problem resolution guidance for common image template submission errors.
Update/Upgrade of image templates is currently not supported
Error
Cause
The template already exists.
Solution
If you submit an image configuration template and the submission fails, a failed template artifact still exists. Delete
the failed template.
The resource operation completed with terminal provisioning state 'Failed'
Error
Microsoft.VirtualMachineImages/imageTemplates 'helloImageTemplateforSIG01' failed with message '{
"status": "Failed",
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "InternalOperationError",
"message": "Internal error occurred."
Cause
In most cases, the resource deployment failure error occurs due to missing permissions.
Solution
Depending on your scenario, Azure Image Builder may need permissions to:
Source image or Shared Image Gallery resource group
Distribution image or Shared Image Gallery resource
The storage account, container, or blob that the File customizer is accessing.
For more information on configuring permissions, see Configure Azure Image Builder Service permissions using
Azure CLI or Configure Azure Image Builder Service permissions using PowerShell
Error getting managed image
Error
Cause
Missing permissions.
Solution
Depending on your scenario, Azure Image Builder may need permissions to:
Source image or Shared Image Gallery resource group
Distribution image or Shared Image Gallery resource
The storage account, container, or blob that the File customizer is accessing.
For more information on configuring permissions, see Configure Azure Image Builder Service permissions using
Azure CLI or Configure Azure Image Builder Service permissions using PowerShell
Build step failed for image version
Error
Downloading external file (<myFile>) to local file (xxxxx.0.customizer.fp) [attempt 1 of 10] failed: Error
downloading '<myFile>' to 'xxxxx.0.customizer.fp'..
Cause
The file name or location is incorrect, or the location is not reachable.
Solution
Ensure the file is reachable. Verify the name and location are correct.
Customization log
When the image build is running, logs are created and stored in a storage account. Azure Image Builder creates
the storage account in the temporary resource group when you create an image template artifact.
The storage account name uses the following pattern:
IT_<ImageResourceGroupName>_<TemplateName>_<GUID>
For example, IT_aibmdi_helloImageTemplateLinux01.
You can view the customization.log in the storage account in the resource group by selecting Storage Account >
Blobs > packerlogs. Then select directory > customization.log.
Understanding the customization log
The log is verbose. It covers the image build including any issues with the image distribution, such as Shared
Image Gallery replication. These errors are surfaced in the error message of the image template status.
The customization.log includes the following stages:
1. Deploy the build VM and dependencies, using ARM templates, to the IT_ staging resource group. This
stage includes multiple POSTs to the Azure Image Builder resource provider:
PACKER ERR 2020/04/30 23:28:50 packer: 2020/04/30 23:28:50 Azure request method="GET"
request="https://management.azure.com/subscriptions/<subID>/resourcegroups/IT_aibImageRG200_window2019V
netTemplate01_dec33089-1cc3-4505-ae28-
6661e43fac48/providers/Microsoft.Resources/deployments/pkrdp51lc0339jg/operationStatuses/08586133176207
523519?[REDACTED]" body=""
PACKER ERR 2020/04/30 23:30:50 packer: 2020/04/30 23:30:50 Waiting for WinRM, up to timeout: 10m0s
..
PACKER OUT azure-arm: WinRM connected.
If Linux, the Azure Image Builder Service will connect using SSH:
4. Run customizations stage. When customizations run, you can identify them by reviewing the
customization.log. Search for (telemetry).
5. De-provision stage. Azure Image Builder adds a hidden customizer. This de-provision step is responsible for
preparing the VM for de-provisioning. It runs Windows Sysprep (using c:\DeprovisioningScript.ps1), or in
Linux waagent deprovision (using /tmp/DeprovisioningScript.sh).
For example:
6. Clean up stage. Once the build has completed, Azure Image Builder resources are deleted.
PACKER ERR 2020/02/04 02:04:23 packer: 2020/02/04 02:04:23 Azure request method="DELETE"
request="https://management.azure.com/subscriptions/<subId>/resourceGroups/IT_aibDevOpsImg_t_vvvvvvv_yy
yyyy-de5f-4f7c-92f2-xxxxxxxx/providers/Microsoft.Network/networkInterfaces/pkrnijamvpo08eo?[REDACTED]"
body=""
Tips for troubleshooting script/inline customization
Test the code before supplying it to Image Builder
Ensure Azure Policy and firewalls allow connectivity to remote resources.
Output comments to the console, such as by using Write-Host or echo ; this will allow you to search the
customization.log.
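The tips above can be sketched as grep searches over the log: (telemetry) marks customizer boundaries, and your own console output gives you searchable markers. The log excerpt below is hypothetical:

```shell
# Hypothetical excerpt of a downloaded customization.log.
cat > customization.log <<'EOF'
PACKER OUT azure-arm: Starting provisioner shell
PACKER ERR 2020/04/30 23:30:50 (telemetry) Starting provisioner shell
PACKER OUT azure-arm: MARKER: installed nginx
PACKER ERR 2020/04/30 23:31:10 (telemetry) ending shell
EOF

# Find customizer start/end entries.
grep -n '(telemetry)' customization.log

# Find your own Write-Host / echo markers.
grep -n 'MARKER:' customization.log
```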
"provisioningState": "Succeeded",
"lastRunStatus": {
"startTime": "2020-04-30T23:24:06.756985789Z",
"endTime": "2020-04-30T23:39:14.268729811Z",
"runState": "Failed",
"message": "Failed while waiting for packerizer: Microservice has failed: Failed while processing request:
Error when executing packerizer: Packer build command has failed: exit status 1. During the image build, a
failure has occurred, please review the build log to identify which build/customization step failed. For more
troubleshooting steps go to https://aka.ms/azvmimagebuilderts. Image Build log location:
https://xxxxxxxxxx.blob.core.windows.net/packerlogs/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx/customization.log.
OperationId: xxxxxx-5a8c-4379-xxxx-8d85493bc791. Use this operationId to search packer logs."
Cause
Customization failure.
Solution
Review the log to locate customizer failures. Search for (telemetry).
For example:
Timeout exceeded
Error
Cause
The build exceeded the build timeout. This error is seen in the 'lastrunstatus'.
Solution
1. Review the customization.log. Identify the last customizer to run. Search for (telemetry) starting from the
bottom of the log.
2. Check script customizations. The customizations may not be suppressing user interaction for commands,
such as quiet options. For example, apt-get install -y results in the script execution waiting for user
interaction.
3. If you are using the File customizer to download artifacts greater than 20 MB, see workarounds section.
4. Review errors and dependencies in script that may cause the script to wait.
5. If you expect that the customizations need more time, increase buildTimeoutInMinutes. The default is four
hours.
6. If you have resource-intensive actions, such as downloading gigabytes of files, consider the underlying
build VM size. The service uses a Standard_D1_v2 VM. The VM has 1 vCPU and 3.5 GB of memory. If you
are downloading 50 GB, this will likely exhaust the VM resources and cause communication failures
between the Azure Image Builder Service and build VM. Retry the build with a larger memory VM, by
setting the VM_Size.
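The build VM size is set in the image template's vmProfile. A sketch of the relevant fragment (the property placement follows my reading of the AIB template schema, so verify it against the template reference):

```json
"properties": {
    "vmProfile": {
        "vmSize": "Standard_D2_v2"
    }
}
```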
Long file download time
Error
Cause
File customizer is downloading a large file.
Solution
The file customizer is only suitable for small file downloads less than 20 MB. For larger file downloads, use a script
or inline command. For example, on Linux you can use wget or curl . On Windows, you can use
Invoke-WebRequest .
Cause
Image Builder timed out waiting for the image to be added and replicated to the Shared Image Gallery (SIG). If the
image is being injected into the SIG, it can be assumed the image build was successful. However, the overall
process failed, because the image builder was waiting on shared image gallery to complete the replication. Even
though the build has failed, the replication continues. You can get the properties of the image version by checking
the distribution runOutput.
runOutputName=<distributionRunOutput>
az resource show \
--ids "/subscriptions/$subscriptionID/resourcegroups/$imageResourceGroup/providers/Microsoft.VirtualMachineImages/imageTemplates/$imageTemplateName/runOutputs/$runOutputName" \
--api-version=2019-05-01-preview
Solution
Increase the buildTimeoutInMinutes .
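buildTimeoutInMinutes is a property of the image template. A sketch raising it from the four-hour default (240 minutes) to eight hours — treat the exact placement as an assumption to verify against the template reference:

```json
"properties": {
    "buildTimeoutInMinutes": 480
}
```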
Low Windows resource information events
Error
Cause
Resource exhaustion. This issue is commonly seen when Windows Update runs using the default build VM size,
Standard_D1_v2.
Solution
Increase the build VM size.
Builds finished but no artifacts were created
Error
Cause
Time out caused by waiting for required Azure resources to be created.
Solution
Rerun the build to try again.
Resource not found
Error
"provisioningState": "Succeeded",
"lastRunStatus": {
"startTime": "2020-05-01T00:13:52.599326198Z",
"endTime": "2020-05-01T00:15:13.62366898Z",
"runState": "Failed",
"message": "network.InterfacesClient#UpdateTags: Failure responding to request: StatusCode=404 -- Original
Error: autorest/azure: Service returned an error. Status=404 Code=\"ResourceNotFound\" Message=\"The Resource
'Microsoft.Network/networkInterfaces/aibpls7lz2e.nic.4609d697-be0a-4cb0-86af-49b6fe877fe1' under resource
group 'IT_aibImageRG200_window2019VnetTemplate01_9988723b-af56-413a-9006-84130af0e9df' was not found.\""
},
Cause
Missing permissions.
Solution
Recheck that Azure Image Builder has all the permissions it requires.
For more information on configuring permissions, see Configure Azure Image Builder Service permissions using
Azure CLI or Configure Azure Image Builder Service permissions using PowerShell.
Sysprep timing
Error
Cause
The cause may be a timing issue related to the D1_V2 VM size. If the customizations are limited and run in less than
three seconds, Azure Image Builder runs the sysprep commands to deprovision. When Azure Image Builder
deprovisions, the sysprep command checks for the WindowsAzureGuestAgent, which may not be fully installed yet,
causing the timing issue.
Solution
Increase the VM size. Or, you can add a 60-second PowerShell sleep customization to avoid the timing issue.
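A sleep customization like the one mentioned above could be sketched as a PowerShell customizer at the end of the customize section:

```json
{
  "type": "PowerShell",
  "name": "settleBeforeSysprep",
  "inline": [
    "Start-Sleep -Seconds 60"
  ]
}
```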
Cancelling builder after context cancellation context canceled
Error
PACKER ERR 2020/03/26 22:11:23 Cancelling builder after context cancellation context canceled
PACKER OUT Cancelling build after receiving terminated
PACKER ERR 2020/03/26 22:11:23 packer-builder-azure-arm plugin: Cancelling hook after context cancellation
context canceled
..
PACKER ERR 2020/03/26 22:11:23 packer-builder-azure-arm plugin: Cancelling provisioning due to context
cancellation: context canceled
PACKER ERR 2020/03/26 22:11:25 packer-builder-azure-arm plugin: [ERROR] Remote command exited without exit
status or exit signal.
PACKER ERR 2020/03/26 22:11:25 packer-builder-azure-arm plugin: [INFO] RPC endpoint: Communicator ended with:
2300218
PACKER ERR 2020/03/26 22:11:25 [INFO] 148974 bytes written for 'stdout'
PACKER ERR 2020/03/26 22:11:25 [INFO] 0 bytes written for 'stderr'
PACKER ERR 2020/03/26 22:11:25 [INFO] RPC client: Communicator ended with: 2300218
PACKER ERR 2020/03/26 22:11:25 [INFO] RPC endpoint: Communicator ended with: 2300218
Cause
The Image Builder service uses port 22 (Linux) or 5986 (Windows) to connect to the build VM. This error occurs when the
service is disconnected from the build VM during an image build. Reasons for the disconnection can vary, but
enabling or configuring a firewall in a script can block the ports above.
Solution
Review your scripts for firewall changes/enablement, or changes to SSH or WinRM, and ensure any changes allow
for constant connectivity between the service and build VM on the ports above. For more information on Image
Builder networking, please review the requirements.
DevOps task
Troubleshooting the task
The task fails only if an error occurs during customization. When this happens, the task reports the failure and
leaves the staging resource group, with the logs, so you can identify the issue.
To locate the log, you need to know the template name. Go to pipeline > failed build, and drill into the AIB DevOps
task; there you will see the log and a template name:
Go to the portal and search for the template name, then look for the resource group prefixed with IT_. Go to
the storage account > blobs > containers > logs.
Troubleshooting successful builds
There may be cases where you need to investigate a successful build and want to review the log. As
mentioned, if the image build is successful, the staging resource group that contains the logs is deleted as
part of the cleanup. However, you can introduce a sleep after the inline command, and then get the logs while
the build is paused. To do this, follow these steps:
1. Update the inline command to add: Write-Host / Echo "Sleep". This gives you a string to search for in the log.
2. Add a sleep of at least 10 minutes; you can use the Start-Sleep or sleep (Linux) command.
3. Use the method above to identify the log location, and then keep downloading and checking the log until it gets to
the sleep.
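The steps above could look like the following PowerShell inline commands (the marker string and sleep length are illustrative):

```powershell
Write-Host "Sleep"          # marker string to search for in the customization log
Start-Sleep -Seconds 600    # pause about 10 minutes so you can download the log
```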
Operation was canceled
Error
2020-05-05T18:28:24.9280196Z ##[section]Starting: Azure VM Image Builder Task
2020-05-05T18:28:24.9609966Z ==============================================================================
2020-05-05T18:28:24.9610739Z Task : Azure VM Image Builder Test(Preview)
2020-05-05T18:28:24.9611277Z Description : Build images using Azure Image Builder resource provider.
2020-05-05T18:28:24.9611608Z Version : 1.0.18
2020-05-05T18:28:24.9612003Z Author : Microsoft Corporation
2020-05-05T18:28:24.9612718Z Help : For documentation, and end to end example, please visit:
http://aka.ms/azvmimagebuilderdevops
2020-05-05T18:28:24.9613390Z ==============================================================================
2020-05-05T18:28:26.0651512Z start reading task parameters...
2020-05-05T18:28:26.0673377Z found build at: d:\a\r1\a\_AppsAndImageBuilder\webApp
2020-05-05T18:28:26.0708785Z end reading parameters
2020-05-05T18:28:26.0745447Z getting storage account details for aibstagstor1565047758
2020-05-05T18:28:29.8812270Z created archive d:\a\_temp\temp_web_package_09737279437949953.zip
2020-05-05T18:28:33.1568013Z Source for image: { type: 'PlatformImage',
2020-05-05T18:28:33.1584131Z publisher: 'MicrosoftWindowsServer',
2020-05-05T18:28:33.1585965Z offer: 'WindowsServer',
2020-05-05T18:28:33.1592768Z sku: '2016-Datacenter',
2020-05-05T18:28:33.1594191Z version: '14393.3630.2004101604' }
2020-05-05T18:28:33.1595387Z template name: t_1588703313152
2020-05-05T18:28:33.1597453Z starting put template...
2020-05-05T18:28:52.9278603Z put template: Succeeded
2020-05-05T18:28:52.9281282Z starting run template...
2020-05-05T19:33:14.3923479Z ##[error]The operation was canceled.
2020-05-05T19:33:14.3939721Z ##[section]Finishing: Azure VM Image Builder Task
Cause
If the build was not canceled by a user, it was canceled by the Azure DevOps user agent. Most likely, the build hit the
1-hour timeout imposed by Azure DevOps. If you are using a private project and agent, you get 60
minutes of build time. If the build exceeds the timeout, DevOps cancels the running task.
For more information on Azure DevOps capabilities and limitations, see Microsoft-hosted agents.
Solution
You can host your own DevOps agents, or look to reduce the time of your build. For example, if you are
distributing to the Shared Image Gallery, replicate to one region only, or replicate asynchronously.
Slow Windows Logon: 'Please wait for the Windows Modules Installer'
Error
After you create a Windows 10 image with Image Builder and then create a VM from the image, when you connect
over RDP you have to wait several minutes at the first logon, seeing a blue screen with the message:
Solution
First, in the image build, check that there are no outstanding reboots required by adding a Windows Restart
customizer as the last customization, and that all software installation is complete. Finally, add the /mode:vm option
to the default sysprep command that AIB uses; see below, 'VMs created from AIB images do not create successfully' >
'Overriding the Commands'.
NOTE
If AIB creates a Windows custom image successfully, and you then create a VM from it and find that the VM does not create
successfully (for example, the VM creation command does not complete or times out), you will need to review the Windows
Server Sysprep documentation. Or, you can raise a support request with the Windows Server Sysprep Customer Services
Support team, who can troubleshoot and advise on the correct Sysprep command.
Windows:
c:\DeprovisioningScript.ps1
Linux:
/tmp/DeprovisioningScript.sh
Next steps
For more information, see Azure Image Builder overview.
Find and use VM images in the Azure Marketplace
with Azure PowerShell
11/2/2020 • 6 minutes to read • Edit Online
This article describes how to use Azure PowerShell to find VM images in the Azure Marketplace. You can then
specify a Marketplace image when you create a VM.
You can also browse available images and offers using the Azure Marketplace storefront, the Azure portal, or the
Azure CLI.
Terminology
A Marketplace image in Azure has the following attributes:
Publisher : The organization that created the image. Examples: Canonical, MicrosoftWindowsServer
Offer : The name of a group of related images created by a publisher. Examples: UbuntuServer, WindowsServer
SKU : An instance of an offer, such as a major release of a distribution. Examples: 18.04-LTS, 2019-Datacenter
Version : The version number of an image SKU.
To identify a Marketplace image when you deploy a VM programmatically, supply these values individually as
parameters. Some tools accept an image URN, which combines these values, separated by the colon (:) character:
Publisher:Offer:Sku:Version. In a URN, you can replace the version number with "latest", which selects the latest
version of the image.
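A URN is just the four values joined with colons. As a quick shell illustration (the values match the examples later in this article):

```shell
publisher="MicrosoftWindowsServer"
offer="WindowsServer"
sku="2019-Datacenter"
version="latest"   # "latest" selects the newest version of the image
# URN format: Publisher:Offer:Sku:Version
urn="${publisher}:${offer}:${sku}:${version}"
echo "$urn"
```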
If the image publisher provides additional license and purchase terms, then you must accept those terms and
enable programmatic deployment. You'll also need to supply purchase plan parameters when deploying a VM
programmatically. See Deploy an image with Marketplace terms.
PUBLISHER  OFFER  SKU
$pubName="<publisher>"
Get-AzVMImageOffer -Location $locName -PublisherName $pubName | Select Offer
$offerName="<offer>"
Get-AzVMImageSku -Location $locName -PublisherName $pubName -Offer $offerName | Select Skus
4. Fill in your chosen SKU name and get the image version:
$skuName="<SKU>"
Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Sku $skuName | Select Version
From the output of the Get-AzVMImage command, you can select a version image to deploy a new virtual machine.
The following example shows the full sequence of commands and their outputs:
$locName="West US"
Get-AzVMImagePublisher -Location $locName | Select PublisherName
Partial output:
PublisherName
-------------
...
abiquo
accedian
accellion
accessdata-group
accops
Acronis
Acronis.Backup
actian-corp
actian_matrix
actifio
activeeon
adgs
advantech
advantech-webaccess
advantys
...
$pubName="MicrosoftWindowsServer"
Get-AzVMImageOffer -Location $locName -PublisherName $pubName | Select Offer
Output:
Offer
-----
Windows-HUB
WindowsServer
WindowsServerSemiAnnual
$offerName="WindowsServer"
Get-AzVMImageSku -Location $locName -PublisherName $pubName -Offer $offerName | Select Skus
Partial output:
Skus
----
2008-R2-SP1
2008-R2-SP1-smalldisk
2012-Datacenter
2012-Datacenter-smalldisk
2012-R2-Datacenter
2012-R2-Datacenter-smalldisk
2016-Datacenter
2016-Datacenter-Server-Core
2016-Datacenter-Server-Core-smalldisk
2016-Datacenter-smalldisk
2016-Datacenter-with-Containers
2016-Datacenter-with-RDSH
2019-Datacenter
2019-Datacenter-Core
2019-Datacenter-Core-smalldisk
2019-Datacenter-Core-with-Containers
...
Then, for the 2019-Datacenter SKU:
$skuName="2019-Datacenter"
Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Sku $skuName | Select Version
Now you can combine the selected publisher, offer, SKU, and version into a URN (values separated by :). Pass this
URN with the -Image parameter when you create a VM with the New-AzVM cmdlet. You can optionally replace
the version number in the URN with "latest" to get the latest version of the image.
If you deploy a VM with a Resource Manager template, then you'll set the image parameters individually in the
imageReference properties. See the template reference.
$version = "2016.127.20170406"
Get-AzVMImage -Location $locName -PublisherName $pubName -Offer $offerName -Skus $skuName -Version $version
Output:
Id : /Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/MicrosoftWindowsServer/ArtifactTypes/VMImage/Offers/WindowsServer/Skus/2016-Datacenter/Versions/2019.0.20190115
Location : westus
PublisherName : MicrosoftWindowsServer
Offer : WindowsServer
Skus : 2019-Datacenter
Version : 2019.0.20190115
FilterExpression :
Name : 2019.0.20190115
OSDiskImage : {
"operatingSystem": "Windows"
}
PurchasePlan : null
DataDiskImages : []
The example below shows a similar command for the Data Science Virtual Machine - Windows 2016 image,
which has the following PurchasePlan properties: name , product , and publisher . Some images also have a
promotion code property. To deploy this image, see the following sections to accept the terms and to enable
programmatic deployment.
Output:
Id : /Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westus/Publishers/microsoft-ads/ArtifactTypes/VMImage/Offers/windows-data-science-vm/Skus/windows2016/Versions/19.01.14
Location : westus
PublisherName : microsoft-ads
Offer : windows-data-science-vm
Skus : windows2016
Version : 19.01.14
FilterExpression :
Name : 19.01.14
OSDiskImage : {
"operatingSystem": "Windows"
}
PurchasePlan : {
"publisher": "microsoft-ads",
"name": "windows2016",
"product": "windows-data-science-vm"
}
DataDiskImages : []
Output:
Publisher : microsoft-ads
Product : windows-data-science-vm
Plan : windows2016
LicenseTextLink :
https://storelegalterms.blob.core.windows.net/legalterms/3E5ED_legalterms_MICROSOFT%253a2DADS%253a24WINDOWS%2
53a2DDATA%253a2DSCIENCE%253a2DVM%253a24WINDOWS2016%253a24OC5SKMQOXSED66BBSNTF4XRCS4XLOHP7QMPV54DQU7JCBZWYFP35
IDPOWTUKXUC7ZAG7W6ZMDD6NHWNKUIVSYBZUTZ245F44SU5AD7Q.txt
PrivacyPolicyLink : https://www.microsoft.com/EN-US/privacystatement/OnlineServices/Default.aspx
Signature :
2UMWH6PHSAIM4U22HXPXW25AL2NHUJ7Y7GRV27EBL6SUIDURGMYG6IIDO3P47FFIBBDFHZHSQTR7PNK6VIIRYJRQ3WXSE6BTNUNENXA
Accepted : False
Signdate : 1/25/2019 7:43:00 PM
Use the Set-AzMarketplaceTerms cmdlet to accept or reject the terms. You only need to accept the terms once per
subscription for the image. Be sure to use all lowercase letters in the parameter values.
$agreementTerms=Get-AzMarketplaceTerms -Publisher "microsoft-ads" -Product "windows-data-science-vm" -Name "windows2016"
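To actually accept the terms, you can pipe the retrieved agreement to Set-AzMarketplaceTerms with the -Accept switch; a sketch (verify against the current Az.MarketplaceOrdering module):

```powershell
Get-AzMarketplaceTerms -Publisher "microsoft-ads" -Product "windows-data-science-vm" -Name "windows2016" |
    Set-AzMarketplaceTerms -Accept
```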
Output:
Publisher : microsoft-ads
Product : windows-data-science-vm
Plan : windows2016
LicenseTextLink :
https://storelegalterms.blob.core.windows.net/legalterms/3E5ED_legalterms_MICROSOFT%253a2DADS%253a24WINDOWS%2
53a2DDATA%253a2DSCIENCE%253a2DV
M%253a24WINDOWS2016%253a24OC5SKMQOXSED66BBSNTF4XRCS4XLOHP7QMPV54DQU7JCBZWYFP35IDPOWTUKXUC7ZAG7W6ZMDD6NHWNKUIV
SYBZUTZ245F44SU5AD7Q.txt
PrivacyPolicyLink : https://www.microsoft.com/EN-US/privacystatement/OnlineServices/Default.aspx
Signature :
XXXXXXK3MNJ5SROEG2BYDA2YGECU33GXTD3UFPLPC4BAVKAUL3PDYL3KBKBLG4ZCDJZVNSA7KJWTGMDSYDD6KRLV3LV274DLBXXXXXX
Accepted : True
Signdate : 2/23/2018 7:49:31 PM
...
$publisherName = "microsoft-ads"
$productName = "windows-data-science-vm"
$planName = "windows2016"
$vmConfig = Set-AzVMPlan -VM $vmConfig -Publisher $publisherName -Product $productName -Name $planName
$cred=Get-Credential
$offerName = "windows-data-science-vm"
$skuName = "windows2016"
$version = "19.01.14"
$vmConfig = Set-AzVMSourceImage -VM $vmConfig -PublisherName $publisherName -Offer $offerName -Skus $skuName -Version $version
...
You'll then pass the VM configuration along with network configuration objects to the New-AzVM cmdlet.
Next steps
To create a virtual machine quickly with the New-AzVM cmdlet by using basic image information, see Create a
Windows virtual machine with PowerShell.
For more information on using Azure Marketplace images to create custom images in a shared image gallery, see
Supply Azure Marketplace purchase plan information when creating images.
Supply Azure Marketplace purchase plan
information when creating images
11/2/2020 • 2 minutes to read • Edit Online
If you are creating an image in a shared gallery from a source that was originally created from an Azure
Marketplace image, you may need to keep track of purchase plan information. This article shows how to find the
purchase plan information for a VM, and then use that information when creating an image definition. We also
cover using the information from the image definition to simplify supplying the purchase plan information
when creating a VM from an image.
For more information about finding and using Marketplace images, see Find and use Azure Marketplace
images.
$vm = Get-AzVM `
-ResourceGroupName myResourceGroup `
-Name myVM
$vm.Plan.Publisher
$vm.Plan.Name
$vm.Plan.Product
Then create variables for the gallery you want to use. In this example, we are creating a variable named
$gallery for myGallery in the myGalleryRG resource group.
$gallery = Get-AzGallery `
-Name myGallery `
-ResourceGroupName myGalleryRG
Create the image definition, using the -PurchasePlanPublisher , -PurchasePlanProduct , and -PurchasePlanName
parameters.
$imageDefinition = New-AzGalleryImageDefinition `
-GalleryName $gallery.Name `
-ResourceGroupName $gallery.ResourceGroupName `
-Location $gallery.Location `
-Name 'myImageDefinition' `
-OsState specialized `
-OsType Linux `
-Publisher 'myPublisher' `
-Offer 'myOffer' `
-Sku 'mySKU' `
-PurchasePlanPublisher $vm.Plan.Publisher `
-PurchasePlanProduct $vm.Plan.Product `
-PurchasePlanName $vm.Plan.Name
Then create your image version using New-AzGalleryImageVersion. You can create an image version from a
VM, a managed image, a VHD or snapshot, or another image version.
Create the VM
When you go to create a VM from the image, you can use the information from the image definition to pass in
the publisher information using Set-AzVMPlan.
# Create some variables for the new VM.
$resourceGroup = "mySIGPubVM"
$location = "West Central US"
$vmName = "mySIGPubVM"
# Create a virtual machine configuration using Set-AzVMSourceImage -Id $imageDefinition.Id to use the
# latest available image version. Set-AzVMPlan is used to pass the plan information in for the VM.
$vmConfig = New-AzVMConfig `
-VMName $vmName `
-VMSize Standard_D1_v2 | `
Set-AzVMSourceImage -Id $imageDefinition.Id | `
Set-AzVMPlan `
-Publisher $imageDefinition.PurchasePlan.Publisher `
-Product $imageDefinition.PurchasePlan.Product `
-Name $imageDefinition.PurchasePlan.Name | `
Add-AzVMNetworkInterface -Id $nic.Id
Before you upload a Windows virtual machine (VM) from on-premises to Azure, you must prepare the virtual
hard disk (VHD or VHDX). Azure supports both generation 1 and generation 2 VMs that are in VHD file format
and that have a fixed-size disk. The maximum size allowed for the OS VHD on a generation 1 VM is 2 TB.
You can convert a VHDX file to VHD and convert a dynamically expanding disk to a fixed-size disk, but you can't
change a VM's generation. For more information, see Should I create a generation 1 or 2 VM in Hyper-V? and
Support for generation 2 VMs on Azure.
For information about the support policy for Azure VMs, see Microsoft server software support for Azure VMs.
NOTE
The instructions in this article apply to:
The 64-bit version of Windows Server 2008 R2 and later Windows Server operating systems. For information about
running a 32-bit operating system in Azure, see Support for 32-bit operating systems in Azure VMs.
If a disaster recovery tool such as Azure Site Recovery or Azure Migrate will be used to migrate the workload, this
process is still required on the guest OS to prepare the image before the migration.
IMPORTANT
Use an elevated PowerShell session to run the examples in this article.
sfc.exe /scannow
After the SFC scan completes, install Windows Updates and restart the computer.
If the VM needs to work with a specific proxy, add a proxy exception for the Azure IP address
(168.63.129.16) so the VM can connect to Azure:
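One way to add the bypass is with netsh; a sketch (the proxy address is a placeholder for your own proxy):

```
netsh winhttp set proxy proxy-server="http://proxyserver:80" bypass-list="168.63.129.16"
```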
3. Open DiskPart:
diskpart.exe
4. Set Coordinated Universal Time (UTC) time for Windows. Also, set the startup type of the Windows time
service w32time to Automatic :
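A sketch of these settings, run from an elevated PowerShell session:

```powershell
# Tell Windows the hardware clock is set to UTC
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\TimeZoneInformation' -Name RealTimeIsUniversal -Value 1 -Type DWord -Force
# Start the Windows time service automatically
Set-Service -Name w32time -StartupType Automatic
```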
6. Make sure the environmental variables TEMP and TMP are set to their default values:
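A sketch of resetting both variables to their defaults:

```powershell
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment' -Name TEMP -Value "%SystemRoot%\TEMP" -Type ExpandString -Force
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Environment' -Name TMP -Value "%SystemRoot%\TEMP" -Type ExpandString -Force
```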
Get-Service -Name BFE, Dhcp, Dnscache, IKEEXT, iphlpsvc, nsi, mpssvc, RemoteRegistry |
Where-Object StartType -ne Automatic |
Set-Service -StartupType Automatic
NOTE
If you receive an error message when running
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services' -Name <string> -Value <object>
, you can safely ignore it. It means the domain isn't setting that configuration through a Group Policy Object.
2. The RDP port is set up correctly using the default port of 3389:
When you deploy a VM, the default rules are created for port 3389. To change the port number, do that
after the VM is deployed in Azure.
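One way to verify the port, as a sketch:

```powershell
# The PortNumber value should be 3389 (the RDP default)
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name PortNumber
```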
3. The listener is listening on every network interface:
This code ensures that you can connect when you deploy the VM. You can also review these settings after
the VM is deployed in Azure.
9. If the VM is part of a domain, check the following policies to make sure the previous settings aren't
reverted.
GOAL  POLICY  VALUE
2. Run the following example to allow WinRM through the three firewall profiles (domain, private, and
public), and enable the PowerShell remote service:
Enable-PSRemoting -Force
Set-NetFirewallRule -DisplayName 'Windows Remote Management (HTTP-In)' -Enabled True
4. Enable the rule for file and printer sharing so the VM can respond to ping requests inside the virtual
network:
Set-NetFirewallRule -DisplayName 'File and Printer Sharing (Echo Request - ICMPv4-In)' -Enabled True
6. If the VM is part of a domain, check the following Azure AD policies to make sure the previous settings
aren't reverted.
GOAL  POLICY  VALUE
Goal: Enable the Windows Firewall profiles
Policy: Computer Configuration\Policies\Windows Settings\Administrative Templates\Network\Network Connection\Windows Firewall\Domain Profile\Windows Firewall
Value: Protect all network connections
Verify the VM
Make sure the VM is healthy, secure, and RDP accessible:
1. To make sure the disk is healthy and consistent, check the disk at the next VM restart:
chkdsk.exe /f
3. The dump log can be helpful in troubleshooting Windows crash issues. Enable the dump log collection:
# Set up the guest OS to collect user mode dumps on a service crash event
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps'
if ((Test-Path -Path $key) -eq $false) {(New-Item -Path 'HKLM:\SOFTWARE\Microsoft\Windows\Windows
Error Reporting' -Name LocalDumps)}
New-ItemProperty -Path $key -Name DumpFolder -Type ExpandString -Force -Value 'C:\CrashDumps'
New-ItemProperty -Path $key -Name CrashCount -Type DWord -Force -Value 10
New-ItemProperty -Path $key -Name DumpType -Type DWord -Force -Value 2
Set-Service -Name WerSvc -StartupType Manual
winmgmt.exe /verifyrepository
netstat.exe -anob
9. Check the following Azure AD policy to make sure they're not removing any of the required access
accounts:
Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights
Assignment\Access this computer from the network
The policy should list the following groups:
Administrators
Backup Operators
Everyone
Users
10. Restart the VM to make sure that Windows is still healthy and can be reached through the RDP connection.
At this point, consider creating a VM on your local Hyper-V server to make sure the VM starts completely.
Then test to make sure you can reach the VM through RDP.
11. Remove any extra Transport Driver Interface (TDI) filters. For example, remove software that analyzes TCP
packets or extra firewalls.
12. Uninstall any other third-party software or driver that's related to physical components or any other
virtualization technology.
Install Windows updates
Ideally, you should keep the machine updated to the latest patch level. If this isn't possible, make sure the following
updates are installed. To get the latest updates, see the Windows update history pages: Windows 10 and
Windows Server 2019, Windows 8.1 and Windows Server 2012 R2, and Windows 7 SP1 and Windows Server
2008 R2 SP1.
COMPONENT BINARY  VERSION                      APPLIES TO
volmgr.sys        10.0.15063.0                 Windows 10 v1703 / Windows Server 2016 v1703
termdd.sys        6.1.7601.23403 (KB3125574)   Windows 7 SP1 / Windows Server 2008 R2 SP1
rdpdd.dll         6.1.7601.23403 (KB3125574)   Windows 7 SP1 / Windows Server 2008 R2 SP1
rdpwd.sys         6.1.7601.23403 (KB3125574)   Windows 7 SP1 / Windows Server 2008 R2 SP1
NOTE
To avoid an accidental reboot during VM provisioning, we recommend ensuring that all Windows Update installations are
finished and that no updates are pending. One way to do this is to install all possible Windows updates and reboot once
before you run the sysprep.exe command.
NOTE
After you run sysprep.exe in the following steps, turn off the VM. Don't turn it back on until you create an image from it
in Azure.
NOTE
A custom unattend.xml file is not supported. Although we do support the additionalUnattendContent property, that
provides only limited support for adding microsoft-windows-shell-setup options into the unattend.xml file that the Azure
provisioning agent uses. You can use, for example, additionalUnattendContent to add FirstLogonCommands and
LogonCommands. For more information, see additionalUnattendContent FirstLogonCommands example.
NOTE
If you are preparing a Windows OS disk: after you convert the disk to fixed size and resize it if needed, create a VM that uses the
disk. Start and sign in to the VM and continue with the sections in this article to finish preparing it for upload.
If you are preparing a data disk, you may stop with this section and proceed to uploading your disk.
In this example, replace the value for Path with the path to the virtual hard disk that you want to convert. Replace
the value for DestinationPath with the new path and name of the converted disk.
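The conversion described above can be sketched with the Hyper-V Convert-VHD cmdlet (the paths are placeholders):

```powershell
# Convert a VHDX to a fixed-size VHD, which Azure requires
Convert-VHD -Path C:\VMs\myvm.vhdx -DestinationPath C:\VMs\myvm.vhd -VHDType Fixed
```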
Use Hyper-V Manager to resize the disk
1. Open Hyper-V Manager and select your local computer on the left. In the menu above the computer list, select
Action > Edit Disk .
2. On the Locate Vir tual Hard Disk page, select your virtual disk.
3. On the Choose Action page, select Expand > Next .
4. On the Locate Vir tual Hard Disk page, enter the new size in GiB > Next .
5. Select Finish .
Use PowerShell to resize the disk
You can resize a virtual disk using the Resize-VHD cmdlet in PowerShell. If you need information about installing
this cmdlet see Install the Hyper-V role.
The following example resizes the disk from 100.5 MiB to 101 MiB to meet the Azure alignment requirement.
In this example, replace the value for Path with the path to the virtual hard disk that you want to resize. Replace
the value for SizeBytes with the new size in bytes for the disk.
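The resize described above can be sketched as follows; 101 MiB is 101 × 1,048,576 = 105,906,176 bytes (the path is a placeholder):

```powershell
# Resize the virtual disk to 101 MiB to meet the Azure MiB-alignment requirement
Resize-VHD -Path C:\VMs\myvm.vhd -SizeBytes 105906176
```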
Convert from VMware VMDK disk format
If you have a Windows VM image in the VMDK file format, then you can use Azure Migrate to convert the VMDK
and upload it to Azure.
If a data disk is attached to the VM, the temporary drive's volume letter is typically D. This designation could
be different, depending on your settings and the number of available drives.
We recommend disabling script blockers that might be provided by antivirus software. They might
interfere and block the Windows Provisioning Agent scripts executed when you deploy a new VM from
your image.
Next steps
Upload a Windows VM image to Azure for Resource Manager deployments
Troubleshoot Azure Windows VM activation problems
Create a managed image of a generalized VM in
Azure
11/2/2020 • 5 minutes to read • Edit Online
A managed image resource can be created from a generalized virtual machine (VM) that is stored as either a
managed disk or an unmanaged disk in a storage account. The image can then be used to create multiple VMs. For
information on how managed images are billed, see Managed Disks pricing.
One managed image supports up to 20 simultaneous deployments. Attempting to create more than 20 VMs
concurrently, from the same managed image, may result in provisioning timeouts due to the storage performance
limitations of a single VHD. To create more than 20 VMs concurrently, use a Shared Image Gallery image
configured with 1 replica for every 20 concurrent VM deployments.
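The replica sizing above is simple ceiling division; for example, for 65 concurrent deployments:

```shell
concurrent_vms=65
# one replica per 20 concurrent deployments, rounded up
replicas=$(( (concurrent_vms + 19) / 20 ))
echo "$replicas"
```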
IMPORTANT
After you have run Sysprep on a VM, that VM is considered generalized and cannot be restarted. The process of
generalizing a VM is not reversible. If you need to keep the original VM functioning, you should create a copy of the VM and
generalize its copy.
Sysprep requires the drives to be fully decrypted. If you have enabled encryption on your VM, disable encryption before you
run Sysprep.
If you plan to run Sysprep before uploading your virtual hard disk (VHD) to Azure for the first time, make sure you have
prepared your VM.
TIP
Optional Use DISM to optimize your image and reduce your VM's first boot time.
To optimize your image, mount your VHD by double-clicking on it in Windows explorer, and then run DISM with the
/optimize-image parameter.
NOTE
If you would like to store your image in zone-redundant storage, you need to create it in a region that supports availability
zones and include the -ZoneResilient parameter in the image configuration ( New-AzImageConfig command).
$vmName = "myVM"
$rgName = "myResourceGroup"
$location = "EastUS"
$imageName = "myImage"
$vmName = "myVM"
$rgName = "myResourceGroup"
$location = "EastUS"
$imageName = "myImage"
$diskID = $vm.StorageProfile.OsDisk.ManagedDisk.Id
$rgName = "myResourceGroup"
$location = "EastUS"
$snapshotName = "mySnapshot"
$imageName = "myImage"
Next steps
Create a VM from a managed image.
Create a VM from a managed image
11/2/2020 • 2 minutes to read • Edit Online
You can create multiple virtual machines (VMs) from an Azure managed VM image using the Azure portal or
PowerShell. A managed VM image contains the information necessary to create a VM, including the OS and data
disks. The virtual hard disks (VHDs) that make up the image, including both the OS disks and any data disks, are
stored as managed disks.
Before creating a new VM, you'll need to create a managed VM image to use as the source image and grant read
access on the image to any user who should have access to the image.
One managed image supports up to 20 simultaneous deployments. Attempting to create more than 20 VMs
concurrently, from the same managed image, may result in provisioning timeouts due to the storage performance
limitations of a single VHD. To create more than 20 VMs concurrently, use a Shared Image Gallery image
configured with 1 replica for every 20 concurrent VM deployments.
Use PowerShell
You can use PowerShell to create a VM from an image by using the simplified parameter set for the New-AzVm
cmdlet. The image needs to be in the same resource group where you'll create the VM.
The simplified parameter set for New-AzVm only requires that you provide a name, resource group, and image
name to create a VM from an image. New-AzVm will use the value of the -Name parameter as the name of all of
the resources that it creates automatically. In this example, we provide more detailed names for each of the
resources but let the cmdlet create them automatically. You can also create resources beforehand, such as the
virtual network, and pass the resource name into the cmdlet. New-AzVm will use the existing resources if it can
find them by their name.
The following example creates a VM named myVMFromImage, in the myResourceGroup resource group, from the
image named myImage.
New-AzVm `
-ResourceGroupName "myResourceGroup" `
-Name "myVMfromImage" `
-ImageName "myImage" `
-Location "East US" `
-VirtualNetworkName "myImageVnet" `
-SubnetName "myImageSubnet" `
-SecurityGroupName "myImageNSG" `
-PublicIpAddressName "myImagePIP" `
-OpenPorts 3389
Next steps
Create and manage Windows VMs with the Azure PowerShell module
PowerShell: How to use Packer to create virtual
machine images in Azure
11/2/2020 • 6 minutes to read • Edit Online
Each virtual machine (VM) in Azure is created from an image that defines the Windows distribution and OS
version. Images can include pre-installed applications and configurations. The Azure Marketplace provides many
first-party and third-party images for most common operating systems and application environments, or you can create your own
custom images tailored to your needs. This article details how to use the open-source tool Packer to define and
build custom images in Azure.
This article was last tested on 8/5/2020 using Packer version 1.6.1.
NOTE
Azure now has a service, Azure Image Builder (preview), for defining and creating your own custom images. Azure Image
Builder is built on Packer, so you can even use your existing Packer shell provisioner scripts with it. To get started with Azure
Image Builder, see Create a Windows VM with Azure Image Builder.
$rgName = "myPackerGroup"
$location = "East US"
New-AzResourceGroup -Name $rgName -Location $location
$plainPassword
$sp.ApplicationId
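The two lines above display values produced when the Packer service principal is created. A minimal sketch of that creation step (the display name is illustrative, and newer Az module versions return the secret differently):

```powershell
# Create a service principal for Packer; the random suffix avoids name collisions (illustrative)
$sp = New-AzADServicePrincipal -DisplayName "PackerSP$(Get-Random)"
# Convert the SecureString secret to plain text so it can be pasted into the Packer template
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($sp.Secret)
$plainPassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
```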
To authenticate to Azure, you also need to obtain your Azure tenant and subscription IDs with Get-AzSubscription:
Get-AzSubscription
The parameters map to the values you obtained earlier: client_id is the service principal application ID, client_secret is the service principal password, and tenant_id and subscription_id come from Get-AzSubscription. Use them in a Packer template that defines an azure-arm builder:
{
"builders": [{
"type": "azure-arm",
"client_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx",
"client_secret": "ppppppp-pppp-pppp-pppp-ppppppppppp",
"tenant_id": "zzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz",
"subscription_id": "yyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyy",
"managed_image_resource_group_name": "myPackerGroup",
"managed_image_name": "myPackerImage",
"os_type": "Windows",
"image_publisher": "MicrosoftWindowsServer",
"image_offer": "WindowsServer",
"image_sku": "2016-Datacenter",
"communicator": "winrm",
"winrm_use_ssl": true,
"winrm_insecure": true,
"winrm_timeout": "5m",
"winrm_username": "packer",
"azure_tags": {
"dept": "Engineering",
"task": "Image deployment"
},
"build_resource_group_name": "myPackerGroup",
"vm_size": "Standard_D2_v2"
}],
"provisioners": [{
"type": "powershell",
"inline": [
"Add-WindowsFeature Web-Server",
"while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 }",
"while ((Get-Service WindowsAzureGuestAgent).Status -ne 'Running') { Start-Sleep -s 5 }",
"& $env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /quiet /quit",
"while($true) { $imageState = Get-ItemProperty
HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Setup\\State | Select ImageState;
if($imageState.ImageState -ne 'IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE') { Write-Output $imageState.ImageState;
Start-Sleep -s 10 } else { break } }"
]
}]
}
This template builds a Windows Server 2016 VM, installs IIS, then generalizes the VM with Sysprep. The IIS install
shows how you can use the PowerShell provisioner to run additional commands. The final Packer image then
includes the required software install and configuration.
The Windows Guest Agent participates in the Sysprep process. The agent must be fully installed before the VM can
be sysprep'ed. To ensure that this is true, all agent services must be running before you execute sysprep.exe. The
preceding JSON snippet shows one way to do this in the PowerShell provisioner. This snippet is required only if the
VM is configured to install the agent, which is the default.
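With the template defined and the credentials in place, build the image. This sketch assumes the template was saved as windows.json:

```shell
packer build windows.json
```

Packer prints the managed image details, shown next, when the build completes.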
ManagedImageResourceGroupName: myResourceGroup
ManagedImageName: myPackerImage
ManagedImageLocation: eastus
It takes a few minutes for Packer to build the VM, run the provisioners, and clean up the deployment.
New-AzVm `
-ResourceGroupName $rgName `
-Name "myVM" `
-Location $location `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-PublicIpAddressName "myPublicIpAddress" `
-OpenPorts 80 `
-Image "myPackerImage"
If you wish to create VMs in a different resource group or region than your Packer image, specify the image ID
rather than image name. You can obtain the image ID with Get-AzImage.
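A sketch of that lookup (the image name and its resource group are the ones used earlier in this article; the target resource group is illustrative, and whether -Image accepts a resource ID depends on your Az module version):

```powershell
# Look up the image's resource ID, then pass the ID instead of the name
$image = Get-AzImage -ResourceGroupName "myPackerGroup" -ImageName "myPackerImage"
New-AzVm `
  -ResourceGroupName "myOtherResourceGroup" `
  -Name "myVM2" `
  -Location "West US" `
  -Image $image.Id `
  -OpenPorts 80
```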
It takes a few minutes to create the VM from your Packer image.
Get-AzPublicIPAddress `
-ResourceGroupName $rgName `
-Name "myPublicIPAddress" | select "IpAddress"
To see your VM in action, including the IIS installation from the Packer provisioner, enter the public IP address into a web browser.
Next steps
You can also use existing Packer provisioner scripts with Azure Image Builder.
Use Windows client in Azure for dev/test scenarios
11/2/2020 • 2 minutes to read • Edit Online
You can use Windows 7, Windows 8, or Windows 10 Enterprise (x64) in Azure for dev/test scenarios provided you
have an appropriate Visual Studio (formerly MSDN) subscription. This article outlines the eligibility requirements
for running Windows 7, Windows 8.1, or Windows 10 Enterprise in Azure, and the use of the following Azure Gallery images.
NOTE
For Windows 10 Pro and Windows 10 Pro N images in the Azure Gallery, refer to How to deploy Windows 10 on Azure with Multitenant Hosting Rights.
Subscription eligibility
Active Visual Studio subscribers (people who have acquired a Visual Studio subscription license) can use Windows
client for development and testing purposes. Windows client can be used on your own hardware and Azure virtual
machines running in any type of Azure subscription. Windows client may not be deployed to or used on Azure for
normal production use, or used by people who are not active Visual Studio subscribers.
For your convenience, certain Windows 10 images are available from the Azure Gallery within eligible dev/test
offers. Visual Studio subscribers within any type of offer can also adequately prepare and create a 64-bit Windows
7, Windows 8, or Windows 10 image and then upload it to Azure. The use remains limited to dev/test by active Visual
Studio subscribers.
Eligible offers
The following table details the offer IDs that are eligible to deploy Windows 10 through the Azure Gallery. The Windows 10 images are visible only to the following offers. Visual Studio subscribers who need to run Windows client in a different offer type must adequately prepare and create a 64-bit Windows 7, Windows 8, or Windows 10 image and then upload it to Azure.
You can view the offer ID from the 'Subscriptions' tab of the Azure Account portal. Or, click Billing and then click your subscription ID; the offer ID appears in the Billing window.
Next steps
You can now deploy your VMs using PowerShell, Resource Manager templates, or Visual Studio.
Download a Windows VHD from Azure
11/2/2020 • 2 minutes to read • Edit Online
In this article, you learn how to download a Windows virtual hard disk (VHD) file from Azure using the Azure
portal.
Stop the VM
A VHD can’t be downloaded from Azure if it's attached to a running VM. You need to stop the VM to download a
VHD.
1. On the Hub menu in the Azure portal, click Virtual Machines.
2. Select the VM from the list.
3. On the blade for the VM, click Stop.
NOTE
The expiration time is increased from the default to provide enough time to download the large VHD file for a Windows
Server operating system. You can expect a VHD file that contains the Windows Server operating system to take several hours
to download depending on your connection. If you are downloading a VHD for a data disk, the default time is sufficient.
Download VHD
1. Under the URL that was generated, click Download the VHD file.
2. You may need to click Save in your browser to start the download. The default name for the VHD file is abcd.
Next steps
Learn how to upload a VHD file to Azure.
Create managed disks from unmanaged disks in a storage account.
Manage Azure disks with PowerShell.
Create a managed disk from an image version
11/6/2020 • 2 minutes to read • Edit Online
If needed, you can export the OS disk or a single data disk from an image version stored in a Shared Image Gallery as a managed disk.
CLI
List the image versions in a gallery using az sig image-version list. In this example, we are looking for all of the
image versions that are part of the myImageDefinition image definition in the myGallery image gallery.
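The listing step might look like the following sketch (the table output format flag is added for readability):

```shell
az sig image-version list \
  --resource-group myResourceGroup \
  --gallery-name myGallery \
  --gallery-image-definition myImageDefinition \
  -o table
```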
Set the source variable to the ID of the image version, then use az disk create to create the managed disk.
In this example, we export the OS disk of the image version to create a managed disk named myManagedOSDisk,
in the EastUS region, in a resource group named myResourceGroup.
source="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/gallerie
s/<galleryName>/images/<galleryImageDefinition>/versions/<imageVersion>"
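With the source variable set, the disk creation step could be sketched as:

```shell
az disk create \
  --resource-group myResourceGroup \
  --location EastUS \
  --name myManagedOSDisk \
  --gallery-image-reference $source
```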
If you want to export a data disk from the image version, add --gallery-image-reference-lun to specify the LUN
location of the data disk to export.
In this example, we export the data disk located at LUN 0 of the image version to create a managed disk named
myManagedDataDisk, in the EastUS region, in a resource group named myResourceGroup.
source="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/gallerie
s/<galleryName>/images/<galleryImageDefinition>/versions/<imageVersion>"
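With the source variable set, a sketch of the data-disk export (note the added LUN flag):

```shell
az disk create \
  --resource-group myResourceGroup \
  --location EastUS \
  --name myManagedDataDisk \
  --gallery-image-reference $source \
  --gallery-image-reference-lun 0
```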
PowerShell
List the image versions in a gallery using Get-AzResource.
Get-AzResource `
-ResourceType Microsoft.Compute/galleries/images/versions | `
Format-Table -Property Name,ResourceId,ResourceGroupName
Once you have all of the information you need, you can use Get-AzGalleryImageVersion to get the source image
version you want to use and assign it to a variable. In this example, we are getting the 1.0.0 image version, of the
myImageDefinition definition, in the myGallery source gallery, in the myResourceGroup resource group.
$sourceImgVer = Get-AzGalleryImageVersion `
-GalleryImageDefinitionName myImageDefinition `
-GalleryName myGallery `
-ResourceGroupName myResourceGroup `
-Name 1.0.0
After setting the source variable to the ID of the image version, use New-AzDiskConfig to create a disk
configuration and New-AzDisk to create the disk.
In this example, we export the OS disk of the image version to create a managed disk named myManagedOSDisk,
in the EastUS region, in a resource group named myResourceGroup.
Create a disk configuration.
$diskConfig = New-AzDiskConfig `
-Location EastUS `
-CreateOption FromImage `
-GalleryImageReference @{Id = $sourceImgVer.Id}
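Then create the disk from the configuration. A minimal sketch using New-AzDisk:

```powershell
# Create the managed disk from the disk configuration defined above
New-AzDisk `
  -Disk $diskConfig `
  -DiskName myManagedOSDisk `
  -ResourceGroupName myResourceGroup
```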
If you want to export a data disk on the image version, add a LUN ID to the disk configuration to specify the LUN
location of the data disk to export.
In this example, we export the data disk located at LUN 0 of the image version to create a managed disk named
myManagedDataDisk, in the EastUS region, in a resource group named myResourceGroup.
Create a disk configuration.
$diskConfig = New-AzDiskConfig `
-Location EastUS `
-CreateOption FromImage `
-GalleryImageReference @{Id = $sourceImgVer.Id; Lun=0}
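Then create the data disk from the configuration, just as with the OS disk:

```powershell
# Create the managed data disk from the LUN-scoped disk configuration above
New-AzDisk `
  -Disk $diskConfig `
  -DiskName myManagedDataDisk `
  -ResourceGroupName myResourceGroup
```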
Next steps
You can also create an image version from a managed disk using the Azure CLI or PowerShell.
Security recommendations for virtual machines in
Azure
11/6/2020 • 3 minutes to read • Edit Online
This article contains security recommendations for Azure Virtual Machines. Follow these recommendations to help
fulfill the security obligations described in our model for shared responsibility. The recommendations will also help you improve overall security for your virtual machine solutions. For more information about what Microsoft does to fulfill
service-provider responsibilities, see Shared responsibilities for cloud computing.
Some of this article's recommendations can be automatically addressed by Azure Security Center. Azure Security
Center is the first line of defense for your resources in Azure. It periodically analyzes the security state of your Azure
resources to identify potential security vulnerabilities. It then recommends how to address the vulnerabilities. For
more information, see Security recommendations in Azure Security Center.
For general information about Azure Security Center, see What is Azure Security Center?.
General
The following recommendations list each item, its comments, and whether Azure Security Center tracks it:

When you build custom VM images, apply the latest updates. Before you create images, install the latest updates for the operating system and for all applications that will be part of your image. (Security Center: -)

Keep your VMs current. You can use the Update Management solution in Azure Automation to manage operating system updates for your Windows and Linux computers in Azure. (Security Center: Yes)

Use multiple VMs for greater resilience and availability. If your VM runs applications that must be highly available, use multiple VMs or availability sets. (Security Center: -)
Data security
Encrypt operating system disks. Azure Disk Encryption helps you encrypt your Windows and Linux IaaS VM disks. Without the necessary keys, the contents of encrypted disks are unreadable. Disk encryption protects stored data from unauthorized access that would otherwise be possible if the disk were copied. (Security Center: Yes)
Monitoring
Monitor your VMs. You can use Azure Monitor for VMs to monitor the state of your Azure VMs and virtual machine scale sets. Performance issues with a VM can lead to service disruption, which violates the security principle of availability. (Security Center: -)
Networking
Next steps
Check with your application provider to learn about additional security requirements. For more information about
developing secure applications, see Secure-development documentation.
Secure your management ports with just-in-time
access
11/6/2020 • 10 minutes to read • Edit Online
Lock down inbound traffic to your Azure Virtual Machines with Azure Security Center's just-in-time (JIT) virtual
machine (VM) access feature. This reduces exposure to attacks while providing easy access when you need to
connect to a VM.
For a full explanation about how JIT works and the underlying logic, see Just-in-time explained.
This page teaches you how to include JIT in your security program. You'll learn how to:
Enable JIT on your VMs - You can enable JIT with your own custom options for one or more VMs using
Security Center, PowerShell, or the REST API. Alternatively, you can enable JIT with default, hard-coded
parameters, from Azure virtual machines. When enabled, JIT locks down inbound traffic to your Azure VMs by
creating a rule in your network security group.
Request access to a VM that has JIT enabled - The goal of JIT is to ensure that even though your inbound
traffic is locked down, Security Center still provides easy access to connect to VMs when needed. You can
request access to a JIT-enabled VM from Security Center, Azure virtual machines, PowerShell, or the REST API.
Audit the activity - To ensure your VMs are secured appropriately, review the accesses to your JIT-enabled
VMs as part of your regular security checks.
Availability
Required roles and permissions: Reader and SecurityReader roles can both view the JIT status and parameters.
To create custom roles that can work with JIT, see What
permissions are needed to configure and use JIT?.
To create a least-privileged role for users that need to request
JIT access to a VM, and perform no other JIT operations, use
the Set-JitLeastPrivilegedRole script from the Security Center
GitHub community pages.
From Security Center, you can enable and configure the JIT VM access.
1. Open the Azure Defender dashboard and from the advanced protection area, select Just-in-time VM access.
The Just-in-time VM access page opens with your VMs grouped into the following tabs:
Configured - VMs that have already been configured to support just-in-time VM access. For each
VM, the configured tab shows:
the number of approved JIT requests in the last seven days
the last access date and time
the connection details configured
the last user
Not configured - VMs without JIT enabled, but that can support JIT. We recommend that you enable JIT
for these VMs.
Unsupported - VMs without JIT enabled and which don't support the feature. Your VM might be in this
tab for the following reasons:
Missing network security group (NSG) - JIT requires an NSG to be configured
Classic VM - JIT supports VMs that are deployed through Azure Resource Manager, not 'classic
deployment'. Learn more about classic vs Azure Resource Manager deployment models.
Other - Your VM might be in this tab if the JIT solution is disabled in the security policy of the
subscription or the resource group.
2. From the Not configured tab, mark the VMs to protect with JIT and select Enable JIT on VMs.
The JIT VM access page opens listing the ports that Security Center recommends protecting:
22 - SSH
3389 - RDP
5985 - WinRM
5986 - WinRM
To accept the default settings, select Save.
3. To customize the JIT options:
Add custom ports with the Add button.
Modify one of the default ports, by selecting it from the list.
For each port (custom and default), the Add port configuration pane offers the following options:
Protocol - The protocol that is allowed on this port when a request is approved
Allowed source IPs - The IP ranges that are allowed on this port when a request is approved
Maximum request time - The maximum time window during which a specific port can be opened
a. Set the port security to your needs.
b. Select OK.
4. Select Save.
Edit the JIT configuration on a JIT-enabled VM using Security Center
You can modify a VM's just-in-time configuration by adding and configuring a new port to protect for that VM, or
by changing any other setting related to an already protected port.
To edit the existing JIT rules for a VM:
1. Open the Azure Defender dashboard and from the advanced protection area, select Just-in-time VM access.
2. From the Configured tab, right-click on the VM to which you want to add a port, and select edit.
3. Under JIT VM access configuration , you can either edit the existing settings of an already protected port
or add a new custom port.
4. When you've finished editing the ports, select Save.
NOTE
If a user who is requesting access is behind a proxy, the option My IP may not work. You may need to define the full IP
address range of the organization.
Next steps
In this article, you learned how to set up and use just-in-time VM access. To learn why JIT should be used, read the
concept article explaining the threats it's defending against:
JIT explained
Create an Azure Bastion host using Azure PowerShell
11/2/2020 • 2 minutes to read • Edit Online
This article shows you how to create an Azure Bastion host using PowerShell. Once you provision the Azure Bastion
service in your virtual network, the seamless RDP/SSH experience is available to all of the VMs in the same virtual
network. Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine.
Optionally, you can create an Azure Bastion host by using the Azure portal.
Prerequisites
Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your
MSDN subscriber benefits or sign up for a free account.
This article uses PowerShell cmdlets. To run the cmdlets, you can use Azure Cloud Shell. The Azure Cloud Shell is a
free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and
configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com/powershell. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
You can also install and run the Azure PowerShell cmdlets locally on your computer. PowerShell cmdlets are
updated frequently. If you have not installed the latest version, the values specified in the instructions may fail. To
find the versions of Azure PowerShell installed on your computer, use the Get-Module -ListAvailable Az cmdlet. To
install or update, see Install the Azure PowerShell module.
$subnetName = "AzureBastionSubnet"
$subnet = New-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix 10.0.0.0/24
$vnet = New-AzVirtualNetwork -Name "myVnet" -ResourceGroupName "myBastionRG" -Location "westeurope" -
AddressPrefix 10.0.0.0/16 -Subnet $subnet
2. Create a public IP address for Azure Bastion. This is the public IP address of the Bastion resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you are creating.
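A sketch of this step; the public IP name is illustrative, and Bastion requires a Standard-SKU static public IP:

```powershell
# Create a static Standard-SKU public IP for the Bastion resource
$publicip = New-AzPublicIpAddress `
  -ResourceGroupName "myBastionRG" `
  -Name "myPublicIP" `
  -Location "westeurope" `
  -AllocationMethod Static `
  -Sku Standard
```

The $publicip variable is then passed to New-AzBastion in the next step.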
3. Create a new Azure Bastion resource in the AzureBastionSubnet of your virtual network. It takes about 5
minutes for the Bastion resource to create and deploy.
$bastion = New-AzBastion -ResourceGroupName "myBastionRG" -Name "myBastion" -PublicIpAddress $publicip -VirtualNetwork $vnet
Next steps
Read the Bastion FAQ for additional information.
To use Network Security Groups with the Azure Bastion subnet, see Work with NSGs.
Create an Azure Bastion host using Azure CLI
11/2/2020 • 2 minutes to read • Edit Online
This article shows you how to create an Azure Bastion host using Azure CLI. Once you provision the Azure Bastion
service in your virtual network, the seamless RDP/SSH experience is available to all of the VMs in the same virtual
network. Azure Bastion deployment is per virtual network, not per subscription/account or virtual machine.
Optionally, you can create an Azure Bastion host by using the Azure portal, or using Azure PowerShell.
Prerequisites
Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your
MSDN subscriber benefits or sign up for a free account.
This article uses the Azure CLI. To run commands, you can use Azure Cloud Shell. The Azure Cloud Shell is a free
interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and
configured to use with your account.
To open the Cloud Shell, just select Try it from the upper right corner of a code block. You can also launch Cloud Shell in a separate browser tab by going to https://shell.azure.com and toggling the dropdown in the left corner to Bash or PowerShell. Select Copy to copy the blocks of code, paste them into the Cloud Shell, and press Enter to run them.
NOTE
As shown in the examples, use the --location parameter with --resource-group for every command to ensure that the
resources are deployed together.
1. Create a virtual network and an Azure Bastion subnet. You must create the Azure Bastion subnet using the name value AzureBastionSubnet. This value lets Azure know which subnet to deploy the Bastion resources to. This is different from a Gateway subnet. You must use a subnet of /27 or larger (/27, /26, and so on). Create the AzureBastionSubnet without any route tables or delegations. If you use Network Security Groups on the AzureBastionSubnet, refer to the Work with NSGs article.
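A sketch of this step (resource names match the later commands; the address prefixes are illustrative):

```shell
az network vnet create --resource-group MyResourceGroup --name MyVnet \
  --address-prefix 10.0.0.0/16 --subnet-name AzureBastionSubnet \
  --subnet-prefix 10.0.0.0/27 --location northeurope
```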
2. Create a public IP address for Azure Bastion. This is the public IP address of the Bastion resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you are creating.
az network public-ip create --resource-group MyResourceGroup --name MyIp --sku Standard --location northeurope
3. Create a new Azure Bastion resource in the AzureBastionSubnet of your virtual network. It takes about 5
minutes for the Bastion resource to create and deploy.
az network bastion create --name MyBastion --public-ip-address MyIp --resource-group MyResourceGroup --vnet-name MyVnet --location northeurope
Next steps
Read the Bastion FAQ for additional information.
To use Network Security Groups with the Azure Bastion subnet, see Work with NSGs.
Connect using SSH to a Linux virtual machine using
Azure Bastion
11/2/2020 • 3 minutes to read • Edit Online
This article shows you how to securely and seamlessly SSH to your Linux VMs in an Azure virtual network. You can
connect to a VM directly from the Azure portal. When using Azure Bastion, VMs don't require a client, agent, or
additional software. For more information about Azure Bastion, see the Overview.
You can use Azure Bastion to connect to a Linux virtual machine using SSH. You can use both username/password
and SSH keys for authentication. You can connect to your VM with SSH keys by using either:
A private key that you manually enter
A file that contains the private key information
The SSH private key must be in a format that begins with "-----BEGIN RSA PRIVATE KEY-----" and ends with
"-----END RSA PRIVATE KEY-----" .
Prerequisites
Make sure that you have set up an Azure Bastion host for the virtual network in which the VM resides. For more
information, see Create an Azure Bastion host. Once the Bastion service is provisioned and deployed in your virtual
network, you can use it to connect to any VM in this virtual network.
When you use Bastion to connect, it assumes that you are using RDP to connect to a Windows VM, and SSH to
connect to your Linux VMs. For information about connecting to a Windows VM, see Connect to a VM - Windows.
Required roles
In order to make a connection, the following roles are required:
Reader role on the virtual machine
Reader role on the NIC with private IP of the virtual machine
Reader role on the Azure Bastion resource
Ports
In order to connect to the Linux VM via SSH, you must have the following ports open on your VM:
Inbound port: SSH (22)
3. Enter the username and password for SSH to your virtual machine.
4. Click the Connect button.
2. After you click Bastion, a side bar appears that has three tabs – RDP, SSH, and Bastion. If Bastion was
provisioned for the virtual network, the Bastion tab is active by default. If you didn't provision Bastion for the
virtual network, see Configure Bastion.
3. Enter the username and select SSH Private Key .
4. Enter your private key into the text area SSH Private Key (or paste it directly).
5. Click the Connect button after entering the key.
2. After you click Bastion, a side bar appears that has three tabs – RDP, SSH, and Bastion. If Bastion was
provisioned for the virtual network, the Bastion tab is active by default. If you didn't provision Bastion for the
virtual network, see Configure Bastion.
3. Enter the username and select SSH Private Key from Local File .
4. Click the Browse button (the folder icon in the local file).
5. Browse for the file, then click Open .
6. Click Connect to connect to the VM. Once you click Connect, SSH to this virtual machine will directly open
in the Azure portal. This connection is over HTML5 using port 443 on the Bastion service over the private IP
of your virtual machine.
Next steps
Read the Bastion FAQ
Connect to a Windows virtual machine using Azure
Bastion
11/2/2020 • 2 minutes to read • Edit Online
Using Azure Bastion, you can securely and seamlessly connect to your virtual machines over SSL directly in the
Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. This
article shows you how to connect to your Windows VMs. For information about connecting to a Linux VM, see
Connect to a Linux VM.
Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it is provisioned. Using
Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still
providing secure access using RDP/SSH. For more information, see What is Azure Bastion?
Prerequisites
Before you begin, verify that you have met the following criteria:
A VNet with the Bastion host already installed.
Make sure that you have set up an Azure Bastion host for the virtual network in which the VM is located.
Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to
any VM in the virtual network. To set up an Azure Bastion host, see Create a bastion host.
A Windows virtual machine in the virtual network.
The following required roles:
Reader role on the virtual machine.
Reader role on the NIC with private IP of the virtual machine.
Reader role on the Azure Bastion resource.
Ports: To connect to the Windows VM, you must have the following ports open on your Windows VM:
Inbound ports: RDP (3389)
Connect
1. Open the Azure portal. Navigate to the virtual machine that you want to connect to, then select Connect .
Select Bastion from the dropdown.
2. After you select Bastion from the dropdown, a side bar appears that has three tabs: RDP, SSH, and Bastion.
Because Bastion was provisioned for the virtual network, the Bastion tab is active by default. Select Use
Bastion .
3. On the Connect using Azure Bastion page, enter the username and password for your virtual machine,
then select Connect .
4. The RDP connection to this virtual machine via Bastion will open directly in the Azure portal (over HTML5)
using port 443 and the Bastion service.
Next steps
Read the Bastion FAQ.
Azure Disk Encryption scenarios on Windows VMs
11/6/2020 • 12 minutes to read • Edit Online
Azure Disk Encryption for Windows virtual machines (VMs) uses the BitLocker feature of Windows to provide full
disk encryption of the OS disk and data disk. Additionally, it provides encryption of the temporary disk when the
VolumeType parameter is All.
Azure Disk Encryption is integrated with Azure Key Vault to help you control and manage the disk encryption keys
and secrets. For an overview of the service, see Azure Disk Encryption for Windows VMs.
You can only apply disk encryption to virtual machines of supported VM sizes and operating systems. You must
also meet the following prerequisites:
Networking requirements
Group Policy requirements
Encryption key storage requirements
IMPORTANT
If you have previously used Azure Disk Encryption with Azure AD to encrypt a VM, you must continue to use this
option to encrypt your VM. See Azure Disk Encryption with Azure AD (previous release) for details.
You should take a snapshot and/or create a backup before disks are encrypted. Backups ensure that a recovery
option is possible if an unexpected failure occurs during encryption. VMs with managed disks require a backup before
encryption occurs. Once a backup is made, you can use the Set-AzVMDiskEncryptionExtension cmdlet to encrypt
managed disks by specifying the -skipVmBackup parameter. For more information about how to back up and restore
encrypted VMs, see Back up and restore encrypted Azure VM.
Encrypting or disabling encryption may cause a VM to reboot.
az login
If you have multiple subscriptions and want to specify a specific one, get your subscription list with az account list
and specify with az account set.
az account list
az account set --subscription "<subscription name or ID>"
For more information, see Get started with Azure CLI 2.0.
Azure PowerShell
The Azure PowerShell az module provides a set of cmdlets that uses the Azure Resource Manager model for
managing your Azure resources. You can use it in your browser with Azure Cloud Shell, or you can install it on
your local machine using the instructions in Install the Azure PowerShell module.
If you already have it installed locally, make sure you use the latest version of the Azure PowerShell SDK to configure Azure Disk Encryption. Download the latest Azure PowerShell release.
To sign in to your Azure account with Azure PowerShell, use the Connect-AzAccount cmdlet.
Connect-AzAccount
If you have multiple subscriptions and want to specify one, use the Get-AzSubscription cmdlet to list them,
followed by the Set-AzContext cmdlet:
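For example (the subscription name is illustrative):

```powershell
# List available subscriptions, then select the one to work in
Get-AzSubscription
Set-AzContext -Subscription "My Subscription"
```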
Running the Get-AzContext cmdlet will verify that the correct subscription has been selected.
To confirm the Azure Disk Encryption cmdlets are installed, use the Get-Command cmdlet:
Get-Command *diskencryption*
$KVRGname = 'MyKeyVaultResourceGroup';
$VMRGName = 'MyVirtualMachineResourceGroup';
$vmName = 'MyExtraSecureVM';
$KeyVaultName = 'MySecureVault';
$keyEncryptionKeyName = 'MyKeyEncryptionKey';
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname;
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri;
$KeyVaultResourceId = $KeyVault.ResourceId;
$keyEncryptionKeyUrl = (Get-AzKeyVaultKey -VaultName $KeyVaultName -Name $keyEncryptionKeyName).Key.kid;
NOTE
The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]
The syntax for the value of the key-encryption-key parameter is the full URI to the KEK, as in:
https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
Verify the disks are encrypted: To check the encryption status of an IaaS VM, use the
Get-AzVmDiskEncryptionStatus cmdlet.
Disable disk encryption: To disable the encryption, use the Disable-AzVMDiskEncryption cmdlet.
Disabling data disk encryption on a Windows VM when both OS and data disks have been encrypted doesn't
work as expected. Disable encryption on all disks instead.
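As a sketch, checking the status might look like this (resource group and VM names are illustrative):

```powershell
# Returns fields such as OsVolumeEncrypted and DataVolumesEncrypted
Get-AzVmDiskEncryptionStatus -ResourceGroupName 'MyVirtualMachineResourceGroup' -VMName 'MySecureVM'
```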
NOTE
The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]
The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in:
https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
Verify the disks are encrypted: To check the encryption status of an IaaS VM, use the az vm
encryption show command.
Disable encryption: To disable encryption, use the az vm encryption disable command. Disabling data
disk encryption on a Windows VM when both OS and data disks have been encrypted doesn't work as
expected. Disable encryption on all disks instead.
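A sketch of the verification command (resource group and VM names are illustrative):

```azurecli
az vm encryption show --resource-group "MyVirtualMachineResourceGroup" --name "MySecureVM"
```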
PARAMETER | DESCRIPTION
keyVaultName | Name of the key vault that the BitLocker key should be uploaded to. You can get it by using the cmdlet (Get-AzKeyVault -ResourceGroupName <MyKeyVaultResourceGroupName>).Vaultname or the Azure CLI command az keyvault list --resource-group "MyKeyVaultResourceGroup".
keyVaultResourceGroup | Name of the resource group that contains the key vault.
forceUpdateTag | Pass in a unique value like a GUID every time the operation needs to be force run.
$KVRGname = 'MyKeyVaultResourceGroup';
$VMRGName = 'MyVirtualMachineResourceGroup';
$vmName = 'MySecureVM';
$KeyVaultName = 'MySecureVault';
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname;
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri;
$KeyVaultResourceId = $KeyVault.ResourceId;
$sequenceVersion = [Guid]::NewGuid();
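The variables above can then be passed to the Set-AzVMDiskEncryptionExtension cmdlet; a sketch of enabling encryption without a KEK (the -VolumeType value is illustrative):

```powershell
Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $vmName `
    -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId `
    -VolumeType "All" -SequenceVersion $sequenceVersion
```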
Encrypt a running VM using KEK: This example uses "All" for the -VolumeType parameter, which
includes both OS and data volumes. If you want to encrypt only the OS volume, use "OS" for the
-VolumeType parameter.
$KVRGname = 'MyKeyVaultResourceGroup';
$VMRGName = 'MyVirtualMachineResourceGroup';
$vmName = 'MyExtraSecureVM';
$KeyVaultName = 'MySecureVault';
$keyEncryptionKeyName = 'MyKeyEncryptionKey';
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname;
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri;
$KeyVaultResourceId = $KeyVault.ResourceId;
$keyEncryptionKeyUrl = (Get-AzKeyVaultKey -VaultName $KeyVaultName -Name $keyEncryptionKeyName).Key.kid;
$sequenceVersion = [Guid]::NewGuid();
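The variables above feed the Set-AzVMDiskEncryptionExtension cmdlet; a sketch of enabling encryption with a KEK (this assumes the same key vault holds both the disk encryption key and the KEK):

```powershell
Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $vmName `
    -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId `
    -KeyEncryptionKeyUrl $keyEncryptionKeyUrl -KeyEncryptionKeyVaultId $KeyVaultResourceId `
    -VolumeType "All" -SequenceVersion $sequenceVersion
```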
NOTE
The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]
The syntax for the value of the key-encryption-key parameter is the full URI to the KEK as in:
https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
Disable encryption
You can disable encryption using Azure PowerShell, the Azure CLI, or a Resource Manager template. Disabling
data disk encryption on a Windows VM when both OS and data disks have been encrypted doesn't work as
expected. Disable encryption on all disks instead.
Disable disk encryption with Azure PowerShell: To disable the encryption, use the Disable-AzVMDiskEncryption cmdlet.
Disable-AzVMDiskEncryption -ResourceGroupName 'MyVirtualMachineResourceGroup' -VMName 'MySecureVM' -VolumeType "all"
Disable encryption with the Azure CLI: To disable encryption, use the az vm encryption disable command.
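A sketch of disabling encryption on all volumes (resource group and VM names are illustrative):

```azurecli
az vm encryption disable --resource-group "MyVirtualMachineResourceGroup" --name "MySecureVM" --volume-type "ALL"
```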
Unsupported scenarios
Azure Disk Encryption does not work for the following scenarios, features, and technology:
Encrypting basic tier VMs or VMs created through the classic VM creation method.
Encrypting VMs configured with software-based RAID systems.
Encrypting VMs configured with Storage Spaces Direct (S2D), or Windows Server versions before 2016
configured with Windows Storage Spaces.
Integration with an on-premises key management system.
Azure Files (shared file system).
Network File System (NFS).
Dynamic volumes.
Windows Server containers, which create dynamic volumes for each container.
Ephemeral OS disks.
Encryption of shared/distributed file systems like (but not limited to) DFS, GFS, DRBD, and CephFS.
Moving an encrypted VM to another subscription or region.
Creating an image or snapshot of an encrypted VM and using it to deploy additional VMs.
Gen2 VMs (see: Support for generation 2 VMs on Azure)
M-series VMs with Write Accelerator disks.
Applying ADE to a VM that has disks encrypted with server-side encryption with customer-managed keys (SSE
+ CMK). Applying SSE + CMK to a data disk on a VM encrypted with ADE is an unsupported scenario as well.
Migrating a VM that is encrypted with ADE, or has ever been encrypted with ADE, to server-side encryption
with customer-managed keys.
Azure VM sizes with no local temporary disk; specifically, Dv4, Dsv4, Ev4, and Esv4.
Next steps
Azure Disk Encryption overview
Azure Disk Encryption sample scripts
Azure Disk Encryption troubleshooting
Quickstart: Create and encrypt a Windows VM with
the Azure CLI
11/2/2020 • 3 minutes to read
The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart
shows you how to use the Azure CLI to create and encrypt a Windows Server 2016 virtual machine (VM).
If you don't have an Azure subscription, create a free account before you begin.
OPTION | EXAMPLE/LINK
Select the Cloud Shell button on the menu bar at the upper
right in the Azure portal.
az vm create \
--resource-group myResourceGroup \
--name myVM \
--image win2016datacenter \
--admin-username azureuser \
--admin-password myPassword12
It takes a few minutes to create the VM and supporting resources. The following example output shows the VM
create operation was successful.
{
"fqdns": "",
"id": "/subscriptions/<guid>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "52.174.34.95",
"resourceGroup": "myResourceGroup"
}
IMPORTANT
Each Key Vault must have a unique name. The following example creates a Key Vault named myKV, but you must name
yours something different.
Clean up resources
When no longer needed, you can use the az group delete command to remove the resource group, VM, and Key
Vault.
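For example (the resource group name is illustrative):

```azurecli
az group delete --name "myResourceGroup"
```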
Next steps
In this quickstart, you created a virtual machine, created a Key Vault that was enabled for encryption keys, and
encrypted the VM. Advance to the next article to learn more about Azure Disk Encryption prerequisites for IaaS
VMs.
Azure Disk Encryption overview
Quickstart: Create and encrypt a Windows virtual
machine in Azure with PowerShell
11/2/2020 • 2 minutes to read
The Azure PowerShell module is used to create and manage Azure resources from the PowerShell command line
or in scripts. This quickstart shows you how to use the Azure PowerShell module to create a Windows virtual
machine (VM), create a Key Vault for the storage of encryption keys, and encrypt the VM.
If you don't have an Azure subscription, create a free account before you begin.
$cred = Get-Credential
New-AzVM -Name MyVm -Credential $cred -ResourceGroupName MyResourceGroup -Image win2016datacenter -Size Standard_D2S_V3
IMPORTANT
Each Key Vault must have a unique name. The following example creates a Key Vault named myKV, but you must name
yours something different.
When encryption is enabled, you will see the following in the returned output:
OsVolumeEncrypted : Encrypted
DataVolumesEncrypted : NoDiskFound
OsVolumeEncryptionSettings : Microsoft.Azure.Management.Compute.Models.DiskEncryptionSettings
ProgressMessage : Provisioning succeeded
Clean up resources
When no longer needed, you can use the Remove-AzResourceGroup cmdlet to remove the resource group, VM,
and all related resources:
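For example (the resource group name is illustrative):

```powershell
Remove-AzResourceGroup -Name "MyResourceGroup" -Force
```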
Next steps
In this quickstart, you created a virtual machine, created a Key Vault that was enabled for encryption keys, and
encrypted the VM. Advance to the next article to learn more about Azure Disk Encryption prerequisites for IaaS
VMs.
Azure Disk Encryption overview
Quickstart: Create and encrypt a Windows virtual
machine with the Azure portal
11/2/2020 • 2 minutes to read
Azure virtual machines (VMs) can be created through the Azure portal. The Azure portal is a browser-based user
interface to create VMs and their associated resources. In this quickstart you will use the Azure portal to deploy a
Windows virtual machine, create a key vault for the storage of encryption keys, and encrypt the VM.
If you don't have an Azure subscription, create a free account before you begin.
Sign in to Azure
Sign in to the Azure portal.
9. Select the Management tab and verify that you have a diagnostics storage account. If you have no storage
accounts, select Create New, give your new account a name, and select OK.
7. To the left of Key vault and key, select Click to select a key.
8. On the Select key from Azure Key Vault screen, under the Key Vault field, select Create new.
9. On the Create key vault screen, ensure that the Resource Group is myResourceGroup, and give your key vault a name. Every key vault across Azure must have a unique name.
10. On the Access Policies tab, check the Azure Disk Encryption for volume encryption box.
11. Select Review + create.
12. After the key vault has passed validation, select Create. This will return you to the Select key from Azure Key Vault screen.
13. Leave the Key field blank and choose Select.
14. At the top of the encryption screen, click Save. A popup will warn you that the VM will reboot. Click Yes.
Clean up resources
When no longer needed, you can delete the resource group, virtual machine, and all related resources. To do so,
select the resource group for the virtual machine, select Delete, then confirm the name of the resource group to
delete.
Next steps
In this quickstart, you created a Key Vault that was enabled for encryption keys, created a virtual machine, and
enabled the virtual machine for encryption.
Azure Disk Encryption overview
Creating and configuring a key vault for Azure Disk
Encryption
11/2/2020 • 6 minutes to read
Azure Disk Encryption uses Azure Key Vault to control and manage disk encryption keys and secrets. For more
information about key vaults, see Get started with Azure Key Vault and Secure your key vault.
WARNING
If you have previously used Azure Disk Encryption with Azure AD to encrypt a VM, you must continue to use this
option to encrypt your VM. See Creating and configuring a key vault for Azure Disk Encryption with Azure AD
(previous release) for details.
Creating and configuring a key vault for use with Azure Disk Encryption involves three steps:
NOTE
You must select the option in the Azure Key Vault access policy settings to enable access to Azure Disk Encryption for
volume encryption. If you have enabled the firewall on the key vault, you must go to the Networking tab on the key vault
and enable access to Microsoft Trusted Services.
NOTE
The steps in this article are automated in the Azure Disk Encryption prerequisites CLI script and Azure Disk Encryption
prerequisites PowerShell script.
Connect-AzAccount
Azure PowerShell
WARNING
Your key vault and VMs must be in the same subscription. Also, to ensure that encryption secrets don't cross regional
boundaries, Azure Disk Encryption requires the Key Vault and the VMs to be co-located in the same region. Create and use
a Key Vault that is in the same subscription and region as the VMs to be encrypted.
Each Key Vault must have a unique name. Replace with the name of your key vault in the following examples.
Azure CLI
When creating a key vault using Azure CLI, add the "--enabled-for-disk-encryption" flag.
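A sketch of such a command (the vault name and location are illustrative):

```azurecli
az keyvault create --name "<your-unique-keyvault-name>" --resource-group "MyResourceGroup" --location "eastus" --enabled-for-disk-encryption
```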
Azure PowerShell
When creating a key vault using Azure PowerShell, add the "-EnabledForDiskEncryption" flag.
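A sketch of such a command (the vault name and location are illustrative):

```powershell
New-AzKeyvault -Name "<your-unique-keyvault-name>" -ResourceGroupName "MyResourceGroup" -Location "eastus" -EnabledForDiskEncryption
```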
Enable Key Vault for deployment, if needed: Enables the Microsoft.Compute resource provider to
retrieve secrets from this key vault when this key vault is referenced in resource creation, for example
when creating a virtual machine.
Enable Key Vault for template deployment, if needed: Allow Resource Manager to retrieve secrets
from the vault.
Azure PowerShell
Use the key vault PowerShell cmdlet Set-AzKeyVaultAccessPolicy to enable disk encryption for the key vault.
Enable Key Vault for disk encryption: EnabledForDiskEncryption is required for Azure Disk
Encryption.
Enable Key Vault for deployment, if needed: Enables the Microsoft.Compute resource provider to
retrieve secrets from this key vault when this key vault is referenced in resource creation, for example
when creating a virtual machine.
Enable Key Vault for template deployment, if needed: Enables Azure Resource Manager to get
secrets from this key vault when this key vault is referenced in a template deployment.
Set-AzKeyVaultAccessPolicy -VaultName "<your-unique-keyvault-name>" -ResourceGroupName "MyResourceGroup" -EnabledForTemplateDeployment
Azure portal
1. Select your key vault, go to Access Policies, and Click to show advanced access policies.
2. Select the box labeled Enable access to Azure Disk Encryption for volume encryption.
3. Select Enable access to Azure Virtual Machines for deployment and/or Enable Access to Azure Resource Manager for template deployment, if needed.
4. Click Save.
You may instead import a private key using the Azure CLI az keyvault key import command:
In either case, you will supply the name of your KEK to the Azure CLI az vm encryption enable --key-encryption-
key parameter.
Azure PowerShell
Use the Azure PowerShell Add-AzKeyVaultKey cmdlet to generate a new KEK and store it in your key vault.
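A sketch of generating a software-protected KEK (the key name is illustrative):

```powershell
Add-AzKeyVaultKey -Name "MyKeyEncryptionKey" -VaultName "<your-unique-keyvault-name>" -Destination "Software"
```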
You may instead import a key using the Azure PowerShell Add-AzKeyVaultKey cmdlet with the -KeyFilePath parameter.
In either case, you will supply the ID of your KEK key Vault and the URL of your KEK to the Azure PowerShell Set-
AzVMDiskEncryptionExtension -KeyEncryptionKeyVaultId and -KeyEncryptionKeyUrl parameters. Note that this
example assumes that you are using the same key vault for both the disk encryption key and the KEK.
Next steps
Azure Disk Encryption prerequisites CLI script
Azure Disk Encryption prerequisites PowerShell script
Learn Azure Disk Encryption scenarios on Windows VMs
Learn how to troubleshoot Azure Disk Encryption
Read the Azure Disk Encryption sample scripts
Azure Disk Encryption sample scripts
11/2/2020 • 7 minutes to read
This article provides sample scripts for preparing pre-encrypted VHDs and other tasks.
NOTE
All scripts refer to the latest, non-AAD version of ADE, except where noted.
To compress the OS partition and prepare the machine for BitLocker, execute the bdehdcfg tool if needed:
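A minimal invocation might look like the following (the target and size values are illustrative; adjust them for your deployment):

```
bdehdcfg -target default -size 500
```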
NOTE
Prepare the VM with a separate data/resource VHD for getting the external key by using BitLocker.
Upload encrypted VHD to an Azure storage account
After DM-Crypt encryption is enabled, the local encrypted VHD needs to be uploaded to your storage account.
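As a sketch, the upload can be done with the Add-AzVhd cmdlet (the local path and destination URI are illustrative):

```powershell
Add-AzVhd -ResourceGroupName "MyVirtualMachineResourceGroup" `
    -LocalFilePath "C:\vhd\encrypted-os.vhd" `
    -Destination "https://mystorageaccount.blob.core.windows.net/vhds/encrypted-os.vhd"
```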
# In cloud shell, the account ID is a managed service identity, so specify the username directly
# $upn = "user@domain.com"
# Set-AzKeyVaultAccessPolicy -VaultName $kvname -UserPrincipalName $acctid -PermissionsToKeys wrapKey -PermissionsToSecrets set
# When running as a service principal, retrieve the service principal ID from the account ID, and set access policy to that
# $acctid = (Get-AzureRmContext).Account.Id
# $spoid = (Get-AzureRmADServicePrincipal -ServicePrincipalName $acctid).Id
# Set-AzKeyVaultAccessPolicy -VaultName $kvname -ObjectId $spoid -BypassObjectIdValidation -PermissionsToKeys wrapKey -PermissionsToSecrets set
# This is the passphrase that was provided for encryption during the distribution installation
$passphrase = "contoso-password"
Use the $secretUrl in the next step for attaching the OS disk without using KEK.
Disk encryption secret encrypted with a KEK
Before you upload the secret to the key vault, you can optionally encrypt it by using a key encryption key. Use the
wrap API to first encrypt the secret using the key encryption key. The output of this wrap operation is a base64
URL encoded string, which you can then upload as a secret by using the Set-AzKeyVaultSecret cmdlet.
# This is the passphrase that was provided for encryption during the distribution installation
$passphrase = "contoso-password"
Add-AzKeyVaultKey -VaultName $KeyVaultName -Name "keyencryptionkey" -Destination Software
$KeyEncryptionKey = Get-AzKeyVaultKey -VaultName $KeyVault.OriginalVault.Name -Name "keyencryptionkey"
$apiversion = "2015-06-01"
##############################
# Get Auth URI
##############################
$response = try { Invoke-RestMethod -Method GET -Uri $uri -Headers $headers } catch { $_.Exception.Response }
$authHeader = $response.Headers["www-authenticate"]
$authUri = [regex]::match($authHeader, 'authorization="(.*?)"').Groups[1].Value
##############################
# Get Auth Token
##############################
$response = Invoke-RestMethod -Method POST -Uri $uri -Headers $headers -Body $body
$access_token = $response.access_token
##############################
# Get KEK info
##############################
$keyid = $response.key.kid
##############################
# Encrypt passphrase using KEK
##############################
$passphraseB64 = [Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($Passphrase))
$uri = $keyid + "/encrypt?api-version=" + $apiversion
$headers = @{"Authorization" = "Bearer " + $access_token; "Content-Type" = "application/json"}
$bodyObj = @{"alg" = "RSA-OAEP"; "value" = $passphraseB64}
$body = $bodyObj | ConvertTo-Json
$response = Invoke-RestMethod -Method POST -Uri $uri -Headers $headers -Body $body
$wrappedSecret = $response.value
##############################
# Store secret
##############################
$secretName = [guid]::NewGuid().ToString()
$uri = $KeyVault.VaultUri + "/secrets/" + $secretName + "?api-version=" + $apiversion
$secretAttributes = @{"enabled" = $true}
$secretTags = @{"DiskEncryptionKeyEncryptionAlgorithm" = "RSA-OAEP"; "DiskEncryptionKeyFileName" = "LinuxPassPhraseFileName"}
$headers = @{"Authorization" = "Bearer " + $access_token; "Content-Type" = "application/json"}
$bodyObj = @{"value" = $wrappedSecret; "attributes" = $secretAttributes; "tags" = $secretTags}
$body = $bodyObj | ConvertTo-Json
$response = Invoke-RestMethod -Method PUT -Uri $uri -Headers $headers -Body $body
$secretUrl = $response.id
Use $KeyEncryptionKey and $secretUrl in the next step for attaching the OS disk using KEK.
Set-AzVMOSDisk `
-VM $VirtualMachine `
-Name $OSDiskName `
-SourceImageUri $VhdUri `
-VhdUri $OSDiskUri `
-Windows `
-CreateOption FromImage `
-DiskEncryptionKeyVaultId $KeyVault.ResourceId `
-DiskEncryptionKeyUrl $SecretUrl
Using a KEK
When you attach the OS disk, pass $KeyEncryptionKey and $secretUrl . The URL was generated in the "Disk
encryption secret encrypted with a KEK" section.
Set-AzVMOSDisk `
-VM $VirtualMachine `
-Name $OSDiskName `
-SourceImageUri $CopiedTemplateBlobUri `
-VhdUri $OSDiskUri `
-Windows `
-CreateOption FromImage `
-DiskEncryptionKeyVaultId $KeyVault.ResourceId `
-DiskEncryptionKeyUrl $SecretUrl `
-KeyEncryptionKeyVaultId $KeyVault.ResourceId `
-KeyEncryptionKeyURL $KeyEncryptionKey.Id
Azure Disk Encryption troubleshooting guide
11/2/2020 • 3 minutes to read
This guide is for IT professionals, information security analysts, and cloud administrators whose organizations use
Azure Disk Encryption. This article helps you troubleshoot disk-encryption-related problems.
Before taking any of the steps below, first ensure that the VMs you are attempting to encrypt are among the
supported VM sizes and operating systems, and that you have met all the prerequisites:
Networking requirements
Group policy requirements
Encryption key storage requirements
\windows\system32\bdehdcfg.exe
\windows\system32\bdehdcfglib.dll
\windows\system32\en-US\bdehdcfglib.dll.mui
\windows\system32\en-US\bdehdcfg.exe.mui
Next steps
In this document, you learned more about some common problems in Azure Disk Encryption and how to
troubleshoot those problems. For more information about this service and its capabilities, see the following
articles:
Apply disk encryption in Azure Security Center
Azure data encryption at rest
Azure Disk Encryption for Windows virtual machines
FAQ
11/2/2020 • 7 minutes to read
This article provides answers to frequently asked questions (FAQ) about Azure Disk Encryption for Windows VMs.
For more information about this service, see Azure Disk Encryption overview.
Can I encrypt both boot and data volumes with Azure Disk Encryption?
You can encrypt both boot and data volumes, but you can't encrypt the data without first encrypting the OS
volume.
WARNING
If you have previously used Azure Disk Encryption with the Azure AD app by specifying Azure AD credentials to encrypt this
VM, you must continue to use this option to encrypt your VM. You can't use the newer, non-AAD release of Azure Disk
Encryption on this encrypted VM; switching an encrypted VM away from the AAD application isn't supported yet.
Does Azure Disk Encryption allow you to bring your own key (BYOK)?
Yes, you can supply your own key encryption keys. These keys are safeguarded in Azure Key Vault, which is the key
store for Azure Disk Encryption. For more information on the key encryption keys support scenarios, see Creating
and configuring a key vault for Azure Disk Encryption.
NOTE
Do not delete or edit any contents in this disk. Do not unmount the disk, since the presence of the encryption key is needed
for any encryption operations on the IaaS VM.
What encryption method does Azure Disk Encryption use?
Azure Disk Encryption selects the encryption method in BitLocker based on the version of Windows as follows:
WINDOWS VERSIONS | VERSION | ENCRYPTION METHOD
Windows Server 2012, Windows 8, 8.1, 10 | < 1511 | AES 256 bit *

* AES 256 bit with Diffuser isn't supported in Windows 2012 and later.
To determine Windows OS version, run the 'winver' tool in your virtual machine.
Next steps
In this document, you learned more about the most frequent questions related to Azure Disk Encryption. For more
information about this service, see the following articles:
Azure Disk Encryption Overview
Apply disk encryption in Azure Security Center
Azure data encryption at rest
Azure Disk Encryption with Azure AD (previous
release)
11/2/2020 • 2 minutes to read
The new release of Azure Disk Encryption eliminates the requirement for providing an Azure AD
application parameter to enable VM disk encryption. With the new release, you are no longer
required to provide Azure AD credentials during the enable encryption step. All new VMs must be
encrypted without the Azure AD application parameters using the new release. To view instructions
to enable VM disk encryption using the new release, see Azure Disk Encryption for Windows VMs.
VMs that were already encrypted with Azure AD application parameters are still supported and
should continue to be maintained with the AAD syntax.
This article supplements Azure Disk Encryption for Windows VMs with additional requirements and prerequisites
for Azure Disk Encryption with Azure AD (previous release). The Supported VMs and operating systems section
remains the same.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SystemDefaultTlsVersions"=dword:00000001
"SchUseStrongCrypto"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319]
"SystemDefaultTlsVersions"=dword:00000001
"SchUseStrongCrypto"=dword:00000001
Group Policy:
The Azure Disk Encryption solution uses the BitLocker external key protector for Windows IaaS VMs. For
domain joined VMs, don't push any group policies that enforce TPM protectors. For information about the
group policy for “Allow BitLocker without a compatible TPM,” see BitLocker Group Policy Reference.
BitLocker policy on domain joined virtual machines with custom group policy must include the following
setting: Configure user storage of BitLocker recovery information -> Allow 256-bit recovery key. Azure Disk
Encryption will fail when custom group policy settings for BitLocker are incompatible. On machines that
didn't have the correct policy setting, apply the new policy, force it to update (gpupdate.exe /force), and
then restart the machine if required.
Next steps
Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release)
Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)
Azure Disk Encryption prerequisites CLI script
Azure Disk Encryption prerequisites PowerShell script
Creating and configuring a key vault for Azure Disk
Encryption with Azure AD (previous release)
11/2/2020 • 15 minutes to read
The new release of Azure Disk Encryption eliminates the requirement for providing an Azure AD
application parameter to enable VM disk encryption. With the new release, you are no longer
required to provide Azure AD credentials during the enable encryption step. All new VMs must be
encrypted without the Azure AD application parameters using the new release. To view instructions
to enable VM disk encryption using the new release, see Azure Disk Encryption. VMs that were
already encrypted with Azure AD application parameters are still supported and should continue to
be maintained with the AAD syntax.
Azure Disk Encryption uses Azure Key Vault to control and manage disk encryption keys and secrets. For more
information about key vaults, see Get started with Azure Key Vault and Secure your key vault.
Creating and configuring a key vault for use with Azure Disk Encryption with Azure AD (previous release) involves
four steps:
1. Create a key vault.
2. Set up an Azure AD application and service principal.
3. Set the key vault access policy for the Azure AD app.
4. Set key vault advanced access policies.
You may also, if you wish, generate or import a key encryption key (KEK).
See the main Creating and configuring a key vault for Azure Disk Encryption article for steps on how to Install
tools and connect to Azure.
NOTE
The steps in this article are automated in the Azure Disk Encryption prerequisites CLI script and Azure Disk Encryption
prerequisites PowerShell script.
WARNING
In order to make sure the encryption secrets don’t cross regional boundaries, Azure Disk Encryption needs the Key Vault
and the VMs to be co-located in the same region. Create and use a Key Vault that is in the same region as the VM to be
encrypted.
# Get-AzLocation
New-AzResourceGroup -Name 'MyKeyVaultResourceGroup' -Location 'East US'
3. Note the Vault Name, Resource Group Name, Resource ID, Vault URI, and the Object ID that are
returned for later use when you encrypt the disks.
Create a key vault with Azure CLI
You can manage your key vault with Azure CLI using the az keyvault commands. To create a key vault, use az
keyvault create.
1. Create a new resource group, if needed, with az group create. To list locations, use az account list-locations
3. Note the Vault Name (name), Resource Group Name, Resource ID (ID), Vault URI, and the Object ID
that are returned for use later.
Create a key vault with a Resource Manager template
You can create a key vault by using the Resource Manager template.
1. On the Azure quickstart template, click Deploy to Azure.
2. Select the subscription, resource group, resource group location, Key Vault name, Object ID, legal terms, and
agreement, and then click Purchase.
2. The $azureAdApplication.ApplicationId is the Azure AD ClientID and the $aadClientSecret is the client secret
that you will use later to enable Azure Disk Encryption. Safeguard the Azure AD client secret appropriately.
Running $azureAdApplication.ApplicationId will show you the ApplicationID.
Set up an Azure AD app and service principal with Azure CLI
You can manage your service principals with Azure CLI using the az ad sp commands. For more information, see
Create an Azure service principal.
1. Create a new service principal.
2. The appId returned is the Azure AD ClientID used in other commands. It's also the SPN you'll use for az
keyvault set-policy. The password is the client secret that you should use later to enable Azure Disk
Encryption. Safeguard the Azure AD client secret appropriately.
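A sketch of creating the service principal (the display name is illustrative; the output includes the appId and password used later):

```azurecli
az ad sp create-for-rbac --name "MyAdeServicePrincipal" --skip-assignment
```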
Set up an Azure AD app and service principal through the Azure portal
Use the steps from the Use portal to create an Azure Active Directory application and service principal that can
access resources article to create an Azure AD application. Each step listed below will take you directly to the
article section to complete.
1. Verify required permissions
2. Create an Azure Active Directory application
You can use any name and sign-on URL you would like when creating the application.
3. Get the application ID and the authentication key.
The authentication key is the client secret and is used as the AadClientSecret for Set-AzVMDiskEncryptionExtension.
The authentication key is used by the application as a credential to sign in to Azure AD. In the
Azure portal, this secret is called keys, but has no relation to key vaults. Secure this secret
appropriately.
The application ID will be used later as the AadClientId for Set-AzVMDiskEncryptionExtension and as the
ServicePrincipalName for Set-AzKeyVaultAccessPolicy.
Set the key vault access policy for the Azure AD app
To write encryption secrets to a specified Key Vault, Azure Disk Encryption needs the Client ID and the Client Secret
of the Azure Active Directory application that has permissions to write secrets to the Key Vault.
NOTE
Azure Disk Encryption requires you to configure the following access policies to your Azure AD client application: WrapKey
and Set permissions.
Set the key vault access policy for the Azure AD app with Azure PowerShell
Your Azure AD application needs rights to access the keys or secrets in the vault. Use the Set-
AzKeyVaultAccessPolicy cmdlet to grant permissions to the application, using the client ID (which was generated
when the application was registered) as the –ServicePrincipalName parameter value. To learn more, see the blog
post Azure Key Vault - Step by Step.
1. Set the key vault access policy for the AD application with PowerShell.
$keyVaultName = 'MySecureVault'
$aadClientID = 'MyAadAppClientID'
$KVRGname = 'MyKeyVaultResourceGroup'
Set-AzKeyVaultAccessPolicy -VaultName $keyVaultName -ServicePrincipalName $aadClientID -PermissionsToKeys 'WrapKey' -PermissionsToSecrets 'Set' -ResourceGroupName $KVRGname
Set the key vault access policy for the Azure AD app with Azure CLI
Use az keyvault set-policy to set the access policy. For more information, see Manage Key Vault using CLI 2.0.
Give the service principal you created via the Azure CLI access to get secrets and wrap keys with the following
command:
az keyvault set-policy --name "MySecureVault" --spn "<spn created with CLI/the Azure AD ClientID>" --key-permissions wrapKey --secret-permissions set
Set the key vault access policy for the Azure AD app with the portal
1. Open the resource group with your key vault.
2. Select your key vault, go to Access Policies, then click Add new.
3. Under Select principal, search for the Azure AD application you created and select it.
4. For Key permissions, check Wrap Key under Cryptographic Operations.
5. For Secret permissions, check Set under Secret Management Operations.
6. Click OK to save the access policy.
Set key vault advanced access policies
The Azure platform needs access to the encryption keys or secrets in your key vault to make them available to the
VM for booting and decrypting the volumes. Enable disk encryption on the key vault or deployments will fail.
Set key vault advanced access policies with Azure PowerShell
Use the key vault PowerShell cmdlet Set-AzKeyVaultAccessPolicy to enable disk encryption for the key vault.
Enable Key Vault for disk encryption: EnabledForDiskEncryption is required for Azure Disk Encryption.
Enable Key Vault for deployment, if needed: Enables the Microsoft.Compute resource provider to
retrieve secrets from this key vault when this key vault is referenced in resource creation, for example when
creating a virtual machine.
Enable Key Vault for template deployment, if needed: Enables Azure Resource Manager to get
secrets from this key vault when this key vault is referenced in a template deployment.
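These switches can be applied with Set-AzKeyVaultAccessPolicy; a minimal sketch, assuming the vault and resource group names used elsewhere in this article (only -EnabledForDiskEncryption is required):

```powershell
$KVRGname = 'MyKeyVaultResourceGroup'
$KeyVaultName = 'MySecureVault'

# Required for Azure Disk Encryption.
Set-AzKeyVaultAccessPolicy -VaultName $KeyVaultName -ResourceGroupName $KVRGname -EnabledForDiskEncryption

# Optional: allow the Microsoft.Compute resource provider to retrieve secrets during VM creation.
Set-AzKeyVaultAccessPolicy -VaultName $KeyVaultName -ResourceGroupName $KVRGname -EnabledForDeployment

# Optional: allow Resource Manager to retrieve secrets during template deployment.
Set-AzKeyVaultAccessPolicy -VaultName $KeyVaultName -ResourceGroupName $KVRGname -EnabledForTemplateDeployment
```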
Set key vault advanced access policies using the Azure CLI
Use az keyvault update to enable disk encryption for the key vault.
Enable Key Vault for disk encryption: The --enabled-for-disk-encryption flag is required.
Enable Key Vault for deployment, if needed: Allow Virtual Machines to retrieve certificates stored as
secrets from the vault.
Enable Key Vault for template deployment, if needed: Allow Resource Manager to retrieve secrets
from the vault.
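A sketch of the corresponding commands, again assuming the placeholder names used elsewhere in this article (only the disk-encryption flag is required):

```azurecli
az keyvault update --name "MySecureVault" --resource-group "MyKeyVaultResourceGroup" --enabled-for-disk-encryption "true"
az keyvault update --name "MySecureVault" --resource-group "MyKeyVaultResourceGroup" --enabled-for-deployment "true"
az keyvault update --name "MySecureVault" --resource-group "MyKeyVaultResourceGroup" --enabled-for-template-deployment "true"
```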
Set key vault advanced access policies through the Azure portal
1. Select your key vault, go to Access Policies, and select Click to show advanced access policies.
2. Select the box labeled Enable access to Azure Disk Encryption for volume encryption.
3. Select Enable access to Azure Virtual Machines for deployment and/or Enable access to Azure Resource Manager for template deployment, if needed.
4. Click Save.
$Loc = 'MyLocation';
$KVRGname = 'MyKeyVaultResourceGroup';
$KeyVaultName = 'MySecureVault';
New-AzResourceGroup -Name $KVRGname -Location $Loc;
New-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname -Location $Loc;
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname;
$KeyVaultResourceId = (Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname).ResourceId;
$diskEncryptionKeyVaultUrl = (Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname).VaultUri;
$aadClientSecret = 'MyAADClientSecret';
$aadClientSecretSec = ConvertTo-SecureString -String $aadClientSecret -AsPlainText -Force;
$azureAdApplication = New-AzADApplication -DisplayName "<My Application Display Name>" -HomePage "<https://MyApplicationHomePage>" -IdentifierUris "<https://MyApplicationUri>" -Password $aadClientSecretSec
$servicePrincipal = New-AzADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId;
$aadClientID = $azureAdApplication.ApplicationId;
#Step 3: Enable the vault for disk encryption and set the access policy for the Azure AD application.
#Step 4: Create a new key in the key vault with the Add-AzKeyVaultKey cmdlet.
# Fill in 'MyKeyEncryptionKey' with your value.
$keyEncryptionKeyName = 'MyKeyEncryptionKey';
Add-AzKeyVaultKey -VaultName $KeyVaultName -Name $keyEncryptionKeyName -Destination 'Software';
$keyEncryptionKeyUrl = (Get-AzKeyVaultKey -VaultName $KeyVaultName -Name $keyEncryptionKeyName).Key.kid;
$VMName = 'MySecureVM';
$VMRGName = 'MyVirtualMachineResourceGroup';
Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $VMName -AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId -KeyEncryptionKeyUrl $keyEncryptionKeyUrl -KeyEncryptionKeyVaultId $KeyVaultResourceId;
$KVRGname = 'MyKeyVaultResourceGroup'
$KeyVaultName= 'MySecureVault'
$Loc = 'MyLocation'
New-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname -Location $Loc
Set-AzKeyVaultAccessPolicy -VaultName $KeyVaultName -ResourceGroupName $KVRGname -EnabledForDiskEncryption
# Create the Azure AD application and associate the certificate with it.
# Fill in "C:\certificates\mycert.pfx", "Password", "<My Application Display Name>", "<https://MyApplicationHomePage>", and "<https://MyApplicationUri>" with your values.
# MyApplicationHomePage and the MyApplicationUri can be any values you wish
$CertPath = "C:\certificates\mycert.pfx"
$CertPassword = "Password"
$Cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($CertPath, $CertPassword)
$CertValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
$AADClientID = $AzureAdApplication.ApplicationId
$aadClientCertThumbprint= $cert.Thumbprint
$KeyVaultSecretName = "MyAADCert"
$FileContentBytes = Get-Content $CertPath -Encoding Byte
$FileContentEncoded = [System.Convert]::ToBase64String($fileContentBytes)
$JSONObject = @"
{
"data" : "$filecontentencoded",
"dataType" : "pfx",
"password" : "$CertPassword"
}
"@
$JSONObjectBytes = [System.Text.Encoding]::UTF8.GetBytes($jsonObject)
$JSONEncoded = [System.Convert]::ToBase64String($jsonObjectBytes)
#Set the secret and set the key vault policy for -EnabledForDeployment
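The commands for this step are not shown above; a minimal sketch, using the $KeyVaultName, $KVRGname, $KeyVaultSecretName, and $JSONEncoded variables defined earlier:

```powershell
# Store the encoded certificate payload as a Key Vault secret.
$Secret = ConvertTo-SecureString -String $JSONEncoded -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName $KeyVaultName -Name $KeyVaultSecretName -SecretValue $Secret

# Allow the Microsoft.Compute resource provider to retrieve the secret at deployment time.
Set-AzKeyVaultAccessPolicy -VaultName $KeyVaultName -ResourceGroupName $KVRGname -EnabledForDeployment
```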
$VMName = 'MySecureVM'
$VMRGName = 'MyVirtualMachineResourceGroup'
$CertUrl = (Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name $KeyVaultSecretName).Id
$SourceVaultId = (Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGName).ResourceId
$VM = Get-AzVM -ResourceGroupName $VMRGName -Name $VMName
$VM = Add-AzVMSecret -VM $VM -SourceVaultId $SourceVaultId -CertificateStore "My" -CertificateUrl $CertUrl
Update-AzVM -VM $VM -ResourceGroupName $VMRGName
#Enable encryption on the VM using Azure AD client ID and the client certificate thumbprint
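The enable-encryption command itself is not shown above; a minimal sketch, assuming the variables defined in this scenario (the vault URL and resource ID are looked up here because they are not set earlier in the script):

```powershell
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname
Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $VMName `
    -AadClientID $AADClientID -AadClientCertThumbprint $aadClientCertThumbprint `
    -DiskEncryptionKeyVaultUrl $KeyVault.VaultUri -DiskEncryptionKeyVaultId $KeyVault.ResourceId
```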
$KVRGname = 'MyKeyVaultResourceGroup'
$KeyVaultName= 'MySecureVault'
$Loc = 'MyLocation'
New-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname -Location $Loc
Set-AzKeyVaultAccessPolicy -VaultName $KeyVaultName -ResourceGroupName $KVRGname -EnabledForDiskEncryption
# Create the Azure AD application and associate the certificate with it.
# Fill in "C:\certificates\mycert.pfx", "Password", "<My Application Display Name>", "<https://MyApplicationHomePage>", and "<https://MyApplicationUri>" with your values.
# MyApplicationHomePage and the MyApplicationUri can be any values you wish
$CertPath = "C:\certificates\mycert.pfx"
$CertPassword = "Password"
$Cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($CertPath, $CertPassword)
$CertValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
$AADClientID = $AzureAdApplication.ApplicationId
$aadClientCertThumbprint= $cert.Thumbprint
$KeyVaultSecretName = "MyAADCert"
$FileContentBytes = Get-Content $CertPath -Encoding Byte
$FileContentEncoded = [System.Convert]::ToBase64String($fileContentBytes)
$JSONObject = @"
{
"data" : "$filecontentencoded",
"dataType" : "pfx",
"password" : "$CertPassword"
}
"@
$JSONObjectBytes = [System.Text.Encoding]::UTF8.GetBytes($jsonObject)
$JSONEncoded = [System.Convert]::ToBase64String($jsonObjectBytes)
#Set the secret and set the key vault policy for deployment
#Setting some variables with the key vault information and generating a KEK
# Fill in 'KEKName' with your value.
$KEKName ='KEKName'
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname
$DiskEncryptionKeyVaultUrl = $KeyVault.VaultUri
$KeyVaultResourceId = $KeyVault.ResourceId
$KEK = Add-AzKeyVaultKey -VaultName $KeyVaultName -Name $KEKName -Destination "Software"
$KeyEncryptionKeyUrl = $KEK.Key.kid
$VMName = 'MySecureVM';
$VMRGName = 'MyVirtualMachineResourceGroup';
$CertUrl = (Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name $KeyVaultSecretName).Id
$SourceVaultId = (Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGName).ResourceId
$VM = Get-AzVM -ResourceGroupName $VMRGName -Name $VMName
$VM = Add-AzVMSecret -VM $VM -SourceVaultId $SourceVaultId -CertificateStore "My" -CertificateUrl $CertUrl
Update-AzVM -VM $VM -ResourceGroupName $VMRGName
#Enable encryption on the VM using Azure AD client ID and the client certificate thumbprint
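A minimal sketch of the enable step for this KEK scenario, assuming the variables defined above ($DiskEncryptionKeyVaultUrl, $KeyVaultResourceId, and $KeyEncryptionKeyUrl come from the KEK section):

```powershell
Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $VMName `
    -AadClientID $AADClientID -AadClientCertThumbprint $aadClientCertThumbprint `
    -DiskEncryptionKeyVaultUrl $DiskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId `
    -KeyEncryptionKeyUrl $KeyEncryptionKeyUrl -KeyEncryptionKeyVaultId $KeyVaultResourceId
```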
Next steps
Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)
Azure Disk Encryption with Azure AD for Windows VMs (previous release)
11/2/2020 • 13 minutes to read
The new release of Azure Disk Encryption eliminates the requirement for providing an Azure AD
application parameter to enable VM disk encryption. With the new release, you are no longer
required to provide Azure AD credentials during the enable encryption step. All new VMs must be
encrypted without the Azure AD application parameters using the new release. To view instructions
to enable VM disk encryption using the new release, see Azure Disk Encryption for Windows VMs.
VMs that were already encrypted with Azure AD application parameters are still supported and
should continue to be maintained with the AAD syntax.
You can enable many disk-encryption scenarios, and the steps may vary according to the scenario. The following
sections cover the scenarios in greater detail for Windows IaaS VMs. Before you can use disk encryption, the Azure
Disk Encryption prerequisites need to be completed.
IMPORTANT
You should take a snapshot and/or create a backup before disks are encrypted. Backups ensure that a recovery
option is possible if an unexpected failure occurs during encryption. VMs with managed disks require a backup
before encryption occurs. Once a backup is made, you can use the Set-AzVMDiskEncryptionExtension cmdlet to
encrypt managed disks by specifying the -skipVmBackup parameter. For more information about how to back up
and restore encrypted VMs, see Back up and restore encrypted Azure VM.
Encrypting or disabling encryption may cause a VM to reboot.
Select the VM, then click Disks under the Settings heading to verify encryption status in the
portal. In the chart under Encryption, you'll see whether it's enabled.
The following table lists the Resource Manager template parameters for new VMs from the Marketplace scenario
using Azure AD client ID:
| Parameter | Description |
|---|---|
| vmSize | Size of the VM. Currently, only Standard A, D, and G series are supported. |
| virtualNetworkName | Name of the VNet that the VM NIC should belong to. |
| subnetName | Name of the subnet in the VNet that the VM NIC should belong to. |
| keyVaultURL | URL of the key vault that the BitLocker key should be uploaded to. You can get it by using the cmdlet `(Get-AzKeyVault -VaultName "MyKeyVault" -ResourceGroupName "MyKeyVaultResourceGroupName").VaultURI` or the Azure CLI command `az keyvault show --name "MySecureVault" --query properties.vaultUri`. |
| keyEncryptionKeyURL | URL of the key encryption key that's used to encrypt the generated BitLocker key (optional). |
$KVRGname = 'MyKeyVaultResourceGroup';
$VMRGName = 'MyVirtualMachineResourceGroup';
$vmName = 'MySecureVM';
$aadClientID = 'My-AAD-client-ID';
$aadClientSecret = 'My-AAD-client-secret';
$KeyVaultName = 'MySecureVault';
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname;
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri;
$KeyVaultResourceId = $KeyVault.ResourceId;
Encrypt a running VM using KEK to wrap the client secret: Azure Disk Encryption lets you specify an
existing key in your key vault to wrap disk encryption secrets that were generated while enabling
encryption. When a key encryption key is specified, Azure Disk Encryption uses that key to wrap the
encryption secrets before writing to Key Vault.
$KVRGname = 'MyKeyVaultResourceGroup';
$VMRGName = 'MyVirtualMachineResourceGroup';
$vmName = 'MyExtraSecureVM';
$aadClientID = 'My-AAD-client-ID';
$aadClientSecret = 'My-AAD-client-secret';
$KeyVaultName = 'MySecureVault';
$keyEncryptionKeyName = 'MyKeyEncryptionKey';
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname;
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri;
$KeyVaultResourceId = $KeyVault.ResourceId;
$keyEncryptionKeyUrl = (Get-AzKeyVaultKey -VaultName $KeyVaultName -Name $keyEncryptionKeyName).Key.kid;
NOTE
The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]
The syntax for the value of the key-encryption-key parameter is the full URI to the KEK, as in:
https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
Verify the disks are encrypted: To check on the encryption status of an IaaS VM, use the Get-AzVMDiskEncryptionStatus cmdlet.
Disable disk encryption: To disable the encryption, use the Disable-AzVMDiskEncryption cmdlet.
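For example, with the resource group and VM placeholder names used in this article:

```powershell
# Check the encryption status of the OS and data volumes.
Get-AzVMDiskEncryptionStatus -ResourceGroupName 'MyVirtualMachineResourceGroup' -VMName 'MySecureVM'

# Turn encryption off again.
Disable-AzVMDiskEncryption -ResourceGroupName 'MyVirtualMachineResourceGroup' -VMName 'MySecureVM'
```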
Verify the disks are encrypted: To check on the encryption status of an IaaS VM, use the az vm encryption show command.
Disable encryption: To disable encryption, use the az vm encryption disable command.
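For example, with the same placeholder names:

```azurecli
az vm encryption show --resource-group "MyVirtualMachineResourceGroup" --name "MySecureVM"
az vm encryption disable --resource-group "MyVirtualMachineResourceGroup" --name "MySecureVM"
```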
| Parameter | Description |
|---|---|
| keyVaultName | Name of the key vault that the BitLocker key should be uploaded to. You can get it by using the cmdlet `(Get-AzKeyVault -ResourceGroupName <MyKeyVaultResourceGroupName>).VaultName` or the Azure CLI command `az keyvault list --resource-group "MySecureGroup"`. |
| keyEncryptionKeyURL | URL of the key encryption key that's used to encrypt the generated BitLocker key. This parameter is optional if you select nokek in the UseExistingKek drop-down list. If you select kek in the UseExistingKek drop-down list, you must enter the keyEncryptionKeyURL value. |
Encrypt a running VM using KEK to wrap the client secret: Azure Disk Encryption lets you specify an
existing key in your key vault to wrap disk encryption secrets that were generated while enabling
encryption. When a key encryption key is specified, Azure Disk Encryption uses that key to wrap the
encryption secrets before writing to Key Vault. This example uses "All" for the -VolumeType parameter,
which includes both OS and Data volumes. If you only want to encrypt the OS volume, use "OS" for the -
VolumeType parameter.
$sequenceVersion = [Guid]::NewGuid();
$KVRGname = 'MyKeyVaultResourceGroup';
$VMRGName = 'MyVirtualMachineResourceGroup';
$vmName = 'MyExtraSecureVM';
$aadClientID = 'My-AAD-client-ID';
$aadClientSecret = 'My-AAD-client-secret';
$KeyVaultName = 'MySecureVault';
$keyEncryptionKeyName = 'MyKeyEncryptionKey';
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname;
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri;
$KeyVaultResourceId = $KeyVault.ResourceId;
$keyEncryptionKeyUrl = (Get-AzKeyVaultKey -VaultName $KeyVaultName -Name $keyEncryptionKeyName).Key.kid;
NOTE
The syntax for the value of the disk-encryption-keyvault parameter is the full identifier string:
/subscriptions/[subscription-id-guid]/resourceGroups/[resource-group-name]/providers/Microsoft.KeyVault/vaults/[keyvault-name]
The syntax for the value of the key-encryption-key parameter is the full URI to the KEK, as in:
https://[keyvault-name].vault.azure.net/keys/[kekname]/[kek-unique-id]
$VMRGName = 'MyVirtualMachineResourceGroup'
$KVRGname = 'MyKeyVaultResourceGroup';
$AADClientID ='My-AAD-client-ID';
$KeyVaultName = 'MySecureVault';
$VMName = 'MySecureVM';
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname;
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri;
$KeyVaultResourceId = $KeyVault.ResourceId;
# Fill in the certificate path and the password so the thumbprint can be set as a variable.
Enable encryption using certificate-based authentication and a KEK with Azure PowerShell
# Fill in 'MyVirtualMachineResourceGroup', 'MyKeyVaultResourceGroup', 'My-AAD-client-ID', 'MySecureVault', 'MySecureVM', and 'KEKName' with your values.
$VMRGName = 'MyVirtualMachineResourceGroup';
$KVRGname = 'MyKeyVaultResourceGroup';
$AADClientID ='My-AAD-client-ID';
$KeyVaultName = 'MySecureVault';
$VMName = 'MySecureVM';
$keyEncryptionKeyName ='KEKName';
$KeyVault = Get-AzKeyVault -VaultName $KeyVaultName -ResourceGroupName $KVRGname;
$diskEncryptionKeyVaultUrl = $KeyVault.VaultUri;
$KeyVaultResourceId = $KeyVault.ResourceId;
$keyEncryptionKeyUrl = (Get-AzKeyVaultKey -VaultName $KeyVaultName -Name $keyEncryptionKeyName).Key.kid;
# Fill in the certificate path and the password so the thumbprint can be read and set as a variable.
# Enable disk encryption using the client certificate thumbprint and a KEK
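A minimal sketch of these two commented steps, reusing the certificate path and password placeholders from earlier in this article together with the variables defined above:

```powershell
# Read the certificate so its thumbprint can be passed to the extension.
$CertPath = "C:\certificates\mycert.pfx"
$CertPassword = "Password"
$Cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($CertPath, $CertPassword)
$aadClientCertThumbprint = $Cert.Thumbprint

# Enable disk encryption using the client certificate thumbprint and a KEK.
Set-AzVMDiskEncryptionExtension -ResourceGroupName $VMRGName -VMName $VMName `
    -AadClientID $AADClientID -AadClientCertThumbprint $aadClientCertThumbprint `
    -DiskEncryptionKeyVaultUrl $diskEncryptionKeyVaultUrl -DiskEncryptionKeyVaultId $KeyVaultResourceId `
    -KeyEncryptionKeyUrl $keyEncryptionKeyUrl -KeyEncryptionKeyVaultId $KeyVaultResourceId
```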
Disable encryption
You can disable encryption using Azure PowerShell, the Azure CLI, or with a Resource Manager template.
Disable disk encryption with Azure PowerShell: To disable the encryption, use the Disable-AzVMDiskEncryption cmdlet.
Disable encryption with the Azure CLI: To disable encryption, use the az vm encryption disable command.
Next steps
Azure Disk Encryption overview
Setting up WinRM access for Virtual Machines in Azure Resource Manager
11/2/2020 • 3 minutes to read
Here are the steps you need to take to set up a VM with WinRM connectivity:
1. Create a Key Vault
2. Create a self-signed certificate
3. Upload your self-signed certificate to Key Vault
4. Get the URL for your self-signed certificate in the Key Vault
5. Reference your self-signed certificate's URL while creating a VM
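Steps 1 and 2 can be sketched as follows; the resource group, location, and certificate names are placeholders, and the certificate commands assume a Windows machine with the PKI module available:

```powershell
# Step 1: Create a key vault that is enabled for deployment.
New-AzResourceGroup -Name "MyResourceGroup" -Location "westus"
New-AzKeyVault -VaultName "ContosoKeyVault" -ResourceGroupName "MyResourceGroup" -Location "westus" -EnabledForDeployment

# Step 2: Create a self-signed certificate and export it to a .pfx file.
$certificateName = "somename"
$thumbprint = (New-SelfSignedCertificate -DnsName $certificateName -CertStoreLocation Cert:\CurrentUser\My -KeySpec KeyExchange).Thumbprint
$cert = Get-ChildItem -Path Cert:\CurrentUser\My\$thumbprint
$password = Read-Host -Prompt "Enter the certificate password" -AsSecureString
Export-PfxCertificate -Cert $cert -FilePath ".\$certificateName.pfx" -Password $password
```

Step 3 then wraps the exported .pfx bytes in the JSON payload shown below before uploading it as a secret.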
$certificateName = "somename"
$jsonObject = @"
{
"data": "$filecontentencoded",
"dataType" :"pfx",
"password": "<password>"
}
"@
$jsonObjectBytes = [System.Text.Encoding]::UTF8.GetBytes($jsonObject)
$jsonEncoded = [System.Convert]::ToBase64String($jsonObjectBytes)
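The PowerShell above base64-encodes a small JSON wrapper around the already base64-encoded .pfx bytes. The same construction can be illustrated as a runnable shell sketch; the dummy file stands in for a real certificate export, and GNU coreutils base64 is assumed:

```shell
# Dummy bytes stand in for a real .pfx export.
printf 'dummy-pfx-bytes' > mycert.pfx

# Base64-encode the file, wrap it in the JSON payload, then base64-encode the payload.
filecontentencoded=$(base64 -w0 mycert.pfx)
json=$(printf '{ "data": "%s", "dataType": "pfx", "password": "<password>" }' "$filecontentencoded")
jsonEncoded=$(printf '%s' "$json" | base64 -w0)

# Decoding the secret value yields the JSON wrapper again.
printf '%s' "$jsonEncoded" | base64 -d
```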
Step 4: Get the URL for your self-signed certificate in the Key Vault
The Microsoft.Compute resource provider needs a URL to the secret inside the Key Vault while provisioning the VM.
This enables the Microsoft.Compute resource provider to download the secret and create the equivalent certificate
on the VM.
NOTE
The URL of the secret needs to include the version. For example:
https://contosovault.vault.azure.net:443/secrets/contososecret/01h9db0df2cd4300a20ence585a6s7ve
Templates
You can get the link to the URL in the template by using the following code
PowerShell
You can get this URL by using the following PowerShell command
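A sketch of that command, using the vault and secret names from the example URL above; the Id property includes the secret version:

```powershell
$secretURL = (Get-AzKeyVaultSecret -VaultName "contosovault" -Name "contososecret").Id
$secretURL
```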
Enable-PSRemoting -Force
NOTE
If the above command doesn't work, make sure the WinRM service is running. You can check it with
Get-Service WinRM
Once the setup is done, you can connect to the VM by using the following command
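A sketch of the connection command; the public IP is a placeholder, and the session options skip certificate checks because the certificate is self-signed:

```powershell
$cred = Get-Credential
$sessionOption = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
Enter-PSSession -ConnectionUri "https://<public-ip-of-the-vm>:5986" -Credential $cred -SessionOption $sessionOption
```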
Access management for cloud resources is a critical function for any organization that is using the cloud. Azure
role-based access control (Azure RBAC) helps you manage who has access to Azure resources, what they can do
with those resources, and what areas they have access to.
Azure RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access
management of Azure resources.
This video provides a quick overview of Azure RBAC.
Role definition
A role definition is a collection of permissions. It's typically just called a role. A role definition lists the operations
that can be performed, such as read, write, and delete. Roles can be high-level, like owner, or specific, like virtual
machine reader.
Azure includes several built-in roles that you can use. For example, the Virtual Machine Contributor role allows a
user to create and manage virtual machines. If the built-in roles don't meet the specific needs of your
organization, you can create your own Azure custom roles.
This video provides a quick overview of built-in roles and custom roles.
Azure has data operations that enable you to grant access to data within an object. For example, if a user has read
data access to a storage account, then they can read the blobs or messages within that storage account.
For more information, see Understand Azure role definitions.
Scope
Scope is the set of resources that the access applies to. When you assign a role, you can further limit the actions
allowed by defining a scope. This is helpful if you want to make someone a Website Contributor, but only for one
resource group.
In Azure, you can specify a scope at four levels: management group, subscription, resource group, or resource.
Scopes are structured in a parent-child relationship. You can assign roles at any of these levels of scope.
You can create role assignments using the Azure portal, Azure CLI, Azure PowerShell, Azure SDKs, or REST APIs.
For more information, see Steps to add a role assignment.
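For example, a role assignment with the Azure CLI might look like the following; the user and resource group names are illustrative:

```azurecli
az role assignment create \
    --assignee "patlong@contoso.com" \
    --role "Virtual Machine Contributor" \
    --resource-group "pharma-sales"
```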
License requirements
Using this feature is free and included in your Azure subscription.
Next steps
Add or remove Azure role assignments using the Azure portal
Understand the different roles
Cloud Adoption Framework: Resource access management in Azure
Apply policies to Windows VMs with Azure Resource Manager
11/2/2020 • 3 minutes to read
By using policies, an organization can enforce various conventions and rules throughout the enterprise.
Enforcement of the desired behavior can help mitigate risk while contributing to the success of the organization. In
this article, we describe how you can use Azure Resource Manager policies to define the desired behavior for your
organization’s Virtual Machines.
For an introduction to policies, see What is Azure Policy?.
Use a wildcard to modify the preceding policy to allow any Windows Server Datacenter image:
{
"field": "Microsoft.Compute/imageSku",
"like": "*Datacenter"
}
Use anyOf to modify the preceding policy to allow any Windows Server 2012 R2 Datacenter or higher image:
{
"anyOf": [
{
"field": "Microsoft.Compute/imageSku",
"like": "2012-R2-Datacenter*"
},
{
"field": "Microsoft.Compute/imageSku",
"like": "2016-Datacenter*"
}
]
}
Managed disks
To require the use of managed disks, use the following policy:
{
"if": {
"anyOf": [
{
"allOf": [
{
"field": "type",
"equals": "Microsoft.Compute/virtualMachines"
},
{
"field": "Microsoft.Compute/virtualMachines/osDisk.uri",
"exists": true
}
]
},
{
"allOf": [
{
"field": "type",
"equals": "Microsoft.Compute/VirtualMachineScaleSets"
},
{
"anyOf": [
{
"field": "Microsoft.Compute/VirtualMachineScaleSets/osDisk.vhdContainers",
"exists": true
},
{
"field": "Microsoft.Compute/VirtualMachineScaleSets/osdisk.imageUrl",
"exists": true
}
]
}
]
}
]
},
"then": {
"effect": "deny"
}
}
{
"if": {
"allOf": [
{
"field": "type",
"in": [
"Microsoft.Compute/virtualMachines",
"Microsoft.Compute/VirtualMachineScaleSets"
]
},
{
"not": {
"field": "Microsoft.Compute/imageId",
"contains": "resourceGroups/CustomImage"
}
}
]
},
"then": {
"effect": "deny"
}
}
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "in": [
          "Microsoft.Compute/virtualMachines",
          "Microsoft.Compute/VirtualMachineScaleSets"
        ]
      },
      {
        "not": {
          "field": "Microsoft.Compute/imageId",
          "in": ["{imageId1}","{imageId2}"]
        }
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
{
"if": {
"allOf": [
{
"field": "type",
"in":[ "Microsoft.Compute/virtualMachines","Microsoft.Compute/VirtualMachineScaleSets"]
},
{
"field": "Microsoft.Compute/licenseType",
"exists": true
}
]
},
"then": {
"effect": "deny"
}
}
Next steps
After defining a policy rule (as shown in the preceding examples), you need to create the policy definition and
assign it to a scope. The scope can be a subscription, resource group, or resource. To assign policies, see Use
Azure portal to assign and manage resource policies, Use PowerShell to assign policies, or Use Azure CLI to
assign policies.
For an introduction to resource policies, see What is Azure Policy?.
For guidance on how enterprises can use Resource Manager to effectively manage subscriptions, see Azure
enterprise scaffold - prescriptive subscription governance.
Set up Key Vault for virtual machines using Azure PowerShell
11/2/2020 • 2 minutes to read
NOTE
Azure has two different deployment models you can use to create and work with resources: Azure Resource Manager and
classic. This article covers the use of the Resource Manager deployment model. We recommend the Resource Manager
deployment model for new deployments instead of the classic deployment model.
In the Azure Resource Manager stack, secrets/certificates are modeled as resources that are provided by the resource
provider of Key Vault. To learn more about Key Vault, see What is Azure Key Vault?
NOTE
1. In order for Key Vault to be used with Azure Resource Manager virtual machines, the EnabledForDeployment property
on Key Vault must be set to true. You can do this in various clients.
2. The Key Vault needs to be created in the same subscription and location as the Virtual Machine.
For existing key vaults, you can use this PowerShell cmdlet:
Then to enable Key Vault for use with template deployment, run the following command:
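A sketch of both commands, using the vault name from the template below and a placeholder resource group:

```powershell
# Allow the Microsoft.Compute resource provider to retrieve secrets when a VM references them.
Set-AzKeyVaultAccessPolicy -VaultName 'ContosoKeyVault' -ResourceGroupName 'ContosoResourceGroup' -EnabledForDeployment

# Allow Resource Manager to retrieve secrets during template deployment.
Set-AzKeyVaultAccessPolicy -VaultName 'ContosoKeyVault' -ResourceGroupName 'ContosoResourceGroup' -EnabledForTemplateDeployment
```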
{
"type": "Microsoft.KeyVault/vaults",
"name": "ContosoKeyVault",
"apiVersion": "2015-06-01",
"location": "<location-of-key-vault>",
"properties": {
"enabledForDeployment": true,
....
....
}
}
For other options that you can configure when you create a key vault by using templates, see Create a key vault.
What if an Azure service disruption impacts Azure VMs
11/2/2020 • 3 minutes to read
At Microsoft, we work hard to make sure that our services are always available to you when you need them. Forces
beyond our control sometimes impact us in ways that cause unplanned service disruptions.
Microsoft provides a Service Level Agreement (SLA) for its services as a commitment for uptime and connectivity.
The SLA for individual Azure services can be found at Azure Service Level Agreements.
Azure already has many built-in platform features that support highly available applications. For more about these
services, read Disaster recovery and high availability for Azure applications.
This article covers a true disaster recovery scenario, when a whole region experiences an outage due to major
natural disaster or widespread service interruption. These are rare occurrences, but you must prepare for the
possibility that there is an outage of an entire region. If an entire region experiences a service disruption, the locally
redundant copies of your data would temporarily be unavailable. If you have enabled geo-replication, three
additional copies of your Azure Storage blobs and tables are stored in a different region. In the event of a complete
regional outage or a disaster in which the primary region is not recoverable, Azure remaps all of the DNS entries to
the geo-replicated region.
To help you handle these rare occurrences, we provide the following guidance for Azure virtual machines in the
case of a service disruption of the entire region where your Azure virtual machine application is deployed.
NOTE
Be aware that you do not have any control over this process, and it will only occur for region-wide service disruptions.
Because of this, you must also rely on other application-specific backup strategies to achieve the highest level of availability.
For more information, see the section on Data strategies for disaster recovery.
Next steps
Start protecting your applications running on Azure virtual machines using Azure Site Recovery
To learn more about how to implement a disaster recovery and high availability strategy, see Disaster
recovery and high availability for Azure applications.
To develop a detailed technical understanding of a cloud platform’s capabilities, see Azure resiliency technical
guidance.
If the instructions are not clear, or if you would like Microsoft to do the operations on your behalf, contact
Customer Support.
What is the Azure Backup service?
11/2/2020 • 3 minutes to read
The Azure Backup service provides simple, secure, and cost-effective solutions to back up your data and recover it
from the Microsoft Azure cloud.
Next steps
Review the architecture and components for different backup scenarios.
Verify support requirements and limitations for backup, and for Azure VM backup.
Back up a virtual machine in Azure with the CLI
11/2/2020 • 5 minutes to read
The Azure CLI is used to create and manage Azure resources from the command line or in scripts. You can protect
your data by taking backups at regular intervals. Azure Backup creates recovery points that can be stored in
geo-redundant recovery vaults. This article details how to back up a virtual machine (VM) in Azure with the Azure CLI.
You can also perform these steps with Azure PowerShell or in the Azure portal.
This quickstart enables backup on an existing Azure VM. If you need to create a VM, you can create a VM with the
Azure CLI.
Select the Cloud Shell button on the menu bar at the upper
right in the Azure portal.
By default, the Recovery Services vault is set for Geo-Redundant storage. Geo-Redundant storage ensures your
backup data is replicated to a secondary Azure region that's hundreds of miles away from the primary region. If the
storage redundancy setting needs to be modified, use the az backup vault backup-properties set command.
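For example, to switch the vault to locally redundant storage, using the vault and resource group names from this quickstart:

```azurecli
az backup vault backup-properties set \
    --name myRecoveryServicesVault \
    --resource-group myResourceGroup \
    --backup-storage-redundancy "LocallyRedundant"
```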
NOTE
If the VM isn't in the same resource group as the vault, then myResourceGroup refers to the resource group where the
vault was created. Instead of the VM name, provide the VM ID as indicated below.
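A sketch of enabling backup by VM ID rather than VM name; the VM's resource group name is a placeholder:

```azurecli
az backup protection enable-for-vm \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --vm "$(az vm show --resource-group VMResourceGroup --name myVM --query id --output tsv)" \
    --policy-name DefaultPolicy
```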
IMPORTANT
While using the CLI to enable backup for multiple VMs at once, ensure that a single policy doesn't have more than 100
VMs associated with it. This is a recommended best practice. Currently, the CLI client doesn't explicitly block if there
are more than 100 VMs, but the check is planned to be added in the future.
The output is similar to the following example, which shows the backup job is InProgress:
When the Status of the backup job reports Completed, your VM is protected with Recovery Services and has a full
recovery point stored.
Clean up deployment
When no longer needed, you can disable protection on the VM, remove the restore points and Recovery Services
vault, then delete the resource group and associated VM resources. If you used an existing VM, you can skip the
final az group delete command to leave the resource group and VM in place.
If you want to try a Backup tutorial that explains how to restore data for your VM, go to Next steps.
az backup protection disable \
--resource-group myResourceGroup \
--vault-name myRecoveryServicesVault \
--container-name myVM \
--item-name myVM \
--delete-backup-data true
az backup vault delete \
--resource-group myResourceGroup \
--name myRecoveryServicesVault
az group delete --name myResourceGroup
Next steps
In this quickstart, you created a Recovery Services vault, enabled protection on a VM, and created the initial
recovery point. To learn more about Azure Backup and Recovery Services, continue to the tutorials.
Back up multiple Azure VMs
Use Azure portal to back up multiple virtual machines
11/2/2020 • 7 minutes to read
When you back up data in Azure, you store that data in an Azure resource called a Recovery Services vault. The
Recovery Services vault resource is available from the Settings menu of most Azure services. The benefit of having
the Recovery Services vault integrated into the Settings menu of most Azure services is the ease of backing up
data. However, working individually with each database or virtual machine in your business is tedious. What if you
want to back up the data for all virtual machines in one department, or in one location? It's easy to back up
multiple virtual machines by creating a backup policy and applying that policy to the desired virtual machines. This
tutorial explains how to:
Create a Recovery Services vault
Define a backup policy
Apply the backup policy to protect multiple virtual machines
Trigger an on-demand backup job for the protected virtual machines
2. In the All services dialog box, enter Recovery Services. The list of resources filters according to your input.
In the list of resources, select Recovery Services vaults.
The list of Recovery Services vaults in the subscription appears.
3. On the Recovery Services vaults dashboard, select Add.
6. It can take a while to create the Recovery Services vault. Monitor the status notifications in the
Notifications area at the upper-right corner of the portal. After your vault is created, it's visible in the list of
Recovery Services vaults. If you don't see your vault, select Refresh.
When you create a Recovery Services vault, by default the vault has geo-redundant storage. To provide data
resiliency, geo-redundant storage replicates the data multiple times across two Azure regions.
2. On the vault dashboard menu, select Backup to open the Backup menu.
3. On the Backup Goal menu, in the Where is your workload running drop-down menu, choose Azure.
From the What do you want to backup drop-down, choose Virtual machine, and select Backup.
These actions prepare the Recovery Services vault for interacting with a virtual machine. Recovery Services
vaults have a default policy that creates a restore point each day, and retains the restore points for 30 days.
4. To create a new policy, on the Backup policy menu, from the Choose backup policy drop-down menu,
select Create a new policy.
5. The Backup policy pane will open. Fill out the following details:
For Policy Name type Finance. Enter the following changes for the Backup policy:
For Backup frequency, set the time zone to Central Time. Since the sports complex is in Texas, the
owner wants the timing to be local. Leave the backup frequency set to Daily at 3:30 AM.
For Retention of daily backup point, set the period to 90 days.
For Retention of weekly backup point, use the Monday restore point and retain it for 52 weeks.
For Retention of monthly backup point, use the restore point from First Sunday of the month,
and retain it for 36 months.
Deselect the Retention of yearly backup point option. The leader of Finance doesn't want to keep
data longer than 36 months.
Select OK to create the backup policy.
After creating the backup policy, associate the policy with the virtual machines.
6. Under Virtual Machines, select Add.
7. The Select virtual machines pane will open. Select myVM and select OK to deploy the backup policy to
the virtual machines.
All virtual machines that are in the same location, and aren't already associated with a backup policy, appear.
myVMH1 and myVMR1 are selected to be associated with the Finance policy.
8. After the virtual machines have been chosen, select Enable Backup .
When the deployment completes, you'll receive a notification that deployment successfully completed.
Initial backup
You've enabled backup for the Recovery Services vaults, but an initial backup hasn't been created. It's a disaster
recovery best practice to trigger the first backup, so that your data is protected.
To run an on-demand backup job:
1. On the vault dashboard, select 3 under Backup Items , to open the Backup Items menu.
Clean up resources
If you plan to continue on to work with subsequent tutorials, don't clean up the resources created in this tutorial. If
you don't plan to continue, use the following steps to delete all resources created by this tutorial in the Azure
portal.
1. On the myRecoveryServicesVault dashboard, select 3 under Backup Items to open the Backup Items
menu.
2. On the Backup Items menu, select Azure Virtual Machine to open the list of virtual machines associated
with the vault.
The Backup Items list opens.
3. In the Backup Items menu, select the ellipsis to open the Context menu.
4. On the context menu, select Stop backup to open the Stop Backup menu.
5. In the Stop Backup menu, select the upper drop-down menu and choose Delete Backup Data .
6. In the Type the name of the Backup item dialog, type myVM.
7. Once the backup item is verified (a check mark appears), the Stop Backup button is enabled. Select Stop
Backup to stop the policy and delete the restore points.
NOTE
Deleted items are retained in the soft delete state for 14 days. Only after that period can the vault be deleted. For
more information, see Delete an Azure Backup Recovery Services vault.
Once the vault is deleted, you'll return to the list of Recovery Services vaults.
Next steps
In this tutorial, you used the Azure portal to:
Create a Recovery Services vault
Set the vault to protect virtual machines
Create a custom backup and retention policy
Assign the policy to protect multiple virtual machines
Trigger an on-demand backup for virtual machines
Continue to the next tutorial to restore an Azure virtual machine from disk.
Restore VMs using CLI
Restore a VM with Azure CLI
11/2/2020 • 9 minutes to read • Edit Online
Azure Backup creates recovery points that are stored in geo-redundant recovery vaults. When you restore from a
recovery point, you can restore the whole VM or individual files. This article explains how to restore a complete VM
using CLI. In this tutorial you learn how to:
List and select recovery points
Restore a disk from a recovery point
Create a VM from the restored disk
For information on using PowerShell to restore a disk and create a recovered VM, see Back up and restore Azure
VMs with PowerShell.
Select the Cloud Shell button on the menu bar at the upper
right in the Azure portal.
Prerequisites
This tutorial requires a Linux VM that has been protected with Azure Backup. To simulate an accidental VM deletion
and recovery process, you create a VM from a disk in a recovery point. If you need a Linux VM that has been
protected with Azure Backup, see Back up a virtual machine in Azure with the CLI.
Backup overview
When Azure initiates a backup, the backup extension on the VM takes a point-in-time snapshot. The backup
extension is installed on the VM when the first backup is requested. Azure Backup can also take a snapshot of the
underlying storage if the VM isn't running when the backup takes place.
By default, Azure Backup takes a file system consistent backup. Once Azure Backup takes the snapshot, the data is
transferred to the Recovery Services vault. To maximize efficiency, Azure Backup identifies and transfers only the
blocks of data that have changed since the previous backup.
When the data transfer is complete, the snapshot is removed and a recovery point is created.
Restore a VM disk
IMPORTANT
We strongly recommend using Azure CLI version 2.0.74 or later to get all the benefits of quick restore, including
managed disk restore. It's best to always use the latest version.
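The restore step below references a recovery point name obtained from az backup recoverypoint list. A sketch of that listing, assuming the resource group, vault, and VM names used as placeholders earlier in this series:

```azurecli-interactive
az backup recoverypoint list \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --backup-management-type AzureIaasVM \
    --container-name myVM \
    --item-name myVM \
    --output table
```

The Name column of the output provides the recovery point name used in the next step.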
2. Restore the disk from your recovery point with az backup restore restore-disks. Replace mystorageaccount
with the name of the storage account you created in the preceding command. Replace
myRecoveryPointName with the recovery point name you obtained in the output from the previous az
backup recoverypoint list command. Also provide the target resource group into which the managed
disks are restored.
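A sketch of that restore command, using the placeholder names from the surrounding text (the target resource group name here is an assumption):

```azurecli-interactive
az backup restore restore-disks \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --container-name myVM \
    --item-name myVM \
    --rp-name myRecoveryPointName \
    --storage-account mystorageaccount \
    --target-resource-group targetRG
```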
WARNING
If target-resource-group isn't provided, then the managed disks will be restored as unmanaged disks to the given
storage account. This will have significant consequences to the restore time since the time taken to restore the disks
entirely depends on the given storage account. You'll get the benefit of instant restore only when the target-
resource-group parameter is given. If the intention is to restore managed disks as unmanaged, then don't provide the
target-resource-group parameter and instead provide the restore-as-unmanaged-disk parameter,
as shown below. This parameter is available from az 3.4.0 onwards.
This will restore managed disks as unmanaged disks to the given storage account and won't be leveraging the
'instant' restore functionality. In future versions of CLI, it will be mandatory to provide either the target-resource-
group parameter or restore-as-unmanaged-disk parameter.
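A sketch of the unmanaged variant described in the warning; note that the CLI spells the flag --restore-as-unmanaged-disks, and the other names are this tutorial's placeholders:

```azurecli-interactive
az backup restore restore-disks \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --container-name myVM \
    --item-name myVM \
    --rp-name myRecoveryPointName \
    --storage-account mystorageaccount \
    --restore-as-unmanaged-disks
```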
Unmanaged disks restore
If the backed-up VM has unmanaged disks and if the intent is to restore disks from the recovery point, you first
provide an Azure storage account. This storage account is used to store the VM configuration and the deployment
template that can be later used to deploy the VM from the restored disks. By default, the unmanaged disks will be
restored to their original storage accounts. If you wish to restore all unmanaged disks to a single place, the
given storage account can also be used as a staging location for those disks.
In the steps that follow, the restored disk is used to create a VM.
1. To create a storage account, use az storage account create. The storage account name must be all lowercase,
and be globally unique. Replace mystorageaccount with your own unique name:
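A minimal sketch, assuming the resource group used earlier in this series and an example location (adjust both to your environment):

```azurecli-interactive
az storage account create \
    --resource-group myResourceGroup \
    --name mystorageaccount \
    --sku Standard_LRS \
    --location eastus
```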
2. Restore the disk from your recovery point with az backup restore restore-disks. Replace mystorageaccount
with the name of the storage account you created in the preceding command. Replace
myRecoveryPointName with the recovery point name you obtained in the output from the previous az
backup recoverypoint list command:
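A sketch of that restore (placeholders as above); with no extra flags, each unmanaged disk goes back to its original storage account:

```azurecli-interactive
az backup restore restore-disks \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --container-name myVM \
    --item-name myVM \
    --rp-name myRecoveryPointName \
    --storage-account mystorageaccount
```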
As mentioned above, the unmanaged disks will be restored to their original storage account. This provides the best
restore performance. But if all unmanaged disks need to be restored to the given storage account, then use the
relevant flag as shown below.
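The flag in question is --restore-to-staging-storage-account; a sketch with the same placeholders, which restores all unmanaged disks to the given storage account instead of their original accounts:

```azurecli-interactive
az backup restore restore-disks \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --container-name myVM \
    --item-name myVM \
    --rp-name myRecoveryPointName \
    --storage-account mystorageaccount \
    --restore-to-staging-storage-account true
```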
To monitor the status of the restore job, use [az backup job list](/cli/azure/backup/job#az-backup-job-list):

```azurecli-interactive
az backup job list \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --output table
```

The output is similar to the following example, which shows the restore job is InProgress:
When the Status of the restore job reports Completed, the necessary information (VM configuration and the
deployment template) has been restored to the storage account.
The output of this query gives all the details, but we're interested only in the storage account contents. We can use
the query capability of the Azure CLI to fetch the relevant details:
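For example, az backup job show with a JMESPath query can return just the property bag shown below (the job name is a placeholder; obtain it from the az backup job list output):

```azurecli-interactive
az backup job show \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --name myRestoreJobName \
    --query properties.extendedInfo.propertyBag
```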
{
"Config Blob Container Name": "myVM-daa1931199fd4a22ae601f46d8812276",
"Config Blob Name": "config-myVM-1fc2d55d-f0dc-4ca6-ad48-aca0fe5d0414.json",
"Config Blob Uri": "https://mystorageaccount.blob.core.windows.net/myVM-
daa1931199fd4a22ae601f46d8812276/config-appvm8-1fc2d55d-f0dc-4ca6-ad48-aca0519c0232.json",
"Job Type": "Recover disks",
"Recovery point time ": "12/25/2019 10:07:11 PM",
"Target Storage Account Name": "mystorageaccount",
"Target resource group": "mystorageaccountRG",
"Template Blob Uri": "https://mystorageaccount.blob.core.windows.net/myVM-
daa1931199fd4a22ae601f46d8812276/azuredeploy1fc2d55d-f0dc-4ca6-ad48-aca0519c0232.json"
}
"https://mystorageaccount.blob.core.windows.net/myVM-daa1931199fd4a22ae601f46d8812276/azuredeploy1fc2d55d-
f0dc-4ca6-ad48-aca0519c0232.json"
The template blob Uri will be of this format and extract the template name
https://<storageAccountName.blob.core.windows.net>/<containerName>/<templateName>
So, the template name from the above example will be azuredeploy1fc2d55d-f0dc-4ca6-ad48-aca0519c0232.json and
the container name is myVM-daa1931199fd4a22ae601f46d8812276
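Given that format, the container and template names can be pulled out in plain Bash; a sketch using the example URI above (pure string manipulation, no Azure calls):

```bash
# Example Template Blob Uri from the restore job output above.
url="https://mystorageaccount.blob.core.windows.net/myVM-daa1931199fd4a22ae601f46d8812276/azuredeploy1fc2d55d-f0dc-4ca6-ad48-aca0519c0232.json"

# Strip the scheme and host, leaving "<containerName>/<templateName>".
path="${url#https://*/}"

container="${path%%/*}"   # text before the first slash
template="${path##*/}"    # text after the last slash

echo "$container"   # myVM-daa1931199fd4a22ae601f46d8812276
echo "$template"    # azuredeploy1fc2d55d-f0dc-4ca6-ad48-aca0519c0232.json
```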
Now get the SAS token for this container and template, as detailed here:

```azurecli-interactive
expiretime=$(date -u -d '30 minutes' +%Y-%m-%dT%H:%MZ)
connection=$(az storage account show-connection-string \
    --resource-group mystorageaccountRG \
    --name mystorageaccount \
    --query connectionString)
token=$(az storage blob generate-sas \
    --container-name myVM-daa1931199fd4a22ae601f46d8812276 \
    --name azuredeploy1fc2d55d-f0dc-4ca6-ad48-aca0519c0232.json \
    --expiry $expiretime \
    --permissions r \
    --output tsv \
    --connection-string $connection)
url=$(az storage blob url \
    --container-name myVM-daa1931199fd4a22ae601f46d8812276 \
    --name azuredeploy1fc2d55d-f0dc-4ca6-ad48-aca0519c0232.json \
    --output tsv \
    --connection-string $connection)
```
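The deployment step itself can then use the SAS URL. A sketch with az deployment group create (the target resource group name is an assumption; older CLI releases used az group deployment create instead):

```azurecli-interactive
az deployment group create \
    --resource-group myRestoredRG \
    --template-uri "$url?$token"
```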
To confirm that your VM has been created from your recovered disk, list the VMs in your resource group with az
vm list as follows:
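A sketch of that listing (placeholder resource group):

```azurecli-interactive
az vm list \
    --resource-group myResourceGroup \
    --output table
```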
Next steps
In this tutorial, you restored a disk from a recovery point and then created a VM from the disk. You learned how to:
List and select recovery points
Restore a disk from a recovery point
Create a VM from the restored disk
Advance to the next tutorial to learn about restoring individual files from a recovery point.
Restore files to a virtual machine in Azure
Restore files to a virtual machine in Azure
11/2/2020 • 6 minutes to read • Edit Online
Azure Backup creates recovery points that are stored in geo-redundant recovery vaults. When you restore from a
recovery point, you can restore the whole VM or individual files. This article details how to restore individual files.
In this tutorial you learn how to:
List and select recovery points
Connect a recovery point to a VM
Restore files from a recovery point
Select the Cloud Shell button on the menu bar at the upper
right in the Azure portal.
Prerequisites
This tutorial requires a Linux VM that has been protected with Azure Backup. To simulate an accidental file deletion
and recovery process, you delete a page from a web server. If you need a Linux VM that runs a webserver and has
been protected with Azure Backup, see Back up a virtual machine in Azure with the CLI.
Backup overview
When Azure initiates a backup, the backup extension on the VM takes a point-in-time snapshot. The backup
extension is installed on the VM when the first backup is requested. Azure Backup can also take a snapshot of the
underlying storage if the VM isn't running when the backup takes place.
By default, Azure Backup takes a file system consistent backup. Once Azure Backup takes the snapshot, the data is
transferred to the Recovery Services vault. To maximize efficiency, Azure Backup identifies and transfers only the
blocks of data that have changed since the previous backup.
When the data transfer is complete, the snapshot is removed and a recovery point is created.
2. To confirm that your web site currently works, open a web browser to the public IP address of your VM.
Leave the web browser window open.
3. Connect to your VM with SSH. Replace publicIpAddress with the public IP address that you obtained in a
previous command:
ssh publicIpAddress
4. Delete the default page from the web server at /var/www/html/index.nginx-debian.html as follows:
sudo rm /var/www/html/index.nginx-debian.html
5. In your web browser, refresh the web page. The web site no longer loads the page, as shown in the following
example:
6. Close the SSH session to your VM as follows:
exit
2. To obtain the script that connects, or mounts, the recovery point to your VM, use az backup restore files
mount-rp. The following example obtains the script for the VM named myVM that's protected in
myRecoveryServicesVault.
Replace myRecoveryPointName with the name of the recovery point that you obtained in the preceding
command:
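A sketch of that call, assuming the vault, VM, and recovery point names used as placeholders in this series:

```azurecli-interactive
az backup restore files mount-rp \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --container-name myVM \
    --item-name myVM \
    --rp-name myRecoveryPointName
```

The command generates the recovery script and displays the password needed later when running it.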
3. To transfer the script to your VM, use Secure Copy (SCP). Provide the name of your downloaded script, and
replace publicIpAddress with the public IP address of your VM. Make sure you include the trailing : at the
end of the SCP command as follows:
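For example, assuming a VM user named azureuser (an assumption; substitute your own admin account, script name, and IP address):

```bash
# The trailing ":" copies the script to the user's home directory on the VM.
scp myVM_we_1571974050985163527.sh azureuser@publicIpAddress:
```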
NOTE
Check here to see if you can run the script on your VM before continuing.
1. Connect to your VM with SSH. Replace publicIpAddress with the public IP address of your VM as follows:
ssh publicIpAddress
2. To allow your script to run correctly, add execute permissions with chmod . Enter the name of your own
script:
chmod +x myVM_we_1571974050985163527.sh
3. To mount the recovery point, run the script. Enter the name of your own script:
./myVM_we_1571974050985163527.sh
As the script runs, you're prompted to enter a password to access the recovery point. Enter the password
shown in the output from the previous az backup restore files mount-rp command that generated the
recovery script.
The output from the script gives you the path for the recovery point. The following example output shows
that the recovery point is mounted at /home/azureuser/myVM-20170919213536/Volume1:
Connection succeeded!
Please wait while we attach volumes of the recovery point to this machine...
************ Volumes of the recovery point and their mount paths on this machine ************
4. Use cp to copy the NGINX default web page from the mounted recovery point back to the original file
location. Replace the /home/azureuser/myVM-20170919213536/Volume1 mount point with your own
location:
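A sketch of the copy, assuming the example mount point above and the NGINX document root used earlier in this tutorial:

```bash
# Copy the deleted page from the mounted recovery point back into place.
sudo cp /home/azureuser/myVM-20170919213536/Volume1/var/www/html/index.nginx-debian.html /var/www/html/
```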
5. In your web browser, refresh the web page. The web site now loads correctly again, as shown in the
following example:
6. Close the SSH session to your VM as follows:
exit
7. Unmount the recovery point from your VM with az backup restore files unmount-rp. The following example
unmounts the recovery point from the VM named myVM in myRecoveryServicesVault.
Replace myRecoveryPointName with the name of your recovery point that you obtained in the previous
commands:
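A sketch of the unmount call, using the same placeholder names as the mount step:

```azurecli-interactive
az backup restore files unmount-rp \
    --resource-group myResourceGroup \
    --vault-name myRecoveryServicesVault \
    --container-name myVM \
    --item-name myVM \
    --rp-name myRecoveryPointName
```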
Next steps
In this tutorial, you connected a recovery point to a VM and restored files for a web server. You learned how to:
List and select recovery points
Connect a recovery point to a VM
Restore files from a recovery point
Advance to the next tutorial to learn about how to back up Windows Server to Azure.
Back up Windows Server to Azure
About Site Recovery
11/2/2020 • 3 minutes to read • Edit Online
Welcome to the Azure Site Recovery service! This article provides a quick service overview.
As an organization, you need to adopt a business continuity and disaster recovery (BCDR) strategy that keeps your
data safe, and your apps and workloads online, when planned and unplanned outages occur.
Azure Recovery Services contributes to your BCDR strategy:
Site Recovery service: Site Recovery helps ensure business continuity by keeping business apps and
workloads running during outages. Site Recovery replicates workloads running on physical and virtual
machines (VMs) from a primary site to a secondary location. When an outage occurs at your primary site, you
fail over to the secondary location, and access apps from there. After the primary location is running again, you
can fail back to it.
Backup service: The Azure Backup service keeps your data safe and recoverable.
Site Recovery can manage replication for:
Azure VMs replicating between Azure regions.
On-premises VMs, Azure Stack VMs, and physical servers.
| FEATURE | DETAILS |
| --- | --- |
| Simple BCDR solution | Using Site Recovery, you can set up and manage replication, failover, and failback from a single location in the Azure portal. |
| Azure VM replication | You can set up disaster recovery of Azure VMs from a primary region to a secondary region. |
| On-premises VM replication | You can replicate on-premises VMs and physical servers to Azure, or to a secondary on-premises datacenter. Replication to Azure eliminates the cost and complexity of maintaining a secondary datacenter. |
| Workload replication | Replicate any workload running on supported Azure VMs, on-premises Hyper-V and VMware VMs, and Windows/Linux physical servers. |
| RTO and RPO targets | Keep recovery time objectives (RTO) and recovery point objectives (RPO) within organizational limits. Site Recovery provides continuous replication for Azure VMs and VMware VMs, and replication frequency as low as 30 seconds for Hyper-V. You can reduce RTO further by integrating with Azure Traffic Manager. |
| Keep apps consistent over failover | You can replicate using recovery points with application-consistent snapshots. These snapshots capture disk data, all data in memory, and all transactions in process. |
| Testing without disruption | You can easily run disaster recovery drills, without affecting ongoing replication. |
| Flexible failovers | You can run planned failovers for expected outages with zero data loss, or unplanned failovers with minimal data loss (depending on replication frequency) for unexpected disasters. You can easily fail back to your primary site when it's available again. |
| Customized recovery plans | Using recovery plans, you can customize and sequence the failover and recovery of multi-tier applications running on multiple VMs. You group machines together in a recovery plan, and optionally add scripts and manual actions. Recovery plans can be integrated with Azure Automation runbooks. |
| BCDR integration | Site Recovery integrates with other BCDR technologies. For example, you can use Site Recovery to protect the SQL Server backend of corporate workloads, with native support for SQL Server AlwaysOn, to manage the failover of availability groups. |
| Network integration | Site Recovery integrates with Azure for application network management. For example, to reserve IP addresses, configure load balancers, and use Azure Traffic Manager for efficient network switchovers. |
| Replication scenarios | Replicate Azure VMs from one Azure region to another. |
Next steps
Read more about workload support.
Get started with Azure VM replication between regions.
Tutorial: Set up disaster recovery for Azure VMs
11/6/2020 • 5 minutes to read • Edit Online
This tutorial shows you how to set up disaster recovery for Azure VMs using Azure Site Recovery. In this article,
you learn how to:
Verify Azure settings and permissions
Prepare VMs you want to replicate
Create a Recovery Services vault
Enable VM replication
When you enable replication for a VM to set up disaster recovery, the Site Recovery Mobility service extension
installs on the VM, and registers it with Azure Site Recovery. During replication, VM disk writes are sent to a cache
storage account in the source region. Data is sent from there to the target region, and recovery points are
generated from the data. When you fail over a VM during disaster recovery, a recovery point is used to restore the
VM in the target region.
NOTE
Tutorials provide instructions with the simplest default settings. If you want to set up Azure VM disaster recovery with
customized settings, review this article.
If you don’t have an Azure subscription, create a free account before you begin.
Prerequisites
Before you start this tutorial:
Review supported regions. You can set up disaster recovery for Azure VMs between any two regions in the
same geography.
You need one or more Azure VMs. Verify that Windows or Linux VMs are supported.
Review VM compute, storage, and networking requirements.
This tutorial presumes that VMs aren't encrypted. If you want to set up disaster recovery for encrypted VMs,
follow this article.
Prepare VMs
Make sure VMs have outbound connectivity, and the latest root certificates.
Set up VM connectivity
VMs that you want to replicate need outbound network connectivity.
NOTE
Site Recovery doesn't support using an authentication proxy to control network connectivity.
| TAG | ALLOW |
| --- | --- |
| Storage tag | Allows data to be written from the VM to the cache storage account. |
| Azure AD tag | Allows access to all IP addresses that correspond to Azure AD. |
| AzureSiteRecovery tag | Allows access to the Site Recovery service in any region. |
| GuestAndHybridManagement tag | Use if you want to automatically upgrade the Site Recovery Mobility agent that's running on VMs enabled for replication. |
Enable replication
Select the source settings, and enable VM replication.
Select source settings
1. In the vault > Site Recovery page, under Azure virtual machines, select Enable replication.
2. In Source > Source location, select the source Azure region in which VMs are currently running.
3. In Azure virtual machine deployment model, leave the default Resource Manager setting.
4. In Source subscription, select the subscription in which VMs are running. You can select any subscription
that's in the same Azure Active Directory (AD) tenant as the vault.
5. In Source resource group, select the resource group containing the VMs.
6. In Disaster recovery between availability zones, leave the default No setting.
7. Select Next .
Select the VMs
Site Recovery retrieves the VMs associated with the selected subscription/resource group.
1. In Virtual Machines, select the VMs you want to enable for disaster recovery.
2. Select Next .
Review replication settings
1. In Replication settings , review the settings. Site Recovery creates default settings/policy for the target
region. For the purposes of this tutorial, we use the default settings.
2. Select Enable replication .
3. Track replication progress in the notifications.
4. The VMs you enable appear on the vault > Replicated items page.
Next steps
In this tutorial, you enabled disaster recovery for an Azure VM. Now, run a drill to check that failover works as
expected.
Run a disaster recovery drill
Tutorial: Run a disaster recovery drill for Azure VMs
11/6/2020 • 2 minutes to read • Edit Online
Learn how to run a disaster recovery drill to another Azure region, for Azure VMs replicating with Azure Site
Recovery. In this article, you:
Verify prerequisites
Check VM settings before the drill
Run a test failover
Clean up after the drill
NOTE
This tutorial provides minimal steps for running a disaster recovery drill. If you want to run a drill with full infrastructure
testing, learn about Azure VM networking, automation, and troubleshooting.
Prerequisites
Before you start this tutorial, you must enable disaster recovery for one or more Azure VMs. To do this, complete
the first tutorial in this series.
Verify VM settings
1. In the vault > Replicated items, select the VM.
2. On the Overview page, check that the VM is protected and healthy.
3. When you run a test failover, you select an Azure virtual network in the target region. The Azure VM created
after failover is placed in this network.
In this tutorial, we select an existing network when we run the test failover.
We recommend you choose a non-production network for the drill, so that IP addresses and networking
components remain available in production networks.
You can also preconfigure network settings to be used for test failover. Granular settings you can assign
for each NIC include subnet, private IP address, public IP address, load balancer, and network security
group. We're not using this method here, but you can review this article to learn more.
Clean up resources
1. In the Essentials page, select Cleanup test failover .
2. In Test failover cleanup > Notes , record and save any observations associated with the test failover.
3. Select Testing is complete to delete VMs created during the test failover.
Learn how to fail over Azure VMs that are enabled for disaster recovery with Azure Site Recovery, to a secondary
Azure region. After failover, you reprotect VMs in the target region so that they replicate back to the primary
region. In this article, you learn how to:
Check prerequisites
Verify VM settings
Run a failover to the secondary region
Start replicating the VM back to the primary region.
NOTE
This tutorial shows you how to fail over VMs with minimal steps. If you want to run a failover with full settings, learn about
Azure VM networking, automation, and troubleshooting.
Prerequisites
Before you start this tutorial, you should have:
1. Set up replication for one or more Azure VMs. If you haven't, complete the first tutorial in this series to do that.
2. We recommend you run a disaster recovery drill for replicated VMs. Running a drill before you run a full
failover helps ensure everything works as expected, without impacting your production environment.
2. On the VM Overview page, check that the VM is protected and healthy, before you run a failover.
Run a failover
1. On the VM Overview page, select Failover.
2. In Failover , choose a recovery point. The Azure VM in the target region is created using data from this
recovery point.
Latest processed: Uses the latest recovery point processed by Site Recovery. The time stamp is shown.
No time is spent processing data, so it provides a low recovery time objective (RTO).
Latest: Processes all the data sent to Site Recovery, to create a recovery point for each VM before failing
over to it. Provides the lowest recovery point objective (RPO), because all data is replicated to Site
Recovery when the failover is triggered.
Latest app-consistent: This option fails over VMs to the latest app-consistent recovery point. The time
stamp is shown.
Custom: Fail over to a particular recovery point. Custom is only available when you fail over a single VM,
and don't use a recovery plan.
NOTE
If you added a disk to a VM after you enabled replication, replication points show the disks available for recovery. For
example, a replication point created before you added a second disk will show as "1 of 2 disks".
3. Select Shut down machine before beginning failover if you want Site Recovery to try to shut down the
source VMs before starting failover. Shutdown helps to ensure no data loss. Failover continues even if
shutdown fails.
8. In Commit , select OK to confirm. Commit deletes all the available recovery points for the VM in Site
Recovery, and you won't be able to change the recovery point.
9. Monitor the commit progress in notifications.
10. Site Recovery doesn't clean up the source VM after failover. You need to do that manually.
Reprotect the VM
After failover, you reprotect the VM in the secondary region, so that it replicates back to the primary region.
1. Make sure that VM Status is Failover committed before you start.
2. Check that the primary region is available, and that you have permissions to create VMs in it.
3. On the VM Overview page, select Re-Protect.
4. In Re-protect, verify the replication direction (secondary to primary region), and review the target settings
for the primary region. Resources marked as new are created by Site Recovery as part of the reprotect
operation.
5. Select OK to start the reprotect process. The process sends initial data to the target location, and then
replicates delta information for the VMs to the target.
6. Monitor reprotect progress in the notifications.
Next steps
In this tutorial, you failed over from the primary region to the secondary, and started replicating VMs back to the
primary region. Now you can fail back from the secondary region to the primary.
Fail back to the primary region
Understanding Azure virtual machine usage
11/2/2020 • 7 minutes to read • Edit Online
By analyzing your Azure usage data, you can gain powerful consumption insights that enable better cost management and allocation
throughout your organization. This document provides a deep dive into your Azure Compute consumption details. For more details on general Azure
usage, navigate to Understanding your bill.
| FIELD | MEANING | EXAMPLE VALUES |
| --- | --- | --- |
| Usage Date | The date when the resource was used. | 11/23/2017 |
| Meter ID | Identifies the top-level service to which this usage belongs. | Virtual Machines |
| Meter Name | Specific for each service in Azure. For compute, it is always "Compute Hours". | Compute Hours |
| Meter Region | Identifies the location of the datacenter for certain services that are priced based on datacenter location. | JA East |
| Unit | Identifies the unit in which the service is charged. Compute resources are billed per hour. | Hours |
| Consumed | The amount of the resource that has been consumed for that day. For Compute, we bill for each minute the VM ran for a given hour (up to 6 decimals of accuracy). | 1, 0.5 |
| Consumed Service | The Azure platform service that you used. | Microsoft.Compute |
| Resource Group | The resource group in which the deployed resource is running. For more information, see Azure Resource Manager overview. | MyRG |
| Instance ID | The identifier for the resource. The identifier contains the name you specify for the resource when it was created. For VMs, the Instance ID contains the SubscriptionId, ResourceGroupName, and VMName (or scale set name for scale set usage). | /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/ or /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachineSc |
| Tags | Tags you assign to the resource. Use tags to group billing records. Learn how to tag your Virtual Machines using the CLI or PowerShell. This is available for Resource Manager VMs only. | {"myDepartment":"RD","myUser":"myName"} |
| Additional Info | Service-specific metadata. For VMs, we populate the following data in the additional info field: Image Type - the specific image that you ran (find the full list of supported strings below under Image Types); Service Type - the size that you deployed. | Virtual Machines: {"ImageType":"Canonical","ServiceType":"Standard_DS1_v2","VMName… "UsageType":"ComputeHR"} Virtual Machine Scale Sets: {"ImageType":"Canonical","ServiceType":"Standard_DS1_v2","VMName… "UsageType":"ComputeHR"} |
Image Type
For some images in the Azure gallery, the image type is populated in the Additional Info field. This enables users to understand and track what they
have deployed on their virtual machine. The following values are populated in this field, based on the image you have deployed:
BitRock
Canonical
FreeBSD
Open Logic
Oracle
SLES for SAP
SQL Server 14 Preview on Windows Server 2012 R2 Preview
SUSE
SUSE Premium
StorSimple Cloud Appliance
Red Hat
Red Hat for SAP Business Applications
Red Hat for SAP HANA
Windows Client BYOL
Windows Server BYOL
Windows Server Preview
Service Type
The service type field in the Additional Info field corresponds to the exact VM size you deployed. Premium storage VMs (SSD-based) and non-premium
storage VMs (HDD-based) are priced the same. If you deploy an SSD-based size, like Standard_DS2_v2, you see the non-SSD size (Standard_D2_v2)
in the Meter Sub-Category column and the SSD size (Standard_DS2_v2) in the Additional Info field.
Region Names
The region name populated in the Resource Location field in the usage details differs from the region name used in Azure Resource Manager. Here is a mapping between the region values:

RESOURCE MANAGER REGION NAME    RESOURCE LOCATION IN USAGE DETAILS
australiaeast                   AU East
australiasoutheast              AU Southeast
brazilsouth                     BR South
CanadaCentral                   CA Central
CanadaEast                      CA East
CentralIndia                    IN Central
centralus                       Central US
eastus                          East US
eastus2                         East US 2
GermanyCentral                  DE Central
GermanyNortheast                DE Northeast
japaneast                       JA East
japanwest                       JA West
KoreaCentral                    KR Central
KoreaSouth                      KR South
SouthIndia                      IN South
UKNorth                         UK North
uksouth                         UK South
UKSouth2                        UK South 2
ukwest                          UK West
WestIndia                       IN West
westus                          West US
westus2                         West US 2
Next steps
To learn more about your usage details, see Understand your bill for Microsoft Azure.
Common PowerShell commands for creating and
managing Azure Virtual Machines
11/2/2020 • 2 minutes to read
This article covers some of the Azure PowerShell commands that you can use to create and manage virtual
machines in your Azure subscription. For more detailed help with specific command-line switches and options, you
can use the Get-Help command.
These variables might be useful if you are running more than one of the commands in this article:
$location - The location of the virtual machine. You can use Get-AzLocation to find a geographical region that
works for you.
$myResourceGroup - The name of the resource group that contains the virtual machine.
$myVM - The name of the virtual machine.
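For example, you might set these variables up front (the values shown here are placeholders, not defaults):

```powershell
# Placeholder values -- substitute your own region, resource group, and VM name
$location = "eastus"
$myResourceGroup = "myResourceGroup"
$myVM = "myVM"
```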
Create a VM - simplified
TASK    COMMAND
Create a VM configuration
TASK    COMMAND
Manage VMs
TASK    COMMAND
Next steps
See the basic steps for creating a virtual machine in Create a Windows VM using Resource Manager and
PowerShell.
Resize a Windows VM
11/2/2020 • 2 minutes to read
$resourceGroup = "myResourceGroup"
$vmName = "myVM"
List the VM sizes that are available on the hardware cluster where the VM is hosted.
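A sketch of that query using the Get-AzVMSize cmdlet, assuming the $resourceGroup and $vmName variables defined above:

```powershell
# Lists every VM size available on the hardware cluster hosting this VM
Get-AzVMSize -ResourceGroupName $resourceGroup -VMName $vmName
```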
If the size you want is listed, run the following commands to resize the VM. If the desired size is not listed, continue to the deallocate steps that follow.
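A minimal sketch of resizing in place, assuming the target size is available on the current cluster (replace <newVMsize> with a size from the previous listing):

```powershell
# Resize in place; the VM reboots as part of the update
$vm = Get-AzVM -ResourceGroupName $resourceGroup -VMName $vmName
$vm.HardwareProfile.VmSize = "<newVMsize>"
Update-AzVM -VM $vm -ResourceGroupName $resourceGroup
```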
If the size you want is not listed, run the following commands to deallocate the VM, resize it, and restart the VM.
Replace <newVMsize> with the size you want.
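A sketch of the deallocate-resize-restart sequence. Deallocating frees the VM from its current hardware cluster, so sizes not offered on that cluster become available:

```powershell
# Deallocate, resize, and restart the VM
Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName -Force
$vm = Get-AzVM -ResourceGroupName $resourceGroup -VMName $vmName
$vm.HardwareProfile.VmSize = "<newVMsize>"
Update-AzVM -VM $vm -ResourceGroupName $resourceGroup
Start-AzVM -ResourceGroupName $resourceGroup -Name $vmName
```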
$resourceGroup = "myResourceGroup"
$vmName = "myVM"
List the VM sizes that are available on the hardware cluster where the VM is hosted.
If the desired size is listed, run the following commands to resize the VM. If it is not listed, go to the next section.
If the size you want is not listed, continue with the following steps to deallocate all VMs in the availability set, resize
VMs, and restart them.
Stop all VMs in the availability set.
$availabilitySetName = "<availabilitySetName>"
$as = Get-AzAvailabilitySet -ResourceGroupName $resourceGroup -Name $availabilitySetName
$virtualMachines = $as.VirtualMachinesReferences | Get-AzResource | Get-AzVM
$virtualMachines | Stop-AzVM -Force -NoWait
$availabilitySetName = "<availabilitySetName>"
$newSize = "<newVmSize>"
$as = Get-AzAvailabilitySet -ResourceGroupName $resourceGroup -Name $availabilitySetName
$virtualMachines = $as.VirtualMachinesReferences | Get-AzResource | Get-AzVM
$virtualMachines | Foreach-Object { $_.HardwareProfile.VmSize = $newSize }
$virtualMachines | Update-AzVM
$virtualMachines | Start-AzVM
Next steps
For additional scalability, run multiple VM instances and scale out. For more information, see Automatically scale
Windows machines in a Virtual Machine Scale Set.
Change the OS disk used by an Azure VM using
PowerShell
11/2/2020 • 2 minutes to read
If you have an existing VM, but you want to swap the disk for a backup disk or another OS disk, you can use Azure
PowerShell to swap the OS disks. You don't have to delete and recreate the VM. You can even use a managed disk in
another resource group, as long as it isn't already in use.
The VM does need to be stopped/deallocated; then the resource ID of the managed disk can be replaced with the resource ID of a different managed disk.
Make sure that the VM size and storage type are compatible with the disk you want to attach. For example, if the disk you want to use is in Premium Storage, then the VM needs to be capable of Premium Storage (like a DS-series size). Both disks must also be the same size. Also, ensure that you're not mixing an unencrypted VM with an encrypted OS disk; this is not supported. If the VM doesn't use Azure Disk Encryption, then the OS disk being swapped in shouldn't be using Azure Disk Encryption.
Get a list of disks in a resource group using Get-AzDisk
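For example (myResourceGroup is a placeholder):

```powershell
# Show the names of all managed disks in the resource group
Get-AzDisk -ResourceGroupName myResourceGroup | Format-Table -Property Name
```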
When you have the name of the disk that you would like to use, set that disk as the OS disk for the VM. This example stops/deallocates the VM named myVM and assigns the disk named newDisk as the new OS disk.
# Get the VM and stop/deallocate it
$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myVM
Stop-AzVM -ResourceGroupName myResourceGroup -Name $vm.Name -Force
# Point the VM configuration at the new disk and update the VM
$disk = Get-AzDisk -ResourceGroupName myResourceGroup -Name newDisk
Set-AzVMOSDisk -VM $vm -ManagedDiskId $disk.Id -Name $disk.Name
Update-AzVM -ResourceGroupName myResourceGroup -VM $vm
# Start the VM
Start-AzVM -Name $vm.Name -ResourceGroupName myResourceGroup
Next steps
To create a copy of a disk, see Snapshot a disk.
How to tag a virtual machine in Azure using
PowerShell
11/2/2020 • 4 minutes to read
This article describes different ways to tag a Windows virtual machine in Azure through the Resource Manager
deployment model. Tags are user-defined key/value pairs which can be placed directly on a resource or a resource
group. Azure currently supports up to 50 tags per resource and resource group. Tags may be placed on a resource
at the time of creation or added to an existing resource. Please note that tags are supported for resources created
via the Resource Manager deployment model only. If you want to tag a virtual machine using the Azure CLI, see
How to tag a virtual machine in Azure using the Azure CLI.
This template includes the following tags: Department, Application, and Created By. You can add/edit these tags
directly in the template if you would like different tag names.
As you can see, the tags are defined as key/value pairs, separated by a colon (:). The tags must be defined in this
format:
"tags": {
"Key1" : "Value1",
"Key2" : "Value2"
}
Save the template file after you finish editing it with the tags of your choice.
Next, in the Edit Parameters section, you can fill out the values for your tags.
Click Create to deploy this template with your tag values.
Add a new tag through the portal by defining your own Key/Value pair, and save it.
Your new tag should now appear in the list of tags for your resource.
Tagging with PowerShell
To create, add, and delete tags through PowerShell, you first need to set up your PowerShell environment with
Azure Resource Manager. Once you have completed the setup, you can place tags on Compute, Network, and
Storage resources at creation or after the resource is created via PowerShell. This article will concentrate on
viewing/editing tags placed on Virtual Machines.
First, navigate to a Virtual Machine through the Get-AzVM cmdlet.
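For example (the resource group and VM names are placeholders):

```powershell
# Retrieve the VM; its Tags property is shown in the output
Get-AzVM -ResourceGroupName "MyResourceGroup" -Name "MyTestVM"
```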
If your Virtual Machine already contains tags, you will then see all the tags on your resource:
Tags : {
"Application": "MyApp1",
"Created By": "MyName",
"Department": "MyDepartment",
"Environment": "Production"
}
If you would like to add tags through PowerShell, you can use the Set-AzResource command. Note that when updating tags through PowerShell, tags are updated as a whole. So if you are adding one tag to a resource that already has tags, you need to include all the tags that you want placed on the resource. Below is an example of how to add tags to a resource through PowerShell cmdlets.
This first cmdlet sets all of the tags placed on MyTestVM to the $tags variable, using the Get-AzResource and Tags
property.
The second command displays the tags for the given variable.
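A sketch of those two commands (MyResourceGroup and MyTestVM are placeholders):

```powershell
# Capture the resource's current tags, then display them
$tags = (Get-AzResource -ResourceGroupName MyResourceGroup -Name MyTestVM).Tags
$tags
```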
PS C:\> $tags
Key Value
---- -----
Department MyDepartment
Application MyApp1
Created By MyName
Environment Production
The third command adds an additional tag to the $tags variable. Note the use of the += to append the new
key/value pair to the $tags list.
The fourth command sets all of the tags defined in the $tags variable to the given resource. In this case, it is
MyTestVM.
The fifth command displays all of the tags on the resource. As you can see, Location is now defined as a tag with
MyLocation as the value.
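A sketch of commands three through five, assuming the same placeholder names:

```powershell
# Append the new key/value pair, write the full tag set back, then re-read it
$tags += @{Location="MyLocation"}
Set-AzResource -ResourceGroupName MyResourceGroup -Name MyTestVM `
    -ResourceType "Microsoft.Compute/VirtualMachines" -Tag $tags -Force
(Get-AzResource -ResourceGroupName MyResourceGroup -Name MyTestVM).Tags
```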
Key Value
---- -----
Department MyDepartment
Application MyApp1
Created By MyName
Environment Production
Location MyLocation
To learn more about tagging through PowerShell, check out the Azure Resource Cmdlets.
By analyzing these tags along with usage, organizations will be able to gain new insights into their consumption
data.
Next steps
To learn more about tagging your Azure resources, see Azure Resource Manager Overview and Using Tags to
organize your Azure Resources.
To see how tags can help you manage your use of Azure resources, see Understanding your Azure Bill and Gain
insights into your Microsoft Azure resource consumption.
Time sync for Windows VMs in Azure
11/2/2020 • 7 minutes to read
Time sync is important for security and event correlation, and it is sometimes used to implement distributed transactions. Time accuracy between multiple computer systems is achieved through synchronization. Synchronization can be affected by multiple things, including reboots and network traffic between the time source and the computer fetching the time.
Azure is now backed by infrastructure running Windows Server 2016. Windows Server 2016 has improved
algorithms used to correct time and condition the local clock to synchronize with UTC. Windows Server 2016 also
improved the VMICTimeSync service that governs how VMs sync with the host for accurate time. Improvements
include more accurate initial time on VM start or VM restore and interrupt latency correction for samples provided
to Windows Time (W32time).
NOTE
For a quick overview of Windows Time service, take a look at this high-level overview video.
For more information, see Accurate time for Windows Server 2016.
Overview
Accuracy for a computer clock is gauged on how close the computer clock is to the Coordinated Universal Time
(UTC) time standard. UTC is defined by a multinational sample of precise atomic clocks that can only be off by one
second in 300 years. But reading UTC directly requires specialized hardware. Instead, time servers are synced to UTC and are accessed from other computers to provide scalability and robustness. Every computer runs a time synchronization service that knows which time servers to use, periodically checks whether the computer clock needs to be corrected, and adjusts the time if needed.
Azure hosts are synchronized to internal Microsoft time servers that take their time from Microsoft-owned Stratum
1 devices with GPS antennas. Virtual machines in Azure can depend on their host to pass accurate time (host time) to the VM, get time directly from a time server, or use a combination of both.
Virtual machine interactions with the host can also affect the clock. During memory-preserving maintenance, VMs are paused for up to 30 seconds. For example, if the VM clock shows 10:00:00 AM when maintenance begins and the pause lasts 28 seconds, the clock still shows 10:00:00 AM when the VM resumes, which is 28 seconds off. To correct for this, the VMICTimeSync service monitors what is happening on the host and prompts for changes to happen on the VMs to compensate.
The VMICTimeSync service operates in either sample or sync mode and will only influence the clock forward. In
sample mode, which requires W32time to be running, the VMICTimeSync service polls the host every 5 seconds
and provides time samples to W32time. Approximately every 30 seconds, the W32time service takes the latest
time sample and uses it to influence the guest's clock. Sync mode activates if a guest has been resumed or if a
guest's clock drifts more than 5 seconds behind the host's clock. In cases where the W32time service is properly
running, the latter case should never happen.
Without time synchronization working, the clock on the VM would accumulate errors. When there is only one VM, the effect might not be significant unless the workload requires highly accurate timekeeping. But in most cases, multiple interconnected VMs use time to track transactions, and the time needs to be consistent throughout the entire deployment. When the time between VMs differs, you could see the following effects:
Authentication will fail. Security protocols like Kerberos or certificate-dependent technology rely on time being consistent across the systems.
It's very hard to figure out what has happened in a system if logs (or other data) don't agree on time. The same event would look like it occurred at different times, making correlation difficult.
If the clock is off, billing could be calculated incorrectly.
The best results for Windows deployments are achieved by using Windows Server 2016 as the guest operating
system, which ensures you can use the latest improvements in time synchronization.
Configuration options
There are three options for configuring time sync for your Windows VMs hosted in Azure:
Host time and time.windows.com. This is the default configuration used in Azure Marketplace images.
Host-only.
Use another, external time server with or without using host time.
Use the default
By default Windows OS VM images are configured for w32time to sync from two sources:
The NtpClient provider, which gets information from time.windows.com.
The VMICTimeSync service, used to communicate the host time to the VMs and make corrections after the VM
is paused for maintenance. Azure hosts use Microsoft-owned Stratum 1 devices to keep accurate time.
w32time prefers time providers in the following order of priority: stratum level, root delay, root dispersion, time offset. In most cases, w32time on an Azure VM prefers host time, based on the evaluation it does to compare both time sources.
For domain-joined machines, the domain itself establishes the time sync hierarchy, but the forest root still needs to take time from somewhere, and the following considerations still hold true.
Host-only
Because time.windows.com is a public NTP server, syncing time with it requires sending traffic over the internet, and varying packet delays can negatively affect the quality of the time sync. Removing time.windows.com by switching to host-only sync can sometimes improve your time sync results.
Switching to host-only time sync makes sense if you experience time sync issues using the default configuration. Try out host-only sync to see whether it improves the time sync on the VM.
External time server
If you have specific time sync requirements, there is also an option of using external time servers. External time
servers can provide specific time, which can be useful for test scenarios, ensuring time uniformity with machines
hosted in non-Microsoft datacenters, or handling leap seconds in a special way.
You can combine external servers with the VMICTimeSync service and VMICTimeProvider to provide results similar
to the default configuration.
To see what time server the NtpClient time provider is using, at an elevated command prompt type:
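The standard tool for this is w32tm; a query like the following shows the current time source (this assumes the built-in Windows Time service tooling):

```
w32tm /query /source
```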
Here is the output you could see and what it would mean:
time.windows.com - in the default configuration, w32time would get time from time.windows.com. The time
sync quality depends on internet connectivity to it and is affected by packet delays. This is the usual output you
would get on a physical machine.
VM IC Time Synchronization Provider - the VM is syncing time from the host. This is the usual output you
would get on a virtual machine running on Azure.
Your domain server - the current machine is in a domain and the domain defines the time sync hierarchy.
Some other server - w32time was explicitly configured to get the time from that other server. Time sync quality depends on the quality of this time server.
Local CMOS Clock - clock is unsynchronized. You can get this output if w32time hasn't had enough time to
start after a reboot or when all the configured time sources are not available.
For w32time to be able to use the new poll intervals, the NtpServers need to be marked as using them. If servers are annotated with the 0x1 bitflag mask, that overrides this mechanism and w32time uses SpecialPollInterval instead. Make sure that the specified NTP servers are either using the 0x8 flag or no flag at all.
Check what flags are being used for the configured NTP servers.
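One way to inspect the configured flags, assuming w32time stores its configuration in the usual registry location:

```
w32tm /dumpreg /subkey:Parameters | findstr /i "ntpserver"
```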
Next steps
Below are links to more details about the time sync:
Windows Time Service Tools and Settings
Windows Server 2016 Improvements
Accurate Time for Windows Server 2016
Support boundary to configure the Windows Time service for high-accuracy environments
Run scripts in your Windows VM
11/2/2020 • 2 minutes to read
To automate tasks or troubleshoot issues, you may need to run commands in a VM. The following article gives a
brief overview of the features that are available to run scripts and commands within your VMs.
Run command
The Run Command feature enables virtual machine and application management and troubleshooting using
scripts, and is available even when the machine is not reachable, for example if the guest firewall doesn't have the
RDP or SSH port open.
Run scripts in Azure virtual machines.
Can be run using Azure portal, REST API, Azure CLI, or PowerShell
Quickly run a script and view output and repeat as needed in the Azure portal.
Script can be typed directly or you can run one of the built-in scripts.
Run PowerShell script in Windows machines and Bash script in Linux machines.
Useful for virtual machine and application management and for running scripts in virtual machines that are
unreachable.
Serial console
The Serial console provides direct access to a VM, similar to having a keyboard connected to the VM.
Run commands in Azure virtual machines.
Can be run using a text-based console to the machine in the Azure portal.
Log in to the machine with a local user account.
Useful when access to the virtual machine is needed regardless of the machine's network or operating system
state.
Next steps
Learn more about the different features that are available to run scripts and commands within your VMs.
Custom Script Extension
Run Command
Hybrid Runbook Worker
Serial console
Custom Script Extension for Windows
11/2/2020 • 11 minutes to read
The Custom Script Extension downloads and executes scripts on Azure virtual machines. This extension is useful for post deployment configuration, software
installation, or any other configuration or management tasks. Scripts can be downloaded from Azure storage or GitHub, or provided to the Azure portal at extension
run time. The Custom Script Extension integrates with Azure Resource Manager templates, and can be run using the Azure CLI, PowerShell, Azure portal, or the Azure
Virtual Machine REST API.
This document details how to use the Custom Script Extension using the Azure PowerShell module, Azure Resource Manager templates, and details troubleshooting
steps on Windows systems.
Prerequisites
NOTE
Do not use Custom Script Extension to run Update-AzVM with the same VM as its parameter, since it will wait on itself.
Operating System
The Custom Script Extension for Windows runs on these supported operating systems:
Windows
Windows Server 2008 R2
Windows Server 2012
Windows Server 2012 R2
Windows 10
Windows Server 2016
Windows Server 2016 Core
Windows Server 2019
Windows Server 2019 Core
Script Location
You can configure the extension to use your Azure Blob storage credentials to access Azure Blob storage. The script location can be anywhere, as long as the VM can route to that endpoint, such as GitHub or an internal file server.
Internet Connectivity
If you need to download a script externally, such as from GitHub or Azure Storage, then additional firewall and Network Security Group ports need to be opened. For example, if your script is located in Azure Storage, you can allow access using Azure NSG Service Tags for Storage.
If your script is on a local server, you may still need to open additional firewall and Network Security Group ports.
Tips and Tricks
The most common cause of failures for this extension is syntax errors in the script. Test that the script runs without error, and add extra logging to the script to make it easier to find where it failed.
Write scripts that are idempotent. This ensures that if they run again accidentally, they will not cause system changes.
Ensure the scripts don't require user input when they run.
The script is allowed 90 minutes to run; anything longer results in a failed provisioning of the extension.
Don't put reboots inside the script; this action causes issues with other extensions that are being installed, and the extension won't continue after the restart.
If you have a script that will cause a reboot and then install applications and run scripts, you can schedule the reboot using a Windows Scheduled Task, or use tools such as the DSC, Chef, or Puppet extensions.
It is not recommended to run a script that stops or updates the VM Agent. This can leave the extension in a Transitioning state, leading to a timeout.
The extension runs a script only once. If you want to run a script on every boot, or schedule when a script runs, use the extension to create a Windows Scheduled Task.
When the script is running, you will only see a 'transitioning' extension status from the Azure portal or CLI. If you want more frequent status updates of a running script, you'll need to create your own solution.
The Custom Script Extension does not natively support proxy servers. However, you can use a file transfer tool that supports proxy servers within your script, such as curl.
Be aware of non-default directory locations that your scripts or commands may rely on; have logic to handle this situation.
The Custom Script Extension runs under the LocalSystem account.
If you plan to use the storageAccountName and storageAccountKey properties, these properties must be collocated in protectedSettings.
Extension schema
The Custom Script Extension configuration specifies things like script location and the command to be run. You can store this configuration in configuration files,
specify it on the command line, or specify it in an Azure Resource Manager template.
You can store sensitive data in a protected configuration, which is encrypted and only decrypted inside the virtual machine. The protected configuration is useful when
the execution command includes secrets such as a password.
These items should be treated as sensitive data and specified in the extensions protected setting configuration. Azure VM extension protected setting data is encrypted,
and only decrypted on the target virtual machine.
{
"apiVersion": "2018-06-01",
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "virtualMachineName/config-app",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'),copyindex())]",
"[variables('musicstoresqlName')]"
],
"tags": {
"displayName": "config-app"
},
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.10",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [
"script location"
],
"timestamp":123456789
},
"protectedSettings": {
"commandToExecute": "myExecutionCommand",
"storageAccountName": "myStorageAccountName",
"storageAccountKey": "myStorageAccountKey",
"managedIdentity" : {}
}
}
}
NOTE
managedIdentity property must not be used in conjunction with storageAccountName or storageAccountKey properties
NOTE
Only one version of an extension can be installed on a VM at a time; specifying the Custom Script Extension twice in the same Resource Manager template for the same VM will fail.
NOTE
You can use this schema inside the VirtualMachine resource or as a standalone resource. If this extension is used as a standalone resource in the ARM template, the name of the resource has to be in the format "virtualMachineName/extensionName".
Property values
NAME    VALUE / EXAMPLE    DATA TYPE
NOTE
These property names are case-sensitive. To avoid deployment problems, use the names as shown here.
The following values can be set in either public or protected settings; the extension rejects any configuration where these values are set in both public and protected settings.
commandToExecute
Using public settings may be useful for debugging, but it's recommended that you use protected settings.
Public settings are sent in clear text to the VM where the script will be executed. Protected settings are encrypted using a key known only to Azure and the VM. The settings are saved to the VM as they were sent; that is, if the settings were encrypted, they're saved encrypted on the VM. The certificate used to decrypt the encrypted values is stored on the VM and used to decrypt settings (if necessary) at runtime.
Property: managedIdentity
NOTE
This property must be specified in protected settings only.
CustomScript (version 1.10 onwards) supports managed identity for downloading file(s) from URLs provided in the "fileUris" setting. It allows CustomScript to access
Azure Storage private blobs or containers without the user having to pass secrets like SAS tokens or storage account keys.
To use this feature, the user must add a system-assigned or user-assigned identity to the VM or VMSS where CustomScript is expected to run, and grant the managed
identity access to the Azure Storage container or blob.
To use the system-assigned identity on the target VM/VMSS, set the "managedIdentity" field to an empty JSON object.
Example:
{
"fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.ps1"],
"commandToExecute": "powershell.exe script1.ps1",
"managedIdentity" : {}
}
To use the user-assigned identity on the target VM/VMSS, configure the "managedIdentity" field with the client ID or the object ID of the managed identity.
Examples:
{
"fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.ps1"],
"commandToExecute": "powershell.exe script1.ps1",
"managedIdentity" : { "clientId": "31b403aa-c364-4240-a7ff-d85fb6cd7232" }
}
{
"fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.ps1"],
"commandToExecute": "powershell.exe script1.ps1",
"managedIdentity" : { "objectId": "12dd289c-0583-46e5-b9b4-115d5c19ef4b" }
}
NOTE
managedIdentity property must not be used in conjunction with storageAccountName or storageAccountKey properties
Template deployment
Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema, which is detailed in the previous section can be used in an Azure
Resource Manager template to run the Custom Script Extension during deployment. The following samples show how to use the Custom Script extension:
Tutorial: Deploy virtual machine extensions with Azure Resource Manager templates
Deploy Two Tier Application on Windows and Azure SQL DB
PowerShell deployment
The Set-AzVMCustomScriptExtension command can be used to add the Custom Script extension to an existing virtual machine. For more information, see Set-
AzVMCustomScriptExtension.
Set-AzVMCustomScriptExtension -ResourceGroupName <resourceGroupName> `
-VMName <vmName> `
-Location <location> `
-FileUri <fileUrl> `
-Run 'myScript.ps1' `
-Name DemoScriptExtension
Additional examples
Using multiple scripts
In this example, you have three scripts that are used to build your server. The commandToExecute calls the first script, then you have options on how the others are
called. For example, you can have a master script that controls the execution, with the right error handling, logging, and state management. The scripts are downloaded
to the local machine for running. For example in 1_Add_Tools.ps1 you would call 2_Add_Features.ps1 by adding .\2_Add_Features.ps1 to the script, and repeat this
process for the other scripts you define in $settings .
$fileUri = @("https://xxxxxxx.blob.core.windows.net/buildServer1/1_Add_Tools.ps1",
"https://xxxxxxx.blob.core.windows.net/buildServer1/2_Add_Features.ps1",
"https://xxxxxxx.blob.core.windows.net/buildServer1/3_CompleteInstall.ps1")
$storageAcctName = "xxxxxxx"
$storageKey = "1234ABCD"
$settings = @{"fileUris" = $fileUri};
$protectedSettings = @{"storageAccountName" = $storageAcctName; "storageAccountKey" = $storageKey; "commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File 1_Add_Tools.ps1"};
#run command
Set-AzVMExtension -ResourceGroupName <resourceGroupName> `
    -Location <locationName> `
    -VMName <vmName> `
    -Name "buildserver1" `
    -Publisher "Microsoft.Compute" `
    -ExtensionType "CustomScriptExtension" `
    -TypeHandlerVersion "1.10" `
    -Settings $settings `
    -ProtectedSettings $protectedSettings
If a script uses Invoke-WebRequest to download files, you might see the following error: The response content cannot be parsed because the Internet Explorer engine is not available, or Internet Explorer's first-launch configuration is not complete. Specify the UseBasicParsing parameter and try again.
Classic VMs
IMPORTANT
Classic VMs will be retired on March 1, 2023.
If you use IaaS resources from ASM, please complete your migration by March 1, 2023. We encourage you to make the switch sooner to take advantage of the many feature
enhancements in Azure Resource Manager.
For more information, see Migrate your IaaS resources to Azure Resource Manager by March 1, 2023.
To deploy the Custom Script Extension on classic VMs, you can use the Azure portal or the Classic Azure PowerShell cmdlets.
Azure portal
Navigate to your Classic VM resource. Select Extensions under Settings .
Click + Add and in the list of resources choose Custom Script Extension .
On the Install extension page, select the local PowerShell file, and fill out any arguments and click Ok .
PowerShell
The Set-AzureVMCustomScriptExtension cmdlet can be used to add the Custom Script extension to an existing virtual machine.
# create vm object
$vm = Get-AzureVM -Name <vmName> -ServiceName <cloudServiceName>
# set extension
Set-AzureVMCustomScriptExtension -VM $vm -FileUri $fileUri -Run 'Create-File.ps1'
# update vm
$vm | Update-AzureVM
Extension output is logged to files found under the following folder on the target virtual machine.
C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension
The specified files are downloaded into the following folder on the target virtual machine.
C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.*\Downloads\<n>
where <n> is a decimal integer, which may change between executions of the extension. The 1.* value matches the actual, current typeHandlerVersion value of the
extension. For example, the actual directory could be C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloads\2 .
When executing the commandToExecute command, the extension sets this directory (for example, ...\Downloads\2 ) as the current working directory. This process
enables the use of relative paths to locate the files downloaded via the fileURIs property. See the table below for examples.
Since the absolute download path may vary over time, it's better to opt for relative script/file paths in the commandToExecute string, whenever possible. For example:
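For example, a commandToExecute string can reference a downloaded file by its relative path (myscript.ps1 and the scripts/ prefix are placeholder names matching the downloaded-file examples that follow):

```powershell
powershell.exe -ExecutionPolicy Unrestricted -File ./scripts/myscript.ps1
```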
Path information after the first URI segment is kept for files downloaded via the fileUris property list. As shown in the table below, downloaded files are mapped into
download subdirectories to reflect the structure of the fileUris values.
Examples of Downloaded Files
URI in fileUris: https://someAcct.blob.core.windows.net/aContainer/scripts/myscript.ps1
    Relative download location: ./scripts/myscript.ps1
    Absolute download location: C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloa

URI in fileUris: https://someAcct.blob.core.windows.net/aContainer/topLevel.ps1
    Relative download location: ./topLevel.ps1
    Absolute download location: C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloa
1 The absolute directory paths change over the lifetime of the VM, but not within a single execution of the CustomScript extension.
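The mapping rule above can be sketched in Python. The helper name is hypothetical, and the 1.8 version and sequence number 2 are the illustrative values used in the example paths; real values vary per VM and per execution:

```python
from urllib.parse import urlparse
from pathlib import PureWindowsPath

def mapped_download_path(file_uri, version="1.8", seq=2):
    # Everything after the first URI path segment (the container)
    # is preserved under the extension's download directory.
    base = PureWindowsPath(
        r"C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension",
        version, "Downloads", str(seq))
    segments = urlparse(file_uri).path.lstrip("/").split("/")
    return base.joinpath(*segments[1:])  # drop the container segment

print(mapped_download_path(
    "https://someAcct.blob.core.windows.net/aContainer/scripts/myscript.ps1"))
# → C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloads\2\scripts\myscript.ps1
```

Because the working directory is set to the download directory, the relative form ./scripts/myscript.ps1 resolves to the same file.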
Support
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack Overflow forums. You can also file an Azure support
incident. Go to the Azure support site and select Get support. For information about using Azure Support, read the Microsoft Azure support FAQ.
Run PowerShell scripts in your Windows VM by using Run Command
11/2/2020 • 4 minutes to read
The Run Command feature uses the virtual machine (VM) agent to run PowerShell scripts within an Azure
Windows VM. You can use these scripts for general machine or application management. They can help you to
quickly diagnose and remediate VM access and network issues and get the VM back to a good state.
Benefits
You can access your virtual machines in multiple ways. Run Command can run scripts on your virtual machines
remotely by using the VM agent. You use Run Command through the Azure portal, REST API, or PowerShell for
Windows VMs.
This capability is useful in all scenarios where you want to run a script within a virtual machine. It's one of the few
ways to troubleshoot and remediate a virtual machine that doesn't have the RDP or SSH port open because of
improper network or administrative user configuration.
Restrictions
The following restrictions apply when you're using Run Command:
Output is limited to the last 4,096 bytes.
The minimum time to run a script is about 20 seconds.
Scripts run as System on Windows.
One script at a time can run.
Scripts that prompt for information (interactive mode) are not supported.
You can't cancel a running script.
The maximum time a script can run is 90 minutes. After that, it will time out.
Outbound connectivity from the VM is required to return the results of the script.
Running a script that stops or updates the VM Agent is not recommended. Doing so can leave the
extension in a Transitioning state, leading to a timeout.
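To make the first restriction concrete, here is a small sketch (plain Python, not part of Run Command itself) of what the 4,096-byte output cap means in practice: the beginning of long output is lost, not the end.

```python
def visible_output(full_output: str, cap: int = 4096) -> str:
    # Run Command returns only the last `cap` bytes of script output,
    # so the front of long output is truncated away.
    data = full_output.encode("utf-8")
    return data[-cap:].decode("utf-8", errors="replace")

long_output = "x" * 5000 + "END"
tail = visible_output(long_output)
print(len(tail), tail.endswith("END"))  # 4096 True
```

If a script produces important output early, consider writing it to a file or sending it to external storage instead of relying on the returned output.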
NOTE
To function correctly, Run Command requires connectivity (port 443) to Azure public IP addresses. If the extension doesn't
have access to these endpoints, the scripts might run successfully but not return the results. If you're blocking traffic on the
virtual machine, you can use service tags to allow traffic to Azure public IP addresses by using the AzureCloud tag.
Available commands
This table shows the list of commands available for Windows VMs. You can use the RunPowerShellScript
command to run any custom script that you want. When you're using the Azure CLI or PowerShell to run a
command, the value that you provide for the --command-id or -CommandId parameter must be one of the
following listed values. When you specify a value that is not an available command, you receive this error:
ResetRDPCert | Removes the TLS/SSL certificate tied to the RDP listener and restores the RDP listener security to default. Use this script if you see any issues with the certificate.
Azure CLI
The following example uses the az vm run-command invoke command to run a PowerShell script on an Azure Windows VM.
# script.ps1
# param(
# [string]$arg1,
# [string]$arg2
# )
# Write-Host This is a sample script with parameters $arg1 and $arg2
Azure portal
Go to a VM in the Azure portal and select Run command under OPERATIONS . You see a list of the available
commands to run on the VM.
Choose a command to run. Some of the commands might have optional or required input parameters. For those
commands, the parameters are presented as text fields for you to provide the input values. For each command,
you can view the script that's being run by expanding View script . RunPowerShellScript is different from the
other commands, because it allows you to provide your own custom script.
NOTE
The built-in commands are not editable.
After you choose the command, select Run to run the script. After the script finishes, it returns the output and any
errors in the output window. The following screenshot shows an example output from running the RDPSettings
command.
PowerShell
The following example uses the Invoke-AzVMRunCommand cmdlet to run a PowerShell script on an Azure VM.
The cmdlet expects the script referenced in the -ScriptPath parameter to be local to where the cmdlet is being
run.
Next steps
To learn about other ways to run scripts and commands remotely in your VM, see Run scripts in your Windows
VM.
Use the D: drive as a data drive on a Windows VM
11/2/2020 • 2 minutes to read
If your application needs to use the D drive to store data, follow these instructions to use a different drive letter for
the temporary disk. Never use the temporary disk to store data that you need to keep.
If you resize or Stop (Deallocate) a virtual machine, this may trigger placement of the virtual machine to a new
hypervisor. A planned or unplanned maintenance event may also trigger this placement. In this scenario, the
temporary disk will be reassigned to the first available drive letter. If you have an application that specifically
requires the D: drive, you need to follow these steps to temporarily move pagefile.sys, attach a new data disk,
assign it the letter D, and then move pagefile.sys back to the temporary drive. Once complete, Azure will not
take back the D: drive letter if the VM moves to a different hypervisor.
For more information about how Azure uses the temporary disk, see Understanding the temporary drive on
Microsoft Azure Virtual Machines.
Next steps
You can increase the storage available to your virtual machine by attaching an additional data disk.
Change the availability set for a VM
11/2/2020 • 2 minutes to read
The following steps describe how to change the availability set of a VM using Azure PowerShell. A VM can only be
added to an availability set when it is created. To change the availability set, you need to delete and then recreate
the virtual machine.
This article applies to both Linux and Windows VMs.
This article was last tested on 2/12/2019 using the Azure Cloud Shell and the Az PowerShell module version 1.2.0.
This example does not check to see if the VM is attached to a load balancer. If your VM is attached to a load
balancer, you will need to update the script to handle that case.
# Set variables
$resourceGroup = "myResourceGroup"
$vmName = "myVM"
$newAvailSetName = "myAvailabilitySet"
# For a Linux VM, change the last parameter from -Windows to -Linux
# (assumes $originalVM holds the result of Get-AzVM for the existing VM and
# $newVM was created with New-AzVMConfig referencing the new availability set)
Set-AzVMOSDisk `
-VM $newVM -CreateOption Attach `
-ManagedDiskId $originalVM.StorageProfile.OsDisk.ManagedDisk.Id `
-Name $originalVM.StorageProfile.OsDisk.Name `
-Windows
# Add Data Disks
foreach ($disk in $originalVM.StorageProfile.DataDisks) {
Add-AzVMDataDisk -VM $newVM `
-Name $disk.Name `
-ManagedDiskId $disk.ManagedDisk.Id `
-Caching $disk.Caching `
-Lun $disk.Lun `
-DiskSizeInGB $disk.DiskSizeGB `
-CreateOption Attach
}
# Recreate the VM
New-AzVM `
-ResourceGroupName $resourceGroup `
-Location $originalVM.Location `
-VM $newVM `
-DisableBginfoExtension
Next steps
Add additional storage to your VM by adding an additional data disk.
Download the template for a VM
11/2/2020 • 2 minutes to read
When you create a VM in Azure using the portal or PowerShell, a Resource Manager template is automatically
created for you. You can use this template to quickly duplicate a deployment. The template contains information
about all of the resources in a resource group. For a virtual machine, this means the template contains everything
that is created in support of the VM in that resource group, including the networking resources.
Next steps
To learn more about deploying resources using templates, see Resource Manager template walkthrough.
Azure Virtual Machine Agent overview
11/2/2020 • 4 minutes to read
The Microsoft Azure Virtual Machine Agent (VM Agent) is a secure, lightweight process that manages virtual
machine (VM) interaction with the Azure Fabric Controller. The VM Agent has a primary role in enabling and
executing Azure virtual machine extensions. VM extensions enable post-deployment configuration of a VM, such as
installing and configuring software. VM extensions also enable recovery features such as resetting the
administrative password of a VM. Without the Azure VM Agent, VM extensions cannot be run.
This article details installation and detection of the Azure Virtual Machine Agent.
"resources": [{
"name": "[parameters('virtualMachineName')]",
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2016-04-30-preview",
"location": "[parameters('location')]",
"dependsOn": ["[concat('Microsoft.Network/networkInterfaces/', parameters('networkInterfaceName'))]"],
"properties": {
"osProfile": {
"computerName": "[parameters('virtualMachineName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]",
"windowsConfiguration": {
"provisionVmAgent": "false"
}
If you do not have the agent installed, you cannot use some Azure services, such as Azure Backup or Azure
Security. These services require an extension to be installed. If you have deployed a VM without the WinGA, you
can install the latest version of the agent later.
Manual installation
The Windows VM agent can be manually installed with a Windows installer package. Manual installation may be
necessary when you create a custom VM image that is deployed to Azure. To manually install the Windows VM
Agent, download the VM Agent installer. The VM Agent is supported on Windows Server 2008 (64 bit) and later.
NOTE
It is important to update the AllowExtensionOperations option after manually installing the VM Agent on a VM that was
deployed from an image without ProvisionVMAgent enabled.
$vm.OSProfile.AllowExtensionOperations = $true
$vm | Update-AzVM
Prerequisites
The Windows VM Agent needs at least Windows Server 2008 SP2 (64-bit) to run, with the .NET Framework
4.0. See Minimum version support for virtual machine agents in Azure.
Ensure your VM has access to IP address 168.63.129.16. For more information, see What is IP address
168.63.129.16.
Ensure that DHCP is enabled inside the guest VM. This is required to get the host or fabric address from
DHCP for the IaaS VM Agent and extensions to work. If you need a static private IP, you should configure it
through the Azure portal or PowerShell, and make sure the DHCP option inside the VM is enabled. Learn
more about setting up a static IP address with PowerShell.
Get-AzVM
The following condensed example output shows the ProvisionVMAgent property nested inside OSProfile . This
property can be used to determine if the VM agent has been deployed to the VM:
OSProfile :
ComputerName : myVM
AdminUsername : myUserName
WindowsConfiguration :
ProvisionVMAgent : True
EnableAutomaticUpdates : True
The following script returns a concise list of VM names and the state of the VM Agent. The Select-Object expression is one possible completion; the property is nested under OSProfile.WindowsConfiguration:
$vms = Get-AzVM
$vms | Select-Object Name, @{ Name = "ProvisionVMAgent"; Expression = { $_.OSProfile.WindowsConfiguration.ProvisionVMAgent } }
Manual Detection
When logged in to a Windows VM, you can use Task Manager to examine running processes. To check for the
Azure VM Agent, open Task Manager, select the Details tab, and look for a process named
WindowsAzureGuestAgent.exe . The presence of this process indicates that the VM agent is installed.
Upgrade the VM Agent
The Azure VM Agent for Windows is automatically upgraded on images deployed from the Azure Marketplace. As
new VMs are deployed to Azure, they receive the latest VM agent at VM provision time. If you have installed the
agent manually or are deploying custom VM images, you will need to manually update them to include the new
VM agent at image creation time.
For more information on Virtual Machine Scale Set certificates, see Virtual Machine Scale Sets - How do I remove
deprecated certificates?
Next steps
For more information about VM extensions, see Azure virtual machine extensions and features overview.
Guidance for mitigating speculative execution side-channel vulnerabilities
11/6/2020 • 7 minutes to read
NOTE
Since this document was first published, multiple variants of this vulnerability class have been disclosed. Microsoft continues
to be heavily invested in protecting our customers and providing guidance. This page will be updated as we continue to
release further fixes.
On November 12, 2019, Intel published a technical advisory around Intel® Transactional Synchronization Extensions
(Intel® TSX) Transaction Asynchronous Abort (TAA) vulnerability that is assigned CVE-2019-11135. This vulnerability affects
Intel® Core® processors and Intel® Xeon® processors. Microsoft Azure has released operating system updates and is
deploying new microcode, as it is made available by Intel, throughout our fleet to protect our customers against these new
vulnerabilities. Azure is closely working with Intel to test and validate the new microcode prior to its official release on the
platform.
Customers who are running untrusted code within their VM need to take action to protect against these
vulnerabilities. Read on for additional guidance on all speculative execution side-channel vulnerabilities (Microsoft
Advisories ADV 180002, 180018, and 190013).
Other customers should evaluate these vulnerabilities from a Defense in Depth perspective and consider the security and
performance implications of their chosen configuration.
Offering | Recommended action
Azure Cloud Services | Enable auto update or ensure you are running the newest Guest OS.
Azure Linux Virtual Machines | Install updates from your operating system provider. For more information, see Linux later in this document.
Other Azure PaaS Services | There is no action needed for customers using these services. Azure automatically keeps your OS versions up-to-date.
If the number of logical processors is greater than the number of physical processors (cores), then hyper-threading
is enabled. If you are running a hyper-threaded VM, please contact Azure Support to get hyper-threading disabled.
Once hyper-threading is disabled, support will require a full VM reboot. Please refer to Core count to understand
why your VM core count decreased.
Step 2: In parallel to Step 1, follow the instructions in KB4072698 to verify protections are enabled using the
SpeculationControl PowerShell module.
NOTE
If you previously downloaded this module, you will need to install the newest version.
The output of the PowerShell script should have the below values to validate enabled protections against these
vulnerabilities:
If the output shows MDS mitigation is enabled: False , please contact Azure Support for available mitigation
options.
Step 3: To enable Kernel Virtual Address Shadowing (KVAS) and Branch Target Injection (BTI) OS support, follow
the instructions in KB4072698 to enable protections using the Session Manager registry keys. A reboot is required.
Step 4: For deployments that are using nested virtualization (D3 and E3 only): These instructions apply inside the
VM you are using as a Hyper-V host.
1. Follow the instructions in KB4072698 to enable protections using the MinVmVersionForCpuBasedMitigations
registry keys.
2. Set the hypervisor scheduler type to Core by following the instructions here.
Linux
Enabling the set of additional security features inside the VM requires that the target operating system be fully
up-to-date. Some mitigations will be enabled by default. The following section describes the features that are off
by default and/or reliant on hardware support (microcode). Enabling these features may cause a performance
impact. Reference your operating system provider's documentation for further instructions.
Step 1: Disable hyper-threading on the VM - Customers running untrusted code on a hyper-threaded VM will
need to disable hyper-threading or move to a non-hyper-threaded VM. Reference this doc for a list of hyper-
threaded VM sizes (where ratio of vCPU to Core is 2:1). To check if you are running a hyper-threaded VM, run the
lscpu command in the Linux VM.
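lscpu reports a "Thread(s) per core" field; a value greater than 1 indicates hyper-threading. A sketch of the check, run here against illustrative lscpu output rather than a live VM:

```python
# Illustrative lscpu output; on a real VM you would capture the
# actual output of the `lscpu` command instead.
sample_lscpu = """CPU(s):              8
Thread(s) per core:  2
Core(s) per socket:  4"""

fields = dict(line.split(":", 1) for line in sample_lscpu.splitlines())
hyper_threaded = int(fields["Thread(s) per core"]) > 1
print(hyper_threaded)  # True
```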
If you are running a hyper-threaded VM, please contact Azure Support to get hyper-threading disabled. Once
hyper-threading is disabled, support will require a full VM reboot. Please refer to Core count to understand
why your VM core count decreased.
Step 2: To mitigate against any of the below speculative execution side-channel vulnerabilities, refer to your
operating system provider's documentation:
Red Hat and CentOS
SUSE
Ubuntu
Core count
When a hyper-threaded VM is created, Azure allocates 2 threads per core; these are called vCPUs. When hyper-
threading is disabled, Azure removes a thread and surfaces single-threaded (physical) cores. The ratio of
vCPU to CPU is 2:1, so once hyper-threading is disabled, the CPU count in the VM will appear to have decreased by
half. For example, a D8_v3 VM is a hyper-threaded VM running on 8 vCPUs (2 threads per core x 4 cores). When
hyper-threading is disabled, CPUs will drop to 4 physical cores with 1 thread per core.
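The arithmetic can be summarized in a one-line sketch:

```python
def physical_cores(vcpus: int, threads_per_core: int = 2) -> int:
    # Hyper-threaded Azure VM sizes expose 2 vCPUs (threads) per physical core.
    return vcpus // threads_per_core

print(physical_cores(8))  # 4: a D8_v3's 8 vCPUs become 4 physical cores
```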
Next steps
This article provides guidance for the following speculative execution side-channel attacks that affect many modern
processors:
Spectre and Meltdown:
CVE-2017-5715 - Branch Target Injection (BTI)
CVE-2017-5754 - Kernel Page Table Isolation (KPTI)
CVE-2018-3639 – Speculative Store Bypass (SSB)
CVE-2019-1125 – Windows Kernel Information – variant of Spectre Variant 1
L1 Terminal Fault (L1TF):
CVE-2018-3615 - Intel Software Guard Extensions (Intel SGX)
CVE-2018-3620 - Operating Systems (OS) and System Management Mode (SMM)
CVE-2018-3646 – impacts Virtual Machine Manager (VMM)
Microarchitectural Data Sampling:
CVE-2019-11091 - Microarchitectural Data Sampling Uncacheable Memory (MDSUM)
CVE-2018-12126 - Microarchitectural Store Buffer Data Sampling (MSBDS)
CVE-2018-12127 - Microarchitectural Load Port Data Sampling (MLPDS)
CVE-2018-12130 - Microarchitectural Fill Buffer Data Sampling (MFBDS)
Transactional Synchronization Extensions (Intel® TSX) Transaction Asynchronous Abort:
CVE-2019-11135 – TSX Transaction Asynchronous Abort (TAA)
Azure Instance Metadata service
11/2/2020 • 20 minutes to read
The Azure Instance Metadata Service (IMDS) provides information about currently running virtual machine
instances and can be used to manage and configure your virtual machines. This information includes the SKU,
storage, network configurations, and upcoming maintenance events. For a complete list of the data that is
available, see metadata APIs. Instance Metadata Service is available for running virtual machine and virtual
machine scale set instances. All APIs support VMs created/managed using Azure Resource Manager. Only the
Attested and Network endpoints support Classic (non-ARM) VMs, and Attested does so only to a limited extent.
Azure's IMDS is a REST endpoint that is available at a well-known non-routable IP address ( 169.254.169.254 ); it can
be accessed only from within the VM. Communication between the VM and IMDS never leaves the host. It is best
practice to have your HTTP clients bypass web proxies within the VM when querying IMDS and treat
169.254.169.254 the same as 168.63.129.16 .
Security
The Instance Metadata Service endpoint is accessible only from within the running virtual machine instance on a
non-routable IP address. In addition, any request with a X-Forwarded-For header is rejected by the service.
Requests must also contain a Metadata: true header to ensure that the actual request was directly intended and
not a part of unintentional redirection.
IMPORTANT
Instance Metadata Service is not a channel for sensitive data. The endpoint is open to all processes on the VM. Information
exposed through this service should be considered as shared with all applications running inside the VM.
Usage
Accessing Azure Instance Metadata Service
To access Instance Metadata Service, create a VM from Azure Resource Manager or the Azure portal, and follow the
samples below. More examples of how to query IMDS can be found at Azure Instance Metadata Samples.
Below is the sample code to retrieve all metadata for an instance; to access a specific data source, see the
Metadata APIs section.
Request
NOTE
The -NoProxy flag is only available in PowerShell 6 or greater. You may omit the flag if you don't have a proxy setup.
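For readers not using PowerShell, the same request can be sketched with Python's standard library. The helper name is illustrative; the essential parts are the Metadata: true header and an opener with no proxy handlers, so the request to 169.254.169.254 is never sent through a web proxy:

```python
import json
import urllib.request

IMDS = "http://169.254.169.254/metadata/instance"

def build_imds_request(api_version="2020-06-01"):
    # IMDS rejects requests that lack the Metadata: true header.
    return urllib.request.Request(
        f"{IMDS}?api-version={api_version}",
        headers={"Metadata": "true"})

# An opener with an empty ProxyHandler ignores any configured web proxy.
opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))
# The call below only succeeds from inside an Azure VM:
# metadata = json.load(opener.open(build_imds_request()))
```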
Response
NOTE
The response is a JSON string. We pipe our REST query through the ConvertTo-Json cmdlet for pretty-printing.
{
"compute": {
"azEnvironment": "AZUREPUBLICCLOUD",
"isHostCompatibilityLayerVm": "true",
"location": "westus",
"name": "examplevmname",
"offer": "Windows",
"osType": "linux",
"placementGroupId": "f67c14ab-e92c-408c-ae2d-da15866ec79a",
"plan": {
"name": "planName",
"product": "planProduct",
"publisher": "planPublisher"
},
"platformFaultDomain": "36",
"platformUpdateDomain": "42",
"publicKeys": [{
"keyData": "ssh-rsa 0",
"path": "/home/user/.ssh/authorized_keys0"
},
{
"keyData": "ssh-rsa 1",
"path": "/home/user/.ssh/authorized_keys1"
}
],
"publisher": "RDFE-Test-Microsoft-Windows-Server-Group",
"resourceGroupName": "macikgo-test-may-23",
"resourceId": "/subscriptions/8d10da13-8125-4ba9-a717-bf7490507b3d/resourceGroups/macikgo-test-may-
23/providers/Microsoft.Compute/virtualMachines/examplevmname",
"securityProfile": {
"secureBootEnabled": "true",
"virtualTpmEnabled": "false"
},
"sku": "Windows-Server-2012-R2-Datacenter",
"storageProfile": {
"dataDisks": [{
"caching": "None",
"createOption": "Empty",
"diskSizeGB": "1024",
"image": {
"uri": ""
},
"lun": "0",
"managedDisk": {
"id": "/subscriptions/8d10da13-8125-4ba9-a717-bf7490507b3d/resourceGroups/macikgo-test-may-
23/providers/Microsoft.Compute/disks/exampledatadiskname",
"storageAccountType": "Standard_LRS"
},
"name": "exampledatadiskname",
"vhd": {
"uri": ""
},
"writeAcceleratorEnabled": "false"
}],
"imageReference": {
"id": "",
"offer": "UbuntuServer",
"publisher": "Canonical",
"sku": "16.04.0-LTS",
"version": "latest"
},
"osDisk": {
"caching": "ReadWrite",
"createOption": "FromImage",
"diskSizeGB": "30",
"diffDiskSettings": {
"option": "Local"
},
"encryptionSettings": {
"enabled": "false"
},
"image": {
"uri": ""
},
"managedDisk": {
"id": "/subscriptions/8d10da13-8125-4ba9-a717-bf7490507b3d/resourceGroups/macikgo-test-may-
23/providers/Microsoft.Compute/disks/exampleosdiskname",
"storageAccountType": "Standard_LRS"
},
"name": "exampleosdiskname",
"osType": "Linux",
"vhd": {
"uri": ""
},
"writeAcceleratorEnabled": "false"
}
},
"subscriptionId": "8d10da13-8125-4ba9-a717-bf7490507b3d",
"tags": "baz:bash;foo:bar",
"version": "15.05.22",
"vmId": "02aab8a4-74ef-476e-8182-f6d2ba4166a6",
"vmScaleSetName": "crpteste9vflji9",
"vmSize": "Standard_A3",
"zone": ""
}
}
Data output
By default, the Instance Metadata Service returns data in JSON format ( Content-Type: application/json ). However,
some APIs are able to return data in different formats if requested. The following table is a reference of other data
formats APIs may support.
To access a non-default response format, specify the requested format as a query string parameter in the request.
For example:
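As an illustrative sketch, a text-format request URL might be built like this (the compute/location leaf path and the API version here are assumptions; substitute the endpoint and version you need):

```python
from urllib.parse import urlencode

base = "http://169.254.169.254/metadata/instance/compute/location"
# format=text requests the plain-text response instead of JSON.
url = base + "?" + urlencode({"api-version": "2020-06-01", "format": "text"})
print(url)
# http://169.254.169.254/metadata/instance/compute/location?api-version=2020-06-01&format=text
```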
Versioning
The Instance Metadata Service is versioned and specifying the API version in the HTTP request is mandatory.
The supported API versions are:
2017-03-01
2017-04-02
2017-08-01
2017-10-01
2017-12-01
2018-02-01
2018-04-02
2018-10-01
2019-02-01
2019-03-11
2019-04-30
2019-06-01
2019-06-04
2019-08-01
2019-08-15
2019-11-01
2020-06-01
Note that when a new version is released, it can take a while to roll out to all regions.
As newer versions are added, older versions can still be accessed for compatibility if your scripts have
dependencies on specific data formats.
When no version is specified, an error is returned with a list of the newest supported versions.
NOTE
The response is a JSON string. The following example indicates the error condition when version is not specified, the
response is pretty-printed for readability.
Request
Response
{
"error": "Bad request. api-version was not specified in the request. For more information refer to
aka.ms/azureimds",
"newest-versions": [
"2018-10-01",
"2018-04-02",
"2018-02-01"
]
}
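Because the supported versions are ISO-style dates, a client can recover from this error by picking the lexicographically largest entry. A minimal sketch against the sample error body above:

```python
import json

error_body = '''{
  "error": "Bad request. api-version was not specified in the request.",
  "newest-versions": ["2018-10-01", "2018-04-02", "2018-02-01"]
}'''
# ISO dates sort correctly as strings, so max() picks the newest version.
newest = max(json.loads(error_body)["newest-versions"])
print(newest)  # 2018-10-01
```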
Metadata APIs
Metadata Service contains multiple APIs representing different data sources.
Instance API
Instance API exposes the important metadata for the VM instances, including the VM, network, and storage. The
following categories can be accessed through instance/compute:
Response
5c08b38e-4d57-4c23-ac45-aca61037f084
Response
Response
NOTE
The response is a JSON string. The following example response is pretty-printed for readability.
{
"azEnvironment": "AzurePublicCloud",
"customData": "",
"location": "centralus",
"name": "negasonic",
"offer": "lampstack",
"osType": "Linux",
"placementGroupId": "",
"plan": {
"name": "5-6",
"product": "lampstack",
"publisher": "bitnami"
},
"platformFaultDomain": "0",
"platformUpdateDomain": "0",
"provider": "Microsoft.Compute",
"publicKeys": [],
"publisher": "bitnami",
"resourceGroupName": "myrg",
"resourceId": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxx/resourceGroups/myrg/providers/Microsoft.Compute/virtualMachines/negasonic",
"sku": "5-6",
"storageProfile": {
"dataDisks": [
{
"caching": "None",
"createOption": "Empty",
"diskSizeGB": "1024",
"image": {
"uri": ""
},
"lun": "0",
"managedDisk": {
"id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-
23/providers/Microsoft.Compute/disks/exampledatadiskname",
"storageAccountType": "Standard_LRS"
},
"name": "exampledatadiskname",
"vhd": {
"uri": ""
},
"writeAcceleratorEnabled": "false"
}
],
"imageReference": {
"id": "",
"offer": "UbuntuServer",
"publisher": "Canonical",
"sku": "16.04.0-LTS",
"version": "latest"
},
"osDisk": {
"caching": "ReadWrite",
"createOption": "FromImage",
"diskSizeGB": "30",
"diffDiskSettings": {
"option": "Local"
},
"encryptionSettings": {
"enabled": "false"
},
"image": {
"uri": ""
},
"managedDisk": {
"id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-
23/providers/Microsoft.Compute/disks/exampleosdiskname",
"storageAccountType": "Standard_LRS"
},
"name": "exampleosdiskname",
"osType": "Linux",
"vhd": {
"uri": ""
},
"writeAcceleratorEnabled": "false"
}
},
"subscriptionId": "xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx",
"tags": "Department:IT;Environment:Test;Role:WebRole",
"version": "7.1.1902271506",
"vmId": "13f56399-bd52-4150-9748-7190aae1ff21",
"vmScaleSetName": "",
"vmSize": "Standard_A1_v2",
"zone": "1"
}
Response
AzurePublicCloud
The cloud and the values of the Azure environment are listed below.
Cloud | Azure environment
Network Metadata
Network metadata is part of the instance API. The following Network categories are available through the
instance/network endpoint.
Response
NOTE
The response is a JSON string. The following example response is pretty-printed for readability.
{
"interface": [
{
"ipv4": {
"ipAddress": [
{
"privateIpAddress": "10.1.0.4",
"publicIpAddress": "X.X.X.X"
}
],
"subnet": [
{
"address": "10.1.0.0",
"prefix": "24"
}
]
},
"ipv6": {
"ipAddress": []
},
"macAddress": "000D3AF806EC"
}
]
}
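The subnet address and prefix from a response like the one above can be combined with Python's ipaddress module, for example to confirm which subnet the private IP belongs to (values below are taken from the sample response):

```python
import ipaddress

# Values from interface[0].ipv4 in the sample response above.
address, prefix = "10.1.0.0", "24"
private_ip = "10.1.0.4"

subnet = ipaddress.ip_network(f"{address}/{prefix}")
print(ipaddress.ip_address(private_ip) in subnet)  # True
print(subnet.num_addresses)  # 256
```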
Storage Metadata
Storage metadata is part of the instance API, under the instance/compute/storageProfile endpoint. It provides details
about the storage disks associated with the VM.
The storage profile of a VM is divided into three categories: image reference, OS disk, and data disks.
The image reference object contains the following information about the OS image:
Data | Description
id | Resource ID
The OS disk object contains the following information about the OS disk used by the VM:
The data disks array contains a list of data disks attached to the VM. Each data disk object contains the following
information:
The following example shows how to query the VM's storage information.
Request
Response
NOTE
The response is a JSON string. The following example response is pretty-printed for readability.
{
"dataDisks": [
{
"caching": "None",
"createOption": "Empty",
"diskSizeGB": "1024",
"image": {
"uri": ""
},
"lun": "0",
"managedDisk": {
"id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-
23/providers/Microsoft.Compute/disks/exampledatadiskname",
"storageAccountType": "Standard_LRS"
},
"name": "exampledatadiskname",
"vhd": {
"uri": ""
},
"writeAcceleratorEnabled": "false"
}
],
"imageReference": {
"id": "",
"offer": "UbuntuServer",
"publisher": "Canonical",
"sku": "16.04.0-LTS",
"version": "latest"
},
"osDisk": {
"caching": "ReadWrite",
"createOption": "FromImage",
"diskSizeGB": "30",
"diffDiskSettings": {
"option": "Local"
},
"encryptionSettings": {
"enabled": "false"
},
"image": {
"uri": ""
},
"managedDisk": {
"id": "/subscriptions/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx/resourceGroups/macikgo-test-may-
23/providers/Microsoft.Compute/disks/exampleosdiskname",
"storageAccountType": "Standard_LRS"
},
"name": "exampleosdiskname",
"osType": "Linux",
"vhd": {
"uri": ""
},
"writeAcceleratorEnabled": "false"
}
}
VM Tags
VM tags are included in the instance API under the instance/compute/tags endpoint. Tags may have been applied to your
Azure VM to logically organize it into a taxonomy. The tags assigned to a VM can be retrieved by using the
request below.
Request
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -NoProxy -Uri
"http://169.254.169.254/metadata/instance/compute/tags?api-version=2018-10-01&format=text"
Response
Department:IT;Environment:Test;Role:WebRole
The tags field is a string with the tags delimited by semicolons. This output can be a problem if semicolons are
used in the tags themselves. If a parser is written to programmatically extract the tags, you should rely on the
tagsList field instead. The tagsList field is a JSON array with no delimiters, and consequently easier to parse.
Request
Response
[
{
"name": "Department",
"value": "IT"
},
{
"name": "Environment",
"value": "Test"
},
{
"name": "Role",
"value": "WebRole"
}
]
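A sketch of both parsing approaches, using the sample values above. The naive split of the tags string breaks if a tag value itself contains a semicolon or colon, which is why tagsList is the safer source:

```python
import json

# Semicolon-delimited form from instance/compute/tags
tags_text = "Department:IT;Environment:Test;Role:WebRole"
naive = dict(pair.split(":", 1) for pair in tags_text.split(";"))
print(naive["Environment"])  # Test

# JSON array form from instance/compute/tagsList (one entry shown)
tags_list = json.loads('[{"name": "Role", "value": "WebRole"}]')
robust = {t["name"]: t["value"] for t in tags_list}
print(robust["Role"])  # WebRole
```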
Attested Data
Part of the scenario served by Instance Metadata Service is to provide guarantees that the data provided is coming
from Azure. We sign part of this information so that marketplace images can be sure that it's their image running
on Azure.
Sample 1: Getting attested Data
NOTE
All API responses are JSON strings. The following example responses are pretty-printed for readability.
Request
NOTE
Due to IMDS's caching mechanism, a previously cached nonce value may be returned.
api-version is a mandatory field. Refer to the usage section for supported API versions. nonce is an optional 10-
digit string. If not provided, IMDS returns the current UTC timestamp in its place.
Response
NOTE
The response is a JSON string. The following example response is pretty-printed for readability.
{
"encoding":"pkcs7","signature":"MIIEEgYJKoZIhvcNAQcCoIIEAzCCA/8CAQExDzANBgkqhkiG9w0BAQsFADCBugYJKoZIhvcNAQcBo
IGsBIGpeyJub25jZSI6IjEyMzQ1NjY3NjYiLCJwbGFuIjp7Im5hbWUiOiIiLCJwcm9kdWN0IjoiIiwicHVibGlzaGVyIjoiIn0sInRpbWVTdGF
tcCI6eyJjcmVhdGVkT24iOiIxMS8yMC8xOCAyMjowNzozOSAtMDAwMCIsImV4cGlyZXNPbiI6IjExLzIwLzE4IDIyOjA4OjI0IC0wMDAwIn0sI
nZtSWQiOiIifaCCAj8wggI7MIIBpKADAgECAhBnxW5Kh8dslEBA0E2mIBJ0MA0GCSqGSIb3DQEBBAUAMCsxKTAnBgNVBAMTIHRlc3RzdWJkb21
haW4ubWV0YWRhdGEuYXp1cmUuY29tMB4XDTE4MTEyMDIxNTc1N1oXDTE4MTIyMDIxNTc1NlowKzEpMCcGA1UEAxMgdGVzdHN1YmRvbWFpbi5tZ
XRhZGF0YS5henVyZS5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAML/tBo86ENWPzmXZ0kPkX5dY5QZ150mA8lommszE71x2sCLonz
v4/UWk4H+jMMWRRwIea2CuQ5RhdWAHvKq6if4okKNt66fxm+YTVz9z0CTfCLmLT+nsdfOAsG1xZppEapC0Cd9vD6NCKyE8aYI1pliaeOnFjG0W
vMY04uWz2MdAgMBAAGjYDBeMFwGA1UdAQRVMFOAENnYkHLa04Ut4Mpt7TkJFfyhLTArMSkwJwYDVQQDEyB0ZXN0c3ViZG9tYWluLm1ldGFkYXR
hLmF6dXJlLmNvbYIQZ8VuSofHbJRAQNBNpiASdDANBgkqhkiG9w0BAQQFAAOBgQCLSM6aX5Bs1KHCJp4VQtxZPzXF71rVKCocHy3N9PTJQ9Fpn
d+bYw2vSpQHg/AiG82WuDFpPReJvr7Pa938mZqW9HUOGjQKK2FYDTg6fXD8pkPdyghlX5boGWAMMrf7bFkup+lsT+n2tRw2wbNknO1tQ0wICtq
y2VqzWwLi45RBwTGB6DCB5QIBATA/MCsxKTAnBgNVBAMTIHRlc3RzdWJkb21haW4ubWV0YWRhdGEuYXp1cmUuY29tAhBnxW5Kh8dslEBA0E2mI
BJ0MA0GCSqGSIb3DQEBCwUAMA0GCSqGSIb3DQEBAQUABIGAld1BM/yYIqqv8SDE4kjQo3Ul/IKAVR8ETKcve5BAdGSNkTUooUGVniTXeuvDj5N
kmazOaKZp9fEtByqqPOyw/nlXaZgOO44HDGiPUJ90xVYmfeK6p9RpJBu6kiKhnnYTelUk5u75phe5ZbMZfBhuPhXmYAdjc7Nmw97nx8NnprQ="
}
The signature blob is a PKCS7-signed version of the document. It contains the certificate used for signing along with
certain VM-specific details. For ARM VMs, this includes vmId, sku, nonce, subscriptionId, timestamps for the creation
and expiry of the document, and the plan information for the image. The plan information is only populated for
Azure Marketplace images. For Classic (non-ARM) VMs, only the vmId is guaranteed to be populated. The
certificate can be extracted from the response and used to validate that the response is valid and comes from
Azure. The document contains the following fields:
timestamp/createdOn: The UTC timestamp for when the signed document was created
timestamp/expiresOn: The UTC timestamp for when the signed document expires
Verify that the signature is from Microsoft Azure and check the certificate chain for errors.
NOTE
Due to IMDS's caching mechanism, a previously cached nonce value may be returned.
The nonce in the signed document can be compared if you provided a nonce parameter in the initial request.
NOTE
The certificate for Public cloud and each sovereign cloud will be different.
CLOUD          CERTIFICATE
NOTE
There is a known issue with the certificate used for signing: the certificate's common name may not exactly match
metadata.azure.com for the public cloud. Certificate validation should therefore allow a common name from any
.metadata.azure.com subdomain.
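Given that constraint, the common-name check can be sketched as follows. This is a minimal illustration for the public cloud only, not a substitute for full chain validation, and the function name is illustrative:

```python
def is_acceptable_cn(common_name: str) -> bool:
    """Accept metadata.azure.com itself or any *.metadata.azure.com
    subdomain (public cloud; sovereign clouds use different names)."""
    return (common_name == "metadata.azure.com"
            or common_name.endswith(".metadata.azure.com"))
```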
In cases where the intermediate certificate cannot be downloaded due to network constraints during validation, the
intermediate certificate can be pinned. However, Azure will roll over the certificates as per standard PKI practice.
The pinned certificates would need to be updated when rollover happens. Whenever a change to update the
intermediate certificate is planned, the Azure blog will be updated and Azure customers will be notified. The
intermediate certificates can be found here. The intermediate certificates for each of the regions can be different.
NOTE
The intermediate certificate for Azure China 21Vianet is issued from DigiCert Global Root CA instead of Baltimore. Also, if
you pinned the intermediate certificates for Azure China as part of the root chain authority change, the intermediate
certificates will have to be updated.
route print
NOTE
The following example output from a Windows Server VM with Failover Cluster enabled contains only the IPv4 Route Table
for simplicity.
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 10.0.1.1 10.0.1.10 266
10.0.1.0 255.255.255.192 On-link 10.0.1.10 266
10.0.1.10 255.255.255.255 On-link 10.0.1.10 266
10.0.1.15 255.255.255.255 On-link 10.0.1.10 266
10.0.1.63 255.255.255.255 On-link 10.0.1.10 266
127.0.0.0 255.0.0.0 On-link 127.0.0.1 331
127.0.0.1 255.255.255.255 On-link 127.0.0.1 331
127.255.255.255 255.255.255.255 On-link 127.0.0.1 331
169.254.0.0 255.255.0.0 On-link 169.254.1.156 271
169.254.1.156 255.255.255.255 On-link 169.254.1.156 271
169.254.255.255 255.255.255.255 On-link 169.254.1.156 271
224.0.0.0 240.0.0.0 On-link 127.0.0.1 331
224.0.0.0 240.0.0.0 On-link 169.254.1.156 271
255.255.255.255 255.255.255.255 On-link 127.0.0.1 331
255.255.255.255 255.255.255.255 On-link 169.254.1.156 271
255.255.255.255 255.255.255.255 On-link 10.0.1.10 266
Run the following command, using the Interface address for the default route (Network Destination 0.0.0.0), which is
10.0.1.10 in this example.
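The lookup described above can also be done programmatically. The sketch below parses route print-style output (the column layout is assumed to match the sample table) and returns the interface address for the default route; the function name is illustrative:

```python
def default_route_interface(route_table: str) -> str:
    """Return the Interface address for the 0.0.0.0 network destination
    in route print-style output."""
    for line in route_table.splitlines():
        parts = line.split()
        # Columns: Network Destination, Netmask, Gateway, Interface, Metric
        if len(parts) == 5 and parts[0] == "0.0.0.0" and parts[1] == "0.0.0.0":
            return parts[3]
    raise LookupError("no default route found")
```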
Regional Availability
The service is generally available in all Azure Clouds.
LANGUAGE     EXAMPLE
C#           https://github.com/Microsoft/azureimds/blob/master/IMDSSample.cs
Go           https://github.com/Microsoft/azureimds/blob/master/imdssample.go
Java         https://github.com/Microsoft/azureimds/blob/master/imdssample.java
NodeJS       https://github.com/Microsoft/azureimds/blob/master/IMDSSample.js
Perl         https://github.com/Microsoft/azureimds/blob/master/IMDSSample.pl
PowerShell   https://github.com/Microsoft/azureimds/blob/master/IMDSSample.ps1
Puppet       https://github.com/keirans/azuremetadata
Python       https://github.com/Microsoft/azureimds/blob/master/IMDSSample.py
Ruby         https://github.com/Microsoft/azureimds/blob/master/IMDSSample.rb
HTTP STATUS CODE          REASON
200 OK
429 Too Many Requests     The API currently supports a maximum of 5 queries per second.
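Given the 5 queries-per-second limit, callers should back off when they see HTTP 429. A minimal retry sketch follows; the delays, attempt count, and function names are arbitrary illustrative choices, not documented values:

```python
import time

def call_with_backoff(fetch, max_attempts=4, base_delay=0.2):
    """Retry a callable returning (status, body) while it reports HTTP 429.

    fetch: zero-argument callable simulating one request.
    """
    for attempt in range(max_attempts):
        status, body = fetch()
        if status != 429:
            return status, body
        # Exponential backoff: base_delay, 2x, 4x, ...
        time.sleep(base_delay * (2 ** attempt))
    return status, body
```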
Nested virtualization is supported in several Azure virtual machine families. This capability provides great flexibility
in supporting scenarios such as development, testing, training, and demonstration environments.
This article steps through enabling Hyper-V on an Azure VM and configuring Internet connectivity to that guest
virtual machine.
NOTE
For detailed instructions on creating a new virtual machine, see Create and Manage Windows VMs with the Azure PowerShell
module
WARNING
This command restarts the Azure VM. You will lose your RDP connection during the restart process.
3. View the properties of the switch and note the ifIndex for the new adapter.
Get-NetAdapter
NOTE
Take note of the "ifIndex" for the virtual switch you just created.
1. Open Hyper-V Manager and create a new virtual machine. Configure the virtual machine to use the new
Internal network you created.
Enabling automatic VM guest patching for your Windows VMs helps ease update management by safely and
automatically patching virtual machines to maintain security compliance.
Automatic VM guest patching has the following characteristics:
Patches classified as Critical or Security are automatically downloaded and applied on the VM.
Patches are applied during off-peak hours in the VM's time zone.
Patch orchestration is managed by Azure and patches are applied following availability-first principles.
Virtual machine health, as determined through platform health signals, is monitored to detect patching failures.
Works for all VM sizes.
IMPORTANT
Automatic VM guest patching is currently in Public Preview. An opt-in procedure is needed to use the public preview
functionality described below. This preview version is provided without a service level agreement, and is not recommended for
production workloads. Certain features might not be supported or might have constrained capabilities. For more information,
see Supplemental Terms of Use for Microsoft Azure Previews.
Supported OS images
Only VMs created from certain OS platform images are currently supported in the preview. Custom images are
currently not supported in the preview.
The following platform SKUs are currently supported (and more are added periodically):
P UB L ISH ER O S O F F ER SK U
Manual:
This mode disables Automatic Updates on the Windows virtual machine.
This mode does not support availability-first patching.
This mode should be set when using custom patching solutions.
To use this mode set the property osProfile.windowsConfiguration.enableAutomaticUpdates=false , and set the
property osProfile.windowsConfiguration.patchSettings.patchMode=Manual in the VM template.
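As a sketch, the Manual-mode settings described above would look like this in the VM template. The fragment mirrors the shape of the AutomaticByPlatform example later in this article; only the two properties named above are taken from the text, and the surrounding template is omitted:

```json
{
  "properties": {
    "osProfile": {
      "windowsConfiguration": {
        "provisionVMAgent": true,
        "enableAutomaticUpdates": false,
        "patchSettings": {
          "patchMode": "Manual"
        }
      }
    }
  }
}
```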
NOTE
The property osProfile.windowsConfiguration.enableAutomaticUpdates can currently only be set when the VM is first
created. Switching from Manual to an Automatic mode, or from either Automatic mode to Manual mode, is currently not
supported. Switching from AutomaticByOS mode to AutomaticByPlatform mode is supported.
POST on
`/subscriptions/{subscriptionId}/providers/Microsoft.Features/providers/Microsoft.Compute/features/InGuestAutoPatchVMPreview/register?api-version=2015-12-01`
GET on
`/subscriptions/{subscriptionId}/providers/Microsoft.Features/providers/Microsoft.Compute/features/InGuestAutoPatchVMPreview?api-version=2015-12-01`
Once the feature is registered for your subscription, complete the opt-in process by propagating the change into
the Compute resource provider.
POST on `/subscriptions/{subscriptionId}/providers/Microsoft.Compute/register?api-version=2020-06-01`
Azure PowerShell
Use the Register-AzProviderFeature cmdlet to enable the preview for your subscription.
Once the feature is registered for your subscription, complete the opt-in process by propagating the change into
the Compute resource provider.
Once the feature is registered for your subscription, complete the opt-in process by propagating the change into
the Compute resource provider.
PUT on
`/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine?api-version=2020-06-01`
{
"properties": {
"osProfile": {
"windowsConfiguration": {
"provisionVMAgent": true,
"enableAutomaticUpdates": true,
"patchSettings": {
"patchMode": "AutomaticByPlatform"
}
}
}
}
}
Azure PowerShell
Use the Set-AzVMOperatingSystem cmdlet to enable automatic VM guest patching when creating or updating a
VM.
GET on
`/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine/instanceView?api-version=2020-06-01`
Azure PowerShell
Use the Get-AzVM cmdlet with the -Status parameter to access the instance view for your VM.
PowerShell currently only provides information on the patch extension. Information about patchStatus will also be
available soon through PowerShell.
Azure CLI 2.0
Use az vm get-instance-view to access the instance view for your VM.
REST API
POST on
`/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVirtualMachine/assessPatches?api-version=2020-06-01`
Azure PowerShell
Use the Invoke-AzVmPatchAssessment cmdlet to assess available patches for your virtual machine.
Next steps
Learn more about creating and managing Windows virtual machines
Handling planned maintenance notifications
11/2/2020 • 7 minutes to read
Azure periodically performs updates to improve the reliability, performance, and security of the host infrastructure
for virtual machines. Updates are changes like patching the hosting environment or upgrading and
decommissioning hardware. A majority of these updates are completed without any impact to the hosted virtual
machines. However, there are cases where updates do have an impact:
If the maintenance does not require a reboot, Azure uses in-place migration to pause the VM while the host
is updated. These types of maintenance operations are applied fault domain by fault domain. Progress is
stopped if any warning health signals are received.
If maintenance requires a reboot, you get a notice of when the maintenance is planned. You are given a time
window of about 35 days where you can start the maintenance yourself, when it works for you.
Planned maintenance that requires a reboot is scheduled in waves. Each wave has a different scope (set of regions).
A wave starts with a notification to customers. By default, the notification is sent to the subscription admin and
co-admins. You can add more recipients and messaging options like email, SMS, and webhooks, using Activity
Log Alerts.
Once a notification goes out, a self-service window is made available. During this window, you can query which
of your virtual machines are affected and start maintenance based on your own scheduling needs. The self-
service window is typically about 35 days.
After the self-service window, a scheduled maintenance window begins. At some point during this window,
Azure schedules and applies the required maintenance to your virtual machine.
The goal in having two windows is to give you enough time to start maintenance and reboot your virtual machine
while knowing when Azure will automatically start maintenance.
You can use the Azure portal, PowerShell, REST API, and CLI to query for the maintenance windows for your VMs
and start self-service maintenance.
NOTE
Self-service maintenance might not be available for all of your VMs. To determine if proactive redeploy is available for your
VM, look for Start now in the maintenance status. Self-service maintenance is currently not available for Cloud Services
(Web/Worker Role) and Service Fabric.
Self-service maintenance is not recommended for deployments using availability sets . Availability sets are
already only updated one update domain at a time.
Let Azure trigger the maintenance. For maintenance that requires a reboot, maintenance is done update
domain by update domain. The update domains do not necessarily receive the maintenance sequentially, and
there is a 30-minute pause between update domains.
If a temporary loss of some capacity (1 update domain) is a concern, you can add instances during the
maintenance period.
For maintenance that does not require reboot, updates are applied at the fault domain level.
Don't use self-service maintenance in the following scenarios:
If you shut down your VMs frequently, either manually, using DevTest Labs, using auto-shutdown, or following a
schedule, it could revert the maintenance status and therefore cause additional downtime.
On short-lived VMs that you know will be deleted before the end of the maintenance wave.
For workloads with a large amount of state stored on the local (ephemeral) disk that you want preserved after
the update.
For cases where you resize your VM often, as it could revert the maintenance status.
If you have adopted scheduled events that enable proactive failover or graceful shutdown of your workload in
the 15 minutes before the maintenance shutdown starts.
Use self-service maintenance if you are planning to run your VM uninterrupted during the scheduled maintenance
phase and none of the counter-indications mentioned above apply.
It is best to use self-service maintenance in the following cases:
You need to communicate an exact maintenance window to your management or end-customer.
You need to complete the maintenance by a given date.
You need to control the sequence of maintenance, for example, multi-tier application to guarantee safe recovery.
More than 30 minutes of VM recovery time is needed between two update domains (UDs). To control the time
between update domains, you must trigger maintenance on your VMs one update domain (UD) at a time.
FAQ
Q: Why do you need to reboot my virtual machines now?
A: While the majority of updates and upgrades to the Azure platform do not impact a virtual machine's availability,
there are cases where we can't avoid rebooting virtual machines hosted in Azure. We have accumulated several
changes that require us to restart our servers, which will result in your virtual machines being rebooted.
Q: If I follow your recommendations for High Availability by using an Availability Set, am I safe?
A: Virtual machines deployed in an availability set or virtual machine scale set have the notion of Update Domains
(UD). When performing maintenance, Azure honors the UD constraint and will not reboot virtual machines from
different UDs (within the same availability set) at the same time. Azure also waits for at least 30 minutes before
moving to the next group of virtual machines.
For more information about high availability, see Availability for virtual machines in Azure.
Q: How do I get notified about planned maintenance?
A: A planned maintenance wave starts by setting a schedule to one or more Azure regions. Soon after, an email
notification is sent to the subscription admin and co-admins (one email per subscription). Additional channels and
recipients for this notification could be configured using Activity Log Alerts. In case you deploy a virtual machine to
a region where planned maintenance is already scheduled, you will not receive the notification but rather need to
check the maintenance state of the VM.
Q: I don't see any indication of planned maintenance in the portal, PowerShell, or CLI. What is
wrong?
A: Information related to planned maintenance is available during a planned maintenance wave only for the VMs
that are going to be impacted by it. In other words, if you see no data, it could be that the maintenance wave has
already completed (or not started) or that your virtual machine is already hosted on an updated server.
Q: Is there a way to know exactly when my virtual machine will be impacted?
A: When setting the schedule, we define a time window of several days. However, the exact sequencing of servers
(and VMs) within this window is unknown. Customers who would like to know the exact time for their VMs can use
scheduled events, querying from within the virtual machine to receive a 15-minute notification before a VM
reboot.
Q: How long will it take you to reboot my virtual machine?
A: Depending on the size of your VM, reboot may take up to several minutes during the self-service maintenance
window. During the Azure initiated reboots in the scheduled maintenance window, the reboot will typically take
about 25 minutes. Note that in case you use Cloud Services (Web/Worker Role), Virtual Machine Scale Sets, or
availability sets, you will be given 30 minutes between each group of VMs (UD) during the scheduled maintenance
window.
Q: What is the experience in the case of Virtual Machine Scale Sets?
A: Planned maintenance is now available for Virtual Machine Scale Sets. For instructions on how to initiate
self-service maintenance, refer to the planned maintenance for virtual machine scale sets document.
Q: What is the experience in the case of Cloud Services (Web/Worker Role) and Service Fabric?
A: While these platforms are impacted by planned maintenance, customers using these platforms are considered
safe given that only VMs in a single Upgrade Domain (UD) will be impacted at any given time. Self-service
maintenance is currently not available for Cloud Services (Web/Worker Role) and Service Fabric.
Q: I don’t see any maintenance information on my VMs. What went wrong?
A: There are several reasons why you’re not seeing any maintenance information on your VMs:
1. You are using a subscription marked as Microsoft internal.
2. Your VMs are not scheduled for maintenance. It could be that the maintenance wave has ended, canceled, or
modified so that your VMs are no longer impacted by it.
3. You have deallocated the VM and then started it. This can cause the VM to move to a location that does not have a
planned maintenance wave scheduled, so the VM will no longer show maintenance information.
4. You don’t have the Maintenance column added to your VM list view. While we have added this column to the
default view, customers who configured to see non-default columns must manually add the Maintenance
column to their VM list view.
Q: My VM is scheduled for maintenance for the second time. Why?
A: There are several use cases where you will see your VM scheduled for maintenance after you have already
completed your maintenance-redeploy:
1. We have canceled the maintenance wave and restarted it with a different payload. It could be that we've
detected a faulted payload and simply need to deploy an additional payload.
2. Your VM was service healed to another node due to a hardware fault.
3. You have selected to stop (deallocate) and restart the VM.
4. You have auto shutdown turned on for the VM.
Next steps
You can handle planned maintenance using the Azure CLI, Azure PowerShell or portal.
Handling planned maintenance notifications using
the Azure CLI
11/2/2020 • 2 minutes to read
This article applies to virtual machines running both Linux and Windows.
You can use the CLI to see when VMs are scheduled for maintenance. Planned maintenance information is
available from az vm get-instance-view.
Maintenance information is returned only if there is maintenance planned.
Output
"maintenanceRedeployStatus": {
  "additionalProperties": {},
  "isCustomerInitiatedMaintenanceAllowed": true,
  "lastOperationMessage": null,
  "lastOperationResultCode": "None",
  "maintenanceWindowEndTime": "2018-06-04T16:30:00+00:00",
  "maintenanceWindowStartTime": "2018-05-21T16:30:00+00:00",
  "preMaintenanceWindowEndTime": "2018-05-19T12:30:00+00:00",
  "preMaintenanceWindowStartTime": "2018-05-14T12:30:00+00:00"
}
Start maintenance
The following call will start maintenance on a VM if IsCustomerInitiatedMaintenanceAllowed is set to true.
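Before issuing that call, you can gate on the flag programmatically. The sketch below (function name illustrative) reads the instance-view JSON shown in the output above; field names are as returned by az vm get-instance-view:

```python
import json

def can_start_maintenance(instance_view_json: str) -> bool:
    """True when self-service maintenance may be initiated on the VM,
    i.e. isCustomerInitiatedMaintenanceAllowed is set in the
    maintenanceRedeployStatus block of the instance view."""
    view = json.loads(instance_view_json)
    status = view.get("maintenanceRedeployStatus") or {}
    return bool(status.get("isCustomerInitiatedMaintenanceAllowed"))
```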
Classic deployments
IMPORTANT
Classic VMs will be retired on March 1, 2023.
If you use IaaS resources from ASM, please complete your migration by March 1, 2023. We encourage you to make the
switch sooner to take advantage of the many feature enhancements in Azure Resource Manager.
For more information, see Migrate your IaaS resources to Azure Resource Manager by March 1, 2023.
If you still have legacy VMs that were deployed using the classic deployment model, you can use the Azure classic
CLI to query for VMs and initiate maintenance.
Make sure you are in the correct mode to work with classic VMs by typing:
To start maintenance on your classic VM named myVM in the myService service and myDeployment deployment,
type:
Next steps
You can also handle planned maintenance using the Azure PowerShell or portal.
Handling planned maintenance notifications using
the portal
11/2/2020 • 2 minutes to read
This article applies to virtual machines running both Linux and Windows.
Once a planned maintenance wave is scheduled, you can check for a list of virtual machines that are impacted.
You can use the Azure portal and look for VMs scheduled for maintenance.
1. Sign in to the Azure portal.
2. In the left navigation, click Virtual Machines.
3. In the Virtual Machines pane, select Edit columns button to open the list of available columns.
4. Select and add the following columns:
Maintenance status: Shows the maintenance status for the VM. The following are the potential values:
VALUE          DESCRIPTION
Retry later    You have initiated maintenance with no success. You will be able to use the self-service
               maintenance option at a later time.
Maintenance - Self-service window: Shows the time window when you can self-start maintenance on
your VMs.
Maintenance - Scheduled window: Shows the time window when Azure will maintain your VM in order
to complete maintenance.
Next steps
You can also handle planned maintenance using the Azure CLI or PowerShell.
Handling planned maintenance using PowerShell
11/2/2020 • 2 minutes to read
This article applies to virtual machines running both Linux and Windows.
You can use Azure PowerShell to see when VMs are scheduled for maintenance. Planned maintenance
information is available from the Get-AzVM cmdlet when you use the -status parameter.
Maintenance information is returned only if there is maintenance planned. If no maintenance is scheduled that
impacts the VM, the cmdlet does not return any maintenance information.
Output
MaintenanceRedeployStatus :
IsCustomerInitiatedMaintenanceAllowed : True
PreMaintenanceWindowStartTime : 5/14/2018 12:30:00 PM
PreMaintenanceWindowEndTime : 5/19/2018 12:30:00 PM
MaintenanceWindowStartTime : 5/21/2018 4:30:00 PM
MaintenanceWindowEndTime : 6/4/2018 4:30:00 PM
LastOperationResultCode : None
You can also get the maintenance status for all VMs in a resource group by using Get-AzVM and not specifying a
VM.
The following PowerShell example takes your subscription ID and returns a list of VMs that are scheduled for
maintenance.
function MaintenanceIterator
{
  Select-AzSubscription -SubscriptionId $args[0]
  $rgList = Get-AzResourceGroup
  # The remainder of this function was truncated in the original; as a sketch,
  # check each resource group's VMs for a maintenance status:
  foreach ($rg in $rgList) {
    Get-AzVM -ResourceGroupName $rg.ResourceGroupName -Status |
      Where-Object { $_.MaintenanceRedeployStatus }
  }
}
Classic deployments
IMPORTANT
Classic VMs will be retired on March 1, 2023.
If you use IaaS resources from ASM, please complete your migration by March 1, 2023. We encourage you to make the
switch sooner to take advantage of the many feature enhancements in Azure Resource Manager.
For more information, see Migrate your IaaS resources to Azure Resource Manager by March 1, 2023.
If you still have legacy VMs that were deployed using the classic deployment model, you can use PowerShell to
query for VMs and initiate maintenance.
To get the maintenance status of a VM, type:
Manage platform updates that don't require a reboot using Maintenance Control. Azure frequently updates its
infrastructure to improve reliability, performance, and security, or to launch new features. Most updates are transparent to
users. Some sensitive workloads, like gaming, media streaming, and financial transactions, can't tolerate even a few
seconds of a VM freezing or disconnecting for maintenance. Maintenance control gives you the option to wait on
platform updates and apply them within a 35-day rolling window.
Maintenance control lets you decide when to apply updates to your isolated VMs and Azure dedicated hosts.
With maintenance control, you can:
Batch updates into one update package.
Wait up to 35 days to apply updates.
Automate platform updates for your maintenance window using Azure Functions.
Maintenance configurations work across subscriptions and resource groups.
Limitations
VMs must be on a dedicated host, or be created using an isolated VM size.
After 35 days, an update will automatically be applied.
User must have Resource Contributor access.
Management options
You can create and manage maintenance configurations using any of the following options:
Azure CLI
Azure PowerShell
Azure portal
For an Azure Functions sample, see Scheduling Maintenance Updates with Maintenance Control and Azure
Functions.
Next steps
To learn more, see Maintenance and updates.
Control updates with Maintenance Control and the
Azure CLI
11/2/2020 • 4 minutes to read
Maintenance control lets you decide when to apply updates to your isolated VMs and Azure dedicated hosts. This
topic covers the Azure CLI options for Maintenance control. For more about benefits of using Maintenance control,
its limitations, and other management options, see Managing platform updates with Maintenance Control.
az group create \
--location eastus \
--name myMaintenanceRG
az maintenance configuration create \
-g myMaintenanceRG \
--name myConfig \
--maintenanceScope host \
--location eastus
Dedicated host
To apply a configuration to a dedicated host, you need to include --resource-type hosts , --resource-parent-name
with the name of the host group, and --resource-parent-type hostGroups .
The parameter --resource-id is the ID of the host. You can use az vm host get-instance-view to get the ID of your
dedicated host.
Check configuration
You can verify that the configuration was applied correctly, or check to see what configuration is currently applied
using az maintenance assignment list .
Isolated VM
Dedicated host
If there are updates, only one will be returned, even if there are multiple updates pending. The data for this update
will be returned in an object:
[
{
"impactDurationInSec": 9,
"impactType": "Freeze",
"maintenanceScope": "Host",
"notBefore": "2020-03-03T07:23:04.905538+00:00",
"resourceId": "/subscriptions/9120c5ff-e78e-4bd0-b29f-
75c19cadd078/resourcegroups/DemoRG/providers/Microsoft.Compute/hostGroups/demoHostGroup/hosts/myHost",
"status": "Pending"
}
]
Isolated VM
Check for pending updates for an isolated VM. In this example, the output is formatted as a table for readability.
Dedicated host
Check for pending updates for a dedicated host. In this example, the output is formatted as a table for
readability. Replace the values for the resources with your own.
Apply updates
Use az maintenance applyupdate create to apply pending updates. On success, this command will return JSON
containing the details of the update.
Isolated VM
Create a request to apply updates to an isolated VM.
az maintenance applyupdate create \
--subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
--resource-group myMaintenanceRG \
--resource-name myVM \
--resource-type virtualMachines \
--provider-name Microsoft.Compute
Dedicated host
Apply updates to a dedicated host.
Status : Completed
ResourceId : /subscriptions/12ae7457-4a34-465c-94c1-
17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Comp
ute/virtualMachines/DXT-test-04-iso
LastUpdateTime : 1/1/2020 12:00:00 AM
Id : /subscriptions/12ae7457-4a34-465c-94c1-
17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Comp
ute/virtualMachines/DXT-test-04-iso/providers/Microsoft.Maintenance/applyUpdates/default
Name : default
Type : Microsoft.Maintenance/applyUpdates
LastUpdateTime is the time when the update completed, whether it was initiated by you or by the platform (in case
the self-service maintenance window was not used). If an update has never been applied through maintenance control,
it shows a default value.
Isolated VM
Dedicated host
az maintenance applyupdate get \
--subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
--resource-group myMaintenanceRG \
--resource-name myHost \
--resource-type hosts \
--provider-name Microsoft.Compute \
--resource-parent-name myHostGroup \
--resource-parent-type hostGroups \
--apply-update-name myUpdateName \
--query "{LastUpdate:lastUpdateTime, Name:name, ResourceGroup:resourceGroup, Status:status}" \
--output table
Next steps
To learn more, see Maintenance and updates.
Control updates with Maintenance Control and Azure
PowerShell
11/2/2020 • 5 minutes to read
Maintenance control lets you decide when to apply updates to your isolated VMs and Azure dedicated hosts. This
topic covers the Azure PowerShell options for Maintenance control. For more about benefits of using Maintenance
control, its limitations, and other management options, see Managing platform updates with Maintenance Control.
If you are installing locally, make sure you open your PowerShell prompt as an administrator.
You may also be asked to confirm that you want to install from an untrusted repository. Type Y or select Yes to
All to install the module.
New-AzResourceGroup `
-Location eastus `
-Name myMaintenanceRG
$config = New-AzMaintenanceConfiguration `
-ResourceGroup myMaintenanceRG `
-Name myConfig `
-MaintenanceScope host `
-Location eastus
Using -MaintenanceScope host ensures that the maintenance configuration is used for controlling updates to the
host.
If you try to create a configuration with the same name, but in a different location, you will get an error.
Configuration names must be unique to your resource group.
You can query for available maintenance configurations using Get-AzMaintenanceConfiguration.
Get-AzMaintenanceConfiguration | Format-Table -Property Name,Id
IMPORTANT
Scheduled window feature is currently in Public Preview. This preview version is provided without a service level agreement,
and is not recommended for production workloads. Certain features might not be supported or might have constrained
capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
$config = New-AzMaintenanceConfiguration `
-ResourceGroup $RGName `
-Name $MaintenanceConfig `
-MaintenanceScope Host `
-Location $location `
-StartDateTime "2020-10-01 00:00" `
-TimeZone "Pacific Standard Time" `
-Duration "05:00" `
-RecurEvery "Month Fourth Monday"
IMPORTANT
Maintenance duration must be 2 hours or longer. Maintenance recurrence must be set to occur at least once every 35 days.
Maintenance recurrence can be expressed as daily, weekly, or monthly schedules. Daily schedule examples are
recurEvery: Day, recurEvery: 3Days. Weekly schedule examples are recurEvery: 3Weeks, recurEvery: Week
Saturday,Sunday. Monthly schedule examples are recurEvery: Month day23,day24, recurEvery: Month Last Sunday,
recurEvery: Month Fourth Monday.
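Those constraints can be sanity-checked before creating a configuration. The sketch below handles only the Day/Week/Month cadence forms shown above; it is a simplification (month-based schedules are approximated as 31 days) and the names are illustrative:

```python
import re

MAX_GAP_DAYS = 35  # recurrence must occur at least once every 35 days

def approx_gap_days(recur_every: str) -> int:
    """Approximate the gap in days implied by a recurEvery string.

    Handles the 'Day'/'NDays', 'Week .../NWeeks', and 'Month ...' forms
    shown in this article; month schedules are approximated as 31 days.
    """
    m = re.match(r"^(\d*)\s*(Day|Week|Month)", recur_every)
    if not m:
        raise ValueError(f"unrecognized recurrence: {recur_every}")
    count = int(m.group(1) or 1)
    unit_days = {"Day": 1, "Week": 7, "Month": 31}[m.group(2)]
    return count * unit_days

def is_valid_recurrence(recur_every: str) -> bool:
    """True when the schedule recurs at least once every 35 days."""
    return approx_gap_days(recur_every) <= MAX_GAP_DAYS
```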
New-AzConfigurationAssignment `
-ResourceGroupName myResourceGroup `
-Location eastus `
-ResourceName myVM `
-ResourceType VirtualMachines `
-ProviderName Microsoft.Compute `
-ConfigurationAssignmentName $config.Name `
-MaintenanceConfigurationId $config.Id
Dedicated host
To apply a configuration to a dedicated host, you also need to include -ResourceType hosts , -ResourceParentName
with the name of the host group, and -ResourceParentType hostGroups .
New-AzConfigurationAssignment `
-ResourceGroupName myResourceGroup `
-Location eastus `
-ResourceName myHost `
-ResourceType hosts `
-ResourceParentName myHostGroup `
-ResourceParentType hostGroups `
-ProviderName Microsoft.Compute `
-ConfigurationAssignmentName $config.Name `
-MaintenanceConfigurationId $config.Id
{
    "maintenanceScope": "Host",
    "impactType": "Freeze",
    "status": "Pending",
    "impactDurationInSec": 9,
    "notBefore": "2020-02-21T16:47:44.8728029Z",
    "properties": {
        "resourceId": "/subscriptions/39c6cced-4d6c-4dd5-af86-57499cd3f846/resourcegroups/Ignite2019/providers/Microsoft.Compute/virtualMachines/MCDemo3"
    }
}
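Output like the above can also be inspected programmatically. A minimal sketch in Python, using the example payload; is_pending and not_before are hypothetical helper names introduced for this illustration:

```python
import json
from datetime import datetime, timezone

# Example pending-update payload from above, with the wrapped resourceId joined.
sample = """{
  "maintenanceScope": "Host",
  "impactType": "Freeze",
  "status": "Pending",
  "impactDurationInSec": 9,
  "notBefore": "2020-02-21T16:47:44.8728029Z",
  "properties": {
    "resourceId": "/subscriptions/39c6cced-4d6c-4dd5-af86-57499cd3f846/resourcegroups/Ignite2019/providers/Microsoft.Compute/virtualMachines/MCDemo3"
  }
}"""

update = json.loads(sample)

def is_pending(update: dict) -> bool:
    """True when the platform update has not been applied yet."""
    return update.get("status") == "Pending"

def not_before(update: dict) -> datetime:
    """Parse the notBefore timestamp into an aware UTC datetime."""
    # Trim the trailing "Z" and any sub-microsecond digits so that
    # datetime.fromisoformat accepts the timestamp.
    raw = update["notBefore"].rstrip("Z")
    if "." in raw:
        whole, frac = raw.split(".", 1)
        raw = whole + "." + frac[:6]
    return datetime.fromisoformat(raw).replace(tzinfo=timezone.utc)
```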
Isolated VM
Check for pending updates for an isolated VM. In this example, the output is formatted as a table for readability.
Get-AzMaintenanceUpdate `
-ResourceGroupName myResourceGroup `
-ResourceName myVM `
-ResourceType VirtualMachines `
-ProviderName Microsoft.Compute | Format-Table
Dedicated host
Check for pending updates for a dedicated host. In this example, the output is formatted as a table for readability. Replace the values for the resources with your own.
Get-AzMaintenanceUpdate `
-ResourceGroupName myResourceGroup `
-ResourceName myHost `
-ResourceType hosts `
-ResourceParentName myHostGroup `
-ResourceParentType hostGroups `
-ProviderName Microsoft.Compute | Format-Table
Apply updates
Use New-AzApplyUpdate to apply pending updates.
Isolated VM
Create a request to apply updates to an isolated VM.
New-AzApplyUpdate `
-ResourceGroupName myResourceGroup `
-ResourceName myVM `
-ResourceType VirtualMachines `
-ProviderName Microsoft.Compute
On success, this command will return a PSApplyUpdate object. You can use the Name attribute in the
Get-AzApplyUpdate command to check the update status. See Check update status.
Dedicated host
Apply updates to a dedicated host.
New-AzApplyUpdate `
-ResourceGroupName myResourceGroup `
-ResourceName myHost `
-ResourceType hosts `
-ResourceParentName myHostGroup `
-ResourceParentType hostGroups `
-ProviderName Microsoft.Compute
Status         : Completed
ResourceId     : /subscriptions/12ae7457-4a34-465c-94c1-17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Compute/virtualMachines/DXT-test-04-iso
LastUpdateTime : 1/1/2020 12:00:00 AM
Id             : /subscriptions/12ae7457-4a34-465c-94c1-17c058c2bd25/resourcegroups/TestShantS/providers/Microsoft.Compute/virtualMachines/DXT-test-04-iso/providers/Microsoft.Maintenance/applyUpdates/default
Name           : default
Type           : Microsoft.Maintenance/applyUpdates
LastUpdateTime is the time when the update was completed, whether it was initiated by you or by the platform (in the case where the self-maintenance window was not used). If an update has never been applied through maintenance control, LastUpdateTime shows the default value.
Isolated VM
Check for updates to a specific virtual machine.
Get-AzApplyUpdate `
-ResourceGroupName myResourceGroup `
-ResourceName myVM `
-ResourceType VirtualMachines `
-ProviderName Microsoft.Compute `
-ApplyUpdateName default
Dedicated host
Check for updates to a dedicated host.
Get-AzApplyUpdate `
-ResourceGroupName myResourceGroup `
-ResourceName myHost `
-ResourceType hosts `
-ResourceParentName myHostGroup `
-ResourceParentType hostGroups `
-ProviderName Microsoft.Compute `
-ApplyUpdateName myUpdateName
Use Remove-AzMaintenanceConfiguration to delete a maintenance configuration.
Remove-AzMaintenanceConfiguration `
-ResourceGroupName myResourceGroup `
-Name $config.Name
Next steps
To learn more, see Maintenance and updates.
Control updates with Maintenance Control and the
Azure portal
11/2/2020 • 2 minutes to read
Maintenance control lets you decide when to apply updates to your isolated VMs and Azure dedicated hosts. This
topic covers the Azure portal options for Maintenance control. For more about benefits of using Maintenance
control, its limitations, and other management options, see Managing platform updates with Maintenance Control.
3. Click Add.
4. Choose a subscription and resource group, provide a name for the configuration, and choose a region. Click Next.
5. Add tags and values. Click Next.
Check configuration
You can verify that the configuration was applied correctly or check to see any maintenance configuration that is
currently assigned using Maintenance Configurations . The Type column shows whether the configuration is
assigned to an isolated VM or Azure dedicated host.
You can also check the configuration for a specific virtual machine on its properties page. Click Maintenance to
see the configuration assigned to that virtual machine.
Check for pending updates
There are two ways to check whether updates are pending for a maintenance configuration. In Maintenance Configurations, on the details for the configuration, click Assignments and check Maintenance status.
You can also check a specific host using Virtual Machines or the properties of the dedicated host.
Apply updates
You can apply pending updates on demand using Virtual Machines. On the VM details, click Maintenance and click Apply maintenance now.
Check the status of applying updates
You can check on the progress of the updates for a configuration in Maintenance Configurations or using Virtual Machines. On the VM details, click Maintenance. In the following example, the Maintenance state shows an update is Pending.
Delete a maintenance configuration
To delete a configuration, open the configuration details and click Delete .
Next steps
To learn more, see Maintenance and updates.
Azure Metadata Service: Scheduled Events for
Windows VMs
11/2/2020 • 7 minutes to read
Scheduled Events is an Azure Metadata Service that gives your application time to prepare for virtual machine
(VM) maintenance. It provides information about upcoming maintenance events (for example, reboot) so that
your application can prepare for them and limit disruption. It's available for all Azure Virtual Machines types,
including PaaS and IaaS on both Windows and Linux.
For information about Scheduled Events on Linux, see Scheduled Events for Linux VMs.
NOTE
Scheduled Events is generally available in all Azure Regions. See Version and Region Availability for latest release
information.
The basics
Metadata Service exposes information about running VMs by using a REST endpoint that's accessible from within
the VM. The information is available via a nonroutable IP so that it's not exposed outside the VM.
Scope
Scheduled events are delivered to:
Standalone Virtual Machines.
All the VMs in a cloud service.
All the VMs in an availability set.
All the VMs in a scale set placement group.
NOTE
Scheduled events for a Fabric Controller (FC) tenant are delivered to all virtual machines (VMs) in that tenant. An FC tenant equates to a standalone VM, an entire cloud service, an entire availability set, or a placement group for a virtual machine scale set (VMSS), regardless of availability zone usage.
As a result, check the Resources field in the event to identify which VMs are affected.
Endpoint discovery
For VNET enabled VMs, Metadata Service is available from a static nonroutable IP, 169.254.169.254 . The full
endpoint for the latest version of Scheduled Events is:
http://169.254.169.254/metadata/scheduledevents?api-version=2019-08-01
If the VM is not created within a virtual network, which is the default case for cloud services and classic VMs, additional logic is required to discover the IP address to use. To learn how to discover the host endpoint, see this sample.
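From inside a VNET-enabled VM, a minimal query can be sketched as follows; the Metadata: true header is required on every request, and build_request is a hypothetical helper name for this illustration:

```python
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/scheduledevents?api-version=2019-08-01"

def build_request(url: str = METADATA_URL) -> urllib.request.Request:
    # The Metadata: true header is mandatory; requests without it are rejected.
    req = urllib.request.Request(url)
    req.add_header("Metadata", "true")
    return req

def get_scheduled_events() -> dict:
    # Only works from inside the VM, since 169.254.169.254 is non-routable.
    with urllib.request.urlopen(build_request()) as resp:
        return json.loads(resp.read())
```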
Version and region availability
The Scheduled Events service is versioned. Versions are mandatory; the current version is 2019-08-01 .
NOTE
Previous preview releases of Scheduled Events supported {latest} as the api-version. This format is no longer supported and
will be deprecated in the future.
A response contains an array of scheduled events. An empty array means that currently no events are scheduled.
In the case where there are scheduled events, the response contains an array of events.
{
    "DocumentIncarnation": {IncarnationID},
    "Events": [
        {
            "EventId": {eventID},
            "EventType": "Reboot" | "Redeploy" | "Freeze" | "Preempt" | "Terminate",
            "ResourceType": "VirtualMachine",
            "Resources": [{resourceName}],
            "EventStatus": "Scheduled" | "Started",
            "NotBefore": {timeInUTC},
            "Description": {eventDescription},
            "EventSource" : "Platform" | "User",
        }
    ]
}
Event properties

EventId
Globally unique identifier for this event.
Example: 602d9444-d2cd-49c7-8624-8643e7171297

EventType
The impact this event causes.
Values:
Freeze: The virtual machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there is no impact on memory or open files.
Reboot: The virtual machine is scheduled for reboot (non-persistent memory is lost).
Redeploy: The virtual machine is scheduled to move to another node (ephemeral disks are lost).
Preempt: The Spot virtual machine is being deleted (ephemeral disks are lost).
Terminate: The virtual machine is scheduled to be deleted.

ResourceType
The type of resource this event affects.
Values: VirtualMachine

Resources
The list of resources this event affects.
Example: ["FrontEnd_IN_0", "BackEnd_IN_0"]

EventStatus
The status of this event.
Values:
Scheduled: This event is scheduled to start after the time specified in the NotBefore property.
Started: This event has started.

NotBefore
The time after which this event can start.
Example: Mon, 19 Sep 2016 18:29:47 GMT

Description
A description of this event.
Example: Host server is undergoing maintenance.

EventSource
Initiator of the event.
Values:
Platform: This event is initiated by platform.
User: This event is initiated by user.
Event scheduling
Each event is scheduled a minimum amount of time in the future based on the event type. This time is reflected in
an event's NotBefore property.
EVENT TYPE | MINIMUM NOTICE
Freeze | 15 minutes
Reboot | 15 minutes
Redeploy | 10 minutes
Preempt | 30 seconds
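NotBefore timestamps use RFC 1123 format (for example, Mon, 19 Sep 2016 18:29:47 GMT), so Python's email.utils can parse them to compute the remaining lead time. A small sketch; seconds_until is a hypothetical helper name:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def seconds_until(not_before: str, now: datetime = None) -> float:
    """Seconds remaining until an event's NotBefore time (negative if past)."""
    if now is None:
        now = datetime.now(timezone.utc)
    # NotBefore uses RFC 1123 format, e.g. "Mon, 19 Sep 2016 18:29:47 GMT",
    # which parsedate_to_datetime returns as an aware UTC datetime.
    return (parsedate_to_datetime(not_before) - now).total_seconds()
```

This lets your shutdown logic decide whether there is enough time left for a graceful drain before the event starts.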
NOTE
In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a NotBefore that is typically a few days in the future. Azure tries to give 7 days' advance notice when possible, but the actual time depends on the predicted failure risk assessment and might be shorter if there is a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible.
Polling frequency
You can poll the endpoint for updates as frequently or infrequently as you like. However, the longer the time between requests, the less time you have to react to an upcoming event. Most events have 5 to 15 minutes of advance notice, although in some cases advance notice might be as little as 30 seconds. To ensure that you have as much time as possible to take mitigating actions, we recommend that you poll the service once per second.
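A once-per-second polling loop can be sketched as follows. This is an illustrative pattern rather than an official sample: the fetch and handle callables are stand-ins for your own code, and deduplicating on DocumentIncarnation (which changes whenever the event set changes) avoids re-handling the same events:

```python
import time

def watch_scheduled_events(fetch, handle, poll_interval=1.0, max_polls=None):
    """Poll for scheduled events roughly once per second.

    fetch  -- callable returning the parsed JSON payload from the endpoint
    handle -- callable invoked with the list of events when the set changes
    """
    last_incarnation = None
    polls = 0
    while max_polls is None or polls < max_polls:
        payload = fetch()
        # DocumentIncarnation changes whenever the event set changes,
        # so it can be used to avoid re-handling identical payloads.
        incarnation = payload.get("DocumentIncarnation")
        if incarnation != last_incarnation:
            last_incarnation = incarnation
            handle(payload.get("Events", []))
        polls += 1
        time.sleep(poll_interval)
```

In production, fetch would call the metadata endpoint shown earlier and handle would run your drain/shutdown logic.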
Start an event
After you learn of an upcoming event and finish your logic for graceful shutdown, you can approve the
outstanding event by making a POST call to Metadata Service with EventId . This call indicates to Azure that it
can shorten the minimum notification time (when possible).
The following JSON sample is expected in the POST request body. The request should contain a list of
StartRequests . Each StartRequest contains EventId for the event you want to expedite:
{
"StartRequests" : [
{
"EventId": {EventId}
}
]
}
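As a rough illustration, the approval call can also be issued from Python. The body shape follows the JSON sample above; build_start_request and approve_event are hypothetical helper names introduced for this sketch:

```python
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/scheduledevents?api-version=2019-08-01"

def build_start_request(event_id: str) -> bytes:
    # Body shape follows the StartRequests JSON sample above.
    return json.dumps({"StartRequests": [{"EventId": event_id}]}).encode()

def approve_event(event_id: str) -> None:
    """Acknowledge an event so the platform can start it early (from inside the VM)."""
    req = urllib.request.Request(METADATA_URL, data=build_start_request(event_id))
    req.add_header("Metadata", "true")
    urllib.request.urlopen(req)  # sends a POST because a request body is supplied
```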
Bash sample
NOTE
Acknowledging an event allows the event to proceed for all Resources in the event, not just the VM that acknowledges
the event. Therefore, you can choose to elect a leader to coordinate the acknowledgement, which might be as simple as the
first machine in the Resources field.
Python sample
The following sample queries Metadata Service for scheduled events and logs each outstanding event that affects this host:
#!/usr/bin/python3
import json
import socket
import urllib.request

metadata_url = "http://169.254.169.254/metadata/scheduledevents?api-version=2019-08-01"
this_host = socket.gethostname()

def get_scheduled_events():
    req = urllib.request.Request(metadata_url)
    req.add_header('Metadata', 'true')
    resp = urllib.request.urlopen(req)
    data = json.loads(resp.read())
    return data

def handle_scheduled_events(data):
    for evt in data['Events']:
        eventid = evt['EventId']
        status = evt['EventStatus']
        resources = evt['Resources']
        eventtype = evt['EventType']
        resourcetype = evt['ResourceType']
        notbefore = evt['NotBefore'].replace(" ", "_")
        description = evt['Description']
        eventSource = evt['EventSource']
        if this_host in resources:
            print("+ Scheduled Event. This host " + this_host +
                  " is scheduled for " + eventtype +
                  " by " + eventSource +
                  " with description " + description +
                  " not before " + notbefore)
            # Add logic for handling events here

def main():
    data = get_scheduled_events()
    handle_scheduled_events(data)

if __name__ == '__main__':
    main()
Next steps
Watch Scheduled Events on Azure Friday to see a demo.
Review the Scheduled Events code samples in the Azure Instance Metadata Scheduled Events GitHub
repository.
Read more about the APIs that are available in the Instance Metadata Service.
Learn about planned maintenance for Windows virtual machines in Azure.
Monitoring Scheduled Events
11/2/2020 • 7 minutes to read
Updates are applied to different parts of Azure every day to keep the services running on them secure and up to date. In addition to planned updates, unplanned events may also occur. For example, if any hardware degradation or fault is detected, Azure services may need to perform unplanned maintenance. Through live migration, memory-preserving updates, and a generally strict bar on the impact of updates, these events are in most cases almost transparent to customers: they have no impact, or at most cause a few seconds of virtual machine freeze. For some applications, however, even a few seconds of virtual machine freeze could cause an impact. Knowing in advance about upcoming Azure maintenance is important to ensure the best experience for those applications. The Scheduled Events service provides a programmatic interface to be notified about upcoming maintenance and enables you to handle the maintenance gracefully.
In this article, we will show how you can use scheduled events to be notified about maintenance events that could
be affecting your VMs and build some basic automation that can help with monitoring and analysis.
Prerequisites
For this example, you will need to create a Windows virtual machine in an availability set. Scheduled Events provides notifications about changes that can affect any of the virtual machines in your availability set, cloud service, virtual machine scale set, or standalone VMs. We will run a service that polls for scheduled events on one of the VMs, which will act as a collector to get events for all of the other VMs in the availability set.
Don't delete the resource group at the end of the tutorial.
You will also need to create a Log Analytics workspace that we will use to aggregate information from the VMs in
the availability set.
New-AzVm `
-ResourceGroupName "myResourceGroupAvailability" `
-Name "myCollectorVM" `
-Location "East US" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-SecurityGroupName "myNetworkSecurityGroup" `
-OpenPorts 3389 `
-PublicIpAddressName "myPublicIpAddress3" `
-AvailabilitySetName "myAvailabilitySet" `
-Credential $cred
.\SchService.ps1 -Setup
.\SchService.ps1 -Start
The service will now poll every 10 seconds for any scheduled events and approve the events to expedite the maintenance. Freeze, Reboot, Redeploy, and Preempt are the events captured by Scheduled Events. You can extend the script to trigger mitigations prior to approving the event.
Validate the service status and make sure it is running.
.\SchService.ps1 -status
When the Scheduled Events service captures an event, it is logged in the Application event log with Event Status, Event Type, Resources (VM name), and NotBefore (minimum notice period). You can locate events with ID 1234 in the Application event log.
NOTE
In this example, the virtual machines were in an availability set, which enabled us to designate a single virtual machine as the collector to listen for and route scheduled events to our Log Analytics workspace. If you have standalone virtual machines, you can run the service on every virtual machine and then connect each one individually to your Log Analytics workspace.
For our setup, we chose Windows, but you can design a similar solution on Linux.
At any point, you can stop or remove the Scheduled Events service by using the -stop and -remove switches.
4. Leave ERROR , WARNING , and INFORMATION selected and then select Save to save the settings.
NOTE
There will be some delay, and it may take up to 10 minutes before the log is available.
2. Select Save, and then type logQuery for the name, leave Query as the type, type VMLogs as the Category, and then select Save.
Next steps
To learn more, see the Scheduled events service page on GitHub.
Azure virtual machine scale set automatic OS image
upgrades
11/2/2020 • 13 minutes to read
Enabling automatic OS image upgrades on your scale set helps ease update management by safely and
automatically upgrading the OS disk for all instances in the scale set.
Automatic OS upgrade has the following characteristics:
Once configured, the latest OS image published by image publishers is automatically applied to the scale set
without user intervention.
Upgrades batches of instances in a rolling manner each time a new image is published by the publisher.
Integrates with application health probes and Application Health extension.
Works for all VM sizes, and for both Windows and Linux images.
You can opt out of automatic upgrades at any time (OS Upgrades can be initiated manually as well).
The OS Disk of a VM is replaced with the new OS Disk created with latest image version. Configured extensions
and custom data scripts are run, while persisted data disks are retained.
Extension sequencing is supported.
Automatic OS image upgrade can be enabled on a scale set of any size.
Supported OS images
Only certain OS platform images are currently supported. Custom images are supported if the scale set uses
custom images through Shared Image Gallery.
The following platform SKUs are currently supported (and more are added periodically):
P UB L ISH ER O S O F F ER SK U
NOTE
It can take up to 3 hours for a scale set to trigger the first image upgrade rollout after the scale set is first configured for
automatic OS upgrades. This is a one-time delay per scale set. Subsequent image rollouts are triggered on the scale set
within 30-60 minutes.
PUT or PATCH on
`/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScale
Sets/myScaleSet?api-version=2019-12-01`
{
    "properties": {
        "upgradePolicy": {
            "automaticOSUpgradePolicy": {
                "enableAutomaticOSUpgrade": true
            }
        }
    }
}
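As a sketch of how this call might be scripted, the following Python helper only assembles the request URL and body; authentication and the actual HTTP call are omitted, the subscription, resource group, and scale set names are placeholders, and the ARM endpoint and api-version follow the example above:

```python
import json

API_VERSION = "2019-12-01"

def build_enable_upgrade_request(subscription_id: str, resource_group: str,
                                 scale_set: str) -> tuple:
    """Assemble the PUT/PATCH URL and JSON body for enabling automatic OS upgrade."""
    url = (
        f"https://management.azure.com/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}/providers/Microsoft.Compute"
        f"/virtualMachineScaleSets/{scale_set}?api-version={API_VERSION}"
    )
    body = {
        "properties": {
            "upgradePolicy": {
                "automaticOSUpgradePolicy": {"enableAutomaticOSUpgrade": True}
            }
        }
    }
    return url, json.dumps(body)
```

Sending the request would additionally require an Azure AD bearer token in the Authorization header, which is out of scope for this sketch.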
Azure PowerShell
Use the Update-AzVmss cmdlet to configure automatic OS image upgrades for your scale set. The following
example configures automatic upgrades for the scale set named myScaleSet in the resource group named
myResourceGroup:
NOTE
After configuring automatic OS image upgrades for your scale set, you must also bring the scale set VMs to the latest scale
set model if your scale set uses the 'Manual' upgrade policy.
NOTE
When using Automatic OS Upgrades with Service Fabric, the new OS image is rolled out Update Domain by Update Domain
to maintain high availability of the services running in Service Fabric. To utilize Automatic OS Upgrades in Service Fabric your
cluster must be configured to use the Silver Durability Tier or higher. For more information on the durability characteristics of
Service Fabric clusters, please see this documentation.
GET on
`/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScale
Sets/myScaleSet/osUpgradeHistory?api-version=2019-12-01`
The GET call returns properties similar to the following example output:
{
    "value": [
        {
            "properties": {
                "runningStatus": {
                    "code": "RollingForward",
                    "startTime": "2018-07-24T17:46:06.1248429+00:00",
                    "completedTime": "2018-04-21T12:29:25.0511245+00:00"
                },
                "progress": {
                    "successfulInstanceCount": 16,
                    "failedInstanceCount": 0,
                    "inProgressInstanceCount": 4,
                    "pendingInstanceCount": 0
                },
                "startedBy": "Platform",
                "targetImageReference": {
                    "publisher": "MicrosoftWindowsServer",
                    "offer": "WindowsServer",
                    "sku": "2016-Datacenter",
                    "version": "2016.127.20180613"
                },
                "rollbackInfo": {
                    "successfullyRolledbackInstanceCount": 0,
                    "failedRolledbackInstanceCount": 0
                }
            },
            "type": "Microsoft.Compute/virtualMachineScaleSets/rollingUpgrades",
            "location": "westeurope"
        }
    ]
}
Azure PowerShell
Use the Get-AzVmss cmdlet to check OS upgrade history for your scale set. The following example details how you
review the OS upgrade status for a scale set named myScaleSet in the resource group named myResourceGroup:
GET on
`/subscriptions/subscription_id/providers/Microsoft.Compute/locations/{location}/publishers/{publisherName}/ar
tifacttypes/vmimage/offers/{offer}/skus/{skus}/versions?api-version=2019-12-01`
Azure CLI
az vm image list --location "westus" --publisher "Canonical" --offer "UbuntuServer" --sku "16.04-LTS" --all
NOTE
Manual trigger of OS image upgrades does not provide automatic rollback capabilities. If an instance does not recover its
health after an upgrade operation, its previous OS disk can't be restored.
REST API
Use the Start OS Upgrade API call to start a rolling upgrade to move all virtual machine scale set instances to the
latest available image OS version. Instances that are already running the latest available OS version are not
affected. The following example details how you can start a rolling OS upgrade on a scale set named myScaleSet in
the resource group named myResourceGroup:
POST on
`/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScale
Sets/myScaleSet/osRollingUpgrade?api-version=2019-12-01`
Azure PowerShell
Use the Start-AzVmssRollingOSUpgrade cmdlet to start a rolling OS upgrade for your scale set. The following
example details how you can start a rolling OS upgrade on a scale set named myScaleSet in the resource group
named myResourceGroup:
Next steps
For more examples on how to use automatic OS upgrades with scale sets, review the GitHub repo.
Preview: Maintenance control for Azure virtual
machine scale sets
11/2/2020 • 2 minutes to read
Manage automatic OS image upgrades for your virtual machine scale sets using maintenance control.
Maintenance control lets you decide when to apply updates to OS disks in your virtual machine scale sets through
an easier and more predictable experience.
Maintenance configurations work across subscriptions and resource groups.
The entire workflow comes down to these steps:
Create a maintenance configuration.
Associate a virtual machine scale set to a maintenance configuration.
Enable automatic OS upgrades.
IMPORTANT
Maintenance control for OS image upgrades on Azure virtual machine scale sets is currently in Public Preview. This preview
version is provided without a service level agreement, and is not recommended for production workloads. Certain features
might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for
Microsoft Azure Previews.
Limitations
VMs must be in a scale set.
User must have Resource Contributor access.
Maintenance duration must be 5 hours or longer in the maintenance configuration.
Maintenance recurrence must be set to 'Day' in the maintenance configuration.
Management options
You can create and manage maintenance configurations using any of the following options:
Azure PowerShell
Next steps
Learn how to control when maintenance is applied to your Azure virtual machine scale sets using Maintenance
control and PowerShell.
Virtual machine scale set maintenance control by using PowerShell
Preview: Maintenance control for OS image upgrades
on Azure virtual machine scale sets using PowerShell
11/2/2020 • 2 minutes to read
Maintenance control lets you decide when to apply automatic guest OS image upgrades to your virtual machine
scale sets. This topic covers the Azure PowerShell options for Maintenance control. For more information on using
Maintenance control, see Maintenance control for Azure virtual machine scale sets.
IMPORTANT
Maintenance control for OS image upgrades on Azure virtual machine scale sets is currently in Public Preview. This preview
version is provided without a service level agreement, and is not recommended for production workloads. Certain features
might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for
Microsoft Azure Previews.
If you're installing locally, make sure you open your PowerShell prompt as an administrator.
You may also be asked to confirm that you want to install from an untrusted repository. Type Y or select Yes to
All to install the module.
Connect-AzAccount
Set-AzContext 00a000aa-0a00-0a0a-00aa-a00a000aaa00
$RGName="myMaintenanceRG"
$MaintenanceConfig="myMaintenanceConfig"
$location="eastus2"
$vmss="myMaintenanceVMSS"
$config = New-AzMaintenanceConfiguration `
-ResourceGroup $RGName `
-Name $MaintenanceConfig `
-MaintenanceScope OSImage `
-Location $location `
-StartDateTime "2020-10-01 00:00" `
-TimeZone "Pacific Standard Time" `
-Duration "05:00" `
-RecurEvery "Day"
IMPORTANT
Maintenance duration must be 5 hours or longer. Maintenance recurrence must be set to Day.
Using -MaintenanceScope OSImage ensures that the maintenance configuration is used for controlling updates to
the guest OS.
If you try to create a configuration with the same name, but in a different location, you'll get an error. Configuration
names must be unique to your resource group.
You can query for available maintenance configurations using Get-AzMaintenanceConfiguration.
New-AzConfigurationAssignment `
-ResourceGroupName $RGName `
-Location $location `
-ResourceName $vmss `
-ResourceType VirtualMachineScaleSets `
-ProviderName Microsoft.Compute `
-ConfigurationAssignmentName $config.Name `
-MaintenanceConfigurationId $config.Id
Virtual machines and other compute resources require an agent to collect monitoring data required to measure
the performance and availability of their guest operating system and workloads. This article describes the agents
used by Azure Monitor and helps you determine which you need to meet the requirements for your particular
environment.
NOTE
Azure Monitor currently has multiple agents because of recent consolidation of Azure Monitor and Log Analytics. While
there may be overlap in their features, each has unique capabilities. Depending on your requirements, you may need one
or more of the agents on your virtual machines.
You may have a specific set of requirements that can't be completely met with a single agent for a particular
virtual machine. For example, you may want to use metric alerts which requires Azure diagnostics extension but
also want to leverage the functionality of Azure Monitor for VMs which requires the Log Analytics agent and the
Dependency agent. In cases such as this, you can use multiple agents, and this is a common scenario for
customers who require functionality from each.
Summary of agents
The following tables provide a quick comparison of the Azure Monitor agents for Windows and Linux. Further
detail on each is provided in the section below.
NOTE
The Azure Monitor agent is currently in preview with limited capabilities. This table will be updated as additional capabilities are added.
Windows agents
Azure Monitor agent (preview)
- Data collected: Event Logs, Performance
- Data sent to: Azure Monitor Logs, Azure Monitor Metrics
- Services and features supported: Log Analytics, Metrics explorer

Diagnostics extension (WAD)
- Data collected: Event Logs, ETW events, Performance, File based logs, IIS logs, .NET app logs, Crash dumps, Agent diagnostics logs
- Data sent to: Azure Storage, Azure Monitor Metrics, Event Hub
- Services and features supported: Metrics explorer

Log Analytics agent
- Data collected: Event Logs, Performance, File based logs, IIS logs, Insights and solutions, Other services
- Data sent to: Azure Monitor Logs
- Services and features supported: Azure Monitor for VMs, Log Analytics, Azure Automation, Azure Security Center, Azure Sentinel

Dependency agent
- Data collected: Process dependencies, Network connection metrics
- Data sent to: Azure Monitor Logs (through Log Analytics agent)
- Services and features supported: Azure Monitor for VMs, Service Map
Linux agents
Azure Monitor agent (preview)
- Data sent to: Azure Monitor Logs, Azure Monitor Metrics
- Services and features supported: Log Analytics, Metrics explorer

Diagnostics extension (LAD)
- Data sent to: Azure Storage, Event Hub
- Services and features supported: Metrics explorer

Telegraf agent
- Data sent to: Azure Monitor Metrics
- Services and features supported: Metrics explorer

Log Analytics agent
- Data sent to: Azure Monitor Logs
- Services and features supported: Azure Monitor for VMs, Log Analytics, Azure Automation, Azure Security Center, Azure Sentinel

Dependency agent
- Data sent to: Azure Monitor Logs (through Log Analytics agent)
- Services and features supported: Azure Monitor for VMs, Service Map
NOTE
The Log Analytics agent for Windows is often referred to as the Microsoft Monitoring Agent (MMA). The Log Analytics agent for Linux is often referred to as the OMS agent.
Telegraf agent
The InfluxData Telegraf agent is used to collect performance data from Linux computers to Azure Monitor
Metrics.
Use Telegraf agent if you need to:
Send data to Azure Monitor Metrics to analyze it with metrics explorer and to take advantage of features such
as near real-time metric alerts and autoscale (Linux only).
Dependency agent
The Dependency agent collects discovered data about processes running on the virtual machine and external
process dependencies.
Use the Dependency agent if you need to:
Use the Map feature of Azure Monitor for VMs or the Service Map solution.
Consider the following when using the Dependency agent:
The Dependency agent requires the Log Analytics agent to be installed on the same virtual machine.
On Linux VMs, the Log Analytics agent must be installed before the Azure Diagnostic Extension.
OPERATING SYSTEM | AZURE MONITOR AGENT | LOG ANALYTICS AGENT | DEPENDENCY AGENT | DIAGNOSTICS EXTENSION
Windows Server 2019 | X X X X
Windows Server 2016 | X X X X
Windows Server 2016 Core | X
Windows Server 2012 R2 | X X X X
Windows Server 2012 | X X X X
Windows Server 2008 R2 | X X X
Windows 10 Enterprise (including multi-session) and Pro (Server scenarios only) | X X X X
Windows 8 Enterprise and Pro (Server scenarios only) | X X
Windows 7 SP1 (Server scenarios only) | X X
Linux
OPERATING SYSTEM | AZURE MONITOR AGENT | LOG ANALYTICS AGENT | DEPENDENCY AGENT | DIAGNOSTICS EXTENSION
Amazon Linux 2017.09 | X
CentOS Linux 8 | X
CentOS Linux 7 | X X X
CentOS Linux 6 | X
Debian 10 | X
Debian 9 | X X X X
Debian 8 | X X X
Debian 7 | X
OpenSUSE 13.1+ | X
Oracle Linux 7 | X X X
Oracle Linux 6 | X
Supported kernel versions for the Dependency agent, by distribution release:
7.5: 3.10.0-862
7.4: 3.10.0-693
6.9: 2.6.32-696, 2.6.32-696.30.1, 2.6.32-696.18.7
16.04.3: 4.15.*
16.04: 4.13.*, 4.11.*, 4.10.*, 4.8.*, 4.4.*
12 SP3: 4.4.*
12 SP2: 4.4.*
Debian 9: 4.9
Next steps
Get more details on each of the agents at the following:
Overview of the Log Analytics agent
Azure Diagnostics extension overview
Collect custom metrics for a Linux VM with the InfluxData Telegraf agent
Azure Monitor Frequently Asked Questions
11/2/2020 • 41 minutes to read
This Microsoft FAQ is a list of commonly asked questions about Azure Monitor. If you have any additional questions,
go to the discussion forum and post your questions. When a question is frequently asked, we add it to this article so
that it can be found quickly and easily.
General
What is Azure Monitor?
Azure Monitor is a service in Azure that provides performance and availability monitoring for applications and
services in Azure, other cloud environments, or on-premises. Azure Monitor collects data from multiple sources into
a common data platform where it can be analyzed for trends and anomalies. Rich features in Azure Monitor assist
you in quickly identifying and responding to critical situations that may affect your application.
What's the difference between Azure Monitor, Log Analytics, and Application Insights?
In September 2018, Microsoft combined Azure Monitor, Log Analytics, and Application Insights into a single service
to provide powerful end-to-end monitoring of your applications and the components they rely on. Features in Log
Analytics and Application Insights have not changed, although some features have been rebranded to Azure
Monitor in order to better reflect their new scope. The log data engine and query language of Log Analytics is now
referred to as Azure Monitor Logs. See Azure Monitor terminology updates.
What does Azure Monitor cost?
Features of Azure Monitor that are automatically enabled such as collection of metrics and activity logs are
provided at no cost. There is a cost associated with other features such as log queries and alerting. See the Azure
Monitor pricing page for detailed pricing information.
How do I enable Azure Monitor?
Azure Monitor is enabled the moment that you create a new Azure subscription, and Activity log and platform
metrics are automatically collected. Create diagnostic settings to collect more detailed information about the
operation of your Azure resources, and add monitoring solutions and insights to provide additional analysis on
collected data for particular services.
How do I access Azure Monitor?
Access all Azure Monitor features and data from the Monitor menu in the Azure portal. The Monitoring section of
the menu for different Azure services provides access to the same tools with data filtered to a particular resource.
Azure Monitor data is also accessible for a variety of scenarios using CLI, PowerShell, and a REST API.
Is there an on-premises version of Azure Monitor?
No. Azure Monitor is a scalable cloud service that processes and stores large amounts of data, although Azure
Monitor can monitor resources that are on-premises and in other clouds.
Can Azure Monitor monitor on-premises resources?
Yes, in addition to collecting monitoring data from Azure resources, Azure Monitor can collect data from virtual
machines and applications in other clouds and on-premises. See Sources of monitoring data for Azure Monitor.
Does Azure Monitor integrate with System Center Operations Manager?
You can connect your existing System Center Operations Manager management group to Azure Monitor to collect
data from agents into Azure Monitor Logs. This allows you to use log queries and solutions to analyze data collected
from agents. You can also configure existing System Center Operations Manager agents to send data directly to
Azure Monitor. See Connect Operations Manager to Azure Monitor.
What IP addresses does Azure Monitor use?
See IP addresses used by Application Insights and Log Analytics for a listing of the IP addresses and ports required
for agents and other external resources to access Azure Monitor.
Monitoring data
Where does Azure Monitor get its data?
Azure Monitor collects data from a variety of sources including logs and metrics from Azure platform and
resources, custom applications, and agents running on virtual machines. Other services such as Azure Security
Center and Network Watcher collect data into a Log Analytics workspace so it can be analyzed with Azure Monitor
data. You can also send custom data to Azure Monitor using the REST API for logs or metrics. See Sources of
monitoring data for Azure Monitor.
What data is collected by Azure Monitor?
Azure Monitor collects data from a variety of sources into logs or metrics. Each type of data has its own relative
advantages, and each supports a particular set of features in Azure Monitor. There is a single metrics database for
each Azure subscription, while you can create multiple Log Analytics workspaces to collect logs depending on your
requirements. See Azure Monitor data platform.
Is there a maximum amount of data that I can collect in Azure Monitor?
There is no limit to the amount of metric data you can collect, but this data is stored for a maximum of 93 days. See
Retention of Metrics. There is no limit on the amount of log data that you can collect, but it may be affected by the
pricing tier you choose for the Log Analytics workspace. See pricing details.
How do I access data collected by Azure Monitor?
Insights and solutions provide a custom experience for working with data stored in Azure Monitor. You can work
directly with log data using a log query written in Kusto Query Language (KQL). In the Azure portal, you can write
and run queries and interactively analyze data using Log Analytics. Analyze metrics in the Azure portal with the
Metrics Explorer. See Analyze log data in Azure Monitor and Getting started with Azure Metrics Explorer.
Logs
What's the difference between Azure Monitor Logs and Azure Data Explorer?
Azure Data Explorer is a fast and highly scalable data exploration service for log and telemetry data. Azure Monitor
Logs is built on top of Azure Data Explorer and uses the same Kusto Query Language (KQL) with some minor
differences. See Azure Monitor log query language differences.
How do I retrieve log data?
All data is retrieved from a Log Analytics workspace using a log query written using Kusto Query Language (KQL).
You can write your own queries or use solutions and insights that include log queries for a particular application or
service. See Overview of log queries in Azure Monitor.
Can I delete data from a Log Analytics workspace?
Data is removed from a workspace according to its retention period. You can delete specific data for privacy or
compliance reasons. See How to export and delete private data for more information.
What is a Log Analytics workspace?
All log data collected by Azure Monitor is stored in a Log Analytics workspace. A workspace is essentially a
container where log data is collected from a variety of sources. You may have a single Log Analytics workspace for
all your monitoring data or may have requirements for multiple workspaces. See Designing your Azure Monitor
Logs deployment.
Can you move an existing Log Analytics workspace to another Azure subscription?
You can move a workspace between resource groups or subscriptions but not to a different region. See Move a Log
Analytics workspace to different subscription or resource group.
Why can't I see Query Explorer and Save buttons in Log Analytics?
The Query Explorer , Save , and New alert rule buttons are not available when the query scope is set to a specific
resource. To create alerts, or to save or load a query, Log Analytics must be scoped to a workspace. To open Log
Analytics in workspace context, select Logs from the Azure Monitor menu. The last used workspace is selected, but
you can select any other workspace. See Log query scope and time range in Azure Monitor Log Analytics.
Why am I getting the error: "Register resource provider 'Microsoft.Insights' for this subscription to enable this
query" when opening Log Analytics from a VM?
Many resource providers are automatically registered, but you may need to manually register some resource
providers. The scope for registration is always the subscription. See Resource providers and types for more
information.
Why am I getting no access error message when opening Log Analytics from a VM?
To view VM logs, you need read permission to the workspace that stores the VM logs. Your administrator must
grant you those permissions in Azure.
Metrics
Why are metrics from the guest OS of my Azure virtual machine not showing up in Metrics explorer?
Platform metrics are collected automatically for Azure resources. You must perform some configuration though to
collect metrics from the guest OS of a virtual machine. For a Windows VM, install the diagnostic extension and
configure the Azure Monitor sink as described in Install and configure Windows Azure diagnostics extension (WAD).
For Linux, install the Telegraf agent as described in Collect custom metrics for a Linux VM with the InfluxData
Telegraf agent.
Alerts
What is an alert in Azure Monitor?
Alerts proactively notify you when important conditions are found in your monitoring data. They allow you to
identify and address issues before the users of your system notice them. There are multiple kinds of alerts:
Metric - Metric value exceeds a threshold.
Log query - Results of a log query match defined criteria.
Activity log - Activity log event matches defined criteria.
Web test - Results of availability test match defined criteria.
See Overview of alerts in Microsoft Azure.
What is an action group?
An action group is a collection of notifications and actions that can be triggered by an alert. Multiple alerts can use a
single action group, allowing you to reuse common sets of notifications and actions. See Create and manage
action groups in the Azure portal.
What is an action rule?
An action rule allows you to modify the behavior of a set of alerts that match certain criteria. For example, you can
disable alert actions during a maintenance window. You can also apply an action group to a set of alerts rather than
applying it directly to the alert rules. See Action rules.
Agents
Does Azure Monitor require an agent?
An agent is only required to collect data from the operating system and workloads in virtual machines. The virtual
machines can be located in Azure, another cloud environment, or on-premises. See Overview of the Azure Monitor
agents.
What's the difference between the Azure Monitor agents?
Azure Diagnostic extension is for Azure virtual machines and collects data to Azure Monitor Metrics, Azure Storage,
and Azure Event Hubs. The Log Analytics agent is for virtual machines in Azure, another cloud environment, or on-
premises and collects data to Azure Monitor Logs. The Dependency agent requires the Log Analytics agent and
collects process details and dependencies. See Overview of the Azure Monitor agents.
Does my agent traffic use my ExpressRoute connection?
Traffic to Azure Monitor uses the Microsoft peering ExpressRoute circuit. See ExpressRoute documentation for a
description of the different types of ExpressRoute traffic.
How can I confirm that the Log Analytics agent is able to communicate with Azure Monitor?
From Control Panel on the agent computer, select Security & Settings , Microsoft Monitoring Agent . Under the
Azure Log Analytics (OMS) tab, a green check mark icon confirms that the agent is able to communicate with
Azure Monitor. A yellow warning icon means the agent is having issues. One common reason is the Microsoft
Monitoring Agent service has stopped. Use service control manager to restart the service.
How do I stop the Log Analytics agent from communicating with Azure Monitor?
For agents connected to Log Analytics directly, open the Control Panel and select Security & Settings , Microsoft
Monitoring Agent . Under the Azure Log Analytics (OMS) tab, remove all workspaces listed. In System Center
Operations Manager, remove the computer from the Log Analytics managed computers list. Operations Manager
updates the configuration of the agent to no longer report to Log Analytics.
How much data is sent per agent?
The amount of data sent per agent depends on:
The solutions you have enabled
The number of logs and performance counters being collected
The volume of data in the logs
See Manage usage and costs with Azure Monitor Logs for details.
For computers that are able to run the WireData agent, use the following query to see how much data is being sent:
WireData
| where ProcessName == "C:\\Program Files\\Microsoft Monitoring Agent\\Agent\\MonitoringHost.exe"
| where Direction == "Outbound"
| summarize sum(TotalBytes) by Computer
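As an illustration of what that query computes, the same aggregation can be sketched in Python; the sample records and the function name are hypothetical, not part of the WireData schema:

```python
from collections import defaultdict

# Hypothetical sample of WireData-style records mirroring the fields the query uses.
records = [
    {"Computer": "web01", "Direction": "Outbound", "TotalBytes": 1200},
    {"Computer": "web01", "Direction": "Inbound",  "TotalBytes": 900},
    {"Computer": "web02", "Direction": "Outbound", "TotalBytes": 600},
    {"Computer": "web01", "Direction": "Outbound", "TotalBytes": 300},
]

def outbound_bytes_by_computer(rows):
    """Equivalent of: where Direction == "Outbound" | summarize sum(TotalBytes) by Computer."""
    totals = defaultdict(int)
    for row in rows:
        if row["Direction"] == "Outbound":
            totals[row["Computer"]] += row["TotalBytes"]
    return dict(totals)

print(outbound_bytes_by_computer(records))  # {'web01': 1500, 'web02': 600}
```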
How much network bandwidth is used by the Microsoft Management Agent (MMA ) when sending data to Azure
Monitor?
Bandwidth is a function of the amount of data sent. Data is compressed as it is sent over the network.
How can I be notified when data collection from the Log Analytics agent stops?
Use the steps described in create a new log alert to be notified when data collection stops. Use the following
settings for the alert rule:
Define alert condition : Specify your Log Analytics workspace as the resource target.
Alert criteria
Signal Name : Custom log search
Search query :
Heartbeat | summarize LastCall = max(TimeGenerated) by Computer | where LastCall < ago(15m)
Alert logic : Based on number of results, Condition Greater than, Threshold value 0
Evaluated based on : Period (in minutes) 30, Frequency (in minutes) 10
Define alert details
Name : Data collection stopped
Severity : Warning
Specify an existing or new Action Group so that when the log alert matches criteria, you are notified if you have a
heartbeat missing for more than 15 minutes.
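The evaluation that alert rule performs can be sketched in Python; the sample timestamps and the function name are illustrative only:

```python
from datetime import datetime, timedelta, timezone

def silent_computers(last_heartbeat, now=None, window_minutes=15):
    """Return computers whose most recent heartbeat is older than the window,
    mirroring: Heartbeat | summarize LastCall = max(TimeGenerated) by Computer
               | where LastCall < ago(15m)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(minutes=window_minutes)
    return sorted(c for c, last in last_heartbeat.items() if last < cutoff)

now = datetime(2020, 11, 2, 12, 0, tzinfo=timezone.utc)
heartbeats = {
    "vm-a": now - timedelta(minutes=5),   # healthy
    "vm-b": now - timedelta(minutes=40),  # silent, so the rule would fire
}
print(silent_computers(heartbeats, now=now))  # ['vm-b']
```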
What are the firewall requirements for Azure Monitor agents?
See Network firewall requirements for details on firewall requirements.
Visualizations
Why can't I see View Designer?
View Designer is available only to users with Contributor permissions or higher in the Log Analytics
workspace.
Application Insights
Configuration problems
I'm having trouble setting up my:
.NET app
Monitoring an already-running app
Azure diagnostics
Java web app
I get no data from my server:
Set firewall exceptions
Set up an ASP.NET server
Set up a Java server
How many Application Insights resources should I deploy:
How to design your Application Insights deployment: One versus many Application Insights resources?
Can I use Application Insights with ...?
Web apps on an IIS server in Azure VM or Azure virtual machine scale set
Web apps on an IIS server - on-premises or in a VM
Java web apps
Node.js apps
Web apps on Azure
Cloud Services on Azure
App servers running in Docker
Single-page web apps
SharePoint
Windows desktop app
Other platforms
Is it free?
Yes, for experimental use. In the basic pricing plan, your application can send a certain allowance of data each
month free of charge. The free allowance is large enough to cover development and publishing an app for a small
number of users. You can set a cap to prevent more than a specified amount of data from being processed.
Larger volumes of telemetry are charged by the GB. We provide some tips on how to limit your charges.
The Enterprise plan incurs a charge for each day that each web server node sends telemetry. It is suitable if you
want to use Continuous Export on a large scale.
Read the pricing plan.
How much does it cost?
Open the Usage and estimated costs page in an Application Insights resource. There's a chart of recent
usage. You can set a data volume cap, if you want.
Open the Azure Billing blade to see your bills across all resources.
What does Application Insights modify in my project?
The details depend on the type of project. For a web application:
Adds these files to your project:
ApplicationInsights.config
ai.js
Installs these NuGet packages:
Application Insights API - the core API
Application Insights API for Web Applications - used to send telemetry from the server
Application Insights API for JavaScript Applications - used to send telemetry from the client
The packages include these assemblies:
Microsoft.ApplicationInsights
Microsoft.ApplicationInsights.Platform
Inserts items into:
Web.config
packages.config
(New projects only - if you add Application Insights to an existing project, you have to do this manually.) Inserts
snippets into the client and server code to initialize them with the Application Insights resource ID. For example,
in an MVC app, code is inserted into the master page Views/Shared/_Layout.cshtml
How do I upgrade from older SDK versions?
See the release notes for the SDK appropriate to your type of application.
How can I change which Azure resource my project sends data to?
In Solution Explorer, right-click ApplicationInsights.config and choose Update Application Insights . You can
send the data to an existing or new resource in Azure. The update wizard changes the instrumentation key in
ApplicationInsights.config, which determines where the server SDK sends your data. Unless you deselect "Update
all," it will also change the key where it appears in your web pages.
Can I use providers('Microsoft.Insights', 'components').apiVersions[0] in my Azure Resource Manager
deployments?
We do not recommend using this method of populating the API version. The newest version can represent preview
releases which may contain breaking changes. Even with newer non-preview releases, the API versions are not
always backwards compatible with existing templates, or in some cases the API version may not be available to all
subscriptions.
What is Status Monitor?
A desktop app that you can use in your IIS web server to help configure Application Insights in web apps. It doesn't
collect telemetry: you can stop it when you are not configuring an app.
Learn more.
What telemetry is collected by Application Insights?
From server web apps:
HTTP requests
Dependencies. Calls to: SQL Databases; HTTP calls to external services; Azure Cosmos DB, table, blob storage,
and queue.
Exceptions and stack traces.
Performance Counters - If you use Status Monitor, Azure monitoring for App Services, Azure monitoring for VM
or virtual machine scale set, or the Application Insights collectd writer.
Custom events and metrics that you code.
Trace logs if you configure the appropriate collector.
From client web pages:
Page view counts
AJAX calls: requests made from a running script.
Page view load data
User and session counts
Authenticated user IDs
From other sources, if you configure them:
Azure diagnostics
Import to Analytics
Log Analytics
Logstash
Can I filter out or modify some telemetry?
Yes, in the server you can write:
Telemetry Processor to filter or add properties to selected telemetry items before they are sent from your app.
Telemetry Initializer to add properties to all items of telemetry.
Learn more for ASP.NET or Java.
How are city, country/region, and other geo location data calculated?
We look up the IP address (IPv4 or IPv6) of the web client using GeoLite2.
Browser telemetry: We collect the sender's IP address.
Server telemetry: The Application Insights module collects the client IP address. It is not collected if
X-Forwarded-For is set.
To learn more about how IP address and geolocation data are collected in Application Insights refer to this
article.
You can configure the ClientIpHeaderTelemetryInitializer to take the IP address from a different header. In some
systems, for example, it is moved by a proxy, load balancer, or CDN to X-Originating-IP . Learn more.
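A minimal sketch of reading the client IP from a forwarding header, assuming the common convention that each proxy appends its address so the left-most entry is the original caller; the header dictionary and function name are illustrative, not the initializer's actual implementation:

```python
def client_ip_from_headers(headers, header_name="X-Forwarded-For"):
    """Pick the original client IP from a forwarding header.

    Proxies typically append their own address, so the left-most,
    comma-separated entry is the original caller. Returns None when
    the header is absent."""
    value = headers.get(header_name)
    if not value:
        return None
    return value.split(",")[0].strip()

headers = {"X-Forwarded-For": "203.0.113.7, 10.0.0.1"}
print(client_ip_from_headers(headers))  # 203.0.113.7
```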
You can use Power BI to display your request telemetry on a map.
How long is data retained in the portal? Is it secure?
Take a look at Data Retention and Privacy.
What happens to Application Insights telemetry when a server or device loses connection with Azure?
All of our SDKs, including the web SDK, include "reliable transport" or "robust transport". When the server or
device loses connection with Azure, telemetry is stored locally on the file system (Server SDKs) or in HTML5 Session
Storage (Web SDK). The SDK will periodically retry to send this telemetry until our ingestion service considers it
"stale" (48-hours for logs, 30 minutes for metrics). Stale telemetry will be dropped. In some cases, such as when
local storage is full, retry will not occur.
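A rough sketch of the staleness rule described above, using the 48-hour and 30-minute windows from the text; the buffer format and function name are illustrative, not the actual SDK implementation:

```python
from datetime import datetime, timedelta, timezone

# Staleness windows from the text: 48 hours for logs, 30 minutes for metrics.
STALE_AFTER = {"log": timedelta(hours=48), "metric": timedelta(minutes=30)}

def flush_buffer(buffer, now):
    """Split buffered telemetry into items still worth retrying and items to drop.
    Each item is a (kind, created_at, payload) tuple; names are illustrative."""
    keep, dropped = [], []
    for kind, created_at, payload in buffer:
        if now - created_at > STALE_AFTER[kind]:
            dropped.append(payload)   # ingestion would consider this stale
        else:
            keep.append((kind, created_at, payload))
    return keep, dropped

now = datetime(2020, 11, 2, tzinfo=timezone.utc)
buffer = [
    ("metric", now - timedelta(minutes=45), "cpu-sample"),   # past 30 min, dropped
    ("log",    now - timedelta(hours=2),    "request-log"),  # still retried
]
keep, dropped = flush_buffer(buffer, now)
print(dropped)  # ['cpu-sample']
```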
Could personal data be sent in the telemetry?
This is possible if your code sends such data. It can also happen if variables in stack traces include personal data.
Your development team should conduct risk assessments to ensure that personal data is properly handled. Learn
more about data retention and privacy.
All octets of the client web address are always set to 0 after the geo location attributes are looked up.
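A minimal sketch of that scrubbing for a dotted IPv4 address; the function name is illustrative:

```python
def scrub_client_ip(ip):
    """Zero every octet of a dotted IPv4 address, as done after the
    geo location attributes have been looked up."""
    return ".".join("0" for _ in ip.split("."))

print(scrub_client_ip("203.0.113.7"))  # 0.0.0.0
```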
My Instrumentation Key is visible in my web page source.
This is common practice in monitoring solutions.
It can't be used to steal your data.
It could be used to skew your data or trigger alerts.
We have not heard that any customer has had such problems.
You could:
Use two separate Instrumentation Keys (separate Application Insights resources), for client and server data. Or
Write a proxy that runs in your server, and have the web client send data through that proxy.
How do I see POST data in Diagnostic search?
We don't log POST data automatically, but you can use a TrackTrace call: put the data in the message parameter. This
has a longer size limit than the limits on string properties, though you can't filter on it.
Should I use single or multiple Application Insights resources?
Use a single resource for all the components or roles in a single business system. Use separate resources for
development, test, and release versions, and for independent applications.
See the discussion here
Example - cloud service with worker and web roles
How do I dynamically change the instrumentation key?
Discussion here
Example - cloud service with worker and web roles
What are the User and Session counts?
The JavaScript SDK sets a user cookie on the web client, to identify returning users, and a session cookie to
group activities.
If there is no client-side script, you can set cookies at the server.
If one real user uses your site in different browsers, or using in-private/incognito browsing, or different
machines, then they will be counted more than once.
To identify a logged-in user across machines and browsers, add a call to setAuthenticatedUserContext().
Have I enabled everything in Application Insights?
WHAT YOU SHOULD SEE | HOW TO GET IT | WHY YOU WANT IT
Server app perf: response times, ... | Add Application Insights to your project, or install AI Status Monitor on the server (or write your own code to track dependencies) | Detect perf issues
Dependency telemetry | Install AI Status Monitor on the server | Diagnose issues with databases or other external components
Stack traces from exceptions | Insert TrackException calls in your code (but some are reported automatically) | Detect and diagnose exceptions
Search log traces | Add a logging adapter | Diagnose exceptions, perf issues
Client usage basics: page views, sessions, ... | JavaScript initializer in web pages | Usage analytics
Client custom metrics | Tracking calls in web pages | Enhance user experience
NOTE
If the resource you are creating in a new region is replacing a classic resource, we recommend exploring the benefits of
creating a new workspace-based resource, or alternatively migrating your existing resource to workspace-based.
Automation
Configuring Application Insights
You can write PowerShell scripts using Azure Resource Manager to:
Create and update Application Insights resources.
Set the pricing plan.
Get the instrumentation key.
Add a metric alert.
Add an availability test.
You can't set up a Metric Explorer report or set up continuous export.
Querying the telemetry
Use the REST API to run Analytics queries.
How can I set an alert on an event?
Azure alerts are only on metrics. Create a custom metric that crosses a value threshold whenever your event occurs.
Then set an alert on the metric. You'll get a notification whenever the metric crosses the threshold in either
direction; you won't get a notification until the first crossing, no matter whether the initial value is high or low; there
is always a latency of a few minutes.
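The crossing behavior described above can be sketched as follows; the function is illustrative, not how Azure evaluates alert rules internally:

```python
def crossings(values, threshold):
    """Return the indices where the series crosses the threshold in either
    direction. Nothing fires until the first crossing, no matter whether
    the initial value is high or low."""
    hits = []
    for i in range(1, len(values)):
        above_prev = values[i - 1] > threshold
        above_now = values[i] > threshold
        if above_prev != above_now:
            hits.append(i)
    return hits

# Event count stays low, spikes past the threshold, then falls back:
print(crossings([0, 0, 5, 5, 0], threshold=1))  # [2, 4]
```

Note that a series which starts above the threshold and stays there produces no notification at all, matching the behavior described in the text.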
Are there data transfer charges between an Azure web app and Application Insights?
If your Azure web app is hosted in a data center where there is an Application Insights collection endpoint, there
is no charge.
If there is no collection endpoint in your host data center, then your app's telemetry will incur Azure outgoing
charges.
This doesn't depend on where your Application Insights resource is hosted. It just depends on the distribution of our
endpoints.
Can I send telemetry to the Application Insights portal?
We recommend that you use our SDKs and their APIs. There are variants of the SDK for various platforms. These
SDKs handle buffering, compression, throttling, retries, and so on. However, the ingestion schema and endpoint
protocol are public.
Can I monitor an intranet web server?
Yes, but you will need to allow traffic to our services by either firewall exceptions or proxy redirects.
PURPOSE | URL
QuickPulse | https://rt.services.visualstudio.com:443
ApplicationIdProvider | https://dc.services.visualstudio.com:443
TelemetryChannel | https://dc.services.visualstudio.com:443
<ApplicationInsights>
  ...
  <TelemetryModules>
    <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector">
      <QuickPulseServiceEndpoint>https://rt.services.visualstudio.com/QuickPulseService.svc</QuickPulseServiceEndpoint>
    </Add>
  </TelemetryModules>
  ...
  <TelemetryChannel>
    <EndpointAddress>https://dc.services.visualstudio.com/v2/track</EndpointAddress>
  </TelemetryChannel>
  ...
  <ApplicationIdProvider Type="Microsoft.ApplicationInsights.Extensibility.Implementation.ApplicationId.ApplicationInsightsApplicationIdProvider, Microsoft.ApplicationInsights">
    <ProfileQueryEndpoint>https://dc.services.visualstudio.com/api/profiles/{0}/appId</ProfileQueryEndpoint>
  </ApplicationIdProvider>
  ...
</ApplicationInsights>
NOTE
ApplicationIdProvider is available starting in v2.6.0.
Proxy passthrough
Proxy passthrough can be achieved by configuring either a machine level or application level proxy. For more
information see dotnet's article on DefaultProxy.
Example Web.config:
<system.net>
<defaultProxy>
<proxy proxyaddress="http://xx.xx.xx.xx:yyyy" bypassonlocal="true"/>
</defaultProxy>
</system.net>
OpenTelemetry
What is OpenTelemetry?
A new open-source standard for observability. Learn more at https://opentelemetry.io/.
Why is Microsoft / Azure Monitor investing in OpenTelemetry?
We believe it better serves our customers for three reasons:
1. Enable support for more customer scenarios.
2. Instrument without fear of vendor lock-in.
3. Increase customer transparency and engagement.
It also aligns with Microsoft’s strategy to embrace open source.
What additional value does OpenTelemetry give me?
In addition to the reasons above, OpenTelemetry is more efficient at-scale and provides consistent
design/configurations across languages.
How can I test out OpenTelemetry?
Sign up to join our Azure Monitor Application Insights early adopter community at https://aka.ms/AzMonOtel.
What does GA mean in the context of OpenTelemetry?
The OpenTelemetry community defines Generally Available (GA) here. However, OpenTelemetry “GA” does not
mean feature parity with the existing Application Insights SDKs. Azure Monitor will continue to recommend our
current Application Insights SDKs for customers requiring features such as pre-aggregated metrics, live metrics,
adaptive sampling, profiler, and snapshot debugger until the OpenTelemetry SDKs reach feature maturity.
Can I use Preview builds in production environments?
It’s not recommended. See Supplemental Terms of Use for Microsoft Azure Previews for more information.
What’s the difference between OpenTelemetry SDK and auto-instrumentation?
The OpenTelemetry specification defines SDK. In short, “SDK” is a language-specific package that collects telemetry
data across the various components of your application and sends the data to Azure Monitor via an exporter.
The concept of auto-instrumentation (sometimes referred to as bytecode injection, codeless, or agent-based) refers
to the capability to instrument your application without changing your code. For example, check out the
OpenTelemetry Java Auto-instrumentation Readme for more information.
What’s the OpenTelemetry Collector?
The OpenTelemetry Collector is described in its GitHub readme. Currently Microsoft does not utilize the
OpenTelemetry Collector and depends on direct exporters that send to Azure Monitor’s Application Insights.
What’s the difference between OpenCensus and OpenTelemetry?
OpenCensus is the precursor to OpenTelemetry. Microsoft helped bring together OpenTracing and OpenCensus to
create OpenTelemetry, a single observability standard for the world. Azure Monitor’s current production-
recommended Python SDK is based on OpenCensus, but eventually all Azure Monitor’s SDKs will be based on
OpenTelemetry.
Option 2
Re-enable collection for these properties for every container log line.
If the first option is not convenient due to the query changes involved, you can re-enable collecting these fields by
enabling the setting log_collection_settings.enrich_container_logs in the agent config map as described in the
data collection configuration settings.
NOTE
The second option is not recommended with large clusters that have more than 50 nodes because it generates API server
calls from every node in the cluster to perform this enrichment. This option also increases data size for every log line collected.
console.log(JSON.stringify({
  "Hello": "This example has multiple lines:",
  "Docker/Moby": "will not break this into multiple lines",
  "and you will receive": "all of them in log analytics",
  "as one": "log entry"
}));
This data will look like the following example in Azure Monitor for logs when you query for it:
LogEntry : {"Hello": "This example has multiple lines:","Docker/Moby": "will not break this into multiple
lines", "and you will receive":"all of them in log analytics", "as one": "log entry"}
For a detailed look at the issue, review the following GitHub link.
How do I resolve Azure AD errors when I enable live logs?
You may see the following error: The reply url specified in the request does not match the reply urls
configured for the application: '<application ID>' . The solution can be found in the article How to view
container data in real time with Azure Monitor for containers.
Why can't I upgrade my cluster after onboarding?
If you delete the Log Analytics workspace that an AKS cluster was sending its data to after enabling Azure Monitor
for containers, attempts to upgrade the cluster will fail. To work around this, disable monitoring and then re-enable
it, referencing a different valid workspace in your subscription. When you retry the cluster upgrade, it should
process and complete successfully.
Which ports and domains do I need to open/allow for the agent?
See the Network firewall requirements for the proxy and firewall configuration information required for the
containerized agent with Azure, Azure US Government, and Azure China 21Vianet clouds.
Next steps
If your question isn't answered here, you can refer to the following forums for additional questions and answers.
Log Analytics
Application Insights
For general feedback on Azure Monitor, please visit the feedback forum.
Azure Diagnostics extension overview
11/2/2020 • 4 minutes to read
Azure Diagnostics extension is an agent in Azure Monitor that collects monitoring data from the guest operating
system of Azure compute resources including virtual machines. This article provides an overview of Azure
Diagnostics extension including specific functionality that it supports and options for installation and configuration.
NOTE
Azure Diagnostics extension is one of the agents available to collect monitoring data from the guest operating system of
compute resources. See Overview of the Azure Monitor agents for a description of the different agents and guidance on
selecting the appropriate agents for your requirements.
Primary scenarios
The primary scenarios addressed by the diagnostics extension are:
Collect guest metrics into Azure Monitor Metrics.
Send guest logs and metrics to Azure storage for archiving.
Send guest logs and metrics to Azure event hubs to send outside of Azure.
Costs
There is no cost for the Azure Diagnostics extension, but you may incur charges for the data ingested. Check Azure
Monitor pricing for the destination where you're collecting data.
Data collected
The following tables list the data that can be collected by the Windows and Linux diagnostics extension.
Windows diagnostics extension (WAD)
DATA SOURCE | DESCRIPTION
IIS logs | Usage information for IIS web sites running on the guest operating system.
.NET EventSource logs | Code writing events using the .NET EventSource class.
Manifest-based ETW logs | Event Tracing for Windows events generated by any process.
Crash dumps (logs) | Information about the state of the process if an application crashes.
Data destinations
The Azure Diagnostics extension for both Windows and Linux always collects data into an Azure Storage account. See
Install and configure Windows Azure diagnostics extension (WAD) and Use Linux Diagnostic Extension to monitor
metrics and logs for a list of specific tables and blobs where this data is collected.
Configure one or more data sinks to send data to additional destinations. The following sections list the sinks
available for the Windows and Linux diagnostics extension.
Windows diagnostics extension (WAD)
DESTINATION            DESCRIPTION
Azure Monitor Metrics  Collect performance data into Azure Monitor Metrics. See Send Guest OS metrics to the Azure Monitor metric database.
Event hubs             Use Azure Event Hubs to send data outside of Azure. See Streaming Azure Diagnostics data to Event Hubs.
Azure Storage blobs    Write data to blobs in Azure Storage in addition to tables.
You can also collect WAD data from storage into a Log Analytics workspace to analyze it with Azure Monitor Logs,
although the Log Analytics agent is typically used for this functionality. The agent can send data directly to a Log
Analytics workspace and supports solutions and insights that provide additional functionality. See Collect Azure
diagnostic logs from Azure Storage.
Linux diagnostics extension (LAD)
LAD writes data to tables in Azure Storage. It supports the sinks in the following table.
DESTINATION            DESCRIPTION
Event hubs             Use Azure Event Hubs to send data outside of Azure.
Azure Storage blobs    Write data to blobs in Azure Storage in addition to tables.
Azure Monitor Metrics  Install the Telegraf agent in addition to LAD. See Collect custom metrics for a Linux VM with the InfluxData Telegraf agent.
Other documentation
Azure Cloud Service (classic) Web and Worker Roles
Introduction to Cloud Service Monitoring
Enabling Azure Diagnostics in Azure Cloud Services
Application Insights for Azure cloud services
Trace the flow of a Cloud Services application with Azure Diagnostics
Azure Service Fabric
Monitor and diagnose services in a local machine development setup
Next steps
Learn to use Performance Counters in Azure Diagnostics.
If you have trouble with diagnostics starting or finding your data in Azure Storage tables, see Troubleshooting
Azure Diagnostics.
Send guest OS metrics to the Azure Monitor metric
store by using an Azure Resource Manager template
for a Windows virtual machine
11/2/2020 • 5 minutes to read
Performance data from the guest OS of Azure virtual machines is not collected automatically like other platform
metrics. Install the Azure Monitor diagnostics extension to collect guest OS metrics into the metrics database so it
can be used with all features of Azure Monitor Metrics, including near-real time alerting, charting, routing, and
access from a REST API. This article describes the process for sending Guest OS performance metrics for a
Windows virtual machine to the metrics database using a Resource Manager template.
NOTE
For details on configuring the diagnostics extension to collect guest OS metrics using the Azure portal, see Install and
configure Windows Azure diagnostics extension (WAD).
If you're new to Resource Manager templates, learn about template deployments and their structure and syntax.
Prerequisites
Your subscription must be registered with Microsoft.Insights.
You need to have Azure PowerShell installed, or you can use Azure Cloud Shell.
Your VM resource must be in a region that supports custom metrics.
Add this Managed Service Identity (MSI) extension to the template at the top of the resources section. The
extension ensures that Azure Monitor accepts the metrics that are being emitted.
Add the identity configuration to the VM resource to ensure that Azure assigns a system identity to the MSI
extension. This step ensures that the VM can emit guest metrics about itself to Azure Monitor.
// Find this section
"subnet": {
"id": "[variables('subnetRef')]"
}
}
}
]
}
},
{
"apiVersion": "2017-03-30",
"type": "Microsoft.Compute/virtualMachines",
"name": "[variables('vmName')]",
"location": "[resourceGroup().location]",
// add these 3 lines below
"identity": {
"type": "SystemAssigned"
},
//end of added lines
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
],
"properties": {
"hardwareProfile": {
...
Add the following configuration to enable the diagnostics extension on a Windows virtual machine. For a simple
Resource Manager-based virtual machine, add the extension configuration to the resources array for the virtual
machine. The "sinks": "AzMonSink" line, together with the corresponding "SinksConfig" section later in the
configuration, enables the extension to emit metrics directly to Azure Monitor. Feel free to add or remove
performance counters as needed.
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nicName'))]"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts/',
variables('storageAccountName'))).primaryEndpoints.blob]"
}
}
},
//Start of section to add
"resources": [
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(variables('vmName'), '/', 'Microsoft.Insights.VMDiagnosticsSettings')]",
"apiVersion": "2017-12-01",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.Azure.Diagnostics",
"type": "IaaSDiagnostics",
"typeHandlerVersion": "1.12",
"autoUpgradeMinorVersion": true,
"settings": {
"WadCfg": {
"WadCfg": {
"DiagnosticMonitorConfiguration": {
"overallQuotaInMB": 4096,
"DiagnosticInfrastructureLogs": {
"scheduledTransferLogLevelFilter": "Error"
},
"Directories": {
"scheduledTransferPeriod": "PT1M",
"IISLogs": {
"containerName": "wad-iis-logfiles"
},
"FailedRequestLogs": {
"containerName": "wad-failedrequestlogs"
}
},
"PerformanceCounters": {
"scheduledTransferPeriod": "PT1M",
"sinks": "AzMonSink",
"PerformanceCounterConfiguration": [
{
"counterSpecifier": "\\Memory\\Available Bytes",
"sampleRate": "PT15S"
},
{
"counterSpecifier": "\\Memory\\% Committed Bytes In Use",
"sampleRate": "PT15S"
},
{
"counterSpecifier": "\\Memory\\Committed Bytes",
"sampleRate": "PT15S"
}
]
},
"WindowsEventLog": {
"scheduledTransferPeriod": "PT1M",
"DataSource": [
{
"name": "Application!*"
}
]
},
"Logs": {
"scheduledTransferPeriod": "PT1M",
"scheduledTransferLogLevelFilter": "Error"
}
},
"SinksConfig": {
"Sink": [
{
"name" : "AzMonSink",
"AzureMonitor" : {}
}
]
}
},
"StorageAccount": "[variables('storageAccountName')]"
},
"protectedSettings": {
"storageAccountName": "[variables('storageAccountName')]",
"storageAccountKey": "[listKeys(variables('accountid'),'2015-06-15').key1]",
"storageAccountEndPoint": "https://core.windows.net/"
}
}
}
]
//End of section to add
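Before deploying, you can sanity-check the sink wiring locally: every sinks value referenced under DiagnosticMonitorConfiguration should match a Sink name declared in SinksConfig. A minimal Python sketch of that check (the helper and the trimmed sample are illustrative, not part of the template):

```python
import json

def check_sink_references(wad_cfg):
    """Return sink names referenced via "sinks" attributes that are not
    declared under SinksConfig."""
    declared = {s["name"] for s in wad_cfg.get("SinksConfig", {}).get("Sink", [])}
    referenced = set()

    def walk(node):
        if isinstance(node, dict):
            if "sinks" in node:
                # A sinks value may list several names separated by commas and
                # may address a channel as "SinkName.ChannelName".
                for ref in node["sinks"].split(","):
                    referenced.add(ref.strip().split(".")[0])
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)

    walk(wad_cfg.get("DiagnosticMonitorConfiguration", {}))
    return referenced - declared

wad_cfg = json.loads("""
{
  "DiagnosticMonitorConfiguration": {
    "PerformanceCounters": {"scheduledTransferPeriod": "PT1M", "sinks": "AzMonSink"}
  },
  "SinksConfig": {"Sink": [{"name": "AzMonSink", "AzureMonitor": {}}]}
}
""")
print(check_sink_references(wad_cfg))  # an empty set means every reference resolves
```

A non-empty result points at a "sinks" reference that has no matching sink definition, which is an easy mistake to make when pasting the two sections separately.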
5. To create a new resource group for the VM that's being deployed, run the following command:
NOTE
Remember to use an Azure region that is enabled for custom metrics.
6. Run the following commands to deploy the VM using the Resource Manager template.
NOTE
If you wish to update an existing VM, simply add -Mode Incremental to the end of the following command.
7. After your deployment succeeds, the VM should be in the Azure portal, emitting metrics to Azure Monitor.
NOTE
You might run into errors around the selected vmSkuSize. If this happens, go back to your azuredeploy.json file and
update the default value of the vmSkuSize parameter. In this case, we recommend trying "Standard_DS1_v2".
Next steps
Learn more about custom metrics.
Send guest OS metrics to the Azure Monitor metric
store by using an Azure Resource Manager template
for a Windows virtual machine scale set
11/2/2020 • 6 minutes to read
NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will
continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM
compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation instructions, see Install Azure
PowerShell.
By using the Azure Monitor Windows Azure Diagnostics (WAD) extension, you can collect metrics and logs from the
guest operating system (guest OS) that runs as part of a virtual machine, cloud service, or Azure Service Fabric
cluster. The extension can send telemetry to many different locations listed in the previously linked article.
This article describes the process to send guest OS performance metrics for a Windows virtual machine scale set to
the Azure Monitor data store. Starting with Windows Azure Diagnostics version 1.11, you can write metrics directly
to the Azure Monitor metrics store, where standard platform metrics are already collected. By storing them in this
location, you can access the same actions that are available for platform metrics. Actions include near real-time
alerting, charting, routing, access from the REST API, and more. In the past, the Windows Azure Diagnostics
extension wrote to Azure Storage but not the Azure Monitor data store.
If you're new to Resource Manager templates, learn about template deployments and their structure and syntax.
Prerequisites
Your subscription must be registered with Microsoft.Insights.
You need to have Azure PowerShell installed, or you can use Azure Cloud Shell.
Your VM resource must be in a region that supports custom metrics.
"variables": {
//add this line
"storageAccountName": "[concat('storage', uniqueString(resourceGroup().id))]",
Find the virtual machine scale set definition in the resources section and add the identity section to the
configuration. This addition ensures that Azure assigns it a system identity. This step also ensures that the VMs in
the scale set can emit guest metrics about themselves to Azure Monitor:
{
"type": "Microsoft.Compute/virtualMachineScaleSets",
"name": "[variables('namingInfix')]",
"location": "[resourceGroup().location]",
"apiVersion": "2017-03-30",
//add these lines below
"identity": {
"type": "systemAssigned"
},
//end of lines to add
In the virtual machine scale set resource, find the virtualMachineProfile section. Add a new profile called
extensionProfile to manage extensions.
In the extensionProfile, add a new extension to the template as shown in the VMSS-WAD-extension section.
This section is the managed identities for Azure resources extension that ensures the metrics being emitted are
accepted by Azure Monitor. The name field can contain any name.
The code following the MSI extension also adds the diagnostics extension and its configuration as an extension
resource to the virtual machine scale set resource. Feel free to add or remove performance counters as needed:
"extensionProfile": {
"extensions": [
// BEGINNING of added code
// Managed identities for Azure resources
{
"name": "VMSS-WAD-extension",
"properties": {
"publisher": "Microsoft.ManagedIdentity",
"type": "ManagedIdentityExtensionForWindows",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"port": 50342
},
"protectedSettings": {}
}
},
// add diagnostic extension. (Remove this comment after pasting.)
{
"name": "[concat('VMDiagnosticsVmExt','_vmNodeType0Name')]",
"properties": {
"type": "IaaSDiagnostics",
"autoUpgradeMinorVersion": true,
"protectedSettings": {
"storageAccountName": "[variables('storageAccountName')]",
"storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts',
variables('storageAccountName')),'2015-05-01-preview').key1]",
"storageAccountEndPoint": "https://core.windows.net/"
},
"publisher": "Microsoft.Azure.Diagnostics",
"settings": {
"WadCfg": {
"DiagnosticMonitorConfiguration": {
"overallQuotaInMB": "50000",
"PerformanceCounters": {
"scheduledTransferPeriod": "PT1M",
"sinks": "AzMonSink",
"PerformanceCounterConfiguration": [
{
"counterSpecifier": "\\Memory\\% Committed Bytes In Use",
"sampleRate": "PT15S"
},
{
"counterSpecifier": "\\Memory\\Available Bytes",
"sampleRate": "PT15S"
},
{
"counterSpecifier": "\\Memory\\Committed Bytes",
"sampleRate": "PT15S"
}
]
},
"EtwProviders": {
"EtwEventSourceProviderConfiguration": [
{
"provider": "Microsoft-ServiceFabric-Actors",
"scheduledTransferKeywordFilter": "1",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableActorEventTable"
}
},
{
"provider": "Microsoft-ServiceFabric-Services",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricReliableServiceEventTable"
}
}
],
"EtwManifestProviderConfiguration": [
{
"provider": "cbd93bc2-71e5-4566-b3a7-595d8eeca6e8",
"provider": "cbd93bc2-71e5-4566-b3a7-595d8eeca6e8",
"scheduledTransferLogLevelFilter": "Information",
"scheduledTransferKeywordFilter": "4611686018427387904",
"scheduledTransferPeriod": "PT5M",
"DefaultEvents": {
"eventDestination": "ServiceFabricSystemEventTable"
}
}
]
}
},
"SinksConfig": {
"Sink": [
{
"name": "AzMonSink",
"AzureMonitor": {}
}
]
}
},
"StorageAccount": "[variables('storageAccountName')]"
},
"typeHandlerVersion": "1.11"
}
}
]
}
}
}
},
//end of added code plus a few brackets. Be sure that the number and type of brackets match properly when done.
{
"type": "Microsoft.Insights/autoscaleSettings",
...
Add a dependsOn for the storage account to ensure it's created in the correct order:
"dependsOn": [
"[concat('Microsoft.Network/loadBalancers/', variables('loadBalancerName'))]",
"[concat('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
//add this line below
"[concat('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]"
"resources": [
// add this code
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageAccountName')]",
"apiVersion": "2015-05-01-preview",
"location": "[resourceGroup().location]",
"properties": {
"accountType": "Standard_LRS"
}
},
// end added code
{
"type": "Microsoft.Network/virtualNetworks",
"name": "[variables('virtualNetworkName')]",
5. Create a new resource group for the VM being deployed. Run the following command:
NOTE
Remember to use an Azure region that's enabled for custom metrics.
NOTE
If you want to update an existing scale set, add -Mode Incremental to the end of the command.
7. After your deployment succeeds, you should find the virtual machine scale set in the Azure portal. It should
emit metrics to Azure Monitor.
NOTE
You might run into errors around the selected vmSkuSize . In that case, go back to your azuredeploy.json file and
update the default value of the vmSkuSize parameter. We recommend that you try Standard_DS1_v2 .
Next steps
Learn more about custom metrics.
Send data from Windows Azure diagnostics
extension to Azure Event Hubs
4/29/2020 • 4 minutes to read
Azure diagnostics extension is an agent in Azure Monitor that collects monitoring data from the guest operating
system and workloads of Azure virtual machines and other compute resources. This article describes how to send
data from the Windows Azure Diagnostic extension (WAD) to Azure Event Hubs so you can forward to locations
outside of Azure.
Supported data
The following data collected from the guest operating system can be sent to Event Hubs. Other data sources
collected by WAD, including IIS logs and crash dumps, cannot be sent to Event Hubs.
Event Tracing for Windows (ETW) events
Performance counters
Windows event logs, including application logs in the Windows event log
Azure Diagnostics infrastructure logs
Prerequisites
Windows diagnostics extension 1.6 or higher. See Azure Diagnostics extension configuration schema versions
and history for a version history and Azure Diagnostics extension overview for supported resources.
An Event Hubs namespace must already be provisioned. See Get started with Event Hubs for details.
Configuration schema
See Install and configure Windows Azure diagnostics extension (WAD) for different options for enabling and
configuring the diagnostics extension and Azure Diagnostics configuration schema for a reference of the
configuration schema. The rest of this article will describe how to use this configuration to send data to an event
hub.
Azure Diagnostics always sends logs and metrics to an Azure Storage account. You can configure one or more data
sinks that send data to additional locations. Each sink is defined in the SinksConfig element of the public
configuration with sensitive information in the private configuration. This configuration for event hubs uses the
values in the following table.
PROPERTY             DESCRIPTION
SharedAccessKeyName  Name of a shared access policy for the event hub that has at least Send authority.
SharedAccessKey      Primary or secondary key from the shared access policy for the event hub.
Example public and private configurations are shown below. This is a minimal configuration with a single
performance counter and event log to illustrate how to configure and use the event hub data sink. See Azure
Diagnostics configuration schema for a more complex example.
Public configuration
{
"WadCfg": {
"DiagnosticMonitorConfiguration": {
"overallQuotaInMB": 5120,
"PerformanceCounters": {
"scheduledTransferPeriod": "PT1M",
"sinks": "myEventHub",
"PerformanceCounterConfiguration": [
{
"counterSpecifier": "\\Processor(_Total)\\% Processor Time",
"sampleRate": "PT3M"
}
]
},
"WindowsEventLog": {
"scheduledTransferPeriod": "PT1M",
"sinks": "myEventHub",
"DataSource": [
{
"name": "Application!*[System[(Level=1 or Level=2 or Level=3)]]"
}
]
}
},
"SinksConfig": {
"Sink": [
{
"name": "myEventHub",
"EventHub": {
"Url": "https://diags-mycompany-ns.servicebus.windows.net/diageventhub",
"SharedAccessKeyName": "SendRule"
}
}
]
}
},
"StorageAccount": "mystorageaccount",
}
Private configuration
{
"storageAccountName": "mystorageaccount",
"storageAccountKey": "{base64 encoded key}",
"storageAccountEndPoint": "https://core.windows.net",
"EventHub": {
"Url": "https://diags-mycompany-ns.servicebus.windows.net/diageventhub",
"SharedAccessKeyName": "SendRule",
"SharedAccessKey": "{base64 encoded key}"
}
}
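Because the event hub connection details are split across the public and private configurations, a mismatch between the two files is easy to introduce. A quick local consistency check (hypothetical helper, operating on trimmed copies of the configurations above):

```python
# Trimmed copies of the public and private configurations shown above.
public_cfg = {
    "WadCfg": {
        "SinksConfig": {
            "Sink": [
                {
                    "name": "myEventHub",
                    "EventHub": {
                        "Url": "https://diags-mycompany-ns.servicebus.windows.net/diageventhub",
                        "SharedAccessKeyName": "SendRule"
                    }
                }
            ]
        }
    }
}
private_cfg = {
    "EventHub": {
        "Url": "https://diags-mycompany-ns.servicebus.windows.net/diageventhub",
        "SharedAccessKeyName": "SendRule",
        "SharedAccessKey": "{base64 encoded key}"
    }
}

def event_hub_mismatches(public_cfg, private_cfg):
    """List fields where the public EventHub sink and the private
    configuration disagree, or where the private key is missing."""
    issues = []
    priv = private_cfg.get("EventHub", {})
    for sink in public_cfg["WadCfg"]["SinksConfig"]["Sink"]:
        hub = sink.get("EventHub")
        if not hub:
            continue
        # The Url and SharedAccessKeyName must agree across the two files;
        # the SharedAccessKey itself lives only in the private configuration.
        for field in ("Url", "SharedAccessKeyName"):
            if hub.get(field) != priv.get(field):
                issues.append(field)
    if "SharedAccessKey" not in priv:
        issues.append("SharedAccessKey")
    return issues

print(event_hub_mismatches(public_cfg, private_cfg))  # []
```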
Configuration options
To send data to a data sink, you specify the sinks attribute on the data source's node. Where you place the sinks
attribute determines the scope of the assignment. In the following example, the sinks attribute is applied to the
PerformanceCounters node, which causes all child performance counters to be sent to the event hub.
"PerformanceCounters": {
"scheduledTransferPeriod": "PT1M",
"sinks": "MyEventHub",
"PerformanceCounterConfiguration": [
{
"counterSpecifier": "\\Processor(_Total)\\% Processor Time",
"sampleRate": "PT3M"
},
{
"counterSpecifier": "\\Memory\\Available MBytes",
"sampleRate": "PT3M"
},
{
"counterSpecifier": "\\Web Service(_Total)\\ISAPI Extension Requests/sec",
"sampleRate": "PT3M"
}
]
}
In the following example, the sinks attribute is applied directly to three counters, which causes only those
performance counters to be sent to the event hub.
"PerformanceCounters": {
"scheduledTransferPeriod": "PT1M",
"PerformanceCounterConfiguration": [
{
"counterSpecifier": "\\Processor(_Total)\\% Processor Time",
"sampleRate": "PT3M",
"sinks": "MyEventHub"
},
{
"counterSpecifier": "\\Memory\\Available MBytes",
"sampleRate": "PT3M"
},
{
"counterSpecifier": "\\Web Service(_Total)\\ISAPI Extension Requests/sec",
"sampleRate": "PT3M"
},
{
"counterSpecifier": "\\ASP.NET\\Requests Rejected",
"sampleRate": "PT3M",
"sinks": "MyEventHub"
},
{
"counterSpecifier": "\\ASP.NET\\Requests Queued",
"sampleRate": "PT3M",
"sinks": "MyEventHub"
}
]
}
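The scoping behavior can be modeled as simple inheritance: a counter uses its own sinks attribute when present; otherwise it falls back to the attribute on the PerformanceCounters node. A sketch of that reading (helper name hypothetical):

```python
def effective_sinks(performance_counters):
    """Map each counterSpecifier to the sinks it is sent to: a counter-level
    'sinks' attribute takes effect for that counter; otherwise the value on
    the PerformanceCounters node (if any) applies."""
    node_sinks = performance_counters.get("sinks")
    result = {}
    for counter in performance_counters["PerformanceCounterConfiguration"]:
        result[counter["counterSpecifier"]] = counter.get("sinks", node_sinks)
    return result

# Like the second example above: sinks set on an individual counter only.
cfg = {
    "scheduledTransferPeriod": "PT1M",
    "PerformanceCounterConfiguration": [
        {"counterSpecifier": "\\Processor(_Total)\\% Processor Time",
         "sampleRate": "PT3M", "sinks": "MyEventHub"},
        {"counterSpecifier": "\\Memory\\Available MBytes", "sampleRate": "PT3M"},
    ],
}
print(effective_sinks(cfg))
# {'\\Processor(_Total)\\% Processor Time': 'MyEventHub',
#  '\\Memory\\Available MBytes': None}
```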
Validating configuration
You can use a variety of methods to validate that data is being sent to the event hub. One straightforward method is
to use Event Hubs capture as described in Capture events through Azure Event Hubs in Azure Blob Storage or
Azure Data Lake Storage.
Next steps
Event Hubs overview
Create an event hub
Event Hubs FAQ
Collect data from Azure diagnostics extension to
Azure Monitor Logs
11/2/2020 • 2 minutes to read
Azure diagnostics extension is an agent in Azure Monitor that collects monitoring data from the guest operating
system of Azure compute resources including virtual machines. This article describes how to collect data written by
the diagnostics extension to Azure Storage and forward it to Azure Monitor Logs.
NOTE
The Log Analytics agent in Azure Monitor is typically the preferred method to collect data from the guest operating system
into Azure Monitor Logs. See Overview of the Azure Monitor agents for a detailed comparison of the agents.
LOG TYPE    RESOURCE TYPE    LOCATION
NOTE
The portal does not validate that the source exists in the storage account or that new data is being written.
Next steps
Collect logs and metrics for Azure services for supported Azure services.
Enable Solutions to provide insight into the data.
Use search queries to analyze the data.
Send Cloud Service, Virtual Machine, or Service
Fabric diagnostic data to Application Insights
11/2/2020 • 4 minutes to read
Cloud Services, Virtual Machines, Virtual Machine Scale Sets, and Service Fabric all use the Azure Diagnostics
extension to collect data. Azure diagnostics sends data to Azure Storage tables. However, you can also pipe all or a
subset of the data to other locations using Azure Diagnostics extension 1.5 or later.
This article describes how to send data from the Azure Diagnostics extension to Application Insights.
<SinksConfig>
<Sink name="ApplicationInsights">
<ApplicationInsights>{Insert InstrumentationKey}</ApplicationInsights>
<Channels>
<Channel logLevel="Error" name="MyTopDiagData" />
<Channel logLevel="Verbose" name="MyLogData" />
</Channels>
</Sink>
</SinksConfig>
"SinksConfig": {
"Sink": [
{
"name": "ApplicationInsights",
"ApplicationInsights": "{Insert InstrumentationKey}",
"Channels": {
"Channel": [
{
"logLevel": "Error",
"name": "MyTopDiagData"
},
{
"logLevel": "Error",
"name": "MyLogData"
}
]
}
}
]
}
The Sink name attribute is a string value that uniquely identifies the sink.
The ApplicationInsights element specifies the instrumentation key of the Application Insights resource where
the Azure Diagnostics data is sent.
If you don't have an existing Application Insights resource, see Create a new Application Insights
resource for more information on creating a resource and getting the instrumentation key.
If you are developing a Cloud Service with Azure SDK 2.8 and later, this instrumentation key is
automatically populated. The value is based on the APPINSIGHTS_INSTRUMENTATIONKEY service
configuration setting when packaging the Cloud Service project. See Use Application Insights with Cloud
Services.
The Channels element contains one or more Channel elements.
The name attribute uniquely refers to that channel.
The loglevel attribute lets you specify the log level that the channel allows. The available log levels in
order of most to least information are:
Verbose
Information
Warning
Error
Critical
A channel acts like a filter and allows you to select specific log levels to send to the target sink. For example, you
could collect verbose logs and send them to storage, but send only Errors to the sink.
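Treating a channel's logLevel as a threshold over the ordering above makes this filtering concrete. A small sketch (names hypothetical):

```python
# Levels ordered from most to least information, per the list above.
LEVELS = ["Verbose", "Information", "Warning", "Error", "Critical"]

def channel_accepts(channel_level, record_level):
    """A channel forwards a record only if the record's level is at or
    above the channel's configured logLevel."""
    return LEVELS.index(record_level) >= LEVELS.index(channel_level)

# An "Error" channel drops Verbose records but passes Error and Critical.
assert channel_accepts("Error", "Critical")
assert not channel_accepts("Error", "Verbose")
# A "Verbose" channel passes everything.
assert all(channel_accepts("Verbose", level) for level in LEVELS)
```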
The following graphic shows this relationship.
The following graphic summarizes the configuration values and how they work. You can include multiple sinks in
the configuration at different levels in the hierarchy. The sink at the top level acts as a global setting and the one
specified at the individual element acts like an override to that global setting.
<SinksConfig>
<Sink name="ApplicationInsights">
<ApplicationInsights>{Insert InstrumentationKey}</ApplicationInsights>
<Channels>
<Channel logLevel="Error" name="MyTopDiagData" />
<Channel logLevel="Verbose" name="MyLogData" />
</Channels>
</Sink>
</SinksConfig>
</WadCfg>
"WadCfg": {
"DiagnosticMonitorConfiguration": {
"overallQuotaInMB": 4096,
"sinks": "ApplicationInsights.MyTopDiagData", "_comment": "All info below sent to this channel",
"DiagnosticInfrastructureLogs": {
},
"PerformanceCounters": {
"PerformanceCounterConfiguration": [
{
"counterSpecifier": "\\Processor(_Total)\\% Processor Time",
"sampleRate": "PT3M"
},
{
"counterSpecifier": "\\Memory\\Available MBytes",
"sampleRate": "PT3M"
}
]
},
"WindowsEventLog": {
"scheduledTransferPeriod": "PT1M",
"DataSource": [
{
"name": "Application!*"
}
]
},
"Logs": {
"scheduledTransferPeriod": "PT1M",
"scheduledTransferLogLevelFilter": "Verbose",
"sinks": "ApplicationInsights.MyLogData", "_comment": "This specific info sent to this channel"
}
},
"SinksConfig": {
"Sink": [
{
"name": "ApplicationInsights",
"ApplicationInsights": "{Insert InstrumentationKey}",
"Channels": {
"Channel": [
{
"logLevel": "Error",
"name": "MyTopDiagData"
},
{
"logLevel": "Verbose",
"name": "MyLogData"
}
]
}
}
]
}
}
In the previous configuration, the following lines have the following meanings:
Send all the data that is being collected by Azure diagnostics
"DiagnosticMonitorConfiguration": {
"overallQuotaInMB": 4096,
"sinks": "ApplicationInsights.MyTopDiagData",
}
"DiagnosticMonitorConfiguration": {
"overallQuotaInMB": 4096,
"sinks": "ApplicationInsights.MyLogData",
}
Limitations
Channels apply only to logs and not to performance counters. If you specify a channel with a performance
counter element, it is ignored.
The log level for a channel cannot exceed the log level for what is being collected by Azure
Diagnostics. For example, you cannot collect Application Log errors in the Logs element and try to send
Verbose logs to the Application Insights sink. The scheduledTransferLogLevelFilter attribute must always collect
equal or more logs than the logs you are trying to send to a sink.
You cannot send blob data collected by the Azure Diagnostics extension to Application Insights. For
example, anything specified under the Directories node. For crash dumps, the actual crash dump is sent to blob
storage and only a notification that the crash dump was generated is sent to Application Insights.
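The log-level limitation can be checked mechanically: the scheduledTransferLogLevelFilter must be at least as verbose as the channel it feeds, or nothing reaches the sink. A sketch under that reading (names hypothetical):

```python
# Levels ordered from most to least information.
LEVELS = ["Verbose", "Information", "Warning", "Error", "Critical"]

def sink_can_receive(transfer_filter, channel_level):
    """The collection filter must be at least as verbose as the channel:
    collecting only Errors while the channel asks for Verbose yields nothing."""
    return LEVELS.index(transfer_filter) <= LEVELS.index(channel_level)

assert sink_can_receive("Verbose", "Error")      # collect everything, forward only errors
assert not sink_can_receive("Error", "Verbose")  # the invalid combination described above
```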
Next steps
Learn how to view your Azure diagnostics information in Application Insights.
Use PowerShell to enable the Azure diagnostics extension for your application.
Use Visual Studio to enable the Azure diagnostics extension for your application
Windows Azure Diagnostics extension (WAD)
configuration schema versions and history
11/2/2020 • 7 minutes to read
This article provides the version history of the Azure Diagnostics extension for Windows (WAD) schema versions
shipped as part of the Microsoft Azure SDK.
Azure Diagnostics version 1.0 first shipped in a plug-in model, meaning that when you installed the Azure SDK,
you got the version of Azure Diagnostics shipped with it.
Starting with SDK 2.5 (diagnostics version 1.2), Azure Diagnostics moved to an extension model. The tools to utilize
new features were only available in newer Azure SDKs, but any service using Azure Diagnostics would pick up the
latest shipping version directly from Azure. For example, anyone still using SDK 2.5 would be loading the latest
version shown in the previous table, regardless of whether they use the newer features.
Schemas index
Different versions of Azure Diagnostics use different configuration schemas. Schemas 1.0 and 1.2 have been
deprecated. For more information on version 1.3 and later, see Diagnostics 1.3 and later Configuration Schema.
Version history
Diagnostics extension 1.11
Added support for the Azure Monitor sink. This sink is only applicable to performance counters. It enables sending
performance counters collected on your VM, virtual machine scale set, or cloud service to Azure Monitor as custom
metrics. The Azure Monitor sink supports:
Retrieving all performance counters sent to Azure Monitor via the Azure Monitor metrics APIs.
Alerting on all performance counters sent to Azure Monitor via the new unified alerts experience in Azure
Monitor
Treating the wildcard operator in performance counters as the "Instance" dimension on your metric. For example,
if you collected the "LogicalDisk(*)/DiskWrites/sec" counter, you would be able to filter and split on the "Instance"
dimension to plot or alert on the Disk Writes/sec for each logical disk (C:, D:, and so on).
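To see what the wildcard maps to, it helps to split a counter specifier into its parts; the instance slot is what surfaces as the "Instance" dimension. An illustrative parser (the extension does this internally; this sketch is not its actual code):

```python
import re

def parse_counter(counter_specifier):
    """Split a Windows counter path like '\\LogicalDisk(*)\\Disk Writes/sec'
    into its object, instance, and counter parts."""
    m = re.fullmatch(r"\\([^(\\]+)(?:\(([^)]*)\))?\\(.+)", counter_specifier)
    if not m:
        raise ValueError(f"not a counter path: {counter_specifier!r}")
    obj, instance, counter = m.groups()
    return {"object": obj, "instance": instance, "counter": counter}

parsed = parse_counter("\\LogicalDisk(*)\\Disk Writes/sec")
print(parsed)
# instance == '*' is what surfaces as the "Instance" dimension in Azure Monitor.
```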
Define Azure Monitor as a new sink in your diagnostics extension configuration:
"SinksConfig": {
"Sink": [
{
"name": "AzureMonitorSink",
"AzureMonitor": {}
}
]
}
<SinksConfig>
<Sink name="AzureMonitorSink">
<AzureMonitor/>
</Sink>
</SinksConfig>
NOTE
Configuring the Azure Monitor sink for Classic VMs and Classic Cloud Services requires more parameters to be defined in the
diagnostics extension's private config.
For more details, see the detailed diagnostics extension schema documentation.
Next, you can configure your performance counters to be routed to the Azure Monitor Sink.
"PerformanceCounters": {
"scheduledTransferPeriod": "PT1M",
"sinks": "AzureMonitorSink",
"PerformanceCounterConfiguration": [
{
"counterSpecifier": "\\Processor(_Total)\\% Processor Time",
"sampleRate": "PT1M",
"unit": "percent"
}
]
},
<PerformanceCounters scheduledTransferPeriod="PT1M" sinks="AzureMonitorSink">
<PerformanceCounterConfiguration counterSpecifier="\Processor(_Total)\% Processor Time" sampleRate="PT1M"
unit="percent" />
</PerformanceCounters>
{
"storageAccountName": "diagstorageaccount",
"storageAccountEndPoint": "https://core.windows.net",
"storageAccountSasToken": "{sas token}",
"SecondaryStorageAccounts": {
"StorageAccount": [
{
"name": "secondarydiagstorageaccount",
"endpoint": "https://core.windows.net",
"sasToken": "{sas token}"
}
]
}
}
<PrivateConfig>
<StorageAccount name="diagstorageaccount" endpoint="https://core.windows.net" sasToken="{sas token}" />
<SecondaryStorageAccounts>
<StorageAccount name="secondarydiagstorageaccount" endpoint="https://core.windows.net" sasToken="{sas
token}" />
</SecondaryStorageAccounts>
</PrivateConfig>
{
"WadCfg": {
},
"StorageAccount": "diagstorageaccount",
"StorageType": "TableAndBlob"
}
<PublicConfig>
<WadCfg />
<StorageAccount>diagstorageaccount</StorageAccount>
<StorageType>TableAndBlob</StorageType>
</PublicConfig>
WARNING
If you are sending Azure resource logs or metrics to a storage account using diagnostic settings or activity logs to a storage
account using log profiles, the format of the data in the storage account changed to JSON Lines on Nov. 1, 2018. The
instructions below describe the impact and how to update your tooling to handle the new format.
What changed
Azure Monitor offers a capability that enables you to send resource logs and activity logs into an Azure storage
account, Event Hubs namespace, or into a Log Analytics workspace in Azure Monitor. In order to address a system
performance issue, on November 1, 2018 at 12:00 midnight UTC the format of log data sent to blob storage
changed. If you have tooling that reads data out of blob storage, you need to update your tooling to
understand the new data format.
On Thursday, November 1, 2018 at 12:00 midnight UTC, the blob format changed to be JSON Lines. This means
each record will be delimited by a newline, with no outer records array and no commas between JSON records.
The blob format changed for all diagnostic settings across all subscriptions at once. The first PT1H.json file
emitted for November 1 used this new format. The blob and container names remain the same.
A diagnostic setting created prior to November 1 continued to emit data in the previous format until
November 1.
This change occurred at once across all public cloud regions. The change will not occur in Microsoft Azure
Operated by 21Vianet, Azure Germany, or Azure Government clouds yet.
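For tooling, the change boils down to one JSON document containing a records array versus newline-delimited JSON objects. A reader that accepts both formats might look like this (a sketch, not official SDK behavior):

```python
import json

def read_log_records(blob_text):
    """Return log records from a PT1H.json blob, accepting both the
    pre-November-2018 records-array format and JSON Lines."""
    text = blob_text.strip()
    try:
        # Old format: one JSON document with a "records" array.
        doc = json.loads(text)
        if isinstance(doc, dict) and "records" in doc:
            return doc["records"]
        return [doc]  # a single JSON object also parses here
    except json.JSONDecodeError:
        # JSON Lines: one JSON object per line, no outer array, no commas.
        return [json.loads(line) for line in text.splitlines() if line.strip()]

old_format = '{"records": [{"operationName": "VaultGet"}, {"operationName": "VaultPut"}]}'
new_format = '{"operationName": "VaultGet"}\n{"operationName": "VaultPut"}'
assert read_log_records(old_format) == read_log_records(new_format)
```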
This change impacts the following data types:
Azure resource logs (see list of resources here)
Azure resource metrics being exported by diagnostic settings
Azure Activity log data being exported by log profiles
This change does not impact:
Network flow logs
Azure service logs not made available through Azure Monitor yet (for example, Azure App Service
resource logs, storage analytics logs)
Routing of Azure resource logs and activity logs to other destinations (Event Hubs, Log Analytics)
How to see if you are impacted
You are only impacted by this change if you:
1. Are sending log data to an Azure storage account using a diagnostic setting, and
2. Have tooling that depends on the JSON structure of these logs in storage.
To identify if you have diagnostic settings that are sending data to an Azure storage account, you can navigate to
the Monitor section of the portal, click on Diagnostic Settings , and identify any resources that have Diagnostic
Status set to Enabled :
If Diagnostic Status is set to Enabled , you have an active diagnostic setting on that resource. Click on the resource to
see if any diagnostic settings are sending data to a storage account:
If you do have resources sending data to a storage account using these resource diagnostic settings, the format of
the data in that storage account is affected by this change. Unless you have custom tooling that operates on
these storage accounts, the format change will not impact you.
Details of the format change
The current format of the PT1H.json file in Azure blob storage uses a JSON array of records. Here is a sample of a
KeyVault log file now:
{
"records": [
{
"time": "2016-01-05T01:32:01.2691226Z",
"resourceId": "/SUBSCRIPTIONS/361DA5D4-A47A-4C79-AFDD-
XXXXXXXXXXXX/RESOURCEGROUPS/CONTOSOGROUP/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/CONTOSOKEYVAULT",
"operationName": "VaultGet",
"operationVersion": "2015-06-01",
"category": "AuditEvent",
"resultType": "Success",
"resultSignature": "OK",
"resultDescription": "",
"durationMs": "78",
"callerIpAddress": "104.40.82.76",
"correlationId": "",
"identity": {
"claim": {
"http://schemas.microsoft.com/identity/claims/objectidentifier": "d9da5048-2737-4770-bd64-
XXXXXXXXXXXX",
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":
"live.com#username@outlook.com",
"appid": "1950a258-227b-4e31-a9cf-XXXXXXXXXXXX"
}
},
"properties": {
"clientInfo": "azure-resource-manager/2.0",
"requestUri": "https://control-prod-wus.vaultcore.azure.net/subscriptions/361da5d4-a47a-4c79-
afdd-XXXXXXXXXXXX/resourcegroups/contosoresourcegroup/providers/Microsoft.KeyVault/vaults/contosokeyvault?api-
version=2015-06-01",
"id": "https://contosokeyvault.vault.azure.net/",
"httpStatusCode": 200
}
},
{
"time": "2016-01-05T01:33:56.5264523Z",
"resourceId": "/SUBSCRIPTIONS/361DA5D4-A47A-4C79-AFDD-
XXXXXXXXXXXX/RESOURCEGROUPS/CONTOSOGROUP/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/CONTOSOKEYVAULT",
"operationName": "VaultGet",
"operationVersion": "2015-06-01",
"category": "AuditEvent",
"resultType": "Success",
"resultSignature": "OK",
"resultDescription": "",
"durationMs": "83",
"callerIpAddress": "104.40.82.76",
"correlationId": "",
"identity": {
"claim": {
"http://schemas.microsoft.com/identity/claims/objectidentifier": "d9da5048-2737-4770-bd64-
XXXXXXXXXXXX",
"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn":
"live.com#username@outlook.com",
"appid": "1950a258-227b-4e31-a9cf-XXXXXXXXXXXX"
}
},
"properties": {
"clientInfo": "azure-resource-manager/2.0",
"requestUri": "https://control-prod-wus.vaultcore.azure.net/subscriptions/361da5d4-a47a-4c79-
afdd-XXXXXXXXXXXX/resourcegroups/contosoresourcegroup/providers/Microsoft.KeyVault/vaults/contosokeyvault?api-
version=2015-06-01",
"id": "https://contosokeyvault.vault.azure.net/",
"httpStatusCode": 200
}
}
]
}
The new format uses JSON Lines, where each event is a line and the newline character indicates a new event. Here is
what the above sample looks like in the PT1H.json file after the change:
{"time": "2016-01-05T01:32:01.2691226Z", "resourceId": "/SUBSCRIPTIONS/361DA5D4-A47A-4C79-AFDD-XXXXXXXXXXXX/RESOURCEGROUPS/CONTOSOGROUP/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/CONTOSOKEYVAULT", "operationName": "VaultGet", "operationVersion": "2015-06-01", "category": "AuditEvent", "resultType": "Success", "resultSignature": "OK", "resultDescription": "", "durationMs": "78", "callerIpAddress": "104.40.82.76", "correlationId": "", "identity": {"claim": {"http://schemas.microsoft.com/identity/claims/objectidentifier": "d9da5048-2737-4770-bd64-XXXXXXXXXXXX", "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn": "live.com#username@outlook.com", "appid": "1950a258-227b-4e31-a9cf-XXXXXXXXXXXX"}}, "properties": {"clientInfo": "azure-resource-manager/2.0", "requestUri": "https://control-prod-wus.vaultcore.azure.net/subscriptions/361da5d4-a47a-4c79-afdd-XXXXXXXXXXXX/resourcegroups/contosoresourcegroup/providers/Microsoft.KeyVault/vaults/contosokeyvault?api-version=2015-06-01", "id": "https://contosokeyvault.vault.azure.net/", "httpStatusCode": 200}}
{"time": "2016-01-05T01:33:56.5264523Z", "resourceId": "/SUBSCRIPTIONS/361DA5D4-A47A-4C79-AFDD-XXXXXXXXXXXX/RESOURCEGROUPS/CONTOSOGROUP/PROVIDERS/MICROSOFT.KEYVAULT/VAULTS/CONTOSOKEYVAULT", "operationName": "VaultGet", "operationVersion": "2015-06-01", "category": "AuditEvent", "resultType": "Success", "resultSignature": "OK", "resultDescription": "", "durationMs": "83", "callerIpAddress": "104.40.82.76", "correlationId": "", "identity": {"claim": {"http://schemas.microsoft.com/identity/claims/objectidentifier": "d9da5048-2737-4770-bd64-XXXXXXXXXXXX", "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn": "live.com#username@outlook.com", "appid": "1950a258-227b-4e31-a9cf-XXXXXXXXXXXX"}}, "properties": {"clientInfo": "azure-resource-manager/2.0", "requestUri": "https://control-prod-wus.vaultcore.azure.net/subscriptions/361da5d4-a47a-4c79-afdd-XXXXXXXXXXXX/resourcegroups/contosoresourcegroup/providers/Microsoft.KeyVault/vaults/contosokeyvault?api-version=2015-06-01", "id": "https://contosokeyvault.vault.azure.net/", "httpStatusCode": 200}}
This new format enables Azure Monitor to push log files using append blobs, which are more efficient for
continuously appending new event data.
How to update
You only need to make updates if you have custom tooling that ingests these log files for further processing. If you
are making use of an external log analytics or SIEM tool, we recommend using event hubs to ingest this data
instead. Event hubs integration is easier in terms of processing logs from many services and bookmarking location
in a particular log.
Custom tools should be updated to handle both the current format and the JSON Lines format described above.
This will ensure that when data starts to appear in the new format, your tools do not break.
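A format-tolerant reader can be sketched in Python. The detection heuristic below (try the whole blob as one JSON document, then fall back to line-by-line parsing) is an illustrative assumption, not part of any official SDK:

```python
import json

def parse_pt1h(blob_text: str) -> list:
    """Parse a PT1H.json blob in either the legacy JSON-array format
    ({"records": [...]}) or the JSON Lines format used after
    November 1, 2018 (one JSON object per line, no outer array)."""
    try:
        doc = json.loads(blob_text)
    except json.JSONDecodeError:
        # JSON Lines: one event per line, newline-delimited.
        return [json.loads(line) for line in blob_text.splitlines() if line.strip()]
    if isinstance(doc, dict) and "records" in doc:
        return doc["records"]  # legacy format: outer "records" array
    return [doc]               # a JSON Lines blob containing a single event

print(parse_pt1h('{"records": [{"category": "AuditEvent"}]}'))  # -> [{'category': 'AuditEvent'}]
```

Because the fallback fires only when the whole blob fails to parse as one document, the same function keeps working before and after the cutover date.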
Next steps
Learn about archiving resource logs to a storage account
Learn about archiving activity log data to a storage account
Azure Diagnostics troubleshooting
11/2/2020 • 13 minutes to read
This article describes troubleshooting information that's relevant to using Azure Diagnostics. For more information
about Azure diagnostics, see Azure Diagnostics overview.
Logical components
Diagnostics Plugin Launcher (DiagnosticsPluginLauncher.exe) : Launches the Azure Diagnostics extension.
Serves as the entry point process.
Diagnostics Plugin (DiagnosticsPlugin.exe) : Configures, launches, and manages the lifetime of the monitoring
agent. It is the main process that is launched by the launcher.
Monitoring Agent (MonAgent*.exe processes) : Monitors, collects, and transfers the diagnostics data.
Log/artifact paths
Following are the paths to some important logs and artifacts. We refer to this information throughout the rest of
the document.
Azure Cloud Services
ARTIFACT | PATH
Virtual machines
ARTIFACT | PATH
If you find a negative exit code, refer to the exit code table in the References section.
Verify that the provided storage account is correct. Make sure you don't have network restrictions that
prevent the components from reaching public storage endpoints. One way to check is to remotely access
the machine and then try to write something to the same storage account yourself.
Finally, you can look at what failures are being reported by the monitoring agent. The monitoring agent
writes its logs in maeventtable.tsf , which is located in the local store for diagnostics data. Follow the
instructions in the Local log extraction section for opening this file. Then try to determine if there are
errors that indicate failures reading local files or writing to storage.
if (String.IsNullOrEmpty(eventDestination)) {
if (e == "DefaultEvents")
tableName = "WADDefault" + MD5(provider);
else
tableName = "WADEvent" + MD5(provider) + eventId;
}
else
tableName = "WAD" + eventDestination;
Here is an example:
<EtwEventSourceProviderConfiguration provider="prov1">
<Event id="1" />
<Event id="2" eventDestination="dest1" />
<DefaultEvents />
</EtwEventSourceProviderConfiguration>
<EtwEventSourceProviderConfiguration provider="prov2">
<DefaultEvents eventDestination="dest2" />
</EtwEventSourceProviderConfiguration>
"EtwEventSourceProviderConfiguration": [
{
"provider": "prov1",
"Event": [
{
"id": 1
},
{
"id": 2,
"eventDestination": "dest1"
}
],
"DefaultEvents": {
"eventDestination": "DefaultEventDestination",
"sinks": ""
}
},
{
"provider": "prov2",
"DefaultEvents": {
"eventDestination": "dest2"
}
}
]
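The naming rule in the pseudocode above can be sketched in Python. Treating MD5(provider) as the hex MD5 digest of the provider name is an assumption for illustration; the agent's actual hash derivation may differ:

```python
import hashlib

def wad_table_name(provider, event_id=None, event_destination=None):
    """Sketch of the table-naming pseudocode above. The hex MD5 digest of the
    provider name stands in for MD5(provider); the agent's real hash may differ."""
    if event_destination:
        return "WAD" + event_destination
    provider_hash = hashlib.md5(provider.encode("utf-8")).hexdigest()
    if event_id is None:  # the DefaultEvents case
        return "WADDefault" + provider_hash
    return "WADEvent" + provider_hash + str(event_id)

print(wad_table_name("prov1", event_id=2, event_destination="dest1"))  # -> WADdest1
```

As in the configuration example, an explicit eventDestination always wins, and only events without one fall back to the hashed provider name.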
EVENT | TABLE NAME
References
How to check Diagnostics extension configuration
The easiest way to check your extension configuration is to go to Azure Resource Explorer, and then go to the
virtual machine or cloud service where the Azure Diagnostics extension (IaaSDiagnostics / PaaDiagnostics) is.
Alternatively, remote desktop into the machine and look at the Azure Diagnostics Configuration file that's described
in the Log artifacts path section.
In either case, search for Microsoft.Azure.Diagnostics , and then for the xmlCfg or WadCfg field.
If you're searching on a virtual machine and the WadCfg field is present, it means the config is in JSON format. If
the xmlCfg field is present, it means the config is in XML and is base64-encoded. You need to decode it to see the
XML that was loaded by Diagnostics.
For the cloud service role, if you pick the configuration from disk, the data is base64-encoded, so you need to
decode it to see the XML that was loaded by Diagnostics.
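As a quick illustration, a base64-encoded xmlCfg value can be decoded with a few lines of Python; the encoded string below is a stand-in rather than a real configuration:

```python
import base64

# Stand-in for the base64 string found in the xmlCfg field of the extension settings.
encoded_cfg = base64.b64encode(b"<WadCfg></WadCfg>").decode("ascii")

# Decode to recover the XML that Diagnostics loaded.
xml_cfg = base64.b64decode(encoded_cfg).decode("utf-8")
print(xml_cfg)  # -> <WadCfg></WadCfg>
```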
Azure Diagnostics plugin exit codes
The plugin returns the following exit codes:
EXIT CODE | DESCRIPTION
0 | Success.
-1 | Generic error.
-11 | The guest agent was unable to create the process responsible for launching and monitoring the monitoring agent. Solution: Verify that sufficient system resources are available to launch new processes.
-104 | Unable to load the rcf file provided by the guest agent. This internal error should only happen if the guest agent plugin launcher is manually invoked incorrectly on the VM.
A new file called <relevantLogFile>.csv is created in the same path as the corresponding .tsf file.
NOTE
You only need to run this utility against the main .tsf file (for example, PerformanceCountersTable.tsf). The accompanying files
(for example, PerformanceCountersTables_001.tsf, PerformanceCountersTables_002.tsf, and so on) are automatically
processed.
NOTE
The following information applies mostly to Azure Cloud Services unless you have configured the
DiagnosticsMonitorTraceListener on an application that's running on your IaaS VM.
The Azure Log Analytics agent collects telemetry from Windows and Linux virtual machines in any cloud, from on-
premises machines, and from machines monitored by System Center Operations Manager, and sends the collected
data to your Log Analytics workspace in Azure Monitor. The Log Analytics agent also supports insights and other
services in Azure Monitor such as Azure Monitor for VMs, Azure Security Center, and Azure Automation. This
article provides a detailed overview of the agent, system and network requirements, and deployment methods.
NOTE
You may also see the Log Analytics agent referred to as the Microsoft Monitoring Agent (MMA) or OMS Linux agent.
Costs
There is no cost for the Log Analytics agent, but you may incur charges for the data it ingests. Check Manage usage
and costs with Azure Monitor Logs for detailed information on the pricing for data collected in a Log Analytics
workspace.
Data collected
The following table lists the types of data you can configure a Log Analytics workspace to collect from all
connected agents. See What is monitored by Azure Monitor? for a list of insights, solutions, and other features
that use the Log Analytics agent to collect other kinds of data.
DATA SOURCE | DESCRIPTION
Windows Event logs | Information sent to the Windows event logging system.
IIS logs | Usage information for IIS web sites running on the guest operating system.
Custom logs | Events from text files on both Windows and Linux computers.
Data destinations
The Log Analytics agent sends data to a Log Analytics workspace in Azure Monitor. The Windows agent can be
multihomed to send data to multiple workspaces and System Center Operations Manager management groups.
The Linux agent can send to only a single destination, either a workspace or management group.
Other services
The agent for Linux and Windows isn't only for connecting to Azure Monitor. Other services such as Azure
Security Center and Azure Sentinel rely on the agent and its connected Log Analytics workspace. The agent also
supports Azure Automation to host the Hybrid Runbook worker role and other services such as Change Tracking,
Update Management, and Azure Security Center. For more information about the Hybrid Runbook Worker role,
see Azure Automation Hybrid Runbook Worker.
Security limitations
The Windows and Linux agents support the FIPS 140 standard, but other types of hardening may not be
supported.
Installation options
There are multiple methods to install the Log Analytics agent and connect your machine to Azure Monitor
depending on your requirements. The following sections list the possible methods for different types of virtual
machine.
NOTE
Cloning a machine with the Log Analytics agent already configured is not supported. If the agent has already been
associated with a workspace, cloning will not work for 'golden images'.
Network requirements
The agent for Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. If
the machine connects through a firewall or proxy server to communicate over the Internet, review requirements
below to understand the network configuration required. If your IT security policies do not allow computers on
the network to connect to the Internet, you can set up a Log Analytics gateway and then configure the agent to
connect through the gateway to Azure Monitor. The agent can then receive configuration information and send
data collected.
The following table lists the proxy and firewall configuration information required for the Linux and Windows
agents to communicate with Azure Monitor logs.
Firewall requirements
AGENT RESOURCE | PORTS | DIRECTION | BYPASS HTTPS INSPECTION
For firewall information required for Azure Government, see Azure Government management.
If you plan to use the Azure Automation Hybrid Runbook Worker to connect to and register with the Automation
service to use runbooks or management solutions in your environment, it must have access to the port number
and the URLs described in Configure your network for the Hybrid Runbook Worker.
Proxy configuration
The Windows and Linux agent supports communicating either through a proxy server or Log Analytics gateway
to Azure Monitor using the HTTPS protocol. Both anonymous and basic authentication (username/password) are
supported. For the Windows agent connected directly to the service, the proxy configuration is specified during
installation or after deployment from Control Panel or with PowerShell.
For the Linux agent, the proxy server is specified during installation or after installation by modifying the
proxy.conf configuration file. The Linux agent proxy configuration value has the following syntax:
[protocol://][user:password@]proxyhost[:port]
PROPERTY | DESCRIPTION
Protocol | https
NOTE
If you use special characters such as "@" in your password, you will receive a proxy connection error because the value is
parsed incorrectly. To work around this issue, encode the password in the URL using a tool such as URLDecode.
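The workaround can be sketched in Python with the standard library's percent-encoding; the user name, host, and port below are hypothetical:

```python
from urllib.parse import quote

password = "p@ssw0rd"               # "@" would otherwise be misread as the host separator
encoded = quote(password, safe="")  # percent-encode reserved characters
proxy_value = f"https://proxyuser:{encoded}@proxyhost.example.com:8080"
print(proxy_value)  # -> https://proxyuser:p%40ssw0rd@proxyhost.example.com:8080
```

After encoding, the proxy value contains exactly one "@", so the host portion parses unambiguously.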
Next steps
Review data sources to understand the data sources available to collect data from your Windows or Linux
system.
Learn about log queries to analyze the data collected from data sources and solutions.
Learn about monitoring solutions that add functionality to Azure Monitor and also collect data into the Log
Analytics workspace.
Install Log Analytics agent on Windows computers
11/2/2020 • 9 minutes to read
This article provides details on installing the Log Analytics agent on Windows computers using the following
methods:
Manual installation using the setup wizard or command line.
Azure Automation Desired State Configuration (DSC).
IMPORTANT
The installation methods described in this article are typically used for virtual machines on-premises or in other clouds. See
Installation options for more efficient options you can use for Azure virtual machines.
NOTE
If you need to configure the agent to report to more than one workspace, this cannot be done during initial setup; it can
only be done afterwards by updating the settings from Control Panel or PowerShell, as described in Adding or removing a workspace.
Network requirements
See Log Analytics agent overview for the network requirements for the Windows agent.
9. On the Ready to Install page, review your choices and then click Install .
10. On the Configuration completed successfully page, click Finish .
When complete, the Microsoft Monitoring Agent appears in Control Panel . To confirm it is reporting to Log
Analytics, review Verify agent connectivity to Log Analytics.
NOTE
If you want to upgrade an agent, you need to use the Log Analytics scripting API. See the topic Managing and maintaining
the Log Analytics agent for Windows and Linux for further information.
The following table highlights the specific parameters supported by setup for the agent, including when deployed
using Automation DSC.
MMA-SPECIFIC OPTIONS | NOTES
1. To extract the agent installation files, from an elevated command prompt run MMASetup-<platform>.exe /c
and it will prompt you for the path to extract files to. Alternatively, you can specify the path by passing the
arguments MMASetup-<platform>.exe /c /t:<Full Path> .
2. To silently install the agent and configure it to report to a workspace in Azure commercial cloud, from the
folder you extracted the setup files to, type:
setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE=0 OPINSIGHTS_WORKSPACE_ID="<your workspace ID>" OPINSIGHTS_WORKSPACE_KEY="<your workspace key>" AcceptEndUserLicenseAgreement=1
NOTE
The string values for the parameters OPINSIGHTS_WORKSPACE_ID and OPINSIGHTS_WORKSPACE_KEY need to be
encapsulated in double quotes to instruct Windows Installer to interpret them as valid options for the package.
The 32-bit and 64-bit versions of the agent package have different product codes and new versions released also
have a unique value. The product code is a GUID that is the principal identification of an application or product and
is represented by the Windows Installer ProductCode property. The ProductId value in the MMAgent.ps1
script has to match the product code from the 32-bit or 64-bit agent installer package.
To retrieve the product code from the agent install package directly, you can use Orca.exe from the Windows SDK
Components for Windows Installer Developers, or use PowerShell following an example script written by a
Microsoft Most Valuable Professional (MVP). For either approach, you first need to extract the MOMagent.msi
file from the MMASetup installation package, as shown earlier in the first step under the section Install the agent
using the command line.
1. Import the xPSDesiredStateConfiguration DSC Module from
https://www.powershellgallery.com/packages/xPSDesiredStateConfiguration into Azure Automation.
2. Create Azure Automation variable assets for OPSINSIGHTS_WS_ID and OPSINSIGHTS_WS_KEY. Set
OPSINSIGHTS_WS_ID to your Log Analytics workspace ID and set OPSINSIGHTS_WS_KEY to the primary key of
your workspace.
3. Copy the script and save it as MMAgent.ps1.
Configuration MMAgent
{
Import-DscResource -ModuleName xPSDesiredStateConfiguration
$OIPackageLocalPath = "C:\Deploy\MMASetup-AMD64.exe"
$OPSINSIGHTS_WS_ID = Get-AutomationVariable -Name "OPSINSIGHTS_WS_ID"
$OPSINSIGHTS_WS_KEY = Get-AutomationVariable -Name "OPSINSIGHTS_WS_KEY"
Node OMSnode {
Service OIService
{
Name = "HealthService"
State = "Running"
DependsOn = "[Package]OI"
}
xRemoteFile OIPackage {
Uri = "https://go.microsoft.com/fwlink/?LinkId=828603"
DestinationPath = $OIPackageLocalPath
}
Package OI {
Ensure = "Present"
Path = $OIPackageLocalPath
Name = "Microsoft Monitoring Agent"
ProductId = "8A7F2C51-4C7D-4BFD-9014-91D11F24AAE2"
Arguments = '/C:"setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID=' +
$OPSINSIGHTS_WS_ID + ' OPINSIGHTS_WORKSPACE_KEY=' + $OPSINSIGHTS_WS_KEY + ' AcceptEndUserLicenseAgreement=1"'
DependsOn = "[xRemoteFile]OIPackage"
}
}
}
4. Update the ProductId value in the script with the product code extracted from the latest version of the agent
install package using the methods recommended earlier.
5. Import the MMAgent.ps1 configuration script into your Automation account.
6. Assign a Windows computer or node to the configuration. Within 15 minutes, the node checks its configuration
and the agent is pushed to the node.
You can also perform a simple log query in the Azure portal.
1. In the Azure portal, search for and select Monitor .
2. Select Logs in the menu.
3. On the Logs pane, in the query field type:
Heartbeat
| where Category == "Direct Agent"
| where TimeGenerated > ago(30m)
In the search results returned, you should see heartbeat records for the computer indicating it is connected and
reporting to the service.
Cache information
Data from the Log Analytics agent is cached on the local machine at C:\Program Files\Microsoft Monitoring
Agent\Agent\Health Service State before it's sent to Azure Monitor. The agent attempts to upload every 20
seconds. If it fails, it will wait an exponentially increasing length of time until it succeeds. It will wait 30 seconds
before the second attempt, 60 seconds before the next, 120 seconds, and so on to a maximum of 8.5 hours
between retries until it successfully connects again. This wait time is slightly randomized to avoid all agents
simultaneously attempting connection. Oldest data is discarded when the maximum buffer is reached.
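The retry behavior described above can be sketched as a doubling backoff capped at 8.5 hours (30,600 seconds); the randomized jitter is omitted for clarity:

```python
def backoff_schedule(max_wait_s=int(8.5 * 3600)):
    """Sketch of the agent's retry waits: 30 s, doubling each attempt,
    capped at roughly 8.5 hours between retries (jitter omitted)."""
    wait = 30
    while True:
        yield min(wait, max_wait_s)
        if wait >= max_wait_s:
            return
        wait *= 2

print(list(backoff_schedule())[:4])  # -> [30, 60, 120, 240]
```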
The default cache size is 50 MB but can be configured between a minimum of 5 MB and maximum of 1.5 GB. It's
stored in the registry key
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Persistence Cache
Maximum. The value represents the number of pages, with 8 KB per page.
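Because the registry value counts 8 KB pages rather than bytes, converting a desired cache size takes one line of arithmetic; this sketch shows the conversion:

```python
PAGE_SIZE_KB = 8  # the Persistence Cache Maximum registry value counts 8 KB pages

def cache_mb_to_pages(size_mb):
    """Convert a desired cache size in MB to the page count stored in the registry."""
    return size_mb * 1024 // PAGE_SIZE_KB

print(cache_mb_to_pages(50))  # default 50 MB cache -> 6400 pages
```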
Next steps
Review Managing and maintaining the Log Analytics agent for Windows and Linux to learn about how to
reconfigure, upgrade, or remove the agent from the virtual machine.
Review Troubleshooting the Windows agent if you encounter issues while installing or managing the agent.
Log Analytics virtual machine extension for Windows
11/2/2020 • 5 minutes to read
Azure Monitor Logs provides monitoring capabilities across cloud and on-premises assets. The Log Analytics agent
virtual machine extension for Windows is published and supported by Microsoft. The extension installs the Log
Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics workspace.
This document details the supported platforms, configurations, and deployment options for the Log Analytics
virtual machine extension for Windows.
Prerequisites
Operating system
For details about the supported Windows operating systems, refer to the Overview of Azure Monitor agents article.
Agent and VM Extension version
The following table provides a mapping of the version of the Windows Log Analytics VM extension and Log
Analytics agent bundle for each release.
LOG ANALYTICS WINDOWS AGENT BUNDLE VERSION | LOG ANALYTICS WINDOWS VM EXTENSION VERSION | RELEASE DATE | RELEASE NOTES
Extension schema
The following JSON shows the schema for the Log Analytics agent extension. The extension requires the workspace
ID and workspace key from the target Log Analytics workspace. These can be found in the settings for the
workspace in the Azure portal. Because the workspace key should be treated as sensitive data, it should be stored in
a protected setting configuration. Azure VM extension protected setting data is encrypted, and only decrypted on
the target virtual machine. Note that workspaceId and workspaceKey are case-sensitive.
{
"type": "extensions",
"name": "OMSExtension",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.EnterpriseCloud.Monitoring",
"type": "MicrosoftMonitoringAgent",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"workspaceId": "myWorkSpaceId"
},
"protectedSettings": {
"workspaceKey": "myWorkspaceKey"
}
}
}
Property values
NAME | VALUE / EXAMPLE
apiVersion | 2015-06-15
publisher | Microsoft.EnterpriseCloud.Monitoring
type | MicrosoftMonitoringAgent
typeHandlerVersion | 1.0
NOTE
For additional properties see Azure Connect Windows Computers to Azure Monitor.
Template deployment
Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema detailed in the
previous section can be used in an Azure Resource Manager template to run the Log Analytics agent extension
during an Azure Resource Manager template deployment. A sample template that includes the Log Analytics agent
VM extension can be found on the Azure Quickstart Gallery.
NOTE
The template does not support specifying more than one workspace ID and workspace key when you want to configure the
agent to report to multiple workspaces. To configure the agent to report to multiple workspaces, see Adding or removing a
workspace.
The JSON for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root or
top level of a Resource Manager JSON template. The placement of the JSON affects the value of the resource name
and type. For more information, see Set name and type for child resources.
The following example assumes the Log Analytics extension is nested inside the virtual machine resource. When
nesting the extension resource, the JSON is placed in the "resources": [] object of the virtual machine.
{
"type": "extensions",
"name": "OMSExtension",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.EnterpriseCloud.Monitoring",
"type": "MicrosoftMonitoringAgent",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"workspaceId": "myWorkSpaceId"
},
"protectedSettings": {
"workspaceKey": "myWorkspaceKey"
}
}
}
When placing the extension JSON at the root of the template, the resource name includes a reference to the parent
virtual machine, and the type reflects the nested configuration.
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "<parentVmResource>/OMSExtension",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.EnterpriseCloud.Monitoring",
"type": "MicrosoftMonitoringAgent",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"workspaceId": "myWorkSpaceId"
},
"protectedSettings": {
"workspaceKey": "myWorkspaceKey"
}
}
}
PowerShell deployment
The Set-AzVMExtension command can be used to deploy the Log Analytics agent virtual machine extension to an
existing virtual machine. Before running the command, the public and private configurations need to be stored in a
PowerShell hash table.
$PublicSettings = @{"workspaceId" = "myWorkspaceId"}
$ProtectedSettings = @{"workspaceKey" = "myWorkspaceKey"}
Set-AzVMExtension -ExtensionName "MicrosoftMonitoringAgent" `
-ResourceGroupName "myResourceGroup" `
-VMName "myVM" `
-Publisher "Microsoft.EnterpriseCloud.Monitoring" `
-ExtensionType "MicrosoftMonitoringAgent" `
-TypeHandlerVersion 1.0 `
-Settings $PublicSettings `
-ProtectedSettings $ProtectedSettings `
-Location "myLocation"
Extension execution output is logged to files found in the following directory:
C:\WindowsAzure\Logs\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent\
Support
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack
Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site and select Get
support. For information about using Azure Support, read the Microsoft Azure support FAQ.
Connect computers without internet access by using
the Log Analytics gateway in Azure Monitor
11/2/2020 • 19 minutes to read
NOTE
As Microsoft Operations Management Suite (OMS) transitions to Microsoft Azure Monitor, terminology is changing. This
article refers to OMS Gateway as the Azure Log Analytics gateway.
This article describes how to configure communication with Azure Automation and Azure Monitor by using the
Log Analytics gateway when computers that are directly connected or that are monitored by Operations Manager
have no internet access.
The Log Analytics gateway is an HTTP forward proxy that supports HTTP tunneling using the HTTP CONNECT
command. This gateway sends data to Azure Automation and a Log Analytics workspace in Azure Monitor on
behalf of the computers that cannot directly connect to the internet.
The Log Analytics gateway supports:
Reporting to the same Log Analytics workspaces that are configured on each agent behind it and that are configured
with Azure Automation Hybrid Runbook Workers.
Windows computers on which the Microsoft Monitoring Agent is directly connected to a Log Analytics
workspace in Azure Monitor.
Linux computers on which a Log Analytics agent for Linux is directly connected to a Log Analytics workspace in
Azure Monitor.
System Center Operations Manager 2012 SP1 with UR7, Operations Manager 2012 R2 with UR3, or a
management group in Operations Manager 2016 or later that is integrated with Log Analytics.
Some IT security policies don't allow internet connection for network computers. These unconnected computers
could be point of sale (POS) devices or servers supporting IT services, for example. To connect these devices to
Azure Automation or a Log Analytics workspace so you can manage and monitor them, configure them to
communicate directly with the Log Analytics gateway. The Log Analytics gateway can receive configuration
information and forward data on their behalf. If the computers are configured with the Log Analytics agent to
directly connect to a Log Analytics workspace, the computers instead communicate with the Log Analytics gateway.
The Log Analytics gateway transfers data from the agents to the service directly. It doesn't analyze any of the data
in transit and the gateway does not cache data when it loses connectivity with the service. When the gateway is
unable to communicate with service, the agent continues to run and queues the collected data on the disk of the
monitored computer. When the connection is restored, the agent sends the cached data collected to Azure Monitor.
When an Operations Manager management group is integrated with Log Analytics, the management servers can
be configured to connect to the Log Analytics gateway to receive configuration information and send collected
data, depending on the solution you have enabled. Operations Manager agents send some data to the
management server. For example, agents might send Operations Manager alerts, configuration assessment data,
instance space data, and capacity data. Other high-volume data, such as Internet Information Services (IIS) logs,
performance data, and security events, is sent directly to the Log Analytics gateway.
If one or more Operations Manager Gateway servers are deployed to monitor untrusted systems in a perimeter
network or an isolated network, those servers can't communicate with a Log Analytics gateway. Operations
Manager Gateway servers can report only to a management server. When an Operations Manager management
group is configured to communicate with the Log Analytics gateway, the proxy configuration information is
automatically distributed to every agent-managed computer that is configured to collect log data for Azure
Monitor, even if the setting is empty.
To provide high availability for directly connected agents or Operations Manager management groups that
communicate with a Log Analytics workspace through the gateway, use network load balancing (NLB) to redirect
and distribute traffic across multiple gateway servers. That way, if one gateway server goes down, the traffic is
redirected to another available node.
The computer that runs the Log Analytics gateway requires the Log Analytics Windows agent to identify the
service endpoints that the gateway needs to communicate with. The agent also needs to direct the gateway to
report to the same workspaces that the agents or Operations Manager management group behind the gateway
are configured with. This configuration allows the gateway and the agent to communicate with their assigned
workspace.
A gateway can be multihomed to up to four workspaces. This is the total number of workspaces a Windows agent
supports.
Each agent must have network connectivity to the gateway so that agents can automatically transfer data to and
from the gateway. Avoid installing the gateway on a domain controller. Linux computers that are behind a gateway
server cannot use the wrapper script installation method to install the Log Analytics agent for Linux. The agent
must be downloaded manually, copied to the computer, and installed manually because the gateway only supports
communicating with the Azure services mentioned earlier.
The following diagram shows data flowing from direct agents, through the gateway, to Azure Automation and Log
Analytics. The agent proxy configuration must match the port that the Log Analytics gateway is configured with.
The following diagram shows data flow from an Operations Manager management group to Log Analytics.
Set up your system
Computers designated to run the Log Analytics gateway must have the following configuration:
Windows 10, Windows 8.1, or Windows 7
Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows
Server 2008 R2, or Windows Server 2008
Microsoft .NET Framework 4.5
At least a 4-core processor and 8 GB of memory
A Log Analytics agent for Windows that is configured to report to the same workspace as the agents that
communicate through the gateway
Language availability
The Log Analytics gateway is available in these languages:
Chinese (Simplified)
Chinese (Traditional)
Czech
Dutch
English
French
German
Hungarian
Italian
Japanese
Korean
Polish
Portuguese (Brazil)
Portuguese (Portugal)
Russian
Spanish (International)
Supported encryption protocols
The Log Analytics gateway supports only Transport Layer Security (TLS) 1.0, 1.1, and 1.2. It doesn't support Secure
Sockets Layer (SSL). To ensure the security of data in transit to Log Analytics, configure the gateway to use at least
TLS 1.2. Older versions of TLS or SSL are vulnerable. Although they currently allow backward compatibility, avoid
using them.
For additional information, review Sending data securely using TLS 1.2.
Supported number of agent connections
The following table shows approximately how many agents can communicate with a gateway server. Support is
based on agents that upload about 200 KB of data every 6 seconds. For each agent tested, data volume is about 2.7
GB per day.
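As a quick arithmetic check, the stated upload rate implies the quoted daily volume (a sketch; only the 200 KB / 6 second figures from the text are used):

```shell
# Per-agent rate quoted above: ~200 KB uploaded every 6 seconds.
intervals_per_day=$((86400 / 6))           # 14400 upload intervals per day
kb_per_day=$((intervals_per_day * 200))    # 2,880,000 KB per day
# Express that in GiB (1 KB taken as 1024 bytes), keeping one decimal place.
gib_times_10=$((kb_per_day * 1024 * 10 / 1073741824))
echo "$kb_per_day KB/day ~ $((gib_times_10 / 10)).$((gib_times_10 % 10)) GiB/day"
```

The result lines up with the roughly 2.7 GB per day per agent quoted above.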
3. On the License Agreement page, select I accept the terms in the License Agreement to agree to the
Microsoft Software License Terms, and then select Next .
4. On the Port and proxy address page:
a. Enter the TCP port number to be used for the gateway. Setup uses this port number to configure an
inbound rule on Windows Firewall. The default value is 8080. The valid range of the port number is 1
through 65535. If the input does not fall into this range, an error message appears.
b. If the server where the gateway is installed needs to communicate through a proxy, enter the proxy
address where the gateway needs to connect. For example, enter http://myorgname.corp.contoso.com:80 . If
you leave this field blank, the gateway will try to connect to the internet directly. If your proxy server
requires authentication, enter a username and password.
c. Select Next .
5. If you do not have Microsoft Update enabled, the Microsoft Update page appears, and you can choose to
enable it. Make a selection and then select Next . Otherwise, continue to the next step.
6. On the Destination Folder page, either leave the default folder C:\Program Files\OMS Gateway or enter
the location where you want to install the gateway. Then select Next .
7. On the Ready to install page, select Install . If User Account Control requests permission to install, select
Yes .
8. After Setup finishes, select Finish . To verify that the service is running, open the services.msc snap-in and
verify that OMS Gateway appears in the list of services and that its status is Running .
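The port validation described in step 4a (accept 1 through 65535, otherwise show an error) is a plain range check; a minimal sketch of the same rule:

```shell
# Returns 0 if the argument is a valid gateway port per step 4a (1-65535).
is_valid_port() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # empty or non-numeric input is rejected
  esac
  [ "$1" -ge 1 ] && [ "$1" -le 65535 ]
}

is_valid_port 8080 && echo "8080 ok"         # the installer's default port
is_valid_port 70000 || echo "70000 rejected"
```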
To silently install the gateway and configure it with a specific proxy address and port number, type the following:
The /qn command-line option hides Setup; /qb shows Setup during a silent install.
If you need to provide credentials to authenticate with the proxy, type the following:
After installation, you can confirm that the settings were accepted (excluding the username and password) by using
the following PowerShell cmdlets:
Get-OMSGatewayConfig – Returns the TCP port the gateway is configured to listen on.
Get-OMSGatewayRelayProxy – Returns the IP address of the proxy server the gateway is configured to communicate
with.
NOTE
Configuring Azure Load Balancer with the Basic SKU requires that Azure virtual machines belong to an availability set.
To learn more about availability sets, see Manage the availability of Windows virtual machines in Azure. To add existing
virtual machines to an availability set, refer to Set Azure Resource Manager VM Availability Set.
After the load balancer is created, create a backend pool to distribute traffic to one or more gateway servers.
Follow the steps described in the quickstart article section Create resources for the load balancer.
NOTE
Configure the health probe to use the TCP port of the gateway server. The health probe dynamically adds or
removes gateway servers from the load balancer rotation based on their response to health checks.
NOTE
To install the Log Analytics agent on the gateway and Windows computers that directly connect to Log Analytics, see
Connect Windows computers to the Log Analytics service in Azure. To connect Linux computers, see Connect Linux
computers to Azure Monitor.
After you install the agent on the gateway server, configure it to report to the same workspace as the agents that
communicate through the gateway. If the Log Analytics Windows agent is not installed on the gateway, event 300 is
written to the OMS Gateway event log, indicating that the agent needs to be installed. If the agent is installed but
not configured to report to the same workspace as the agents that communicate through it, event 105 is written to
the same log, indicating that the agent on the gateway needs to be reconfigured.
After you complete configuration, restart the OMS Gateway service to apply the changes. Otherwise, the gateway
will reject agents that attempt to communicate with Log Analytics and will report event 105 in the OMS Gateway
event log. This will also happen when you add or remove a workspace from the agent configuration on the
gateway server.
For information related to the Automation Hybrid Runbook Worker, see Automate resources in your datacenter or
cloud by using Hybrid Runbook Worker.
Configure Operations Manager, where all agents use the same proxy server
The Operations Manager proxy configuration is automatically applied to all agents that report to Operations
Manager, even if the setting is empty.
To use OMS Gateway to support Operations Manager, you must have:
Microsoft Monitoring Agent (version 8.0.10900.0 or later) installed on the OMS Gateway server and configured
with the same Log Analytics workspaces that your management group is configured to report to.
Internet connectivity. Alternatively, OMS Gateway must be connected to a proxy server that is connected to the
internet.
NOTE
If you specify no value for the gateway, blank values are pushed to all agents.
If your Operations Manager management group is registering with a Log Analytics workspace for the first time,
you won't see the option to specify the proxy configuration for the management group in the Operations console.
This option is available only if the management group has been registered with the service.
To configure integration, update the system proxy configuration by using Netsh on the system where you're
running the Operations console and on all management servers in the management group. Follow these steps:
1. Open an elevated command prompt:
a. Select Start and enter cmd .
b. Right-click Command Prompt and select Run as administrator .
2. Enter the following command:
netsh winhttp set proxy <proxy>:<port>
After completing the integration with Log Analytics, remove the change by running netsh winhttp reset proxy .
Then, in the Operations console, use the Configure proxy server option to specify the Log Analytics gateway
server.
1. On the Operations Manager console, under Operations Management Suite , select Connection , and
then select Configure Proxy Server .
2. Select Use a proxy server to access the Operations Management Suite , and then enter the IP
address of the Log Analytics gateway server or the virtual IP address of the load balancer. Be sure to start
with the prefix http:// .
3. Select Finish . Your Operations Manager management group is now configured to communicate through
the gateway server to the Log Analytics service.
Configure Operations Manager, where specific agents use a proxy server
For large or complex environments, you might want only specific servers (or groups) to use the Log Analytics
gateway server. For these servers, you can't update the Operations Manager agent directly because this value is
overwritten by the global value for the management group. Instead, override the rule used to push these values.
NOTE
Use this configuration technique if you want to allow for multiple Log Analytics gateway servers in your environment. For
example, you can require specific Log Analytics gateway servers to be specified on a regional basis.
To configure specific servers or groups to use the Log Analytics gateway server:
1. Open the Operations Manager console and select the Authoring workspace.
2. In the Authoring workspace, select Rules .
3. On the Operations Manager toolbar, select the Scope button. If this button is not available, make sure you
have selected an object, not a folder, in the Monitoring pane. The Scope Management Pack Objects
dialog box displays a list of common targeted classes, groups, or objects.
4. In the Look for field, enter Health Service and select it from the list. Select OK .
5. Search for Advisor Proxy Setting Rule .
6. On the Operations Manager toolbar, select Overrides , and then point to Override the Rule\For a
specific object of class: Health Service and select an object from the list. Or create a custom group that
contains the health service object of the servers you want to apply this override to, and then apply the
override to your custom group.
7. In the Override Properties dialog box, select the Override check box next to the
WebProxyAddress parameter. In the Override Value field, enter the URL of the Log Analytics gateway
server. Be sure to start with the prefix http:// .
NOTE
You don't need to enable the rule. It's already managed automatically with an override in the Microsoft System
Center Advisor Secure Reference Override management pack that targets the Microsoft System Center Advisor
Monitoring Server Group.
8. Select a management pack from the Select destination management pack list, or create a new unsealed
management pack by selecting New .
9. When you finish, select OK .
Configure for Automation Hybrid Runbook Workers
If you have Automation Hybrid Runbook Workers in your environment, follow these steps to configure the
gateway to support the workers.
Refer to the Configure your network section of the Automation documentation to find the URL for each region.
If your computer is registered as a Hybrid Runbook Worker automatically, for example if the Update Management
solution is enabled for one or more VMs, follow these steps:
1. Add the Job Runtime Data service URLs to the Allowed Host list on the Log Analytics gateway. For example:
Add-OMSGatewayAllowedHost we-jobruntimedata-prod-su1.azure-automation.net
2. Restart the Log Analytics gateway service by using the following PowerShell cmdlet:
Restart-Service OMSGatewayService
If your computer is joined to Azure Automation by using the Hybrid Runbook Worker registration cmdlet, follow
these steps:
1. Add the agent service registration URL to the Allowed Host list on the Log Analytics gateway. For example:
Add-OMSGatewayAllowedHost ncus-agentservice-prod-1.azure-automation.net
2. Add the Job Runtime Data service URLs to the Allowed Host list on the Log Analytics gateway. For example:
Add-OMSGatewayAllowedHost we-jobruntimedata-prod-su1.azure-automation.net
3. Restart the Log Analytics gateway service. Restart-Service OMSGatewayService
Set-OMSGatewayRelayProxy
Parameters: -Address , -Username , -Password (secure string)
Sets the address (and credential) of the relay (upstream) proxy.
Example – set a relay proxy and credential:
Set-OMSGatewayRelayProxy -Address http://www.myproxy.com:8080 -Username user1 -Password 123
Troubleshooting
To collect events logged by the gateway, you should have the Log Analytics agent installed.
The Log Analytics gateway supports only TLS 1.0, TLS 1.1, and
1.2. It does not support SSL.
Log Analytics Gateway/Active Client Connection – Number of active client network (TCP) connections
Log Analytics Gateway/Rejection Count – Number of rejections due to any TLS validation error
Assistance
When you're signed in to the Azure portal, you can get help with the Log Analytics gateway or any other Azure
service or feature. To get help, select the question mark icon in the upper-right corner of the portal and select New
support request . Then complete the new support request form.
Next steps
Add data sources to collect data from connected sources, and store the data in your Log Analytics workspace.
Managing and maintaining the Log Analytics agent
for Windows and Linux
2/27/2020 • 9 minutes to read • Edit Online
After initial deployment of the Log Analytics Windows or Linux agent in Azure Monitor, you may need to
reconfigure the agent, upgrade it, or remove it from the computer if it has reached the retirement stage in its
lifecycle. You can easily manage these routine maintenance tasks manually or through automation, which reduces
both operational error and expenses.
Upgrading agent
The Log Analytics agent for Windows and Linux can be upgraded to the latest release manually or automatically
depending on the deployment scenario and environment the VM is running in. The following methods can be used
to upgrade the agent.
Custom Azure VM images – Manual install of the Log Analytics agent for Windows/Linux. Update VMs to the
newest version of the agent from the command line, by running the Windows installer package or the Linux
self-extracting and installable shell script bundle.
Non-Azure VMs – Manual install of the Log Analytics agent for Windows/Linux. Update VMs to the newest version
of the agent from the command line, by running the Windows installer package or the Linux self-extracting and
installable shell script bundle.
NOTE
The Log Analytics agent for Windows does not support configuring or reconfiguring the workspace it reports to
during an upgrade. To configure the agent, follow one of the supported methods listed under Adding or removing a
workspace.
NOTE
If you've used the command line or script previously to install or configure the agent, EnableAzureOperationalInsights
was replaced by AddCloudWorkspace and RemoveCloudWorkspace .
Linux agent
The following steps demonstrate how to reconfigure the Linux agent if you decide to register it with a different
workspace or to remove a workspace from its configuration.
1. To verify it is registered to a workspace, run the following command:
/opt/microsoft/omsagent/bin/omsadmin.sh -l
It is important that the status also shows the agent is running; otherwise, the following steps to reconfigure
the agent will not complete successfully.
2. If it is already registered with a workspace, remove the registered workspace by running the following
command. Otherwise if it is not registered, proceed to the next step.
/opt/microsoft/omsagent/bin/omsadmin.sh -X
The agent service does not need to be restarted in order for the changes to take effect.
param($ProxyDomainName="https://proxy.contoso.com:30443", $cred=(Get-Credential))
# The next two lines were elided from this excerpt; they are reconstructed from the
# agent's AgentConfigManager.MgmtSvcCfg COM scripting interface.
$healthServiceSettings = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
$proxyMethod = $healthServiceSettings | Get-Member -Name 'SetProxyInfo'
if (!$proxyMethod)
{
Write-Output 'Health Service proxy API not present, will not update settings.'
return
}
$ProxyUserName = $cred.username
# Apply the proxy address and credential to the agent (reconstructed).
$healthServiceSettings.SetProxyInfo($ProxyDomainName, $ProxyUserName, $cred.GetNetworkCredential().Password)
Linux agent
Perform the following steps if your Linux computers need to communicate through a proxy server or Log Analytics
gateway. The proxy configuration value has the following syntax [protocol://][user:password@]proxyhost[:port] .
The proxyhost property accepts a fully qualified domain name or IP address of the proxy server.
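Before writing proxy.conf, the documented [protocol://][user:password@]proxyhost[:port] syntax can be sanity-checked with a rough pattern match (a sketch; the regular expression is an approximation of the documented syntax, not the agent's own parser, and the sample values are hypothetical):

```shell
# Rough validation of the documented proxy syntax:
# [protocol://][user:password@]proxyhost[:port]
valid_proxy_conf() {
  printf '%s\n' "$1" | grep -Eq \
    '^([a-z]+://)?([^:@/]+:[^@/]+@)?[A-Za-z0-9.-]+(:[0-9]+)?$'
}

valid_proxy_conf "https://proxyuser:proxypassword@proxyserver01:30443" && echo valid
valid_proxy_conf "proxyserver01" && echo "bare hostname is also valid"
```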
1. Edit the file /etc/opt/microsoft/omsagent/proxy.conf by running the following commands, changing the
values to your specific settings. (Note that sudo does not apply to a >> redirection itself, so sudo tee is
used to write the file.)
proxyconf="https://proxyuser:proxypassword@proxyserver01:30443"
echo "$proxyconf" | sudo tee /etc/opt/microsoft/omsagent/proxy.conf
sudo chown omsagent:omiusers /etc/opt/microsoft/omsagent/proxy.conf
Uninstall agent
Use one of the following procedures to uninstall the Windows or Linux agent using the command line or setup
wizard.
Windows agent
Uninstall from Control Panel
1. Sign on to the computer with an account that has administrative rights.
2. In Control Panel , click Programs and Features .
3. In Programs and Features , click Microsoft Monitoring Agent , click Uninstall , and then click Yes .
NOTE
The Agent Setup Wizard can also be run by double-clicking MMASetup-<platform>.exe , which is available for download
from a workspace in the Azure portal.
NOTE
As part of the ongoing transition from Microsoft Operations Management Suite to Azure Monitor, the Operations
Management Suite Agent for Windows or Linux will be referred to as the Log Analytics agent for Windows and Log Analytics
agent for Linux.
2. Ensure that the line beginning with httpsport= defines port 1270. For example: httpsport=1270
Next steps
Review Troubleshooting the Linux agent if you encounter issues while installing or managing the Linux
agent.
Review Troubleshooting the Windows agent if you encounter issues while installing or managing the
Windows agent.
Agent Health solution in Azure Monitor
11/2/2020 • 5 minutes to read • Edit Online
The Agent Health solution in Azure helps you understand, for all of the agents reporting directly to the Log
Analytics workspace in Azure Monitor or to a System Center Operations Manager management group connected to
Azure Monitor, which agents are unresponsive and which are submitting operational data. You can also keep track
of how many agents are deployed and where they are distributed geographically, and run other queries to maintain
awareness of the distribution of agents deployed in Azure, in other cloud environments, or on-premises.
Prerequisites
Before you deploy this solution, confirm you have currently supported Windows agents reporting to the Log
Analytics workspace or reporting to an Operations Manager management group integrated with your workspace.
Solution components
This solution consists of the following resources that are added to your workspace and directly connected agents or
Operations Manager connected management group.
Management packs
If your System Center Operations Manager management group is connected to a Log Analytics workspace, the
following management packs are installed in Operations Manager. These management packs are also installed on
directly connected Windows computers after adding this solution. There is nothing to configure or manage with
these management packs.
Microsoft System Center Advisor HealthAssessment Direct Channel Intelligence Pack
(Microsoft.IntelligencePacks.HealthAssessmentDirect)
Microsoft System Center Advisor HealthAssessment Server Channel Intelligence Pack
(Microsoft.IntelligencePacks.HealthAssessmentViaServer).
For more information on how solution management packs are updated, see Connect Operations Manager to Log
Analytics.
Configuration
Add the Agent Health solution to your Log Analytics workspace using the process described in Add solutions. There
is no further configuration required.
Data collection
Supported agents
The following table describes the connected sources that are supported by this solution.
System Center Operations Manager management group – Supported: Yes. Heartbeat events are collected from
agents reporting to the management group every 60 seconds and then forwarded to Azure Monitor. A direct
connection from Operations Manager agents to Azure Monitor is not required; heartbeat event data is forwarded
from the management group to the Log Analytics workspace.
Click on the Agent Health tile to open the Agent Health dashboard. The dashboard includes the columns in the
following table. Each column lists the top ten events by count that match that column’s criteria for the specified
time range. You can run a log search that provides the entire list by selecting See all at the right bottom of each
column, or by clicking the column header.
Agent count over time – A trend of your agent count over a period of seven days for both Linux and Windows
agents.
Count of unresponsive agents – A list of agents that haven't sent a heartbeat in the past 24 hours.
Distribution by OS Type – A breakdown of how many Windows and Linux agents you have in your environment.
Distribution by Agent Version – A breakdown of the different agent versions installed in your environment and a
count of each one.
Distribution by Agent Category – A breakdown of the different categories of agents that are sending heartbeat
events: direct agents, OpsMgr agents, or the OpsMgr Management Server.
Count of Gateways Installed – The number of servers that have the Log Analytics gateway installed, and a list of
these servers.
Type – Heartbeat
ComputerIP – The public IP address of the computer. On Azure VMs, this shows the public IP if one is available.
For VMs using private IPs, this displays the Azure SNAT address (not the private IP address).
Each agent reporting to an Operations Manager management server sends two heartbeats, and the
SCAgentChannel property's value includes both Direct and SCManagementServer , depending on what data
sources and monitoring solutions you have enabled in your subscription. Data from solutions is either sent from an
Operations Manager management server to Azure Monitor or, because of the volume of data collected on the
agent, sent directly from the agent to Azure Monitor. For heartbeat events that have the value
SCManagementServer , the ComputerIP value is the IP address of the management server, since the data is
actually uploaded by it. For heartbeats where SCAgentChannel is set to Direct , it is the public IP address of
the agent.
Count of unresponsive agents in the last 24 hours:
Heartbeat | summarize LastCall = max(TimeGenerated) by Computer | where LastCall < ago(24h)
Count of unresponsive agents in the last 15 minutes:
Heartbeat | summarize LastCall = max(TimeGenerated) by Computer | where LastCall < ago(15m)
Computers online (in the last 24 hours):
Heartbeat | where TimeGenerated > ago(24h) and Computer in ((Heartbeat | where TimeGenerated > ago(24h) | distinct Computer)) | summarize LastCall = max(TimeGenerated) by Computer
Total agents offline in the last 30 minutes (for the last 24 hours):
Heartbeat | where TimeGenerated > ago(24h) and Computer !in ((Heartbeat | where TimeGenerated > ago(30m) | distinct Computer)) | summarize LastCall = max(TimeGenerated) by Computer
Trend of the number of agents over time by OSType:
Heartbeat | summarize AggregatedValue = dcount(Computer) by OSType
Next steps
Learn about Alerts in Azure Monitor for details on generating alerts from log queries.
Log Analytics agent data sources in Azure Monitor
11/2/2020 • 3 minutes to read • Edit Online
The data that Azure Monitor collects from virtual machines with the Log Analytics agent is defined by the data
sources that you configure on the Log Analytics workspace. Each data source creates records of a particular type
with each type having its own set of properties.
IMPORTANT
This article covers data sources for the Log Analytics agent which is one of the agents used by Azure Monitor. Other agents
collect different data and are configured differently. See Overview of Azure Monitor agents for a list of the available agents
and the data they can collect.
The table below summarizes, for each data source: the platform, whether it is collected by the Log Analytics agent
and/or the Operations Manager agent, whether data is sent to Azure storage, whether Operations Manager is
required, whether Operations Manager agent data is sent via the management group, and the collection frequency.
Performance counters (Windows) – Collected by the Log Analytics agent and the Operations Manager agent; as
scheduled, minimum of 10 seconds.
Performance counters (Linux) – Collected by the Log Analytics agent; as scheduled, minimum of 10 seconds.
Data collection
Data source configurations are delivered to agents that are directly connected to Azure Monitor within a few
minutes. The specified data is collected from the agent and delivered directly to Azure Monitor at intervals specific
to each data source. See the documentation for each data source for these specifics.
For System Center Operations Manager agents in a connected management group, data source configurations are
translated into management packs and delivered to the management group every 5 minutes by default. The agent
downloads the management pack like any other and collects the specified data. Depending on the data source, the
data will be either sent to a management server which forwards the data to the Azure Monitor, or the agent will
send the data to Azure Monitor without going through the management server. See Data collection details for
monitoring solutions in Azure for details. You can read about details of connecting Operations Manager and Azure
Monitor and modifying the frequency that configuration is delivered at Configure Integration with System Center
Operations Manager.
If the agent is unable to connect to Azure Monitor or Operations Manager, it will continue to collect data that it will
deliver when it establishes a connection. Data can be lost if the amount of data reaches the maximum cache size
for the client, or if the agent is not able to establish a connection within 24 hours.
Log records
All log data collected by Azure Monitor is stored in the workspace as records. Records collected by different data
sources will have their own set of properties and be identified by their Type property. See the documentation for
each data source and solution for details on each record type.
Next steps
Learn about monitoring solutions that add functionality to Azure Monitor and also collect data into the
workspace.
Learn about log queries to analyze the data collected from data sources and monitoring solutions.
Configure alerts to proactively notify you of critical data collected from data sources and monitoring solutions.
Collect Windows event log data sources with Log
Analytics agent
11/2/2020 • 3 minutes to read • Edit Online
Windows Event logs are one of the most common data sources for Log Analytics agents on Windows virtual
machines since many applications write to the Windows event log. You can collect events from standard logs such
as System and Application in addition to specifying any custom logs created by applications you need to monitor.
IMPORTANT
This article covers collecting Windows events with the Log Analytics agent which is one of the agents used by Azure Monitor.
Other agents collect different data and are configured differently. See Overview of Azure Monitor agents for a list of the
available agents and the data they can collect.
Data collection
Azure Monitor collects each event that matches a selected severity from a monitored event log as the event is
created. The agent records its place in each event log that it collects from. If the agent goes offline for a period of
time, then it collects events from where it last left off, even if those events were created while the agent was offline.
There is a potential for these events to not be collected if the event log wraps with uncollected events being
overwritten while the agent is offline.
NOTE
Azure Monitor does not collect audit events created by SQL Server from source MSSQLSERVER with event ID 18453 that
contains keywords - Classic or Audit Success and keyword 0xa0000000000000.
Computer – Name of the computer that the event was collected from.
EventLog – Name of the event log that the event was collected from.
All Windows events with severity of error:
Event | where EventLevelName == "error"
Count of Windows error events by source:
Event | where EventLevelName == "error" | summarize count() by Source
Next steps
Configure Log Analytics to collect other data sources for analysis.
Learn about log queries to analyze the data collected from data sources and solutions.
Configure collection of performance counters from your Windows agents.
Collecting custom JSON data sources with the Log
Analytics agent for Linux in Azure Monitor
11/2/2020 • 2 minutes to read • Edit Online
NOTE
As part of the ongoing transition from Microsoft Operations Management Suite to Azure Monitor, the Operations
Management Suite Agent for Windows or Linux will be referred to as the Log Analytics agent for Windows and Log Analytics
agent for Linux.
Custom JSON data sources can be collected into Azure Monitor using the Log Analytics agent for Linux. These
custom data sources can be simple scripts returning JSON such as curl or one of FluentD's 300+ plugins. This
article describes the configuration required for this data collection.
NOTE
Log Analytics agent for Linux v1.1.0-217+ is required for Custom JSON Data
Configuration
Configure input plugin
To collect JSON data in Azure Monitor, add oms.api. to the start of a FluentD tag in an input plugin.
For example, the following is a separate configuration file, exec-json.conf , located in
/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/ . It uses the FluentD plugin exec to run a curl
command every 30 seconds. The output from this command is collected by the JSON output plugin.
<source>
type exec
command 'curl localhost/json.output'
format json
tag oms.api.httpresponse
run_interval 30s
</source>
<match oms.api.httpresponse>
type out_oms_api
log_level info
buffer_chunk_limit 5m
buffer_type file
buffer_path /var/opt/microsoft/omsagent/<workspace id>/state/out_oms_api_httpresponse*.buffer
buffer_queue_limit 10
flush_interval 20s
retry_limit 10
retry_wait 30s
</match>
<match oms.api.**>
type out_oms_api
log_level info
buffer_chunk_limit 5m
buffer_type file
buffer_path /var/opt/microsoft/omsagent/<workspace id>/state/out_oms_api*.buffer
buffer_queue_limit 10
flush_interval 20s
retry_limit 10
retry_wait 30s
</match>
Output
The data will be collected in Azure Monitor with a record type of <FLUENTD_TAG>_CL .
For example, the custom tag oms.api.tomcat results in a record type of tomcat_CL in Azure Monitor. You can
retrieve all records of this type with the following log query.
Type=tomcat_CL
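The tag-to-record-type convention described above (the oms.api. prefix is dropped and _CL is appended) can be sketched as a small helper; this mirrors the naming rule only, not the agent's internals:

```shell
# Derive the Azure Monitor record type from a FluentD tag of the form oms.api.<name>.
record_type() {
  printf '%s_CL\n' "${1#oms.api.}"
}

record_type oms.api.tomcat        # prints tomcat_CL
record_type oms.api.httpresponse  # prints httpresponse_CL
```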
Nested JSON data sources are supported, but are indexed based on the parent field. For example, the following
JSON data is returned from a log query as tag_s : "[{ "a":"1", "b":"2" }]" .
{
"tag": [{
"a":"1",
"b":"2"
}]
}
Next steps
Learn about log queries to analyze the data collected from data sources and solutions.
Collect data from CollectD on Linux agents in Azure
Monitor
11/2/2020 • 3 minutes to read • Edit Online
CollectD is an open source Linux daemon that periodically collects performance metrics from applications and
system level information. Example applications include the Java Virtual Machine (JVM), MySQL Server, and Nginx.
This article provides information on collecting performance data from CollectD in Azure Monitor.
A full list of available plugins can be found at Table of Plugins.
The following CollectD configuration is included in the Log Analytics agent for Linux to route CollectD data to the
Log Analytics agent for Linux.
NOTE
As part of the ongoing transition from Microsoft Operations Management Suite to Azure Monitor, the Operations
Management Suite Agent for Windows or Linux will be referred to as the Log Analytics agent for Windows and Log Analytics
agent for Linux.
LoadPlugin write_http
<Plugin write_http>
<Node "oms">
URL "127.0.0.1:26000/oms.collectd"
Format "JSON"
StoreRates true
</Node>
</Plugin>
Additionally, if you are using a version of CollectD before 5.5, use the following configuration instead.
LoadPlugin write_http
<Plugin write_http>
<URL "127.0.0.1:26000/oms.collectd">
Format "JSON"
StoreRates true
</URL>
</Plugin>
The CollectD configuration uses the default write_http plugin to send performance metric data over port 26000 to
the Log Analytics agent for Linux.
NOTE
This port can be configured to a custom-defined port if needed.
The Log Analytics agent for Linux also listens on port 26000 for CollectD metrics and then converts them to Azure
Monitor schema metrics. The following is the Log Analytics agent for Linux configuration collectd.conf .
<source>
type http
port 26000
bind 127.0.0.1
</source>
<filter oms.collectd>
type filter_collectd
</filter>
NOTE
CollectD by default is set to read values at a 10-second interval. As this directly affects the volume of data sent to Azure
Monitor Logs, you might need to tune this interval within the CollectD configuration to strike a good balance between the
monitoring requirements and associated costs and usage for Azure Monitor Logs.
Versions supported
Azure Monitor currently supports CollectD version 4.8 and above.
Log Analytics agent for Linux v1.1.0-217 or above is required for CollectD metric collection.
Configuration
The following are basic steps to configure collection of CollectD data in Azure Monitor.
1. Configure CollectD to send data to the Log Analytics agent for Linux using the write_http plugin.
2. Configure the Log Analytics agent for Linux to listen for the CollectD data on the appropriate port.
3. Restart CollectD and Log Analytics agent for Linux.
Configure CollectD to forward data
1. To route CollectD data to the Log Analytics agent for Linux, oms.conf needs to be added to CollectD's
configuration directory. The destination of this file depends on the Linux distro of your machine.
If your CollectD config directory is located in /etc/collectd.d/:
sudo cp /etc/opt/microsoft/omsagent/sysconf/omsagent.d/collectd.conf
/etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/
sudo chown omsagent:omiusers /etc/opt/microsoft/omsagent/<workspace id>/conf/omsagent.d/collectd.conf
3. Restart CollectD and Log Analytics agent for Linux with the following commands.
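A sketch of those restart commands, assuming a SysV-style collectd service and a default Log Analytics agent install (the service_control path is the agent's standard helper script; the commands are echoed as a dry run here, so drop the echoes to actually run them):

```shell
# Dry-run sketch of the restart sequence; on systemd hosts
# "sudo systemctl restart collectd" is the equivalent first step.
restart_collection() {
  echo sudo service collectd restart
  echo sudo /opt/microsoft/omsagent/bin/service_control restart
}
restart_collection
```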
COLLECTD FIELD    AZURE MONITOR FIELD
host              Computer
plugin            None
type              ObjectName
type_instance     CounterName (if type_instance is null, CounterName is blank)
dsnames[]         CounterName
dstypes           None
values[]          CounterValue
Next steps
Learn about log queries to analyze the data collected from data sources and solutions.
Use Custom Fields to parse data from syslog records into individual fields.
Collect Syslog data sources with Log Analytics agent
11/2/2020 • 6 minutes to read
Syslog is an event logging protocol that is common to Linux. Applications will send messages that may be stored
on the local machine or delivered to a Syslog collector. When the Log Analytics agent for Linux is installed, it
configures the local Syslog daemon to forward messages to the agent. The agent then sends the message to Azure
Monitor where a corresponding record is created.
IMPORTANT
This article covers collecting Syslog events with the Log Analytics agent which is one of the agents used by Azure Monitor.
Other agents collect different data and are configured differently. See Overview of Azure Monitor agents for a list of the
available agents and the data they can collect.
NOTE
Azure Monitor supports collection of messages sent by rsyslog or syslog-ng, where rsyslog is the default daemon. The
default syslog daemon (sysklog) on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux is not supported
for syslog event collection. To collect syslog data from this version of these distributions, the rsyslog daemon should be
installed and configured to replace sysklog.
Configuring Syslog
The Log Analytics agent for Linux will only collect events with the facilities and severities that are specified in its
configuration. You can configure Syslog through the Azure portal or by managing configuration files on your Linux
agents.
Configure Syslog in the Azure portal
Configure Syslog from the Data menu in Advanced Settings for the Log Analytics workspace. This configuration is
delivered to the configuration file on each Linux agent.
You can add a new facility by first selecting the option Apply below configuration to my machines and then
typing in its name and clicking + . For each facility, only messages with the selected severities will be collected.
Check the severities for the particular facility that you want to collect. You cannot provide any additional criteria to
filter messages.
By default, all configuration changes are automatically pushed to all agents. If you want to configure Syslog
manually on each Linux agent, then uncheck the box Apply below configuration to my machines.
Configure Syslog on Linux agent
When the Log Analytics agent is installed on a Linux client, it installs a default syslog configuration file that defines
the facility and severity of the messages that are collected. You can modify this file to change the configuration. The
configuration file is different depending on the Syslog daemon that the client has installed.
NOTE
If you edit the syslog configuration, you must restart the syslog daemon for the changes to take effect.
rsyslog
The configuration file for rsyslog is located at /etc/rsyslog.d/95-omsagent.conf . Its default contents are shown
below. This collects syslog messages sent from the local agent for all facilities with a level of warning or higher.
kern.warning @127.0.0.1:25224
user.warning @127.0.0.1:25224
daemon.warning @127.0.0.1:25224
auth.warning @127.0.0.1:25224
syslog.warning @127.0.0.1:25224
uucp.warning @127.0.0.1:25224
authpriv.warning @127.0.0.1:25224
ftp.warning @127.0.0.1:25224
cron.warning @127.0.0.1:25224
local0.warning @127.0.0.1:25224
local1.warning @127.0.0.1:25224
local2.warning @127.0.0.1:25224
local3.warning @127.0.0.1:25224
local4.warning @127.0.0.1:25224
local5.warning @127.0.0.1:25224
local6.warning @127.0.0.1:25224
local7.warning @127.0.0.1:25224
You can remove a facility by removing its section of the configuration file. You can limit the severities that are
collected for a particular facility by modifying that facility's entry. For example, to limit the user facility to messages
with a severity of error or higher you would modify that line of the configuration file to the following:
user.error @127.0.0.1:25224
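That edit can be scripted; the sketch below works on a temporary copy so it is safe to try, and the sed pattern is an assumption based on the default 95-omsagent.conf shown above (point conf at /etc/rsyslog.d/95-omsagent.conf to apply it for real, then restart rsyslog):

```shell
# Sketch: tighten the user facility from warning to error in a copy
# of the rsyslog configuration line shown above.
conf=$(mktemp)
printf 'user.warning @127.0.0.1:25224\n' > "$conf"
sed -i 's/^user\.warning/user.error/' "$conf"
updated=$(cat "$conf")
echo "$updated"
rm -f "$conf"
```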
syslog-ng
The configuration file for syslog-ng is located at /etc/syslog-ng/syslog-ng.conf . Its default contents are shown
below. This collects syslog messages sent from the local agent for all facilities and all severities.
#
# Warnings (except iptables) in one file:
#
destination warn { file("/var/log/warn" fsync(yes)); };
log { source(src); filter(f_warn); destination(warn); };
#OMS_Destination
destination d_oms { udp("127.0.0.1" port(25224)); };
#OMS_facility = auth
filter f_auth_oms { level(alert,crit,debug,emerg,err,info,notice,warning) and facility(auth); };
log { source(src); filter(f_auth_oms); destination(d_oms); };
#OMS_facility = authpriv
filter f_authpriv_oms { level(alert,crit,debug,emerg,err,info,notice,warning) and facility(authpriv); };
log { source(src); filter(f_authpriv_oms); destination(d_oms); };
#OMS_facility = cron
filter f_cron_oms { level(alert,crit,debug,emerg,err,info,notice,warning) and facility(cron); };
log { source(src); filter(f_cron_oms); destination(d_oms); };
#OMS_facility = daemon
filter f_daemon_oms { level(alert,crit,debug,emerg,err,info,notice,warning) and facility(daemon); };
log { source(src); filter(f_daemon_oms); destination(d_oms); };
#OMS_facility = kern
filter f_kern_oms { level(alert,crit,debug,emerg,err,info,notice,warning) and facility(kern); };
log { source(src); filter(f_kern_oms); destination(d_oms); };
#OMS_facility = local0
filter f_local0_oms { level(alert,crit,debug,emerg,err,info,notice,warning) and facility(local0); };
log { source(src); filter(f_local0_oms); destination(d_oms); };
#OMS_facility = local1
filter f_local1_oms { level(alert,crit,debug,emerg,err,info,notice,warning) and facility(local1); };
log { source(src); filter(f_local1_oms); destination(d_oms); };
#OMS_facility = mail
filter f_mail_oms { level(alert,crit,debug,emerg,err,info,notice,warning) and facility(mail); };
log { source(src); filter(f_mail_oms); destination(d_oms); };
#OMS_facility = syslog
filter f_syslog_oms { level(alert,crit,debug,emerg,err,info,notice,warning) and facility(syslog); };
log { source(src); filter(f_syslog_oms); destination(d_oms); };
#OMS_facility = user
filter f_user_oms { level(alert,crit,debug,emerg,err,info,notice,warning) and facility(user); };
log { source(src); filter(f_user_oms); destination(d_oms); };
You can remove a facility by removing its section of the configuration file. You can limit the severities that are
collected for a particular facility by removing them from its list. For example, to limit the user facility to just alert
and critical messages, you would modify that section of the configuration file to the following:
#OMS_facility = user
filter f_user_oms { level(alert,crit) and facility(user); };
log { source(src); filter(f_user_oms); destination(d_oms); };
You can change the port number by creating two configuration files: a FluentD config file and an rsyslog or
syslog-ng file, depending on the Syslog daemon you have installed.
The FluentD config file should be a new file located in /etc/opt/microsoft/omsagent/conf/omsagent.d ; in it,
replace the value in the por t entry with your custom port number.
<source>
type syslog
port %SYSLOG_PORT%
bind 127.0.0.1
protocol_type udp
tag oms.syslog
</source>
<filter oms.syslog.**>
type filter_syslog
</filter>
For rsyslog, you should create a new configuration file located in: /etc/rsyslog.d/ and replace the value
%SYSLOG_PORT% with your custom port number.
NOTE
If you modify this value in the configuration file 95-omsagent.conf , it will be overwritten when the agent applies a
default configuration.
The syslog-ng config should be modified by copying the example configuration shown below and adding
the custom modified settings to the end of the syslog-ng.conf configuration file located in /etc/syslog-ng/ .
Do not use the default label %WORKSPACE_ID%_oms or %WORKSPACE_ID_OMS ; define a custom
label to help distinguish your changes.
NOTE
If you modify the default values in the configuration file, they will be overwritten when the agent applies a default
configuration.
After completing the changes, the Syslog daemon and the Log Analytics agent service need to be restarted to ensure the
configuration changes take effect.
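A minimal sketch of that restart, assuming a systemd host running rsyslog and a default agent install (the commands are assigned and echoed here rather than executed; substitute syslog-ng if that is your daemon):

```shell
# Restart both services so the new Syslog configuration takes effect.
SYSLOG_RESTART="sudo systemctl restart rsyslog"   # or syslog-ng
AGENT_RESTART="sudo /opt/microsoft/omsagent/bin/service_control restart"
echo "$SYSLOG_RESTART"
echo "$AGENT_RESTART"
```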
Facility    Defines the part of the system that generated the message.
Example query: Syslog | where SeverityLevel == "error"    All Syslog records with a severity of error.
Next steps
Learn about log queries to analyze the data collected from data sources and solutions.
Use Custom Fields to parse data from syslog records into individual fields.
Configure Linux agents to collect other types of data.
Collect Windows and Linux performance data sources with Log Analytics agent
11/2/2020 • 8 minutes to read
Performance counters in Windows and Linux provide insight into the performance of hardware components,
operating systems, and applications. Azure Monitor can collect performance counters from Log Analytics agents at
frequent intervals for Near Real Time (NRT) analysis in addition to aggregating performance data for longer term
analysis and reporting.
IMPORTANT
This article covers collecting performance data with the Log Analytics agent which is one of the agents used by Azure
Monitor. Other agents collect different data and are configured differently. See Overview of Azure Monitor agents for a list of
the available agents and the data they can collect.
* (wildcard instance name)    All instances
<source>
type oms_omi
object_name "Processor"
instance_regex ".*"
counter_name_regex ".*"
interval 30s
</source>
The following table lists the objects and counters that you can specify in the configuration file. There are additional
counters available for certain applications as described in Collect performance counters for Linux applications in
Azure Monitor.
OBJECT NAME    COUNTER NAME
Memory         Pages/sec
System         Processes
System         Uptime
System         Users
<source>
type oms_omi
object_name "Logical Disk"
instance_regex ".*"
counter_name_regex ".*"
interval 5m
</source>
<source>
type oms_omi
object_name "Processor"
instance_regex ".*"
counter_name_regex ".*"
interval 30s
</source>
<source>
type oms_omi
object_name "Memory"
instance_regex ".*"
counter_name_regex ".*"
interval 30s
</source>
Data collection
Azure Monitor collects all specified performance counters at their specified sample interval on all agents that have
that counter installed. The data is not aggregated; the raw data is available in all log query views for the
duration specified by your Log Analytics workspace.
Sizing estimates
A rough estimate for collection of a particular counter at 10-second intervals is about 1 MB per day per instance.
You can estimate the storage requirements of a particular counter with the following formula.
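The arithmetic behind that rough estimate can be sketched as follows; the ~120 bytes per sample is an assumed average record size, chosen so that a 10-second interval works out to roughly 1 MB per day per instance as stated above:

```shell
# Sizing sketch for one counter instance (both numbers are assumptions;
# adjust BYTES_PER_SAMPLE for your actual workspace ingestion).
INTERVAL_SECONDS=10
BYTES_PER_SAMPLE=120
SAMPLES_PER_DAY=$(( 86400 / INTERVAL_SECONDS ))
BYTES_PER_DAY=$(( SAMPLES_PER_DAY * BYTES_PER_SAMPLE ))
echo "$BYTES_PER_DAY bytes/day"
```

Multiply by the number of instances and counters collected to estimate total daily volume.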
QUERY    DESCRIPTION
Perf | where Computer == "MyComputer"
    All Performance data from a particular computer.
Perf | where CounterName == "Current Disk Queue Length"
    All Performance data for a particular counter.
Perf | where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total" | summarize AVGCPU = avg(CounterValue) by Computer
    Average CPU Utilization across all computers.
Perf | where CounterName == "% Processor Time" | summarize AggregatedValue = max(CounterValue) by Computer
    Maximum CPU Utilization across all computers.
Perf | where ObjectName == "LogicalDisk" and CounterName == "Current Disk Queue Length" and Computer == "MyComputerName" | summarize AggregatedValue = avg(CounterValue) by InstanceName
    Average Current Disk Queue Length across all the instances of a given computer.
Perf | where CounterName == "Disk Transfers/sec" | summarize AggregatedValue = percentile(CounterValue, 95) by Computer
    95th percentile of Disk Transfers/sec across all computers.
Perf | where CounterName == "% Processor Time" and InstanceName == "_Total" | summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 1h), Computer
    Hourly average of CPU usage across all computers.
Perf | where Computer == "MyComputer" and CounterName startswith_cs "%" and InstanceName == "_Total" | summarize AggregatedValue = percentile(CounterValue, 70) by bin(TimeGenerated, 1h), CounterName
    Hourly 70th percentile of every percent counter for a particular computer.
Perf | where CounterName == "% Processor Time" and InstanceName == "_Total" and Computer == "MyComputer" | summarize ["min(CounterValue)"] = min(CounterValue), ["avg(CounterValue)"] = avg(CounterValue), ["percentile75(CounterValue)"] = percentile(CounterValue, 75), ["max(CounterValue)"] = max(CounterValue) by bin(TimeGenerated, 1h), Computer
    Hourly average, minimum, maximum, and 75th-percentile CPU usage for a specific computer.
Perf | where ObjectName == "MSSQL$INST2:Databases" and InstanceName == "master"
    All Performance data from the Database performance object for the master database from the named SQL Server instance INST2.
Next steps
Collect performance counters from Linux applications including MySQL and Apache HTTP Server.
Learn about log queries to analyze the data collected from data sources and solutions.
Export collected data to Power BI for additional visualizations and analysis.
Collect IIS logs with Log Analytics agent in Azure Monitor
11/2/2020 • 2 minutes to read
Internet Information Services (IIS) stores user activity in log files that can be collected by the Log Analytics agent
and stored in Azure Monitor Logs.
IMPORTANT
This article covers collecting IIS logs with the Log Analytics agent which is one of the agents used by Azure Monitor. Other
agents collect different data and are configured differently. See Overview of Azure Monitor agents for a list of the available
agents and the data they can collect.
Data collection
Azure Monitor collects IIS log entries from each agent each time the log timestamp changes. The log is read every
5 minutes. If for any reason IIS doesn't update the timestamp before the rollover time when a new file is created,
entries will be collected following creation of the new file. The frequency of new file creation is controlled by the
Log File Rollover Schedule setting for the IIS site, which is once a day by default. If the setting is Hourly , Azure
Monitor collects the log each hour. If the setting is Daily , Azure Monitor collects the log every 24 hours.
PROPERTY        DESCRIPTION
Computer        Name of the computer that the event was collected from.
csReferer       Site that the user followed a link from to the current site.
SourceSystem    OpsMgr

QUERY    DESCRIPTION
W3CIISLog | where scStatus==500
    All IIS log records with a return status of 500.
W3CIISLog | summarize count() by cIP
    Count of IIS log entries by client IP address.
W3CIISLog | where csHost=="www.contoso.com" | summarize count() by csUriStem
    Count of IIS log entries by URL for the host www.contoso.com.
W3CIISLog | summarize sum(csBytes) by Computer | take 500000
    Total bytes received by each IIS computer.
Next steps
Configure Azure Monitor to collect other data sources for analysis.
Learn about log queries to analyze the data collected from data sources and solutions.
Collect custom logs with Log Analytics agent in Azure Monitor
11/2/2020 • 8 minutes to read
The Custom Logs data source for the Log Analytics agent in Azure Monitor allows you to collect events from text
files on both Windows and Linux computers. Many applications log information to text files instead of standard
logging services such as Windows Event log or Syslog. Once collected, you can either parse the data into
individual fields in your queries or extract the data during collection to individual fields.
IMPORTANT
This article covers collecting custom logs with the Log Analytics agent which is one of the agents used by Azure Monitor.
Other agents collect different data and are configured differently. See Overview of Azure Monitor agents for a list of the
available agents and the data they can collect.
NOTE
A Log Analytics workspace supports the following limits:
Only 500 custom logs can be created.
A table only supports up to 500 columns.
The maximum number of characters for the column name is 500.
IMPORTANT
Custom log collection requires that the application writing the log file flushes the log content to the disk periodically. This is
because the custom log collection relies on filesystem change notifications for the log file being tracked.
C:\Logs\log*.txt
    All files in C:\Logs with a name starting with log and a .txt extension (Windows agent).
/var/log/audit/log*.txt
    All files in /var/log/audit with a name starting with log and a .txt extension (Linux agent).
1. Select Windows or Linux to specify which path format you are adding.
2. Type in the path and click the + button.
3. Repeat the process for any additional paths.
Step 4. Provide a name and description for the log
The name that you specify will be used for the log type as described above. It will always end with _CL to
distinguish it as a custom log.
1. Type in a name for the log. The _CL suffix is automatically provided.
2. Add an optional Description .
3. Click Next to save the custom log definition.
Step 5. Validate that the custom logs are being collected
It may take up to an hour for the initial data from a new custom log to appear in Azure Monitor. It will start
collecting entries from the logs found in the path you specified from the point that you defined the custom log. It
will not retain the entries that you uploaded during the custom log creation, but it will collect already existing
entries in the log files that it locates.
Once Azure Monitor starts collecting from the custom log, its records will be available with a log query. Use the
name that you gave the custom log as the Type in your query.
NOTE
If the RawData property is missing from the query, you may need to close and reopen your browser.
Data collection
Azure Monitor will collect new entries from each custom log approximately every 5 minutes. The agent will record
its place in each log file that it collects from. If the agent goes offline for a period of time, then Azure Monitor will
collect entries from where it last left off, even if those entries were created while the agent was offline.
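The checkpointing behavior described above can be illustrated with a toy offset tracker; the file names and the byte-offset mechanism are illustrative only, not the agent's actual state format:

```shell
# Toy sketch: a collector remembers a byte offset and, after being
# "offline", emits only the entries appended past that offset.
log=$(mktemp); state=$(mktemp)
printf 'line1\nline2\n' > "$log"
wc -c < "$log" | tr -d ' \n' > "$state"      # checkpoint after first read
printf 'line3\n' >> "$log"                   # written while "offline"
collected=$(tail -c +$(( $(cat "$state") + 1 )) "$log")
echo "$collected"                            # only the new entry
rm -f "$log" "$state"
```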
The entire contents of the log entry are written to a single property called RawData . See Parse text data in Azure
Monitor for methods to parse each imported log entry into multiple properties.
TimeGenerated
    Date and time that the record was collected by Azure Monitor. If the log uses a time-based delimiter, then this is the time collected from the entry.
RawData
    Full text of the collected entry. You will most likely want to parse this data into individual properties.
Next steps
See Parse text data in Azure Monitor for methods to parse each imported log entry into multiple properties.
Learn about log queries to analyze the data collected from data sources and solutions.
Create custom fields in a Log Analytics workspace in Azure Monitor (Preview)
11/2/2020 • 7 minutes to read
NOTE
This article describes how to parse text data in a Log Analytics workspace as it's collected. We recommend parsing text data
in a query filter after it's collected following the guidance described in Parse text data in Azure Monitor. It provides several
advantages over using custom fields.
IMPORTANT
Custom fields increase the amount of data collected in the Log Analytics workspace, which can increase your cost. See
Manage usage and costs with Azure Monitor Logs for details.
The Custom Fields feature of Azure Monitor allows you to extend existing records in your Log Analytics
workspace by adding your own searchable fields. Custom fields are automatically populated from data extracted
from other properties in the same record.
For example, the sample record below has useful data buried in the event description. Extracting this data into a
separate property makes it available for such actions as sorting and filtering.
NOTE
In the Preview, you are limited to 100 custom fields in your workspace. This limit will be expanded when this feature reaches
general availability.
NOTE
The custom field is populated as records matching the specified criteria are added to the Log Analytics workspace, so it will
only appear on records collected after the custom field is created. The custom field will not be added to records that are
already in the data store when it’s created.
Sample walkthrough
The following section walks through a complete example of creating a custom field. This example extracts the
service name in Windows events that indicate a service changing state. This relies on events created by Service
Control Manager during system startup on Windows computers. If you want to follow this example, you must be
collecting Information events for the System log.
We enter the following query to return all events from Service Control Manager that have an Event ID of 7036,
which is the event that indicates a service starting or stopping.
The Field Extraction Wizard is opened, and the EventLog and EventID fields are selected in the Main Example
column. This indicates that the custom field will be defined for events from the System log with an event ID of
7036. This is sufficient so we don’t need to select any other fields.
We highlight the name of the service in the RenderedDescription property and use Ser vice to identify the
service name. The custom field will be called Ser vice_CF . The field type in this case is a string, so we can leave that
unchanged.
We see that the service name is identified properly for some records but not for others. The Search Results show
that part of the name for the WMI Performance Adapter wasn’t selected. The Summar y shows that one record
identified Modules Installer instead of Windows Modules Installer .
We start with the WMI Performance Adapter record. We click its edit icon and then Modify this highlight .
We increase the highlight to include the word WMI and then rerun the extract.
We can see that the entries for WMI Performance Adapter have been corrected, and Log Analytics also used
that information to correct the records for Windows Modules Installer .
We can now run a query that verifies Ser vice_CF is created but is not yet added to any records. That's because the
custom field doesn't work against existing records, so we need to wait for new records to be collected.
After some time has passed so that new events are collected, we can see that the Ser vice_CF field is now being added
to records that match our criteria.
We can now use the custom field like any other record property. To illustrate this, we create a query that groups by
the new Ser vice_CF field to inspect which services are the most active.
Next steps
Learn about log queries to build queries using custom fields for criteria.
Monitor custom log files that you parse using custom fields.
Troubleshooting the Log Analytics VM extension in Azure Monitor
11/2/2020 • 2 minutes to read
This article provides help troubleshooting errors you might experience with the Log Analytics VM extension for
Windows and Linux virtual machines running on Microsoft Azure, and suggests possible solutions to resolve them.
To verify the status of the extension, perform the following steps from the Azure portal.
1. Sign into the Azure portal.
2. In the Azure portal, click All ser vices . In the list of resources, type vir tual machines . As you begin typing,
the list filters based on your input. Select Vir tual machines .
3. In your list of virtual machines, find and select your virtual machine.
4. On the virtual machine, click Extensions .
5. From the list, check whether the Log Analytics extension is enabled. For Linux, the agent is listed as
OMSAgentforLinux and for Windows, the agent is listed as MicrosoftMonitoringAgent .
If the Log Analytics agent for Linux VM extension is not installing or reporting, you can perform the following steps
to troubleshoot the issue.
1. If the extension status is Unknown , check if the Azure VM agent is installed and working correctly by reviewing
the VM agent log file /var/log/waagent.log .
If the log does not exist, the VM agent is not installed.
Install the Azure VM Agent on Linux VMs
2. For other unhealthy statuses, review the Log Analytics agent for Linux VM extension log files in
/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/extension.log and
/var/log/azure/Microsoft.EnterpriseCloud.Monitoring.OmsAgentForLinux/*/CommandExecution.log
3. If the extension status is healthy but data is not being uploaded, review the Log Analytics agent for Linux log files
in /var/opt/microsoft/omsagent/log/omsagent.log .
For more information, see troubleshooting Linux extensions.
Next steps
For additional troubleshooting guidance related to the Log Analytics agent for Linux hosted on computers outside
of Azure, see Troubleshoot Azure Log Analytics Linux Agent.
How to troubleshoot issues with the Log Analytics agent for Windows
11/2/2020 • 8 minutes to read
This article provides help troubleshooting errors you might experience with the Log Analytics agent for Windows
in Azure Monitor and suggests possible solutions to resolve them.
If none of these steps work for you, the following support channels are also available:
Customers with Premier support benefits can open a support request with Premier.
Customers with Azure support agreements can open a support request in the Azure portal.
Visit the Log Analytics Feedback page ( https://aka.ms/opinsightsfeedback ) to review submitted ideas and bugs, or
file a new one.
Connectivity issues
If the agent is communicating through a proxy server or firewall, there may be restrictions in place preventing
communication from the source computer to the Azure Monitor service. If communication is blocked
because of misconfiguration, registration with a workspace might fail while attempting to install the agent or while
configuring the agent post-setup to report to an additional workspace. Agent communication may fail after
successful registration. This section describes the methods to troubleshoot this type of issue with the Windows
agent.
Double-check that the firewall or proxy is configured to allow the ports and URLs described in the
following table. Also confirm that HTTP inspection is not enabled for web traffic, as it can prevent a secure TLS channel
between the agent and Azure Monitor.
For firewall information required for Azure Government, see Azure Government management. If you plan to use
the Azure Automation Hybrid Runbook Worker to connect to and register with the Automation service to use
runbooks or management solutions in your environment, it must have access to the port number and the URLs
described in Configure your network for the Hybrid Runbook Worker.
There are several ways you can verify if the agent is successfully communicating with Azure Monitor.
Enable the Azure Log Analytics Agent Health assessment in the workspace. From the Agent Health
dashboard, view the Count of unresponsive agents column to quickly see if the agent is listed.
Run the following query to confirm the agent is sending a heartbeat to the workspace it is configured to
report to. Replace <ComputerName> with the actual name of the machine.
Heartbeat
| where Computer like "<ComputerName>"
| summarize arg_max(TimeGenerated, * ) by Computer
If the computer is successfully communicating with the service, the query should return a result. If the
query did not return a result, first verify the agent is configured to report to the correct workspace. If it is
configured correctly, proceed to step 3 and search the Windows Event Log to identify what issue the agent is
logging that might be preventing it from communicating with Azure Monitor.
Another method to identify a connectivity issue is by running the TestCloudConnectivity tool. The tool is
installed by default with the agent in the folder %SystemRoot%\Program Files\Microsoft Monitoring
Agent\Agent. From an elevated command prompt, navigate to the folder and run the tool. The tool returns
the results and highlights where the test failed (for example, if it was related to a particular port/URL that
was blocked).
Filter the Operations Manager event log by Event sources Health Service Modules, HealthService, and
Service Connector and filter by Event Level Warning and Error to confirm whether it has written events from the
following table. If it has, review the resolution steps included for each possible event.
EVENT ID: 2133 & 2129
SOURCE: Health Service
DESCRIPTION: Connection to the service from the agent failed.
RESOLUTION: This error can occur when the agent cannot communicate directly or through a firewall/proxy server to the Azure Monitor service. Verify agent proxy settings or that the network firewall/proxy allows TCP traffic from the computer to the service.

EVENT ID: 2138
SOURCE: Health Service Modules
DESCRIPTION: Proxy requires authentication.
RESOLUTION: Configure the agent proxy settings and specify the username/password required to authenticate with the proxy server.

EVENT ID: 4000
SOURCE: Service Connector
DESCRIPTION: DNS name resolution failed.
RESOLUTION: The machine could not resolve the Internet address used when sending data to the service. This might be DNS resolver settings on your machine, incorrect proxy settings, or maybe a temporary DNS issue with your provider. If it happens periodically, it could be caused by a transient network-related issue.

EVENT ID: 4001
SOURCE: Service Connector
DESCRIPTION: Connection to the service failed.
RESOLUTION: This error can occur when the agent cannot communicate directly or through a firewall/proxy server to the Azure Monitor service. Verify agent proxy settings or that the network firewall/proxy allows TCP traffic from the computer to the service.

EVENT ID: 4002
SOURCE: Service Connector
DESCRIPTION: The service returned HTTP status code 403 in response to a query. Check with the service administrator for the health of the service. The query will be retried later.
RESOLUTION: This error is written during the agent's initial registration phase and you'll see a URL similar to the following: https://<workspaceID>.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest. An error code 403 means forbidden and can be caused by a mistyped Workspace ID or key, or the date and time is incorrect on the computer. If the time is +/- 15 minutes from current time, then onboarding fails. To correct this, update the date and/or timezone of your Windows computer.
To verify whether the agent is communicating with the service, run the following query, which returns the most recent heartbeat record for each computer:
Heartbeat
| where Computer like "<ComputerName>"
| summarize arg_max(TimeGenerated, * ) by Computer
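The arg_max aggregation in the query above keeps only the most recent row per computer. A minimal Python sketch of the same logic, using hypothetical heartbeat rows (the field names mirror the Heartbeat table; this is illustrative, not a workspace query):

```python
from datetime import datetime, timezone

def latest_heartbeat_per_computer(rows):
    """Mimic `summarize arg_max(TimeGenerated, *) by Computer`:
    keep only the newest row for each computer."""
    latest = {}
    for row in rows:
        computer = row["Computer"]
        if computer not in latest or row["TimeGenerated"] > latest[computer]["TimeGenerated"]:
            latest[computer] = row
    return latest

rows = [
    {"Computer": "web01", "TimeGenerated": datetime(2020, 11, 2, 10, 0, tzinfo=timezone.utc)},
    {"Computer": "web01", "TimeGenerated": datetime(2020, 11, 2, 10, 5, tzinfo=timezone.utc)},
    {"Computer": "sql01", "TimeGenerated": datetime(2020, 11, 2, 9, 55, tzinfo=timezone.utc)},
]
print(latest_heartbeat_per_computer(rows)["web01"]["TimeGenerated"])  # 2020-11-02 10:05:00+00:00
```

A computer missing entirely from the result set has never reported a heartbeat to the workspace.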
If the query returns results, then you need to determine if a particular data type is not collected and forwarded to
the service. This could be caused by the agent not receiving updated configuration from the service, or some other
symptom preventing the agent from operating normally. Perform the following steps to further troubleshoot.
1. Open an elevated command prompt on the computer and restart the agent service by typing
net stop healthservice && net start healthservice .
2. Open the Operations Manager event log and search for event IDs 7023, 7024, 7025, 7028 and 1210 from
Event source HealthService. These events indicate the agent is successfully receiving configuration from
Azure Monitor and they are actively monitoring the computer. The event description for event ID 1210 will
also specify on the last line all of the solutions and Insights that are included in the scope of monitoring on
the agent.
3. If after several minutes you do not see the expected data in the query results or visualization (depending on
whether you are viewing the data from a solution or Insight), search the Operations Manager event log for
Event sources HealthService and Health Service Modules, and filter by Event Level Warning and Error to
confirm whether it has written events from the following table.
| Event ID | Source | Description | Resolution |
| --- | --- | --- | --- |
| 8000 | HealthService | This event will specify if a workflow related to performance, events, or another collected data type is unable to forward to the service for ingestion to the workspace. | Event ID 2136 from source HealthService is written together with this event and can indicate that the agent is unable to communicate with the service, possibly due to misconfiguration of the proxy and authentication settings, a network outage, or because the network firewall/proxy does not allow TCP traffic from the computer to the service. |
| 10102 and 10103 | Health Service Modules | Workflow could not resolve data source. | This can occur if the specified performance counter or instance does not exist on the computer or is incorrectly defined in the workspace data settings. If this is a user-specified performance counter, verify that the information specified follows the correct format and exists on the target computers. |
| 26002 | Health Service Modules | Workflow could not resolve data source. | This can occur if the specified Windows event log does not exist on the computer. This error can be safely ignored if the computer is not expected to have this event log registered; otherwise, if this is a user-specified event log, verify that the information specified is correct. |
What is Azure Monitor for VMs?
11/2/2020 • 2 minutes to read
Azure Monitor for VMs monitors the performance and health of your virtual machines and virtual machine scale
sets, including their running processes and dependencies on other resources. It can help deliver predictable
performance and availability of vital applications by identifying performance bottlenecks and network issues and
can also help you understand whether an issue is related to other dependencies.
Azure Monitor for VMs supports Windows and Linux operating systems on the following:
Azure virtual machines
Azure virtual machine scale sets
Hybrid virtual machines connected with Azure Arc
On-premises virtual machines
Virtual machines hosted in another cloud environment
Azure Monitor for VMs stores its data in Azure Monitor Logs, which allows it to deliver powerful aggregation and
filtering and to analyze data trends over time. You can view this data for a single VM directly from the virtual
machine, or you can use Azure Monitor to deliver an aggregated view of multiple VMs.
Pricing
There's no direct cost for Azure Monitor for VMs, but you're charged for its activity in the Log Analytics
workspace. Based on the pricing that's published on the Azure Monitor pricing page, Azure Monitor for VMs is
billed for:
Data ingested from agents and stored in the workspace.
Alert rules based on log and health data.
Notifications sent from alert rules.
The log size varies by the string lengths of performance counters, and it can increase with the number of logical
disks and network adapters allocated to the VM. If you're already using Service Map, the only change you'll see is
the additional performance data that's sent to the Azure Monitor InsightsMetrics data type.
Configuring Azure Monitor for VMs
The steps to configure Azure Monitor for VMs are as follows. Follow each link for detailed guidance on each step:
Create Log Analytics workspace.
Add VMInsights solution to workspace.
Install agents on virtual machine and virtual machine scale set to be monitored.
Next steps
See Deploy Azure Monitor for VMs for the requirements and methods to enable monitoring for your virtual
machines.
Azure Monitor for VMs Generally Available (GA)
Frequently Asked Questions
11/2/2020 • 7 minutes to read
This General Availability FAQ covers changes that were made in Q4 2019 and Q1 2020 as we prepared for GA.
What is changing?
We have released a new solution, named VMInsights, that includes additional capabilities for data collection along
with a new location for storing this data in your Log Analytics workspace.
In the past, we enabled the ServiceMap solution on your workspace and set up performance counters in your Log
Analytics workspace to send the data to the Perf table. This new solution sends the data to a table named
InsightsMetrics that is also used by Azure Monitor for containers. This table schema allows us to store additional
metrics and service data sets that are not compatible with the Perf table format.
We have updated our Performance charts to use the data we store in the InsightsMetrics table. You can upgrade to
use the InsightsMetrics table from our Get Started page as described below.
How do I upgrade?
When a Log Analytics workspace is upgraded to the latest version of Azure Monitor for VMs, it will upgrade the
dependency agent on each of the VMs attached to that workspace. Each VM requiring an upgrade is identified on
the Get Started tab in Azure Monitor for VMs in the Azure portal. When you choose to upgrade a VM, it
upgrades the workspace for that VM along with any other VMs attached to that workspace. You can select a single
VM or multiple VMs, resource groups, or subscriptions.
Use the following command to upgrade a workspace using PowerShell:
NOTE
If you have Alert Rules that reference these counters in the Perf table, you need to update them to reference new data
stored in the InsightsMetrics table. Refer to our documentation for example log queries that you can use that refer to
this table.
If you decide to keep the performance counters enabled, you will be billed for the data ingested and stored in the
Perf table based on [Log Analytics pricing](https://azure.microsoft.com/pricing/details/monitor/).
I use VM Health now with one environment and would like to deploy it
to a new one
If you are an existing customer that is using the Health feature and want to use it for a new roll-out, please contact
us at vminsights@microsoft.com to request instructions.
Next steps
To understand the requirements and methods that help you monitor your virtual machines, review Deploy Azure
Monitor for VMs.
Azure Monitor Frequently Asked Questions
11/2/2020 • 41 minutes to read
This Microsoft FAQ is a list of commonly asked questions about Azure Monitor. If you have any additional
questions, go to the discussion forum and post your questions. When a question is frequently asked, we add it to
this article so that it can be found quickly and easily.
General
What is Azure Monitor?
Azure Monitor is a service in Azure that provides performance and availability monitoring for applications and
services in Azure, other cloud environments, or on-premises. Azure Monitor collects data from multiple sources
into a common data platform where it can be analyzed for trends and anomalies. Rich features in Azure Monitor
assist you in quickly identifying and responding to critical situations that may affect your application.
What's the difference between Azure Monitor, Log Analytics, and Application Insights?
In September 2018, Microsoft combined Azure Monitor, Log Analytics, and Application Insights into a single service
to provide powerful end-to-end monitoring of your applications and the components they rely on. Features in Log
Analytics and Application Insights have not changed, although some features have been rebranded to Azure
Monitor in order to better reflect their new scope. The log data engine and query language of Log Analytics is now
referred to as Azure Monitor Logs. See Azure Monitor terminology updates.
What does Azure Monitor cost?
Features of Azure Monitor that are automatically enabled such as collection of metrics and activity logs are
provided at no cost. There is a cost associated with other features such as log queries and alerting. See the Azure
Monitor pricing page for detailed pricing information.
How do I enable Azure Monitor?
Azure Monitor is enabled the moment that you create a new Azure subscription, and Activity log and platform
metrics are automatically collected. Create diagnostic settings to collect more detailed information about the
operation of your Azure resources, and add monitoring solutions and insights to provide additional analysis on
collected data for particular services.
How do I access Azure Monitor?
Access all Azure Monitor features and data from the Monitor menu in the Azure portal. The Monitoring section
of the menu for different Azure services provides access to the same tools with data filtered to a particular
resource. Azure Monitor data is also accessible for a variety of scenarios using CLI, PowerShell, and a REST API.
Is there an on-premises version of Azure Monitor?
No. Azure Monitor is a scalable cloud service that processes and stores large amounts of data, although Azure
Monitor can monitor resources that are on-premises and in other clouds.
Can Azure Monitor monitor on-premises resources?
Yes, in addition to collecting monitoring data from Azure resources, Azure Monitor can collect data from virtual
machines and applications in other clouds and on-premises. See Sources of monitoring data for Azure Monitor.
Does Azure Monitor integrate with System Center Operations Manager?
You can connect your existing System Center Operations Manager management group to Azure Monitor to collect
data from agents into Azure Monitor Logs. This allows you to use log queries and solution to analyze data collected
from agents. You can also configure existing System Center Operations Manager agents to send data directly to
Azure Monitor. See Connect Operations Manager to Azure Monitor.
What IP addresses does Azure Monitor use?
See IP addresses used by Application Insights and Log Analytics for a listing of the IP addresses and ports required
for agents and other external resources to access Azure Monitor.
Monitoring data
Where does Azure Monitor get its data?
Azure Monitor collects data from a variety of sources including logs and metrics from Azure platform and
resources, custom applications, and agents running on virtual machines. Other services such as Azure Security
Center and Network Watcher collect data into a Log Analytics workspace so it can be analyzed with Azure Monitor
data. You can also send custom data to Azure Monitor using the REST API for logs or metrics. See Sources of
monitoring data for Azure Monitor.
What data is collected by Azure Monitor?
Azure Monitor collects data from a variety of sources into logs or metrics. Each type of data has its own relative
advantages, and each supports a particular set of features in Azure Monitor. There is a single metrics database for
each Azure subscription, while you can create multiple Log Analytics workspaces to collect logs depending on your
requirements. See Azure Monitor data platform.
Is there a maximum amount of data that I can collect in Azure Monitor?
There is no limit to the amount of metric data you can collect, but this data is stored for a maximum of 93 days. See
Retention of Metrics. There is no limit on the amount of log data that you can collect, but it may be affected by the
pricing tier you choose for the Log Analytics workspace. See pricing details.
How do I access data collected by Azure Monitor?
Insights and solutions provide a custom experience for working with data stored in Azure Monitor. You can work
directly with log data using a log query written in Kusto Query Language (KQL). In the Azure portal, you can write
and run queries and interactively analyze data using Log Analytics. Analyze metrics in the Azure portal with the
Metrics Explorer. See Analyze log data in Azure Monitor and Getting started with Azure Metrics Explorer.
Logs
What's the difference between Azure Monitor Logs and Azure Data Explorer?
Azure Data Explorer is a fast and highly scalable data exploration service for log and telemetry data. Azure Monitor
Logs is built on top of Azure Data Explorer and uses the same Kusto Query Language (KQL) with some minor
differences. See Azure Monitor log query language differences.
How do I retrieve log data?
All data is retrieved from a Log Analytics workspace using a log query written using Kusto Query Language (KQL).
You can write your own queries or use solutions and insights that include log queries for a particular application or
service. See Overview of log queries in Azure Monitor.
Can I delete data from a Log Analytics workspace?
Data is removed from a workspace according to its retention period. You can delete specific data for privacy or
compliance reasons. See How to export and delete private data for more information.
What is a Log Analytics workspace?
All log data collected by Azure Monitor is stored in a Log Analytics workspace. A workspace is essentially a
container where log data is collected from a variety of sources. You may have a single Log Analytics workspace for
all your monitoring data or may have requirements for multiple workspaces. See Designing your Azure Monitor
Logs deployment.
Can you move an existing Log Analytics workspace to another Azure subscription?
You can move a workspace between resource groups or subscriptions but not to a different region. See Move a Log
Analytics workspace to different subscription or resource group.
Why can't I see Query Explorer and Save buttons in Log Analytics?
Query Explorer, Save, and New alert rule buttons are not available when the query scope is set to a specific
resource. To create alerts, save or load a query, Log Analytics must be scoped to a workspace. To open Log Analytics
in workspace context, select Logs from the Azure Monitor menu. The last used workspace is selected, but you can
select any other workspace. See Log query scope and time range in Azure Monitor Log Analytics
Why am I getting the error: "Register resource provider 'Microsoft.Insights' for this subscription to enable this
query" when opening Log Analytics from a VM?
Many resource providers are automatically registered, but you may need to manually register some resource
providers. The scope for registration is always the subscription. See Resource providers and types for more
information.
Why am I getting no access error message when opening Log Analytics from a VM?
To view VM logs, you need read permission to the workspace that stores the VM logs. In these
cases, your administrator must grant you those permissions in Azure.
Metrics
Why are metrics from the guest OS of my Azure virtual machine not showing up in Metrics explorer?
Platform metrics are collected automatically for Azure resources. You must perform some configuration though to
collect metrics from the guest OS of a virtual machine. For a Windows VM, install the diagnostic extension and
configure the Azure Monitor sink as described in Install and configure Windows Azure diagnostics extension
(WAD). For Linux, install the Telegraf agent as described in Collect custom metrics for a Linux VM with the
InfluxData Telegraf agent.
Alerts
What is an alert in Azure Monitor?
Alerts proactively notify you when important conditions are found in your monitoring data. They allow you to
identify and address issues before the users of your system notice them. There are multiple kinds of alerts:
Metric - Metric value exceeds a threshold.
Log query - Results of a log query match defined criteria.
Activity log - Activity log event matches defined criteria.
Web test - Results of availability test match defined criteria.
See Overview of alerts in Microsoft Azure.
What is an action group?
An action group is a collection of notifications and actions that can be triggered by an alert. Multiple alerts can use
a single action group allowing you to leverage common sets of notifications and actions. See Create and manage
action groups in the Azure portal.
What is an action rule?
An action rule allows you to modify the behavior of a set of alerts that match certain criteria. For example, you
can disable alert actions during a maintenance window. You can also apply an action
group to a set of alerts rather than applying it directly to the alert rules. See Action rules.
Agents
Does Azure Monitor require an agent?
An agent is only required to collect data from the operating system and workloads in virtual machines. The virtual
machines can be located in Azure, another cloud environment, or on-premises. See Overview of the Azure Monitor
agents.
What's the difference between the Azure Monitor agents?
Azure Diagnostic extension is for Azure virtual machines and collects data to Azure Monitor Metrics, Azure Storage,
and Azure Event Hubs. The Log Analytics agent is for virtual machines in Azure, another cloud environment, or on-
premises and collects data to Azure Monitor Logs. The Dependency agent requires the Log Analytics agent and
collects process details and dependencies. See Overview of the Azure Monitor agents.
Does my agent traffic use my ExpressRoute connection?
Traffic to Azure Monitor uses the Microsoft peering ExpressRoute circuit. See ExpressRoute documentation for a
description of the different types of ExpressRoute traffic.
How can I confirm that the Log Analytics agent is able to communicate with Azure Monitor?
From Control Panel on the agent computer, select Security & Settings , Microsoft Monitoring Agent . Under the
Azure Log Analytics (OMS) tab, a green check mark icon confirms that the agent is able to communicate with
Azure Monitor. A yellow warning icon means the agent is having issues. One common reason is the Microsoft
Monitoring Agent service has stopped. Use service control manager to restart the service.
How do I stop the Log Analytics agent from communicating with Azure Monitor?
For agents connected to Log Analytics directly, open the Control Panel and select Security & Settings , Microsoft
Monitoring Agent . Under the Azure Log Analytics (OMS) tab, remove all workspaces listed. In System Center
Operations Manager, remove the computer from the Log Analytics managed computers list. Operations Manager
updates the configuration of the agent to no longer report to Log Analytics.
How much data is sent per agent?
The amount of data sent per agent depends on:
The solutions you have enabled
The number of logs and performance counters being collected
The volume of data in the logs
See Manage usage and costs with Azure Monitor Logs for details.
For computers that are able to run the WireData agent, use the following query to see how much data is being
sent:
WireData
| where ProcessName == "C:\\Program Files\\Microsoft Monitoring Agent\\Agent\\MonitoringHost.exe"
| where Direction == "Outbound"
| summarize sum(TotalBytes) by Computer
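The query's `summarize sum(TotalBytes) by Computer` is a grouped sum over outbound agent traffic. A rough Python equivalent of that aggregation, over hypothetical sample records (the real query also filters on the MonitoringHost.exe process name; this sketch models only the direction filter and the grouping):

```python
from collections import defaultdict

def total_bytes_by_computer(records):
    """Mimic `where Direction == "Outbound" | summarize sum(TotalBytes) by Computer`."""
    totals = defaultdict(int)
    for r in records:
        if r["Direction"] == "Outbound":
            totals[r["Computer"]] += r["TotalBytes"]
    return dict(totals)

records = [
    {"Computer": "web01", "Direction": "Outbound", "TotalBytes": 1024},
    {"Computer": "web01", "Direction": "Outbound", "TotalBytes": 2048},
    {"Computer": "web01", "Direction": "Inbound", "TotalBytes": 512},  # not counted
]
print(total_bytes_by_computer(records))  # {'web01': 3072}
```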
How much network bandwidth is used by the Microsoft Management Agent (MMA) when sending data to
Azure Monitor?
Bandwidth is a function of the amount of data sent. Data is compressed as it is sent over the network.
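Because the payload is compressed before upload, bandwidth grows with the compressed size of the collected data, not the raw size. A quick illustration of the effect using Python's standard gzip module (illustrative only; the agent's actual codec and compression ratios are not documented here):

```python
import gzip

# Repetitive telemetry-like text compresses very well.
raw = b"CounterName=Processor;Value=12.5\n" * 1000
compressed = gzip.compress(raw)

print(len(raw), len(compressed))  # compressed size is far smaller than the raw size
```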
How can I be notified when data collection from the Log Analytics agent stops?
Use the steps described in create a new log alert to be notified when data collection stops. Use the following
settings for the alert rule:
Define alert condition : Specify your Log Analytics workspace as the resource target.
Alert criteria
Signal Name : Custom log search
Search query :
Heartbeat | summarize LastCall = max(TimeGenerated) by Computer | where LastCall < ago(15m)
Alert logic : Based on number of results, Condition Greater than, Threshold value 0
Evaluated based on : Period (in minutes) 30, Frequency (in minutes) 10
Define alert details
Name : Data collection stopped
Severity : Warning
Specify an existing or new Action Group so that when the log alert matches criteria, you are notified if you have a
heartbeat missing for more than 15 minutes.
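The alert query above fires when a computer's most recent heartbeat is older than 15 minutes. The same check, sketched in Python (the 15-minute window matches the query; the computer names and timestamps are hypothetical):

```python
from datetime import datetime, timedelta, timezone

def computers_missing_heartbeat(last_heartbeats, now, window=timedelta(minutes=15)):
    """Mimic `summarize LastCall = max(TimeGenerated) by Computer | where LastCall < ago(15m)`:
    return the computers whose last heartbeat is older than the window."""
    return sorted(c for c, last in last_heartbeats.items() if last < now - window)

now = datetime(2020, 11, 2, 12, 0, tzinfo=timezone.utc)
last = {
    "web01": now - timedelta(minutes=5),   # still reporting
    "sql01": now - timedelta(minutes=40),  # stopped reporting
}
print(computers_missing_heartbeat(last, now))  # ['sql01']
```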
What are the firewall requirements for Azure Monitor agents?
See Network firewall requirements for details.
Visualizations
Why can't I see View Designer?
View Designer is only available for users assigned with Contributor permissions or higher in the Log Analytics
workspace.
Application Insights
Configuration problems
I'm having trouble setting up my:
.NET app
Monitoring an already-running app
Azure diagnostics
Java web app
I get no data from my server:
Set firewall exceptions
Set up an ASP.NET server
Set up a Java server
How many Application Insights resources should I deploy:
How to design your Application Insights deployment: One versus many Application Insights resources?
Can I use Application Insights with ...?
Web apps on an IIS server in Azure VM or Azure virtual machine scale set
Web apps on an IIS server - on-premises or in a VM
Java web apps
Node.js apps
Web apps on Azure
Cloud Services on Azure
App servers running in Docker
Single-page web apps
SharePoint
Windows desktop app
Other platforms
Is it free?
Yes, for experimental use. In the basic pricing plan, your application can send a certain allowance of data each
month free of charge. The free allowance is large enough to cover development, and publishing an app for a small
number of users. You can set a cap to prevent more than a specified amount of data from being processed.
Larger volumes of telemetry are charged by the GB. We provide some tips on how to limit your charges.
The Enterprise plan incurs a charge for each day that each web server node sends telemetry. It is suitable if you
want to use Continuous Export on a large scale.
Read the pricing plan.
How much does it cost?
Open the Usage and estimated costs page in an Application Insights resource. There's a chart of recent
usage. You can set a data volume cap, if you want.
Open the Azure Billing blade to see your bills across all resources.
What does Application Insights modify in my project?
The details depend on the type of project. For a web application:
Adds these files to your project:
ApplicationInsights.config
ai.js
Installs these NuGet packages:
Application Insights API - the core API
Application Insights API for Web Applications - used to send telemetry from the server
Application Insights API for JavaScript Applications - used to send telemetry from the client
The packages include these assemblies:
Microsoft.ApplicationInsights
Microsoft.ApplicationInsights.Platform
Inserts items into:
Web.config
packages.config
(New projects only - if you add Application Insights to an existing project, you have to do this manually.) Inserts
snippets into the client and server code to initialize them with the Application Insights resource ID. For example,
in an MVC app, code is inserted into the master page Views/Shared/_Layout.cshtml
How do I upgrade from older SDK versions?
See the release notes for the SDK appropriate to your type of application.
How can I change which Azure resource my project sends data to?
In Solution Explorer, right-click ApplicationInsights.config and choose Update Application Insights . You can
send the data to an existing or new resource in Azure. The update wizard changes the instrumentation key in
ApplicationInsights.config, which determines where the server SDK sends your data. Unless you deselect "Update
all," it will also change the key where it appears in your web pages.
Can I use providers('Microsoft.Insights', 'components').apiVersions[0] in my Azure Resource Manager
deployments?
We do not recommend using this method of populating the API version. The newest version can represent preview
releases which may contain breaking changes. Even with newer non-preview releases, the API versions are not
always backwards compatible with existing templates, or in some cases the API version may not be available to all
subscriptions.
What is Status Monitor?
A desktop app that you can use in your IIS web server to help configure Application Insights in web apps. It doesn't
collect telemetry: you can stop it when you are not configuring an app.
Learn more.
What telemetry is collected by Application Insights?
From server web apps:
HTTP requests
Dependencies. Calls to: SQL Databases; HTTP calls to external services; Azure Cosmos DB, table, blob storage,
and queue.
Exceptions and stack traces.
Performance Counters - If you use Status Monitor, Azure monitoring for App Services, Azure monitoring for VM
or virtual machine scale set, or the Application Insights collectd writer.
Custom events and metrics that you code.
Trace logs if you configure the appropriate collector.
From client web pages:
Page view counts
AJAX calls - requests made from a running script.
Page view load data
User and session counts
Authenticated user IDs
From other sources, if you configure them:
Azure diagnostics
Import to Analytics
Log Analytics
Logstash
Can I filter out or modify some telemetry?
Yes, in the server you can write:
Telemetry Processor to filter or add properties to selected telemetry items before they are sent from your app.
Telemetry Initializer to add properties to all items of telemetry.
Learn more for ASP.NET or Java.
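Conceptually, a telemetry pipeline runs initializers (which enrich every item) before processors (which may drop items). The following is a language-neutral sketch of that pipeline in Python, using plain dictionaries as hypothetical telemetry items; it is not the Application Insights SDK API:

```python
def telemetry_initializer(item):
    """Add a property to every item (analogous to a Telemetry Initializer)."""
    item.setdefault("properties", {})["deployment"] = "staging"
    return item

def telemetry_processor(item):
    """Drop noisy items before send (analogous to a Telemetry Processor).
    Here: filter out successful dependency calls faster than 100 ms."""
    if item.get("type") == "dependency" and item.get("success") and item.get("duration_ms", 0) < 100:
        return None  # filtered out, never sent
    return item

PIPELINE = [telemetry_initializer, telemetry_processor]

def process(item):
    """Run an item through the pipeline; None means it was filtered."""
    for stage in PIPELINE:
        item = stage(item)
        if item is None:
            return None
    return item

fast = {"type": "dependency", "success": True, "duration_ms": 12}
slow = {"type": "dependency", "success": True, "duration_ms": 450}
print(process(fast))                 # None (filtered)
print(process(slow)["properties"])   # {'deployment': 'staging'}
```

Dropping high-volume, low-value items this way reduces both ingestion cost and noise in the portal.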
How are city, country/region, and other geo location data calculated?
We look up the IP address (IPv4 or IPv6) of the web client using GeoLite2.
Browser telemetry: We collect the sender's IP address.
Server telemetry: The Application Insights module collects the client IP address. It is not collected if
X-Forwarded-For is set.
To learn more about how IP address and geolocation data are collected in Application Insights refer to this
article.
You can configure the ClientIpHeaderTelemetryInitializer to take the IP address from a different header. In some
systems, for example, it is moved by a proxy, load balancer, or CDN to X-Originating-IP . Learn more.
You can use Power BI to display your request telemetry on a map.
How long is data retained in the portal? Is it secure?
Take a look at Data Retention and Privacy.
What happens to Application Insights telemetry when a server or device loses connection with Azure?
All of our SDKs, including the web SDK, include "reliable transport" or "robust transport". When the server or
device loses connection with Azure, telemetry is stored locally on the file system (Server SDKs) or in HTML5
Session Storage (Web SDK). The SDK periodically retries sending this telemetry until our ingestion service
considers it "stale" (48 hours for logs, 30 minutes for metrics). Stale telemetry is dropped. In some cases, such
as when local storage is full, the retry will not occur.
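The retry-until-stale behavior amounts to an age check against a per-type threshold. The 48-hour and 30-minute values come from the paragraph above; the rest is a hypothetical sketch, not the SDK's actual transport code:

```python
from datetime import datetime, timedelta, timezone

# Staleness thresholds stated in the FAQ: 48 hours for logs, 30 minutes for metrics.
STALE_AFTER = {"log": timedelta(hours=48), "metric": timedelta(minutes=30)}

def should_retry(item_kind, created_at, now):
    """Return True while a buffered item is still fresh enough to resend;
    items older than the threshold are considered stale and dropped."""
    return now - created_at <= STALE_AFTER[item_kind]

now = datetime(2020, 11, 2, 12, 0, tzinfo=timezone.utc)
print(should_retry("metric", now - timedelta(minutes=45), now))  # False: past the 30-minute metric window
print(should_retry("log", now - timedelta(hours=12), now))       # True: within the 48-hour log window
```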
Could personal data be sent in the telemetry?
This is possible if your code sends such data. It can also happen if variables in stack traces include personal data.
Your development team should conduct risk assessments to ensure that personal data is properly handled. Learn
more about data retention and privacy.
All octets of the client web address are always set to 0 after the geo location attributes are looked up.
My Instrumentation Key is visible in my web page source.
This is common practice in monitoring solutions.
It can't be used to steal your data.
It could be used to skew your data or trigger alerts.
We have not heard that any customer has had such problems.
You could:
Use two separate Instrumentation Keys (separate Application Insights resources), for client and server data. Or
Write a proxy that runs in your server, and have the web client send data through that proxy.
How do I see POST data in Diagnostic search?
We don't log POST data automatically, but you can use a TrackTrace call: put the data in the message parameter.
This has a longer size limit than the limits on string properties, though you can't filter on it.
Should I use single or multiple Application Insights resources?
Use a single resource for all the components or roles in a single business system. Use separate resources for
development, test, and release versions, and for independent applications.
See the discussion here
Example - cloud service with worker and web roles
How do I dynamically change the instrumentation key?
Discussion here
Example - cloud service with worker and web roles
What are the User and Session counts?
The JavaScript SDK sets a user cookie on the web client, to identify returning users, and a session cookie to
group activities.
If there is no client-side script, you can set cookies at the server.
If one real user uses your site in different browsers, or using in-private/incognito browsing, or different
machines, then they will be counted more than once.
To identify a logged-in user across machines and browsers, add a call to setAuthenticatedUserContext().
Have I enabled everything in Application Insights?
| What you should see | How to get it | Why you want it |
| --- | --- | --- |
| Server app perf: response times, ... | Add Application Insights to your project or install AI Status Monitor on the server (or write your own code to track dependencies) | Detect perf issues |
| Dependency telemetry | Install AI Status Monitor on the server | Diagnose issues with databases or other external components |
| Get stack traces from exceptions | Insert TrackException calls in your code (but some are reported automatically) | Detect and diagnose exceptions |
| Search log traces | Add a logging adapter | Diagnose exceptions, perf issues |
| Client usage basics: page views, sessions, ... | JavaScript initializer in web pages | Usage analytics |
| Client custom metrics | Tracking calls in web pages | Enhance user experience |
NOTE
If the resource you are creating in a new region is replacing a classic resource we recommend exploring the benefits of
creating a new workspace-based resource or alternatively migrating your existing resource to workspace-based.
Automation
Configuring Application Insights
You can write PowerShell scripts using Azure Resource Manager to:
Create and update Application Insights resources.
Set the pricing plan.
Get the instrumentation key.
Add a metric alert.
Add an availability test.
You can't set up a Metric Explorer report or set up continuous export.
Querying the telemetry
Use the REST API to run Analytics queries.
How can I set an alert on an event?
Azure alerts are only on metrics. Create a custom metric that crosses a value threshold whenever your event
occurs. Then set an alert on the metric. You'll get a notification whenever the metric crosses the threshold in either
direction; you won't get a notification until the first crossing, no matter whether the initial value is high or low;
there is always a latency of a few minutes.
Are there data transfer charges between an Azure web app and Application Insights?
If your Azure web app is hosted in a data center where there is an Application Insights collection endpoint, there
is no charge.
If there is no collection endpoint in your host data center, then your app's telemetry will incur Azure outgoing
charges.
This doesn't depend on where your Application Insights resource is hosted. It just depends on the distribution of
our endpoints.
Can I send telemetry to the Application Insights portal?
We recommend that you use our SDKs and their APIs. There are variants of the SDK for various platforms. These
SDKs handle buffering, compression, throttling, retries, and so on. However, the ingestion schema and endpoint
protocol are public.
Can I monitor an intranet web server?
Yes, but you will need to allow traffic to our services by either firewall exceptions or proxy redirects.
QuickPulse https://rt.services.visualstudio.com:443
ApplicationIdProvider https://dc.services.visualstudio.com:443
TelemetryChannel https://dc.services.visualstudio.com:443
<ApplicationInsights>
  ...
  <TelemetryModules>
    <Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector">
      <QuickPulseServiceEndpoint>https://rt.services.visualstudio.com/QuickPulseService.svc</QuickPulseServiceEndpoint>
    </Add>
  </TelemetryModules>
  ...
  <TelemetryChannel>
    <EndpointAddress>https://dc.services.visualstudio.com/v2/track</EndpointAddress>
  </TelemetryChannel>
  ...
  <ApplicationIdProvider Type="Microsoft.ApplicationInsights.Extensibility.Implementation.ApplicationId.ApplicationInsightsApplicationIdProvider, Microsoft.ApplicationInsights">
    <ProfileQueryEndpoint>https://dc.services.visualstudio.com/api/profiles/{0}/appId</ProfileQueryEndpoint>
  </ApplicationIdProvider>
  ...
</ApplicationInsights>
NOTE
ApplicationIdProvider is available starting in v2.6.0.
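To check that the endpoints listed above are reachable from inside the network (for example, through a firewall exception or proxy), a minimal TCP connectivity probe might look like the following. This is an illustrative helper, not part of any SDK:

```python
import socket
from urllib.parse import urlparse

ENDPOINTS = [
    "https://rt.services.visualstudio.com:443",  # QuickPulse (Live Metrics)
    "https://dc.services.visualstudio.com:443",  # TelemetryChannel / ApplicationIdProvider
]

def host_port(url: str):
    """Extract the hostname and port (default 443) from an endpoint URL."""
    parts = urlparse(url)
    return parts.hostname, parts.port or 443

def is_reachable(url: str, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the endpoint succeeds."""
    try:
        with socket.create_connection(host_port(url), timeout=timeout):
            return True
    except OSError:
        return False

# for url in ENDPOINTS:
#     print(url, is_reachable(url))
```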
Proxy passthrough
Proxy passthrough can be achieved by configuring either a machine level or application level proxy. For more
information see dotnet's article on DefaultProxy.
Example Web.config:
<system.net>
<defaultProxy>
<proxy proxyaddress="http://xx.xx.xx.xx:yyyy" bypassonlocal="true"/>
</defaultProxy>
</system.net>
OpenTelemetry
What is OpenTelemetry?
A new open-source standard for observability. Learn more at https://opentelemetry.io/.
Why is Microsoft / Azure Monitor investing in OpenTelemetry?
We believe it better serves our customers for three reasons:
1. Enable support for more customer scenarios.
2. Instrument without fear of vendor lock-in.
3. Increase customer transparency and engagement.
It also aligns with Microsoft’s strategy to embrace open source.
What additional value does OpenTelemetry give me?
In addition to the reasons above, OpenTelemetry is more efficient at-scale and provides consistent
design/configurations across languages.
How can I test out OpenTelemetry?
Sign up to join our Azure Monitor Application Insights early adopter community at https://aka.ms/AzMonOtel.
What does GA mean in the context of OpenTelemetry?
The OpenTelemetry community defines Generally Available (GA) here. However, OpenTelemetry “GA” does not
mean feature parity with the existing Application Insights SDKs. Azure Monitor will continue to recommend our
current Application Insights SDKs for customers requiring features such as pre-aggregated metrics, live metrics,
adaptive sampling, profiler, and snapshot debugger until the OpenTelemetry SDKs reach feature maturity.
Can I use Preview builds in production environments?
It’s not recommended. See Supplemental Terms of Use for Microsoft Azure Previews for more information.
What’s the difference between OpenTelemetry SDK and auto-instrumentation?
The OpenTelemetry specification defines the term “SDK”. In short, an SDK is a language-specific package that collects telemetry
data across the various components of your application and sends the data to Azure Monitor via an exporter.
The concept of auto-instrumentation (sometimes referred to as bytecode injection, codeless, or agent-based) refers
to the capability to instrument your application without changing your code. For example, check out the
OpenTelemetry Java Auto-instrumentation Readme for more information.
What’s the OpenTelemetry Collector?
The OpenTelemetry Collector is described in its GitHub readme. Currently Microsoft does not utilize the
OpenTelemetry Collector and depends on direct exporters that send to Azure Monitor’s Application Insights.
What’s the difference between OpenCensus and OpenTelemetry?
OpenCensus is the precursor to OpenTelemetry. Microsoft helped bring together OpenTracing and OpenCensus to
create OpenTelemetry, a single observability standard for the world. Azure Monitor’s current production-
recommended Python SDK is based on OpenCensus, but eventually all Azure Monitor’s SDKs will be based on
OpenTelemetry.
Option 2
Re-enable collection for these properties for every container log line.
If the first option is not convenient due to query changes involved, you can re-enable collecting these fields by
enabling the setting log_collection_settings.enrich_container_logs in the agent config map as described in the
data collection configuration settings.
NOTE
The second option is not recommended with large clusters that have more than 50 nodes because it generates API server
calls from every node in the cluster to perform this enrichment. This option also increases data size for every log line
collected.
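If you choose the second option, the change is made in the agent's ConfigMap. A hedged sketch of the relevant fragment follows; the section and key names are taken from the setting named above, so verify the exact schema against the data collection configuration settings documentation:

```toml
[log_collection_settings.enrich_container_logs]
    # Re-enables collection of enrichment fields for every container log line
    enabled = true
```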
console.log(JSON.stringify({
"Hello": "This example has multiple lines:",
"Docker/Moby": "will not break this into multiple lines",
"and you will receive":"all of them in log analytics",
"as one": "log entry"
}));
This data will look like the following example in Azure Monitor for logs when you query for it:
LogEntry : {"Hello": "This example has multiple lines:","Docker/Moby": "will not break this into multiple
lines", "and you will receive":"all of them in log analytics", "as one": "log entry"}
For a detailed look at the issue, review the following GitHub link.
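Because Docker/Moby writes the whole object as one line, the stored LogEntry is a single valid JSON document. A quick sketch of parsing such an entry, for example after exporting query results:

```python
import json

# The LogEntry value as stored in Azure Monitor: one JSON object on one line.
log_entry = ('{"Hello": "This example has multiple lines:",'
             '"Docker/Moby": "will not break this into multiple lines",'
             ' "and you will receive":"all of them in log analytics",'
             ' "as one": "log entry"}')

record = json.loads(log_entry)  # parses because the entry is one JSON object
print(record["as one"])         # log entry
```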
How do I resolve Azure AD errors when I enable live logs?
You may see the following error: The reply url specified in the request does not match the reply urls
configured for the application: '<application ID>'. The solution can be found in the article How to
view container data in real time with Azure Monitor for containers.
Why can't I upgrade cluster after onboarding?
If, after you enable Azure Monitor for containers for an AKS cluster, you delete the Log Analytics workspace the
cluster was sending its data to, attempts to upgrade the cluster will fail. To work around this, disable monitoring
and then re-enable it, referencing a different valid workspace in your subscription. When you retry the cluster
upgrade, it should process and complete successfully.
Which ports and domains do I need to open/allow for the agent?
See the Network firewall requirements for the proxy and firewall configuration information required for the
containerized agent with Azure, Azure US Government, and Azure China 21Vianet clouds.
Next steps
If your question isn't answered here, you can refer to the following forums for additional questions and answers.
Log Analytics
Application Insights
For general feedback on Azure Monitor please visit the feedback forum.
Enable Azure Monitor for VMs overview
11/2/2020 • 5 minutes to read • Edit Online
This article provides an overview of the options available to enable Azure Monitor for VMs to monitor health and
performance of the following:
Azure virtual machines
Azure virtual machine scale sets
Hybrid virtual machines connected with Azure Arc
On-premises virtual machines
Virtual machines hosted in another cloud environment.
To set up Azure Monitor for VMs:
Enable a single Azure VM, Azure VMSS, or Azure Arc machine by selecting Insights directly from their menu
in the Azure portal.
Enable multiple Azure VMs, Azure VMSS, or Azure Arc machines by using Azure Policy. This method ensures
that on existing and new VMs and scale sets, the required dependencies are installed and properly configured.
Noncompliant VMs and scale sets are reported, so you can decide whether to enable them and to remediate
them.
Enable multiple Azure VMs, Azure VMSS, or Azure Arc machines across a specified subscription or resource
group by using PowerShell.
Enable Azure Monitor for VMs to monitor VMs or physical computers hosted in your corporate network or
other cloud environment.
Prerequisites
Before you start, make sure that you understand the information in the following sections.
NOTE
The information in this section also applies to the Service Map solution.
NOTE
You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log
Analytics workspace.
If you don't have a Log Analytics workspace, you can create one by using one of the resources:
Azure CLI
PowerShell
Azure portal
Azure Resource Manager
Azure virtual machine
Azure virtual machine scale set
Hybrid virtual machine connected with Azure Arc
Windows agents | Yes | Along with the Log Analytics agent for Windows, Windows agents need the Dependency agent. For more information, see supported operating systems.
Linux agents | Yes | Along with the Log Analytics agent for Linux, Linux agents need the Dependency agent. For more information, see supported operating systems.
Agents
Azure Monitor for VMs requires the following two agents to be installed on each virtual machine or virtual
machine scale set to be monitored. Installing these agents and connecting them to the workspace is the only
requirement to onboard the resource.
Log Analytics agent. Collects events and performance data from the virtual machine or virtual machine scale
set and delivers it to the Log Analytics workspace. Deployment methods for the Log Analytics agent on Azure
resources use the VM extension for Windows and Linux.
Dependency agent. Collects discovered data about processes running on the virtual machine and external
process dependencies, which is used by the Map feature in Azure Monitor for VMs. The Dependency agent
relies on the Log Analytics agent to deliver its data to Azure Monitor. Deployment methods for the
Dependency agent on Azure resources use the VM extension for Windows and Linux.
NOTE
The Log Analytics agent is the same agent used by System Center Operations Manager. Azure Monitor for VMs can
monitor agents that are also monitored by Operations Manager if they are directly connected, and you install the
Dependency agent on them. Agents connected to Azure Monitor through a management group connection cannot be
monitored by Azure Monitor for VMs.
METHOD | DESCRIPTION
Resource Manager templates | Install both agents using any of the supported methods to deploy a Resource Manager template, including CLI and PowerShell.
Management packs
When a Log Analytics workspace is configured for Azure Monitor for VMs, two management packs are forwarded
to all the Windows computers connected to that workspace. The management packs are named
Microsoft.IntelligencePacks.ApplicationDependencyMonitor and Microsoft.IntelligencePacks.VMInsights and are
written to %Programfiles%\Microsoft Monitoring Agent\Agent\Health Service State\Management Packs.
The data source used by the ApplicationDependencyMonitor management pack is %Program files%\Microsoft
Monitoring Agent\Agent\Health Service State\Resources\<AutoGeneratedID>\Microsoft.EnterpriseManagement.Advisor.ApplicationDependencyMonitorDataSource.dll.
The data source used by the VMInsights management pack is %Program files%\Microsoft
Monitoring Agent\Agent\Health Service State\Resources\<AutoGeneratedID>\Microsoft.VirtualMachineMonitoringModule.dll.
NOTE
For information about viewing or deleting personal data, see Azure Data Subject Requests for the GDPR. For more
information about GDPR, see the GDPR section of the Service Trust portal.
Next steps
To learn how to use the Performance monitoring feature, see View Azure Monitor for VMs Performance. To view
discovered application dependencies, see View Azure Monitor for VMs Map.
Enable Azure Monitor for single virtual machine or
virtual machine scale set in the Azure portal
11/2/2020 • 2 minutes to read • Edit Online
This article describes how to enable Azure Monitor for VMs for a virtual machine or virtual machine scale set using
the Azure portal. This procedure can be used for the following:
Azure virtual machine
Azure virtual machine scale set
Hybrid virtual machine connected with Azure Arc
Prerequisites
Create and configure a Log Analytics workspace. Alternatively, you can create a new workspace during this
process.
See Supported operating systems to ensure that the operating system of the virtual machine or virtual machine
scale set you're enabling is supported.
If the virtual machine isn't already connected to a Log Analytics workspace, then you'll be prompted to select one. If
you haven't previously created a workspace, then you can select a default for the location where the virtual
machine or virtual machine scale set is deployed in the subscription. This workspace will be created and configured
if it doesn't already exist. If you select an existing workspace, it will be configured for Azure Monitor for VMs if it
wasn't already.
NOTE
If you select a workspace that wasn't previously configured for Azure Monitor for VMs, the VMInsights management pack
will be added to this workspace. This will be applied to any agent already connected to the workspace, whether or not it's
enabled for Azure Monitor for VMs. Performance data will be collected from these virtual machines and stored in the
InsightsMetrics table.
NOTE
If you use a manual upgrade model for your virtual machine scale set, upgrade the instances to complete the setup. You can
start the upgrades from the Instances page, in the Settings section.
Next steps
See Use Azure Monitor for VMs Map to view discovered application dependencies.
See View Azure VM performance to identify bottlenecks, overall utilization, and your VM's performance.
Enable Azure Monitor for VMs by using Azure Policy
11/2/2020 • 6 minutes to read • Edit Online
This article explains how to enable Azure Monitor for VMs for Azure virtual machines or hybrid virtual machines
connected with Azure Arc (preview) by using Azure Policy. Azure Policy allows you to assign policy definitions that
install the required agents for Azure Monitor for VMs across your Azure environment and automatically enable
monitoring for VMs as each virtual machine is created. Azure Monitor for VMs provides a feature that allows you
to discover and remediate noncompliant VMs in your environment. Use this feature instead of working directly
with Azure Policy.
If you're not familiar with Azure Policy, get a brief introduction at Deploy Azure Monitor at scale using Azure Policy.
NOTE
To use Azure Policy with Azure virtual machine scale sets, or to work with Azure Policy directly to enable Azure virtual
machines, see Deploy Azure Monitor at scale using Azure Policy.
Prerequisites
Create and configure a Log Analytics workspace.
See Supported operating systems to ensure that the operating system of the virtual machine or virtual machine
scale set you're enabling is supported.
This is the same page used to assign an initiative in Azure Policy, except that it's hardcoded with the scope that you
selected and the Enable Azure Monitor for VMs initiative definition. You can optionally change the Assignment
name and add a Description. Select Exclusions if you want to exclude part of the scope. For example,
your scope could be a management group, and you could specify a subscription in that management group to be
excluded from the assignment.
On the Parameters page, select a Log Analytics workspace to be used by all virtual machines in the
assignment. If you want to specify different workspaces for different virtual machines, then you must create
multiple assignments, each with their own scope.
NOTE
If the workspace is beyond the scope of the assignment, grant Log Analytics Contributor permissions to the policy
assignment's Principal ID. If you don't do this, you might see a deployment failure like
The client '343de0fe-e724-46b8-b1fb-97090f7054ed' with object id '343de0fe-e724-46b8-b1fb-97090f7054ed'
does not have authorization to perform action 'microsoft.operationalinsights/workspaces/read' over scope
...
Click Review + Create to review the details of the assignment before clicking Create to create it. Don't create a
remediation task at this point since you will most likely need multiple remediation tasks to enable existing virtual
machines. See Remediate compliance results below.
Review compliance
Once an assignment is created, you can review and manage coverage for the Enable Azure Monitor for VMs
initiative across your management groups and subscriptions. This will show how many virtual machines exist in
each of the management groups or subscriptions and their compliance status.
The following table provides a description of the information in this view.
FUNCTION | DESCRIPTION
Total VMs | Total number of VMs in that scope regardless of their status. For a management group, this is the sum of the VMs nested under the subscriptions or child management groups.
Assignment Status | Success - All VMs in the scope have the Log Analytics and Dependency agents deployed to them. Warning - The subscription isn't under a management group. Not Started - A new assignment was added. Lock - You don't have sufficient privileges to the management group. Blank - No VMs exist or a policy isn't assigned.
Compliant VMs | Number of VMs that are compliant, which is the number of VMs that have both the Log Analytics agent and the Dependency agent installed. This will be blank if there are no assignments, no VMs in the scope, or insufficient permissions.
Compliance State | Compliant - All virtual machines in the scope have the Log Analytics and Dependency agents deployed to them, or any new VMs in the scope subject to the assignment have not yet been evaluated. Non-compliant - There are VMs that have been evaluated but aren't enabled and may require remediation. Not Started - A new assignment was added. Lock - You don't have sufficient privileges to the management group. Blank - No policy is assigned.
When you assign the initiative, the scope selected in the assignment could be the scope listed or a subset of it. For
instance, you might have created an assignment for a subscription (policy scope) and not a management group
(coverage scope). In this case, the value of Assignment Coverage indicates the VMs in the initiative scope divided
by the VMs in coverage scope. In another case, you might have excluded some VMs, resource groups, or a
subscription from policy scope. If the value is blank, it indicates that either the policy or initiative doesn't exist or
you don't have permission. Information is provided under Assignment Status .
The Compliance page lists assignments matching the specified filter and whether they're compliant. Click on an
assignment to view its details.
The Initiative compliance page lists the policy definitions in the initiative and whether each is in compliance.
Click on a policy definition to view its details. Scenarios that policy definitions will show as out of compliance
include the following:
Log Analytics agent or Dependency agent isn't deployed. Create a remediation task to mitigate.
VM image (OS) isn't identified in the policy definition. The criteria of the deployment policy include only VMs
that are deployed from well-known Azure VM images. Check the documentation to see whether the VM OS is
supported.
VMs aren't logging to the specified Log Analytics workspace. Some VMs in the initiative scope are connected to
a Log Analytics workspace other than the one that's specified in the policy assignment.
To create a remediation task to mitigate compliance issues, click Create Remediation Task .
Click Remediate to create the remediation task and then Remediate to start it. You will most likely need to create
multiple remediation tasks, one for each policy definition. You can't create a remediation task for an initiative.
Once the remediation tasks are complete, your VMs should be compliant with agents installed and enabled for
Azure Monitor for VMs.
Next steps
Now that monitoring is enabled for your virtual machines, this information is available for analysis with Azure
Monitor for VMs.
To view discovered application dependencies, see View Azure Monitor for VMs Map.
To identify bottlenecks and overall utilization with your VM's performance, see View Azure VM performance.
Enable Azure Monitor for VMs using PowerShell
11/2/2020 • 4 minutes to read • Edit Online
This article describes how to enable Azure Monitor for VMs on Azure virtual machines using PowerShell. This
procedure can be used for the following:
Azure virtual machine
Azure virtual machine scale set
Prerequisites
Create and configure a Log Analytics workspace.
See Supported operating systems to ensure that the operating system of the virtual machine or virtual
machine scale set you're enabling is supported.
PowerShell script
To enable Azure Monitor for VMs for multiple VMs or virtual machine scale sets, use the PowerShell script Install-
VMInsights.ps1, which is available from the Azure PowerShell Gallery. This script iterates through:
Every virtual machine and virtual machine scale set in your subscription.
The scoped resource group that's specified by ResourceGroup.
A single VM or virtual machine scale set that's specified by Name.
For each virtual machine or virtual machine scale set, the script verifies whether the VM extensions for the Log
Analytics agent and the Dependency agent are already installed. If both extensions are already installed, they are
not reinstalled unless you specify -ReInstall. If they aren't installed, the script installs them.
Verify you are using Azure PowerShell module Az version 1.0.0 or later with Enable-AzureRM compatibility aliases
enabled. Run Get-Module -ListAvailable Az to find the version. If you need to upgrade, see Install Azure
PowerShell module. If you're running PowerShell locally, you also need to run Connect-AzAccount to create a
connection with Azure.
To get a list of the script's argument details and example usage, run Get-Help .
SYNOPSIS
This script installs VM extensions for Log Analytics and the Dependency agent as needed for VM Insights.
SYNTAX
.\Install-VMInsights.ps1 [-WorkspaceId] <String> [-WorkspaceKey] <String> [-SubscriptionId] <String>
    [[-ResourceGroup] <String>] [[-Name] <String>] [[-PolicyAssignmentName] <String>] [-ReInstall]
    [-TriggerVmssManualVMUpdate] [-Approve] [-WorkspaceRegion] <String> [-WhatIf] [-Confirm] [<CommonParameters>]
DESCRIPTION
This script installs or reconfigures the following on VMs and virtual machine scale sets:
- Log Analytics VM extension configured to supplied Log Analytics workspace
- Dependency agent VM extension
The script will show you the list of VMs or virtual machine scale sets that it will apply to and let you confirm
whether to continue.
Use -Approve switch to run without prompting, if all required parameters are provided.
If the extensions are already installed, they will not install again.
Use -ReInstall switch if you need to, for example, update the workspace.
Use -WhatIf if you want to see what would happen in terms of installs, what workspace the extensions are
configured to, and the status of the extension.
PARAMETERS
-WorkspaceId <String>
Log Analytics WorkspaceID (GUID) for the data to be sent to
-WorkspaceKey <String>
Log Analytics Workspace primary or secondary key
-SubscriptionId <String>
SubscriptionId for the VMs/VM Scale Sets
If using PolicyAssignmentName parameter, subscription that VMs are in
-ResourceGroup <String>
<Optional> Resource Group to which the VMs or VM Scale Sets belong
-Name <String>
<Optional> To install to a single VM/VM Scale Set
-PolicyAssignmentName <String>
<Optional> Take the input VMs to operate on as the Compliance results from this Assignment
If specified will only take from this source.
-ReInstall [<SwitchParameter>]
<Optional> If VM/VM Scale Set is already configured for a different workspace, set this to change to
the new workspace
-TriggerVmssManualVMUpdate [<SwitchParameter>]
<Optional> Set this flag to trigger update of VM instances in a scale set whose upgrade policy is set
to Manual
-Approve [<SwitchParameter>]
<Optional> Gives the approval for the installation to start with no confirmation prompt for the listed
VMs/VM Scale Sets
-WorkspaceRegion <String>
Region the Log Analytics Workspace is in
Supported values: "East US","eastus","Southeast Asia","southeastasia","West Central
US","westcentralus","West Europe","westeurope"
For Health supported is: "East US","eastus","West Central US","westcentralus"
-WhatIf [<SwitchParameter>]
<Optional> See what would happen in terms of installs.
If extension is already installed will show what workspace is currently configured, and status of the
VM extension
-Confirm [<SwitchParameter>]
<Optional> Confirm every action
<CommonParameters>
This cmdlet supports the common parameters: Verbose, Debug,
ErrorAction, ErrorVariable, WarningAction, WarningVariable,
OutBuffer, PipelineVariable, and OutVariable. For more information, see
about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216).
-------------------------- EXAMPLE 1 --------------------------
.\Install-VMInsights.ps1 -WorkspaceRegion eastus -WorkspaceId <WorkspaceId> -WorkspaceKey <WorkspaceKey> -SubscriptionId <SubscriptionId> -ResourceGroup <ResourceGroup>
Specify -ReInstall to reinstall extensions even if they're already installed, for example, to update to a different workspace.
Specify -PolicyAssignmentName to use a policy assignment's compliance results as the source of VMs and to reinstall (move to a new workspace).
The following example demonstrates using the PowerShell commands in the folder to enable Azure Monitor for
VMs and understand the expected output:
$WorkspaceId = "<GUID>"
$WorkspaceKey = "<Key>"
$SubscriptionId = "<GUID>"
.\Install-VMInsights.ps1 -WorkspaceId $WorkspaceId -WorkspaceKey $WorkspaceKey -SubscriptionId $SubscriptionId
-WorkspaceRegion eastus
Getting list of VMs or virtual machine scale sets matching criteria specified
db-ws-1 VM running
db-ws2012 VM running
This operation will install the Log Analytics and Dependency agent extensions on the previous two VMs or
virtual machine scale sets.
VMs in a non-running state will be skipped.
Extension will not be reinstalled if already installed. Use -ReInstall if desired, for example, to update
workspace.
Confirm
Continue?
[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y
Summary:
Succeeded: (4)
db-ws-1 : Successfully deployed DependencyAgentWindows
db-ws-1 : Successfully deployed MicrosoftMonitoringAgent
db-ws2012 : Successfully deployed DependencyAgentWindows
db-ws2012 : Successfully deployed MicrosoftMonitoringAgent
Failed: (0)
Next steps
See Use Azure Monitor for VMs Map to view discovered application dependencies.
See View Azure VM performance to identify bottlenecks, overall utilization, and your VM's performance.
Enable Azure Monitor for VMs for a hybrid virtual
machine
11/2/2020 • 5 minutes to read • Edit Online
This article describes how to enable Azure Monitor for VMs for a virtual machine outside of Azure, including on-
premises and other cloud environments.
IMPORTANT
The recommended method of enabling hybrid VMs is first enabling Azure Arc for servers so that the VMs can be enabled for
Azure Monitor for VMs using processes similar to Azure VMs. This article describes how to onboard hybrid VMs if you
choose not to use Azure Arc.
Prerequisites
Create and configure a Log Analytics workspace.
See Supported operating systems to ensure that the operating system of the virtual machine or virtual machine
scale set you're enabling is supported.
Overview
Virtual machines outside of Azure require the same Log Analytics agent and Dependency agent that are used for
Azure VMs. Because you can't use VM extensions to install the agents, you must manually install them in the
guest operating system or have them installed through some other method.
See Connect Windows computers to Azure Monitor or Connect Linux computers to Azure Monitor for details on
deploying the Log Analytics agent. Details for the Dependency agent are provided in this article.
Firewall requirements
Firewall requirements for the Log Analytics agent are provided in Log Analytics agent overview. The Azure Monitor
for VMs Map Dependency agent doesn't transmit any data itself, and it doesn't require any changes to firewalls or
ports. The Map data is always transmitted by the Log Analytics agent to the Azure Monitor service, either directly
or through the Operations Management Suite gateway if your IT security policies don't allow computers on the
network to connect to the internet.
Dependency agent
NOTE
The information in this section also applies to the Service Map solution.
FILE | OS | VERSION | SHA-256
PARAMETER | DESCRIPTION
For example, to run the installation program with the /? parameter, enter InstallDependencyAgent-
Windows.exe /? .
Files for the Windows Dependency agent are installed in C:\Program Files\Microsoft Dependency Agent by default.
If the Dependency agent fails to start after setup is finished, check the logs for detailed error information. The log
directory is %Programfiles%\Microsoft Dependency Agent\logs.
PowerShell script
Use the following sample PowerShell script to download and install the agent:
.\InstallDependencyAgent-Windows.exe /S
NOTE
Root access is required to install or configure the agent.
PARAMETER | DESCRIPTION
--check | Check permissions and the operating system, but don't install the agent.
For example, to run the installation program with the -help parameter, enter InstallDependencyAgent-
Linux64.bin -help . Install the Linux Dependency agent as root by running the command
sh InstallDependencyAgent-Linux64.bin .
If the Dependency agent fails to start, check the logs for detailed error information. On Linux agents, the log
directory is /var/opt/microsoft/dependency-agent/log.
Files for the Dependency agent are placed in the following directories:
F IL ES LO C AT IO N
Shell script
Use the following sample shell script to download and install the agent.
Desired State Configuration
To deploy using Desired State Configuration (DSC), use the xPSDesiredStateConfiguration module with the
following sample code:
configuration ServiceMap {
    # Requires the xPSDesiredStateConfiguration DSC resource module
    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    $DAPackageLocalPath = "C:\InstallDependencyAgent-Windows.exe"

    Node localhost
    {
        # Download and install the Dependency agent
        xRemoteFile DAPackage
        {
            Uri = "https://aka.ms/dependencyagentwindows"
            DestinationPath = $DAPackageLocalPath
        }
        xPackage DA
        {
            Ensure = "Present"
            Name = "Dependency Agent"
            Path = $DAPackageLocalPath
            Arguments = '/S'
            ProductId = ""
            InstalledCheckRegKey = "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\DependencyAgent"
            InstalledCheckRegValueName = "DisplayName"
            InstalledCheckRegValueData = "Dependency Agent"
            DependsOn = "[xRemoteFile]DAPackage"
        }
    }
}
Troubleshooting
VM doesn't appear on the map
If your Dependency agent installation succeeded, but you don't see your computer on the map, diagnose the
problem by following these steps.
1. Is the Dependency agent installed successfully? You can validate this by checking to see if the service is
installed and running.
Windows : Look for the service named "Microsoft Dependency agent."
Linux : Look for the running process "microsoft-dependency-agent."
2. Are you on the Free pricing tier of Log Analytics? The Free plan allows for up to five unique computers. Any
subsequent computers won't show up on the map, even if the prior five are no longer sending data.
3. Is the computer sending log and perf data to Azure Monitor Logs? Perform the following query for your
computer:
Did it return one or more results? Is the data recent? If so, your Log Analytics agent is operating correctly
and communicating with the service. If not, check the agent on your server: Log Analytics agent for
Windows troubleshooting or Log Analytics agent for Linux troubleshooting.
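As a sketch, a query along these lines (with "my-computer" as a placeholder for your computer's name) can confirm whether a machine is sending any data to the workspace:

```kusto
// Sketch: confirm that a computer is sending data to the workspace.
// "my-computer" is a placeholder for your computer's name.
search *
| where Computer == "my-computer"
| summarize RecordCount = count() by Computer
```

If this returns zero rows, the agent is likely not reporting and the agent troubleshooting guides above apply.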
Computer appears on the map but has no processes
If you see your server on the map, but it has no process or connection data, that indicates that the Dependency
agent is installed and running, but the kernel driver didn't load.
Check the C:\Program Files\Microsoft Dependency Agent\logs\wrapper.log file (Windows) or
/var/opt/microsoft/dependency-agent/log/service.log file (Linux). The last lines of the file should indicate why the
kernel didn't load. For example, the kernel might not be supported on Linux if you updated your kernel.
Next steps
Now that monitoring is enabled for your virtual machines, this information is available for analysis with Azure
Monitor for VMs.
To view discovered application dependencies, see View Azure Monitor for VMs Map.
To identify bottlenecks and overall utilization with your VM's performance, see View Azure VM performance.
Use the Map feature of Azure Monitor for VMs to
understand application components
11/2/2020 • 6 minutes to read
In Azure Monitor for VMs, you can view discovered application components on Windows and Linux virtual
machines (VMs) that run in Azure or your environment. You can observe the VMs in two ways: view a map
directly from a VM, or view a map from Azure Monitor to see the components across groups of VMs. This article
will help you understand these two viewing methods and how to use the Map feature.
For information about configuring Azure Monitor for VMs, see Enable Azure Monitor for VMs.
Sign in to Azure
Sign in to the Azure portal.
Close the Logs page and return to the Properties pane. There, select Alerts to view VM health-criteria alerts.
The Map feature integrates with Azure Alerts to show alerts for the selected server in the selected time range.
The server displays an icon for current alerts, and the Machine Alerts pane lists the alerts.
To make the Map feature display relevant alerts, create an alert rule that applies to a specific computer:
Include a clause to group alerts by computer (for example, by Computer interval 1 minute).
Base the alert on a metric.
For more information about Azure Alerts and creating alert rules, see Unified alerts in Azure Monitor.
In the upper-right corner, the Legend option describes the symbols and roles on the map. For a closer look at
your map and to move it around, use the zoom controls in the lower-right corner. You can set the zoom level and
fit the map to the size of the page.
Connection metrics
The Connections pane displays standard metrics for the selected connection from the VM over the TCP port.
The metrics include response time, requests per minute, traffic throughput, and links.
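The same metrics can also be pulled from the VMConnection table described later in this article; a sketch, with "my-vm" as a placeholder computer name:

```kusto
// Sketch: summarize connection metrics for a single VM.
// "my-vm" is a placeholder computer name.
VMConnection
| where Computer == "my-vm"
| summarize
    BytesSentTotal = sum(BytesSent),
    BytesReceivedTotal = sum(BytesReceived),
    LiveLinks = max(LinksLive)
    by DestinationIp, DestinationPort
```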
Failed connections
The map shows failed connections for processes and computers. A dashed red line indicates a client system is
failing to reach a process or port. For systems that use the Dependency agent, the agent reports on failed
connection attempts. The Map feature monitors a process by observing TCP sockets that fail to establish a
connection. This failure could result from a firewall, a misconfiguration in the client or server, or an unavailable
remote service.
Understanding failed connections can help you troubleshoot, validate migration, analyze security, and understand
the overall architecture of the service. Failed connections are sometimes harmless, but they often point to a
problem. Connections might fail, for example, when a failover environment suddenly becomes unreachable or
when two application tiers can't communicate with each other after a cloud migration.
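One hedged way to surface these failures in a log query, using the VMConnection table described later in this article, is:

```kusto
// Sketch: recent failed connection attempts per process and destination.
VMConnection
| where TimeGenerated > ago(1h)
| where LinksFailed > 0
| summarize FailedAttempts = sum(LinksFailed) by Computer, ProcessName, DestinationIp, DestinationPort
| order by FailedAttempts desc
```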
Client groups
On the map, client groups represent client machines that connect to the mapped machine. A single client group
represents the clients for an individual process or machine.
To see the monitored clients and IP addresses of the systems in a client group, select the group. The contents of
the group appear below.
If the group includes monitored and unmonitored clients, you can select the appropriate section of the group's
doughnut chart to filter the clients.
Server-port groups
Server-port groups represent ports on servers that have inbound connections from the mapped machine. The
group contains the server port and a count of the number of servers that have connections to that port. Select
the group to see the individual servers and connections.
If the group includes monitored and unmonitored servers, you can select the appropriate section of the group's
doughnut chart to filter the servers.
Choose a workspace by using the Workspace selector at the top of the page. If you have more than one Log
Analytics workspace, choose the workspace that's enabled with the solution and that has VMs reporting to it.
The Group selector returns subscriptions, resource groups, computer groups, and virtual machine scale sets of
computers that are related to the selected workspace. Your selection applies only to the Map feature and doesn't
carry over to Performance or Health.
By default, the map shows the last 30 minutes. If you want to see how dependencies looked in the past, you can
query for historical time ranges of up to one hour. To run the query, use the TimeRange selector. You might run a
query, for example, during an incident or to see the status before a change.
Next steps
To identify bottlenecks, check performance, and understand overall utilization of your VMs, see View performance
status for Azure Monitor for VMs.
How to chart performance with Azure Monitor for
VMs
11/2/2020 • 7 minutes to read
Azure Monitor for VMs includes a set of performance charts that target several key performance indicators (KPIs)
to help you determine how well a virtual machine is performing. The charts show resource utilization over a
period of time, so you can identify bottlenecks and anomalies, or switch to a perspective that lists each machine
to view resource utilization based on the selected metric. While there are numerous elements to consider when dealing
with performance, Azure Monitor for VMs monitors key operating system performance indicators related to
processor, memory, network adapter, and disk utilization. Performance complements the health monitoring
feature: it helps expose issues that indicate a possible system component failure, supports tuning and
optimization to achieve efficiency, and supports capacity planning.
Limitations
Following are limitations in performance collection with Azure Monitor for VMs.
Available memory is not available for virtual machines running Red Hat Linux (RHEL) 6. This metric is
calculated from MemAvailable, which was introduced in kernel version 3.14.
Metrics are only available for data disks on Linux virtual machines using XFS filesystem or EXT filesystem
family (EXT2, EXT3, EXT4).
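To check which logical disks are actually reporting metrics, a query sketch along these lines (using the InsightsMetrics table and tag names shown later in this article) may help:

```kusto
// Sketch: list the logical disks reporting free-space metrics per computer.
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage"
| extend Disk = tostring(todynamic(Tags)["vm.azm.ms/mountId"])
| distinct Computer, Disk
```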
On the Top N Charts tab, if you have more than one Log Analytics workspace, choose the workspace enabled
with the solution from the Workspace selector at the top of the page. The Group selector will return
subscriptions, resource groups, computer groups, and virtual machine scale sets of computers related to the
selected workspace that you can use to further filter results presented in the charts on this page and across the
other pages. Your selection only applies to the Performance feature and does not carry over to Health or Map.
By default, the charts show the last 24 hours. Using the TimeRange selector, you can query for historical time
ranges of up to 30 days to show how performance looked in the past.
The five capacity utilization charts shown on the page are:
CPU Utilization % - shows the top five machines with the highest average processor utilization
Available Memory - shows the top five machines with the lowest average amount of available memory
Logical Disk Space Used % - shows the top five machines with the highest average disk space used % across
all disk volumes
Bytes Sent Rate - shows the top five machines with highest average of bytes sent
Bytes Receive Rate - shows the top five machines with highest average of bytes received
Clicking on the pin icon at the upper right-hand corner of any one of the five charts will pin the selected chart to
the Azure dashboard you last viewed. From the dashboard, you can resize and reposition the chart. Selecting
the chart from the dashboard will redirect you to Azure Monitor for VMs and load the correct scope and view.
Clicking on the icon located to the left of the pin icon on any one of the five charts opens the Top N List view.
Here you see the resource utilization for that performance metric by individual VM in a list view and which
machine is trending highest.
When you click on the virtual machine, the Properties pane is expanded on the right to show the properties of
the item selected, such as system information reported by the operating system, properties of the Azure VM, etc.
Clicking on one of the options under the Quick Links section will redirect you to that feature directly from the
selected VM.
Switch to the Aggregated Charts tab to view the performance metrics filtered by average or percentile
measures.
To filter the results on a specific virtual machine in the list, enter its computer name in the Search by name
textbox.
If you would rather view utilization from a different performance metric, select Available Memory, Logical
Disk Space Used %, Network Received Byte/s, or Network Sent Byte/s from the Metric drop-down list,
and the list updates to show utilization scoped to that metric.
Selecting a virtual machine from the list opens the Properties panel on the right side of the page, and from here
you can select Performance detail. The Virtual Machine Detail page opens and is scoped to that VM, similar
to the experience of accessing VM Insights Performance directly from the Azure VM.
Next steps
Learn how to use Workbooks that are included with Azure Monitor for VMs to further analyze
performance and network metrics.
To learn about discovered application dependencies, see View Azure Monitor for VMs Map.
How to query logs from Azure Monitor for VMs
11/2/2020 • 18 minutes to read
Azure Monitor for VMs collects performance and connection metrics, computer and process inventory data, and
health state information and forwards it to the Log Analytics workspace in Azure Monitor. This data is available for
query in Azure Monitor. You can apply this data to scenarios that include migration planning, capacity analysis,
discovery, and on-demand performance troubleshooting.
Map records
One record is generated per hour for each unique computer and process, in addition to the records that are
generated when a process or computer starts or is onboarded to the Azure Monitor for VMs Map feature. These
records have the properties in the following tables. The fields and values in the ServiceMapComputer_CL events
map to fields of the Machine resource in the ServiceMap Azure Resource Manager API. The fields and values in the
ServiceMapProcess_CL events map to the fields of the Process resource in the ServiceMap Azure Resource
Manager API. The ResourceName_s field matches the name field in the corresponding Resource Manager resource.
There are internally generated properties you can use to identify unique processes and computers:
Computer: Use ResourceId or ResourceName_s to uniquely identify a computer within a Log Analytics
workspace.
Process: Use ResourceId to uniquely identify a process within a Log Analytics workspace. ResourceName_s is
unique within the context of the machine on which the process is running (MachineResourceName_s).
Because multiple records can exist for a specified process and computer in a specified time range, queries can
return more than one record for the same computer or process. To include only the most recent record, add
| summarize arg_max(TimeGenerated, *) by ResourceId to the query.
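For example, a sketch applying this pattern to the computer records described above:

```kusto
// Sketch: keep only the most recent record per computer.
ServiceMapComputer_CL
| summarize arg_max(TimeGenerated, *) by ResourceId
```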
To account for the impact of grouping, information about the number of grouped physical connections is provided
in the following properties of the record:
Metrics
In addition to connection count metrics, information about the volume of data sent and received on a given logical
connection or network port is also included in the following properties of the record:
BytesSent: Total number of bytes that have been sent during the reporting time window
BytesReceived: Total number of bytes that have been received during the reporting time window
The third type of data being reported is response time: how long a caller spends waiting for a request sent
over a connection to be processed and responded to by the remote endpoint. The response time reported is an
estimation of the true response time of the underlying application protocol. It is computed using heuristics based
on the observation of the flow of data between the source and destination end of a physical network connection.
Conceptually, it is the difference between the time the last byte of a request leaves the sender, and the time when
the last byte of the response arrives back to it. These two timestamps are used to delineate request and response
events on a given physical connection. The difference between them represents the response time of a single
request.
In this first release of this feature, our algorithm is an approximation that may work with varying degrees of success
depending on the actual application protocol used for a given network connection. For example, the current
approach works well for request-response based protocols such as HTTP(S), but does not work with one-way or
message queue-based protocols.
Here are some important points to consider:
1. If a process accepts connections on the same IP address but over multiple network interfaces, a separate record
for each interface will be reported.
2. Records with wildcard IP will contain no activity. They are included to represent the fact that a port on the
machine is open to inbound traffic.
3. To reduce verbosity and data volume, records with wildcard IP will be omitted when there is a matching record
(for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the
IsWildcardBind property for the record with the specific IP address will be set to True. This indicates that the
port is exposed over every interface of the reporting machine.
4. Ports that are bound only on a specific interface have IsWildcardBind set to False.
Naming and Classification
For convenience, the IP address of the remote end of a connection is included in the RemoteIp property. For
inbound connections, RemoteIp is the same as SourceIp, while for outbound connections, it is the same as
DestinationIp. The RemoteDnsCanonicalNames property represents the DNS canonical names reported by the
machine for RemoteIp. The RemoteDnsQuestions and RemoteClassification properties are reserved for future use.
Geolocation
VMConnection also includes geolocation information for the remote end of each connection record in the
following properties of the record:
Malicious IP
Every RemoteIp property in the VMConnection table is checked against a set of IPs with known malicious activity. If
the RemoteIp is identified as malicious, the following properties of the record will be populated (they are empty
when the IP is not considered malicious):
TLPLevel: Traffic Light Protocol (TLP) Level; one of the defined values:
White, Green, Amber, Red.
Ports
Ports on a machine that actively accept incoming traffic or could potentially accept traffic, but are idle during the
reporting time window, are written to the VMBoundPort table.
Every record in VMBoundPort is identified by the following fields:
The identity of a port is derived from the above five fields and is stored in the PortId property. This property can be
used to quickly find records for a specific port across time.
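For example (a sketch; the PortId value is a placeholder):

```kusto
// Sketch: retrieve the records for one port over time using its PortId.
VMBoundPort
| where PortId == "<port-id>"   // placeholder value
| order by TimeGenerated asc
```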
Metrics
Port records include metrics representing the connections associated with them. Currently, the following metrics
are reported (the details for each metric are described in the previous section):
BytesSent and BytesReceived
LinksEstablished, LinksTerminated, LinksLive
ResponseTime, ResponseTimeMin, ResponseTimeMax, ResponseTimeSum
Here are some important points to consider:
If a process accepts connections on the same IP address but over multiple network interfaces, a separate record
for each interface will be reported.
Records with wildcard IP will contain no activity. They are included to represent the fact that a port on the
machine is open to inbound traffic.
To reduce verbosity and data volume, records with wildcard IP will be omitted when there is a matching record
(for the same process, port, and protocol) with a specific IP address. When a wildcard IP record is omitted, the
IsWildcardBind property for the record with the specific IP address, will be set to True. This indicates the port is
exposed over every interface of the reporting machine.
Ports that are bound only on a specific interface have IsWildcardBind set to False.
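The IsWildcardBind property can be used directly in a query; for example, this sketch lists ports exposed on every interface:

```kusto
// Sketch: ports bound to all interfaces (wildcard binds).
VMBoundPort
| where IsWildcardBind == true
| distinct Computer, Port, ProcessName
```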
VMComputer records
Records with a type of VMComputer have inventory data for servers with the Dependency agent. These records
have the properties in the following table:
SourceSystem: Insights
HypervisorType: hyperv
HostingProvider: azure
AzureVmScaleSetResourceId: The unique identifier of the virtual machine scale set resource.
VMProcess records
Records with a type of VMProcess have inventory data for TCP-connected processes on servers with the
Dependency agent. These records have the properties in the following table:
SourceSystem: Insights
Group: Process group name. Processes in the same group are logically related, e.g., part of the same product or
system component.
let Today = now();
VMComputer
| extend DaysSinceBoot = Today - BootTime
| summarize by Computer, DaysSinceBoot, BootTime
| sort by BootTime asc
Bound Ports
VMBoundPort
| where TimeGenerated >= ago(24h)
| where Computer == 'admdemo-appsvr'
| distinct Port, ProcessName
VMBoundPort
| where Ip != "127.0.0.1"
| summarize by Computer, Machine, Port, Protocol
| summarize OpenPorts=count() by Computer, Machine
| order by OpenPorts desc
Score processes in your workspace by the number of ports they have open
VMBoundPort
| where Ip != "127.0.0.1"
| summarize by ProcessName, Port, Protocol
| summarize OpenPorts=count() by ProcessName
| order by OpenPorts desc
//
VMBoundPort
| where Ip != "127.0.0.1"
| summarize BytesSent=sum(BytesSent), BytesReceived=sum(BytesReceived),
LinksEstablished=sum(LinksEstablished), LinksTerminated=sum(LinksTerminated), arg_max(TimeGenerated,
LinksLive) by Machine, Computer, ProcessName, Ip, Port, IsWildcardBind
| project-away TimeGenerated
| order by Machine, Computer, Port, Ip, ProcessName
Performance records
Records with a type of InsightsMetrics have performance data from the guest operating system of the virtual
machine. These records have the properties in the following table:
SourceSystem: Insights
Origin: vm.azm.ms
Tags: Related details about the record. See the table below for tags used with different record types.
Type: InsightsMetrics
The performance counters currently collected into the InsightsMetrics table are listed in the following table:
Next steps
If you are new to writing log queries in Azure Monitor, review how to use Log Analytics in the Azure portal
to write log queries.
Learn about writing search queries.
Create interactive reports with Azure Monitor for VMs
workbooks
11/2/2020 • 12 minutes to read
Workbooks combine text, log queries, metrics, and parameters into rich interactive reports. Workbooks are editable
by any other team members who have access to the same Azure resources.
Workbooks are helpful for scenarios such as:
Exploring the usage of your virtual machine when you don't know the metrics of interest in advance: CPU
utilization, disk space, memory, network dependencies, etc. Unlike other usage analytics tools, workbooks let
you combine multiple kinds of visualizations and analyses, making them great for this kind of free-form
exploration.
Explaining to your team how a recently provisioned VM is performing, by showing metrics for key counters and
other log events.
Sharing the results of a resizing experiment of your VM with other members of your team. You can explain the
goals for the experiment with text, then show each usage metric and analytics queries used to evaluate the
experiment, along with clear call-outs for whether each metric was above- or below-target.
Reporting the impact of an outage on the usage of your VM, combining data, text explanation, and a discussion
of next steps to prevent outages in the future.
The following table summarizes the workbooks that Azure Monitor for VMs includes to get you started.
To add a query section to your workbook, use the Add query button at the bottom of the workbook, or at the
bottom of any section.
Query sections are highly flexible and can be used to answer questions like:
How was my CPU utilization during the same time period as an increase in network traffic?
What was the trend in available disk space over the last month?
How many network connection failures did my VM experience over the last two weeks?
You aren't limited to querying from the context of the virtual machine you launched the workbook from. You
can query across multiple virtual machines, as well as Log Analytics workspaces, as long as you have access
permission to those resources.
To include data from other Log Analytics workspaces or from a specific Application Insights app, use the
workspace identifier. To learn more about cross-resource queries, refer to the official guidance.
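A cross-workspace query sketch, assuming a second workspace named "ContosoWorkspace" (a hypothetical name), looks like:

```kusto
// Sketch: union data from the current workspace and another workspace.
union
    InsightsMetrics,
    workspace("ContosoWorkspace").InsightsMetrics
| summarize Records = count() by Computer
```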
Advanced analytic query settings
Each section has its own advanced settings, which are accessible via the settings icon located to the right of the
Add parameters button.
Custom width: Makes an item an arbitrary size, so you can fit many items on a single line, allowing you to
better organize your charts and tables into rich interactive reports.
Export a parameter: Allows a selected row in the grid or chart to cause later steps to change values or become
visible.
Show query when not editing: Displays the query above the chart or table even when in reading mode.
Show open in analytics button when not editing: Adds the blue Analytics icon to the right-hand corner of
the chart to allow one-click access.
Most of these settings are fairly intuitive, but to understand Export a parameter it is better to examine a
workbook that makes use of this functionality.
One of the prebuilt workbooks, TCP Traffic, provides information on connection metrics from a VM.
The first section of the workbook is based on log query data. The second section is also based on log query data,
but selecting a row in the first table will interactively update the contents of the charts:
The behavior is possible through use of the When an item is selected, export a parameter advanced settings,
which are enabled in the table's log query.
The second log query then utilizes the exported values when a row is selected to create a set of values that are then
used by the section heading and charts. If no row is selected, it hides the section heading and charts.
For example, the hidden parameter in the second section uses the following reference from the row selected in the
grid:
VMConnection
| where TimeGenerated {TimeRange}
| where Computer in ("{ComputerName}") or '*' in ("{ComputerName}")
| summarize Sent = sum(BytesSent), Received = sum(BytesReceived) by bin(TimeGenerated, {TimeRange:grain})
Text: Allows the user to edit a text box, and you can optionally supply a query to fill in the default value.
Time range picker: Allows the user to choose from a predefined set of time range values, or pick from a custom
time range.
Resource picker: Allows the user to choose from the resources selected for the workbook.
[
{ "value": "inbound", "label": "Inbound"},
{ "value": "outbound", "label": "Outbound"}
]
A more applicable example is using a drop-down to pick from a set of performance counters by name:
Perf
| summarize by CounterName, ObjectName
| order by ObjectName asc, CounterName asc
| project Counter = pack('counter', CounterName, 'object', ObjectName), CounterName, group = ObjectName
Next steps
To identify limitations and overall VM performance, see View Azure VM Performance.
To learn about discovered application dependencies, see View Azure Monitor for VMs Map.
How to create alerts from Azure Monitor for VMs
11/2/2020 • 5 minutes to read
Alerts in Azure Monitor proactively notify you of interesting data and patterns in your monitoring data. Azure
Monitor for VMs does not include pre-configured alert rules, but you can create your own based on data that it
collects. This article provides guidance on creating alert rules, including a set of sample queries.
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
Available Memory in MB
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Memory" and Name == "AvailableMB"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Memory" and Name == "AvailableMB"
| extend TotalMemory = toreal(todynamic(Tags)["vm.azm.ms/memorySizeMB"])
| extend AvailableMemoryPercentage = (toreal(Val) / TotalMemory) * 100.0
| summarize AggregatedValue = avg(AvailableMemoryPercentage) by bin(TimeGenerated, 15m), Computer, _ResourceId
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "LogicalDisk" and Name == "TransfersPerSecond"
| extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "LogicalDisk" and Name == "BytesPerSecond"
| extend Disk=tostring(todynamic(Tags)["vm.azm.ms/mountId"])
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Network" and Name == "ReadBytesPerSecond"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Network" and Name == "ReadBytesPerSecond"
| extend NetworkInterface=tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Network" and Name == "WriteBytesPerSecond"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Network" and Name == "WriteBytesPerSecond"
| extend NetworkInterface=tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
InsightsMetrics
| where Origin == "vm.azm.ms"
| where _ResourceId =~ "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm"
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m)
InsightsMetrics
| where Origin == "vm.azm.ms"
| where _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" and (_ResourceId contains "/providers/Microsoft.Compute/virtualMachines/" or _ResourceId contains "/providers/Microsoft.Compute/virtualMachineScaleSets/")
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), _ResourceId
InsightsMetrics
| where Origin == "vm.azm.ms"
| where _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/"
or _ResourceId startswith "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachineScaleSets/"
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), _ResourceId
Next steps
Learn more about alerts in Azure Monitor.
Learn more about log queries using data from Azure Monitor for VMs.
How to upgrade the Azure Monitor for VMs
Dependency agent
4/17/2020 • 2 minutes to read
After initial deployment of the Azure Monitor for VMs Dependency agent, updates are released that include bug
fixes or support of new features or functionality. This article helps you understand the methods available and how
to perform the upgrade manually or through automation.
Upgrade options
The Dependency agent for Windows and Linux can be upgraded to the latest release manually or automatically
depending on the deployment scenario and environment the machine is running in. The following methods can be
used to upgrade the agent.
Custom Azure VM images: Manual install of the Dependency agent for Windows/Linux. Updating VMs to the
newest version of the agent needs to be performed from the command line by running the Windows installer
package or the Linux self-extracting and installable shell script bundle.
Non-Azure VMs: Manual install of the Dependency agent for Windows/Linux. Updating VMs to the newest
version of the agent needs to be performed from the command line by running the Windows installer package
or the Linux self-extracting and installable shell script bundle.
InstallDependencyAgent-Windows.exe /S /RebootMode=manual
The /RebootMode=manual parameter prevents the upgrade from automatically rebooting the machine if some
processes are using files from the previous version and have a lock on them.
3. To confirm the upgrade was successful, check the install.log for detailed setup information. The log
directory is %Programfiles%\Microsoft Dependency Agent\logs.
Next steps
If you want to stop monitoring your VMs for a period of time or remove Azure Monitor for VMs entirely, see
Disable monitoring of your VMs in Azure Monitor for VMs.
Disable monitoring of your VMs in Azure Monitor for
VMs
3/16/2020 • 3 minutes to read
After you enable monitoring of your virtual machines (VMs), you can later choose to disable monitoring in Azure
Monitor for VMs. This article shows how to disable monitoring for one or more VMs.
Currently, Azure Monitor for VMs doesn't support selective disabling of VM monitoring. Your Log Analytics
workspace might support Azure Monitor for VMs and other solutions. It might also collect other monitoring data. If
your Log Analytics workspace provides these services, you need to understand the effect and methods of disabling
monitoring before you start.
Azure Monitor for VMs relies on the following components to deliver its experience:
A Log Analytics workspace, which stores monitoring data from VMs and other sources.
A collection of performance counters configured in the workspace. The collection updates the monitoring
configuration on all VMs connected to the workspace.
VMInsights , which is a monitoring solution configured in the workspace. This solution updates the monitoring
configuration on all VMs connected to the workspace.
MicrosoftMonitoringAgent and DependencyAgent , which are Azure VM extensions. These extensions collect and
send data to the workspace.
As you prepare to disable monitoring of your VMs, keep these considerations in mind:
If you evaluated with a single VM and used the preselected default Log Analytics workspace, you can disable
monitoring by uninstalling the Dependency agent from the VM and disconnecting the Log Analytics agent from
this workspace. This approach is appropriate if you intend to use the VM for other purposes and decide later to
reconnect it to a different workspace.
If you selected a preexisting Log Analytics workspace that supports other monitoring solutions and data
collection from other sources, you can remove solution components from the workspace without interrupting
or affecting your workspace.
NOTE
After removing the solution components from your workspace, you might continue to see performance and map data for
your Azure VMs. Data will eventually stop appearing in the Performance and Map views. The Enable option will be
available from the selected Azure VM so you can re-enable monitoring in the future.
NOTE
Don't remove the Log Analytics agent if:
Azure Automation manages the VM to orchestrate processes or to manage configuration or updates.
Azure Security Center manages the VM for security and threat detection.
If you do remove the Log Analytics agent, you will prevent those services and solutions from proactively managing your VM.
Azure virtual machine scale sets let you create and manage a group of load balanced VMs. The number of VM
instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide
high availability to your applications, and allow you to centrally manage, configure, and update a large number of
VMs. With virtual machine scale sets, you can build large-scale services for areas such as compute, big data, and
container workloads.
SCENARIO | MANUAL GROUP OF VMS | VIRTUAL MACHINE SCALE SET
Add additional VM instances | Manual process to create, configure, and ensure compliance | Automatically create from central configuration
Traffic balancing and distribution | Manual process to create and configure Azure load balancer or Application Gateway | Can automatically create and integrate with Azure load balancer or Application Gateway
High availability and redundancy | Manually create Availability Set or distribute and track VMs across Availability Zones | Automatic distribution of VM instances across Availability Zones or Availability Sets
Scaling of VMs | Manual monitoring and Azure Automation | Autoscale based on host metrics, in-guest metrics, Application Insights, or schedule
There is no additional cost to scale sets. You only pay for the underlying compute resources such as the VM
instances, load balancer, or Managed Disk storage. The management and automation features, such as autoscale
and redundancy, incur no additional charges over the use of VMs.
Next steps
To get started, create your first virtual machine scale set in the Azure portal.
Create a scale set in the Azure portal
Manage the availability of Linux virtual machines
11/2/2020 • 9 minutes to read • Edit Online
Learn ways to set up and manage multiple virtual machines to ensure high availability for your Linux application
in Azure.
IMPORTANT
A single instance virtual machine in an availability set by itself should use Premium SSD or Ultra Disk for all operating
system disks and data disks in order to qualify for the SLA for Virtual Machine connectivity of at least 99.9%.
A single instance virtual machine with a Standard SSD will have an SLA of at least 99.5%, while a single instance virtual
machine with a Standard HDD will have an SLA of at least 95%. See SLA for Virtual Machines
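As a rough sense of scale, these SLA levels translate into allowed downtime per 30-day month. The arithmetic below is an illustration only, not an SLA statement:

```python
def allowed_downtime_minutes(sla_percent, month_minutes=30 * 24 * 60):
    """Minutes of allowed downtime per 30-day month for a given SLA percentage."""
    return month_minutes * (1 - sla_percent / 100)

# 99.9%  -> about 43.2 minutes per month
# 99.95% -> about 21.6 minutes per month
# 95%    -> about 2160 minutes (36 hours) per month
```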
Each virtual machine in your availability set is assigned an update domain and a fault domain by the
underlying Azure platform. For a given availability set, five non-user-configurable update domains are assigned
by default (for Resource Manager deployments, this can be increased to provide up to 20 update domains) to
indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time.
When more than five virtual machines are configured within a single availability set, the sixth virtual machine is
placed into the same update domain as the first virtual machine, the seventh into the same update domain as the
second virtual machine, and so on. The order in which update domains are rebooted may not proceed sequentially
during planned maintenance, but only one update domain is rebooted at a time. A rebooted update domain is
given 30 minutes to recover before maintenance is initiated on a different update domain.
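The round-robin placement described above can be sketched as a simple modulo assignment. This is an illustration of the described behavior (assuming the default of five update domains), not an Azure API:

```python
def update_domain(vm_index, ud_count=5):
    """Update domain for the Nth VM (0-based) added to an availability set,
    following the round-robin placement described above."""
    return vm_index % ud_count

# The sixth VM (index 5) shares an update domain with the first (index 0),
# the seventh (index 6) with the second (index 1), and so on.
```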
Fault domains define the group of virtual machines that share a common power source and network switch. By
default, the virtual machines configured within your availability set are separated across up to three fault
domains for Resource Manager deployments (two fault domains for Classic). While placing your virtual
machines into an availability set does not protect your application from operating system or application-specific
failures, it does limit the impact of potential physical hardware failures, network outages, or power interruptions.
NOTE
Under certain circumstances, two VMs in the same availability set might share a fault domain. You can confirm a shared
fault domain by going to your availability set and checking the Fault Domain column. A shared fault domain might be
caused by completing the following sequence when you deployed the VMs:
1. Deploy the first VM.
2. Stop/deallocate the first VM.
3. Deploy the second VM.
Under these circumstances, the OS disk of the second VM might be created on the same fault domain as the first VM, so
the two VMs will be on the same fault domain. To avoid this issue, we recommend that you don't stop/deallocate VMs
between deployments.
If you plan to use VMs with unmanaged disks, follow these best practices for Storage accounts where the virtual
hard disks (VHDs) of VMs are stored as page blobs:
1. Keep all disks (OS and data) associated with a VM in the same storage account.
2. Review the limits on the number of unmanaged disks in an Azure Storage account before adding
more VHDs to a storage account.
3. Use a separate storage account for each VM in an Availability Set. Do not share Storage accounts
with multiple VMs in the same Availability Set. It is acceptable for VMs across different Availability Sets to
share storage accounts if the above best practices are followed.
Use scheduled events to proactively respond to VM-impacting events
When you subscribe to scheduled events, your VM is notified about upcoming maintenance events that can
impact it. When scheduled events are enabled, your virtual machine is given a minimum amount of time
before the maintenance activity is performed. For example, host OS updates that might impact your VM are
queued up as events that specify the impact, as well as a time at which the maintenance will be performed if no
action is taken. Scheduled events are also queued up when Azure detects imminent hardware failure that might
impact your VM, which allows you to decide when the healing should be performed. Customers can use the
event to perform tasks prior to the maintenance, such as saving state or failing over to the secondary.
After you complete your logic for gracefully handling the maintenance event, you can approve the outstanding
scheduled event to allow the platform to proceed with maintenance.
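The notify-then-approve flow above can be sketched as follows. The payload shape is modeled on the scheduled-events metadata schema, but treat the field names and sample values here as illustrative assumptions rather than a definitive contract:

```python
import json

# Condensed sample of a scheduled-events document (illustrative values).
SAMPLE = json.dumps({
    "DocumentIncarnation": 1,
    "Events": [{
        "EventId": "A123BC45-1234-5678-AB90-ABCDEF123456",
        "EventType": "Reboot",
        "ResourceType": "VirtualMachine",
        "Resources": ["myVM"],
        "EventStatus": "Scheduled",
        "NotBefore": "Mon, 19 Sep 2020 18:29:47 GMT",
    }],
})

def pending_events(payload):
    """Return (EventId, EventType) pairs for the events in a payload."""
    return [(e["EventId"], e["EventType"]) for e in json.loads(payload)["Events"]]

def approval_body(event_id):
    """Body posted back to approve an outstanding event so that
    maintenance can proceed."""
    return json.dumps({"StartRequests": [{"EventId": event_id}]})
```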
Next steps
To learn more about load balancing your virtual machines, see Load Balancing virtual machines.
Create a proximity placement group using the portal
4/27/2020 • 2 minutes to read • Edit Online
To get VMs as close as possible, achieving the lowest possible latency, you should deploy them within a proximity
placement group.
A proximity placement group is a logical grouping used to make sure that Azure compute resources are physically
located close to each other. Proximity placement groups are useful for workloads where low latency is a
requirement.
NOTE
Proximity placement groups cannot be used with dedicated hosts.
If you want to use availability zones together with placement groups, you need to make sure that the VMs in the placement
group are also all in the same availability zone.
3. When you are done making all of the other required selections, select Review + create .
4. After it passes validation, select Create to deploy the VM in the placement group.
Next steps
You can also use the Azure PowerShell to create proximity placement groups.
Deploy VMs to proximity placement groups using
PowerShell
11/2/2020 • 3 minutes to read • Edit Online
To get VMs as close as possible, achieving the lowest possible latency, you should deploy them within a proximity
placement group.
A proximity placement group is a logical grouping used to make sure that Azure compute resources are physically
located close to each other. Proximity placement groups are useful for workloads where low latency is a
requirement.
$resourceGroup = "myPPGResourceGroup"
$location = "East US"
$ppgName = "myPPG"
New-AzResourceGroup -Name $resourceGroup -Location $location
$ppg = New-AzProximityPlacementGroup `
-Location $location `
-Name $ppgName `
-ResourceGroupName $resourceGroup `
-ProximityPlacementGroupType Standard
Get-AzProximityPlacementGroup
Create a VM
Create a VM in the proximity placement group using -ProximityPlacementGroup $ppg.Id to refer to the proximity
placement group ID when you use New-AzVM to create the VM.
$vmName = "myVM"
New-AzVm `
-ResourceGroupName $resourceGroup `
-Name $vmName `
-Location $location `
-OpenPorts 3389 `
-ProximityPlacementGroup $ppg.Id
Availability Sets
You can also create an availability set in your proximity placement group. Use the same -ProximityPlacementGroup
parameter with the New-AzAvailabilitySet cmdlet to create an availability set and all of the VMs created in the
availability set will also be created in the same proximity placement group.
To add an existing availability set to a proximity placement group, or remove it from one, you first need to stop all of
the VMs in the availability set.
Move an existing availability set into a proximity placement group
$resourceGroup = "myResourceGroup"
$avSetName = "myAvailabilitySet"
$avSet = Get-AzAvailabilitySet -ResourceGroupName $resourceGroup -Name $avSetName
$vmIds = $avSet.VirtualMachinesReferences
foreach ($vmId in $vmIDs){
$string = $vmID.Id.Split("/")
$vmName = $string[8]
Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName -Force
}
Update-AzAvailabilitySet -AvailabilitySet $avSet -ProximityPlacementGroupId $ppg.Id
foreach ($vmId in $vmIDs){
$string = $vmID.Id.Split("/")
$vmName = $string[8]
Start-AzVM -ResourceGroupName $resourceGroup -Name $vmName
}
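The Split("/") indexing in the script above extracts the VM name from its full Azure resource ID. The same parsing, sketched in Python with an illustrative resource ID:

```python
def vm_name_from_id(resource_id):
    """Return segment 8 of a resource ID, which is the VM name for IDs of the form
    /subscriptions/.../providers/Microsoft.Compute/virtualMachines/<name>."""
    return resource_id.split("/")[8]

sample_id = ("/subscriptions/00000000-0000-0000-0000-000000000000"
             "/resourceGroups/myResourceGroup/providers/Microsoft.Compute"
             "/virtualMachines/myVM")
```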
Scale sets
You can also create a scale set in your proximity placement group. Use the same -ProximityPlacementGroup
parameter with New-AzVmss to create a scale set and all of the instances will be created in the same proximity
placement group.
To add an existing scale set to a proximity placement group, or remove it from one, you first need to stop the scale set.
Move an existing scale set into a proximity placement group
Next steps
You can also use the Azure CLI to create proximity placement groups.
Create a Windows virtual machine in an availability
zone with PowerShell
11/2/2020 • 4 minutes to read • Edit Online
This article details using Azure PowerShell to create an Azure virtual machine running Windows Server 2016 in an
Azure availability zone. An availability zone is a physically separate zone in an Azure region. Use availability zones
to protect your apps and data from an unlikely failure or loss of an entire datacenter.
To use an availability zone, create your virtual machine in a supported Azure region.
Sign in to Azure
Sign in to your Azure subscription with the Connect-AzAccount command and follow the on-screen directions.
Connect-AzAccount
The output is similar to the following condensed example, which shows the Availability Zones in which each VM
size is available:
# Create a virtual network card and associate with public IP address and NSG
$nic = New-AzNetworkInterface -Name myNic -ResourceGroupName myResourceGroup -Location eastus2 `
-SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id
The output shows that the managed disk is in the same availability zone as the VM:
ResourceGroupName : myResourceGroup
AccountType : PremiumLRS
OwnerId : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.
Compute/virtualMachines/myVM
ManagedBy : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx//resourceGroups/myResourceGroup/providers/Microsoft.
Compute/virtualMachines/myVM
Sku : Microsoft.Azure.Management.Compute.Models.DiskSku
Zones : {2}
TimeCreated : 9/7/2017 6:57:26 PM
OsType : Windows
CreationData : Microsoft.Azure.Management.Compute.Models.CreationData
DiskSizeGB : 127
EncryptionSettings :
ProvisioningState : Succeeded
Id : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.
Compute/disks/myVM_OsDisk_1_bd921920bb0a4650becfc2d830000000
Name : myVM_OsDisk_1_bd921920bb0a4650becfc2d830000000
Type : Microsoft.Compute/disks
Location : eastus2
Tags : {}
Next steps
In this article, you learned how to create a VM in an availability zone. Learn more about availability for Azure VMs.
Create a Windows virtual machine in an availability
zone with the Azure portal
11/2/2020 • 2 minutes to read • Edit Online
This article steps through using the Azure portal to create a virtual machine in an Azure availability zone. An
availability zone is a physically separate zone in an Azure region. Use availability zones to protect your apps and
data from an unlikely failure or loss of an entire datacenter.
To use an availability zone, create your virtual machine in a supported Azure region.
Sign in to Azure
Sign in to the Azure portal at https://portal.azure.com.
4. Choose a size for the VM. Select a recommended size, or filter based on features. Confirm the size is
available in the zone you want to use.
5. Under Settings > High availability , select one of the numbered zones from the Availability zone
dropdown, keep the remaining defaults, and click OK .
6. On the summary page, click Create to start the virtual machine deployment.
7. The VM will be pinned to the Azure portal dashboard. Once the deployment has completed, the VM
summary automatically opens.
3. Click the name of the Public IP address resource. The Overview page includes details about the location and
availability zone of the resource.
Next steps
In this article, you learned how to create a VM in an availability zone. Learn more about availability for Azure VMs.
Custom data and Cloud-Init on Azure Virtual
Machines
11/2/2020 • 3 minutes to read • Edit Online
You may need to inject a script or other metadata into a Microsoft Azure virtual machine at provisioning time. In
other clouds, this concept is often referred to as user data. In Microsoft Azure, we have a similar feature called
custom data.
Custom data is made available to the VM only during first boot or initial setup, which we call 'provisioning'. Provisioning
is the process in which VM create parameters (for example, hostname, username, password, certificates, custom data,
and keys) are made available to the VM, and a provisioning agent, such as the Linux Agent or cloud-init,
processes them.
az vm create \
--resource-group myResourceGroup \
--name centos74 \
--image OpenLogic:CentOS-CI:7-CI:latest \
--custom-data cloud-init.txt \
--generate-ssh-keys
"name": "[parameters('virtualMachineName')]",
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2019-07-01",
"location": "[parameters('location')]",
"dependsOn": [
..],
"variables": {
"customDataBase64": "[base64(parameters('stringData'))]"
},
"properties": {
..
"osProfile": {
"computerName": "[parameters('virtualMachineName')]",
"adminUsername": "[parameters('adminUsername')]",
"adminPassword": "[parameters('adminPassword')]",
"customData": "[variables('customDataBase64')]"
},
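The base64() template function above is plain Base64 encoding of the custom data string. A minimal sketch of the same conversion, and its reverse as a provisioning agent would perform it:

```python
import base64

def to_custom_data(text):
    """Encode text the way the template's base64() function does."""
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def from_custom_data(encoded):
    """Decode customData back to text."""
    return base64.b64decode(encoded).decode("utf-8")
```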
FAQ
Can I update custom data after the VM has been created?
For single VMs, custom data in the VM model cannot be updated, but for VMSS, you can update VMSS custom data
via the REST API (not applicable for PowerShell or Azure CLI clients). When you update custom data in the VMSS model:
Existing instances in the VMSS will not get the updated custom data until they are reimaged.
Existing instances in the VMSS that are upgraded will not get the updated custom data.
New instances will receive the new custom data.
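Per the note above, updating scale-set custom data is a model update through the REST API. The sketch below only builds the request body; the property path follows the Microsoft.Compute scale-set schema, but treat it as an assumption rather than a definitive reference:

```python
import base64
import json

def vmss_custom_data_patch(new_custom_data):
    """Build a request body that sets a new base64-encoded customData
    value in a scale set's virtual machine profile."""
    encoded = base64.b64encode(new_custom_data.encode("utf-8")).decode("ascii")
    return json.dumps({
        "properties": {
            "virtualMachineProfile": {
                "osProfile": {"customData": encoded}
            }
        }
    })
```

Existing instances pick up the new value only when reimaged, as the bullets above describe.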
Can I place sensitive values in custom data?
We advise not to store sensitive data in custom data. For more information, see Azure Security and encryption best
practices.
Is custom data made available in IMDS?
No, this feature is not currently available.
Publish an ASP.NET Web App to an Azure VM from
Visual Studio
11/2/2020 • 2 minutes to read • Edit Online
This document describes how to publish an ASP.NET web application to an Azure virtual machine (VM) using the
Microsoft Azure Virtual Machines publishing feature in Visual Studio 2019.
Prerequisites
In order to use Visual Studio to publish an ASP.NET project to an Azure VM, the VM must be correctly set up.
The machine must be configured to run an ASP.NET web application and have WebDeploy installed. For
more information, see Create an ASP.NET VM with WebDeploy.
The VM must have a DNS name configured. For more information, see Create a fully qualified domain name
in the Azure portal for a Windows VM.
Publish your ASP.NET web app to the Azure VM using Visual Studio
The following section describes how to publish an existing ASP.NET web application to an Azure virtual machine.
1. Open your web app solution in Visual Studio 2019.
2. Right-click the project in Solution Explorer and choose Publish...
3. Use the arrow on the right of the page to scroll through the publishing options until you find Microsoft
Azure Virtual Machines.
4. Select the Microsoft Azure Virtual Machines icon and select Publish.
5. Choose the appropriate account (with Azure subscription connected to your virtual machine).
If you're signed in to Visual Studio, the account list is populated with all your authenticated accounts.
If you are not signed in, or if the account you need is not listed, choose "Add an account..." and follow the
prompts to log in.
NOTE
Populating this list can take some time.
7. Click OK to begin publishing.
8. When prompted for credentials, supply the username and password of a user account on the target VM that
is configured with publishing rights. These credentials are typically the admin username and password used
when creating the VM.
10. Watch the Output window to check the progress of the publish operation.
11. If publishing is successful, a browser launches to open the URL of the newly published site.
Success!
You have now successfully published your web app to an Azure virtual machine.
Automated updates
SQL Server on Azure Virtual Machines can use Automated Patching to schedule a maintenance window for
installing important Windows and SQL Server updates automatically.
Automated backups
SQL Server on Azure Virtual Machines can take advantage of Automated Backup, which regularly creates backups
of your database to blob storage. You can also manually use this technique. For more information, see Use Azure
Storage for SQL Server Backup and Restore.
Azure also offers an enterprise-class backup solution for SQL Server running in Azure VMs. A fully-managed
backup solution, it supports Always On availability groups, long-term retention, point-in-time recovery, and central
management and monitoring. For more information, see Azure Backup for SQL Server in Azure VMs.
High availability
If you require high availability, consider configuring SQL Server Availability Groups. This involves multiple
instances of SQL Server on Azure Virtual Machines in a virtual network. You can configure your high-availability
solution manually, or you can use templates in the Azure portal for automatic configuration. For an overview of all
high-availability options, see High Availability and Disaster Recovery for SQL Server in Azure Virtual Machines.
Performance
Azure virtual machines offer different machine sizes to meet various workload demands. SQL Server VMs also
provide automated storage configuration, which is optimized for your performance requirements. For more
information about configuring storage for SQL Server VMs, see Storage configuration for SQL Server VMs. To fine-
tune performance, see the Performance best practices for SQL Server on Azure Virtual Machines.
Get started with SQL Server VMs
To get started, choose a SQL Server virtual machine image with your required version, edition, and operating
system. The following sections provide direct links to the Azure portal for the SQL Server virtual machine gallery
images.
TIP
For more information about how to understand pricing for SQL Server images, see Pricing guidance for SQL Server on Azure
Virtual Machines.
Pay as you go
The following table provides a matrix of pay-as-you-go SQL Server images.
VERSION | OPERATING SYSTEM | EDITION
SQL Server 2019 | Windows Server 2019 | Enterprise, Standard, Web, Developer
SQL Server 2017 | Windows Server 2016 | Enterprise, Standard, Web, Express, Developer
SQL Server 2016 SP2 | Windows Server 2016 | Enterprise, Standard, Web, Express, Developer
SQL Server 2014 SP2 | Windows Server 2012 R2 | Enterprise, Standard, Web, Express
SQL Server 2012 SP4 | Windows Server 2012 R2 | Enterprise, Standard, Web, Express
SQL Server 2008 R2 SP3 | Windows Server 2008 R2 | Enterprise, Standard, Web, Express
To see the available SQL Server on Linux virtual machine images, see Overview of SQL Server on Azure Virtual
Machines (Linux).
NOTE
It is now possible to change the licensing model of a pay-per-usage SQL Server VM to use your own license. For more
information, see How to change the licensing model for a SQL Server VM.
VERSION | OPERATING SYSTEM | EDITION
SQL Server 2019 | Windows Server 2019 | Enterprise BYOL, Standard BYOL
SQL Server 2017 | Windows Server 2016 | Enterprise BYOL, Standard BYOL
SQL Server 2016 SP2 | Windows Server 2016 | Enterprise BYOL, Standard BYOL
SQL Server 2014 SP2 | Windows Server 2012 R2 | Enterprise BYOL, Standard BYOL
SQL Server 2012 SP4 | Windows Server 2012 R2 | Enterprise BYOL, Standard BYOL
It is possible to deploy an older image of SQL Server that is not available in the Azure portal using PowerShell. To
view all available images using PowerShell, use the following command:
Get-AzVMImageOffer -Location $Location -Publisher 'MicrosoftSQLServer'
For more information about deploying SQL Server VMs using PowerShell, view How to provision SQL Server
virtual machines with Azure PowerShell.
Connect to the VM
After creating your SQL Server VM, connect to it from applications or tools, such as SQL Server Management
Studio (SSMS). For instructions, see Connect to a SQL Server virtual machine on Azure.
Migrate your data
If you have an existing database, you'll want to move that to the newly provisioned SQL Server VM. For a list of
migration options and guidance, see Migrating a Database to SQL Server on an Azure VM.
Create and manage Azure SQL resources with the Azure portal
The Azure portal provides a single page where you can manage all of your Azure SQL resources including your
SQL virtual machines.
To access the Azure SQL resources page, select Azure SQL in the Azure portal menu, or search for and select
Azure SQL from any page.
NOTE
Azure SQL provides a quick and easy way to access all of your Azure SQL databases, elastic pools, logical servers, managed
instances, and virtual machines. Azure SQL is not a service or resource.
To manage existing resources, select the desired item in the list. To create new Azure SQL resources, select + Add .
After selecting + Add , view additional information about the different options by selecting Show details on any
tile.
Next steps
Get started with SQL Server on Azure Virtual Machines:
Create a SQL Server VM in the Azure portal
Get answers to commonly asked questions about SQL Server VMs:
SQL Server on Azure Virtual Machines FAQ
View Reference Architectures for running N-tier applications on SQL Server in IaaS
Windows N-tier application on Azure with SQL Server
Run an N-tier application in multiple Azure regions for high availability
Install and configure MongoDB on a Windows VM in
Azure
11/2/2020 • 5 minutes to read • Edit Online
MongoDB is a popular open-source, high-performance NoSQL database. This article guides you through installing
and configuring MongoDB on a Windows Server 2016 virtual machine (VM) in Azure. You can also install
MongoDB on a Linux VM in Azure.
Prerequisites
Before you install and configure MongoDB, you need to create a VM and, ideally, add a data disk to it. See the
following articles to create a VM and add a data disk:
Create a Windows Server VM using the Azure portal or Azure PowerShell.
Attach a data disk to a Windows Server VM using the Azure portal or Azure PowerShell.
To begin installing and configuring MongoDB, log on to your Windows Server VM by using Remote Desktop.
Install MongoDB
IMPORTANT
MongoDB security features, such as authentication and IP address binding, are not enabled by default. Security features
should be enabled before deploying MongoDB to a production environment. For more information, see MongoDB Security
and Authentication.
1. After you've connected to your VM using Remote Desktop, open Internet Explorer from the taskbar.
2. Select Use recommended security, privacy, and compatibility settings when Internet Explorer first
opens, and click OK .
3. Internet Explorer enhanced security configuration is enabled by default. Add the MongoDB website to the list
of allowed sites:
Select the Tools icon in the upper-right corner.
In Internet Options , select the Security tab, and then select the Trusted Sites icon.
Click the Sites button. Add https://*.mongodb.com to the list of trusted sites, and then close the
dialog box.
4. Browse to the MongoDB - Downloads page (https://www.mongodb.com/downloads).
5. If needed, select the Community Server edition and then select the latest current stable release
for Windows Server 2008 R2 64-bit and later. To download the installer, click DOWNLOAD (msi).
Add the path to your MongoDB bin folder. MongoDB is typically installed in C:\Program
Files\MongoDB. Verify the installation path on your VM. The following example adds the default
MongoDB install location to the PATH variable:
;C:\Program Files\MongoDB\Server\3.6\bin
NOTE
Be sure to add the leading semicolon ( ; ) to indicate that you are adding a location to your PATH variable.
2. Create MongoDB data and log directories on your data disk. From the Start menu, select Command
Prompt. The following examples create the directories on drive F:
mkdir F:\MongoData
mkdir F:\MongoLogs
3. Start a MongoDB instance with the following command, adjusting the path to your data and log directories
accordingly:
mongod --dbpath F:\MongoData\ --logpath F:\MongoLogs\mongolog.log
It may take several minutes for MongoDB to allocate the journal files and start listening for connections. All
log messages are directed to the F:\MongoLogs\mongolog.log file as the mongod.exe server starts and allocates
journal files.
NOTE
The command prompt stays focused on this task while your MongoDB instance is running. Leave the command
prompt window open to continue running MongoDB. Or, install MongoDB as service, as detailed in the next step.
4. For a more robust MongoDB experience, install mongod.exe as a service. Creating a service means you
don't need to leave a command prompt running each time you want to use MongoDB. Create the service as
follows, adjusting the path to your data and log directories accordingly:
mongod --dbpath F:\MongoData\ --logpath F:\MongoLogs\mongolog.log --logappend --install --serviceName "MongoDB" --serviceDescription "Mongo DB"
The preceding command creates a service named MongoDB, with a description of "Mongo DB". The
following parameters are also specified:
The --dbpath option specifies the location of the data directory.
The --logpath option must be used to specify a log file, because the running service does not have a
command window to display output.
The --logappend option specifies that a restart of the service causes output to append to the existing log
file.
To start the MongoDB service, run the following command:
net start MongoDB
For more information about creating the MongoDB service, see Configure a Windows Service for MongoDB.
mongo
You can list the databases with the show dbs command. Insert some data as follows:
db.foo.insert( { a : 1 } )
Search for data as follows:
db.foo.find()
exit
New-NetFirewallRule `
-DisplayName "Allow MongoDB" `
-Direction Inbound `
-Protocol TCP `
-LocalPort 27017 `
-Action Allow
You can also create the rule by using the Windows Firewall with Advanced Security graphical management
tool. Create a new inbound rule to allow TCP port 27017.
If needed, create a Network Security Group rule to allow access to MongoDB from outside of the existing Azure
virtual network subnet. You can create the Network Security Group rules by using the Azure portal or Azure
PowerShell. As with the Windows Firewall rules, allow TCP port 27017 to the virtual network interface of your
MongoDB VM.
NOTE
TCP port 27017 is the default port used by MongoDB. You can change this port by using the --port parameter when
starting mongod.exe manually or from a service. If you change the port, make sure to update the Windows Firewall and
Network Security Group rules in the preceding steps.
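After opening the Windows Firewall and Network Security Group rules, a quick TCP reachability check can confirm the port is accessible. A minimal sketch; the host name in the example is a placeholder:

```python
import socket

def port_open(host, port=27017, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: port_open("my-mongodb-vm.eastus.cloudapp.azure.com")
```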
Next steps
In this tutorial, you learned how to install and configure MongoDB on your Windows VM. You can now access
MongoDB on your Windows VM, by following the advanced topics in the MongoDB documentation.
Use Azure to host and run SAP workload scenarios
11/6/2020 • 20 minutes to read • Edit Online
When you use Microsoft Azure, you can reliably run your mission-critical SAP workloads and scenarios on a
scalable, compliant, and enterprise-proven platform. You get the scalability, flexibility, and cost savings of Azure.
With the expanded partnership between Microsoft and SAP, you can run SAP applications across development and
test and production scenarios in Azure and be fully supported. From SAP NetWeaver to SAP S/4HANA, SAP BI on
Linux to Windows, and SAP HANA to SQL, we've got you covered.
Besides hosting SAP NetWeaver scenarios with the different DBMS on Azure, you can host other SAP workload
scenarios, like SAP BI on Azure.
Azure offers something unique for SAP HANA that sets it apart. To enable hosting of SAP scenarios that involve SAP
HANA and demand more memory and CPU resources, Azure offers the use of customer-dedicated bare-metal
hardware. Use this solution to run SAP HANA deployments that require up to 24 TB (120 TB scale-out) of memory
for S/4HANA or other SAP HANA workloads.
Hosting SAP workload scenarios in Azure also can create requirements of identity integration and single sign-on.
This situation can occur when you use Azure Active Directory (Azure AD) to connect different SAP components and
SAP software-as-a-service (SaaS) or platform-as-a-service (PaaS) offers. A list of such integration and single sign-
on scenarios with Azure AD and SAP entities is described and documented in the section "Azure AD SAP identity
integration and single sign-on."
Change Log
11/05/2020: Changing link to new SAP note about HANA supported file system types in SAP HANA Azure
virtual machine storage configurations
10/26/2020: Changing some tables for Azure premium storage configuration to clarify provisioned versus burst
throughput in SAP HANA Azure virtual machine storage configurations
10/22/2020: Change in HA for SAP NW on Azure VMs on SLES for SAP applications, HA for SAP NW on Azure
VMs on SLES with ANF, HA for SAP NW on Azure VMs on RHEL for SAP applications and HA for SAP NW on
Azure VMs on RHEL with ANF to adjust the recommendation for net.ipv4.tcp_keepalive_time
10/16/2020: Change in HA of IBM Db2 LUW on Azure VMs on SLES with Pacemaker, HA for SAP NW on Azure
VMs on RHEL for SAP applications, HA of IBM Db2 LUW on Azure VMs on RHEL, HA for SAP NW on Azure VMs
on RHEL multi-SID guide, HA for SAP NW on Azure VMs on RHEL with ANF, HA for SAP NW on Azure VMs on
SLES for SAP applications, HA for SAP NW on Azure VMs on SLES multi-SID guide, HA for SAP NW on Azure
VMs on SLES with ANF for SAP applications, HA for NFS on Azure VMs on SLES, HA of SAP HANA on Azure VMs
on SLES, HA for SAP HANA scale-up with ANF on RHEL, HA of SAP HANA on Azure VMs on RHEL, SAP HANA
scale-out HSR with Pacemaker on Azure VMs on RHEL, Prepare Azure infrastructure for SAP ASCS/SCS with
WSFC and shared disk, multi-SID HA guide for SAP ASCS/SCS with WSFC and Azure shared disk and multi-SID
HA guide for SAP ASCS/SCS with WSFC and shared disk to add a statement that floating IP is not supported in
load-balancing scenarios on secondary IPs
10/16/2020: Adding documentation to control storage snapshots of HANA Large Instances in Backup and
restore of SAP HANA on HANA Large Instances
10/15/2020: Release of SAP BusinessObjects BI Platform on Azure documentation, SAP BusinessObjects BI
platform planning and implementation guide on Azure and SAP BusinessObjects BI platform deployment guide
for Linux on Azure
10/05/2020: Release of SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL configuration guide
09/30/2020: Change in High availability of SAP HANA on Azure VMs on RHEL, HA for SAP HANA scale-up with
ANF on RHEL and Setting up Pacemaker on RHEL in Azure to adapt the instructions for RHEL 8.1
09/29/2020: Making restrictions and recommendations around usage of PPG more obvious in the article Azure
proximity placement groups for optimal network latency with SAP applications
09/28/2020: Adding a new storage operation guide for SAP HANA using Azure NetApp Files with the document
NFS v4.1 volumes on Azure NetApp Files for SAP HANA
09/23/2020: Add new certified SKUs for HLI in Available SKUs for HLI
09/20/2020: Changes in documents Considerations for Azure Virtual Machines DBMS deployment for SAP
workload, SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver, Azure Virtual Machines
Oracle DBMS deployment for SAP workload, IBM Db2 Azure Virtual Machines DBMS deployment for SAP
workload to adapt to new configuration suggestion that recommend separation of DBMS binaries and SAP
binaries into different Azure disks. Also adding Ultra disk recommendations to the different guides.
09/08/2020: Change in High availability of SAP HANA on Azure VMs on SLES to clarify stonith definitions
09/03/2020: Change in SAP HANA Azure virtual machine storage configurations to adapt to minimal 2 IOPS per
1 GB capacity with Ultra disk
09/02/2020: Change in Available SKUs for HLI to get more transparent in what SKUs are HANA certified
August 25, 2020: Change in HA for SAP NW on Azure VMs on SLES with ANF to fix typo
August 25, 2020: Change in HA guide for SAP ASCS/SCS with WSFC and shared disk, Prepare Azure
infrastructure for SAP ASCS/SCS with WSFC and shared disk and Install SAP NW HA with WSFC and shared disk
to introduce the option of using Azure shared disk and document SAP ERS2 architecture
August 25, 2020: Release of multi-SID HA guide for SAP ASCS/SCS with WSFC and Azure shared disk
August 25, 2020: Change in HA guide for SAP ASCS/SCS with WSFC and Azure NetApp Files(SMB), Prepare
Azure infrastructure for SAP ASCS/SCS with WSFC and file share, multi-SID HA guide for SAP ASCS/SCS with
WSFC and shared disk and multi-SID HA guide for SAP ASCS/SCS with WSFC and SOFS file share as a result of
the content updates and restructuring in the HA guides for SAP ASCS/SCS with WFC and shared disk
August 21, 2020: Adding new OS release into Compatible Operating Systems for HANA Large Instances as
available operating system for HLI units of type I and II
August 18, 2020: Release of HA for SAP HANA scale-up with ANF on RHEL
August 17, 2020: Add information about using Azure Site Recovery for moving SAP NetWeaver systems from
on-premises to Azure in article Azure Virtual Machines planning and implementation for SAP NetWeaver
08/14/2020: Adding disk configuration advice for Db2 in article IBM Db2 Azure Virtual Machines DBMS
deployment for SAP workload
August 11, 2020: Adding RHEL 7.6 into Compatible Operating Systems for HANA Large Instances as available
operating system for HLI units of type I
August 10, 2020: Introducing cost conscious SAP HANA storage configuration in SAP HANA Azure virtual
machine storage configurations and making some updates to SAP workloads on Azure: planning and
deployment checklist
August 04, 2020: Change in Setting up Pacemaker on SLES in Azure and Setting up Pacemaker on RHEL in Azure
to emphasize the importance of reliable name resolution for Pacemaker clusters
August 04, 2020: Change in SAP NW HA on WFCS with file share, SAP NW HA on WFCS with shared disk, HA
for SAP NW on Azure VMs, HA for SAP NW on Azure VMs on SLES, HA for SAP NW on Azure VMs on SLES with
ANF, HA for SAP NW on Azure VMs on SLES multi-SID guide, High availability for SAP NetWeaver on Azure
VMs on RHEL, HA for SAP NW on Azure VMs on RHEL with ANF and HA for SAP NW on Azure VMs on RHEL
multi-SID guide to clarify the use of parameter enque/encni/set_so_keepalive
July 23, 2020: Added the Save on SAP HANA Large Instances with an Azure reservation article explaining what
you need to know before you buy an SAP HANA Large Instances reservation and how to make the purchase
July 16, 2020: Describe how to use Azure PowerShell to install new VM Extension for SAP in the Deployment
Guide
July 04, 2020: Release of Azure Monitor for SAP solutions (preview)
July 01, 2020: Suggesting less expensive storage configuration based on Azure premium storage burst
functionality in document SAP HANA Azure virtual machine storage configurations
June 24, 2020: Change in Setting up Pacemaker on SLES in Azure to release new improved Azure Fence Agent
and more resilient STONITH configuration for devices, based on Azure Fence Agent
June 24, 2020: Change in Setting up Pacemaker on RHEL in Azure to release more resilient STONITH
configuration
June 23, 2020: Changes to Azure Virtual Machines planning and implementation for SAP NetWeaver guide and
introduction of Azure Storage types for SAP workload guide
06/22/2020: Add installation steps for new VM Extension for SAP to the Deployment Guide
June 16, 2020: Change in Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios to
add a link to SUSE Public Cloud Infrastructure 101 documentation
June 10, 2020: Adding new HLI SKUs into Available SKUs for HLI and SAP HANA (Large Instances) storage
architecture
May 21, 2020: Change in Setting up Pacemaker on SLES in Azure and Setting up Pacemaker on RHEL in Azure to
add a link to Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios
May 19, 2020: Add important message not to use root volume group when using LVM for HANA related
volumes in SAP HANA Azure virtual machine storage configurations
May 19, 2020: Add new supported OS for HANA Large Instance Type II in Compatible Operating Systems for
HANA Large Instances
May 12, 2020: Change in Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA scenarios to
update links and add information for 3rd party firewall configuration
May 11, 2020: Change in High availability of SAP HANA on Azure VMs on SLES to set resource stickiness to 0
for the netcat resource, as that leads to more streamlined failover
May 05, 2020: Changes in Azure Virtual Machines planning and implementation for SAP NetWeaver to express
that Gen2 deployments are available for Mv1 VM family
April 24, 2020: Changes in SAP HANA scale-out with standby node on Azure VMs with ANF on SLES, in SAP
HANA scale-out with standby node on Azure VMs with ANF on RHEL, High availability for SAP NetWeaver on
Azure VMs on SLES with ANF and High availability for SAP NetWeaver on Azure VMs on RHEL with ANF to add
clarification that the IP addresses for ANF volumes are automatically assigned
April 22, 2020: Change in High availability of SAP HANA on Azure VMs on SLES to remove meta attribute
is-managed from the instructions, as it conflicts with placing the cluster in or out of maintenance mode
April 21, 2020: Added SQL Azure DB as supported DBMS for SAP (Hybris) Commerce Platform 1811 and later in
articles What SAP software is supported for Azure deployments and SAP certifications and configurations
running on Microsoft Azure
April 16, 2020: Added SAP HANA as supported DBMS for SAP (Hybris) Commerce Platform in articles What SAP
software is supported for Azure deployments and SAP certifications and configurations running on Microsoft
Azure
April 13, 2020: Correct to exact SAP ASE release numbers in SAP ASE Azure Virtual Machines DBMS deployment
for SAP workload
April 07, 2020: Change in Setting up Pacemaker on SLES in Azure to clarify cloud-netconfig-azure instructions
April 06, 2020: Changes in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on
SLES and in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on RHEL to remove
references to NetApp TR-4435 (replaced by TR-4746)
March 31, 2020: Change in High availability of SAP HANA on Azure VMs on SLES and High availability of SAP
HANA on Azure VMs on RHEL to add instructions how to specify stripe size when creating striped volumes
March 27, 2020: Change in High availability for SAP NW on Azure VMs on SLES with ANF for SAP applications
to align the file system mount options to NetApp TR-4746 (remove the sync mount option)
March 26, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide to add
reference to NetApp TR-4746
March 26, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SLES for SAP applications,
High availability for SAP NetWeaver on Azure VMs on SLES with Azure NetApp Files for SAP applications, High
availability for NFS on Azure VMs on SLES, High availability for SAP NetWeaver on Azure VMs on RHEL multi-
SID guide, High availability for SAP NetWeaver on Azure VMs on RHEL for SAP applications and High availability
for SAP NetWeaver on Azure VMs on RHEL with Azure NetApp Files for SAP applications to update diagrams
and clarify instructions for Azure Load Balancer backend pool creation
March 19, 2020: Major revision of document Quickstart: Manual installation of single-instance SAP HANA on
Azure Virtual Machines to Installation of SAP HANA on Azure Virtual Machines
March 17, 2020: Change in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to remove SBD
configuration setting that is no longer necessary
March 16 2020: Clarification of column certification scenario in SAP HANA IaaS certified platform in What SAP
software is supported for Azure deployments
03/11/2020: Change in SAP workload on Azure virtual machine supported scenarios to clarify multiple
databases per DBMS instance support
March 11, 2020: Change in Azure Virtual Machines planning and implementation for SAP NetWeaver explaining
Generation 1 and Generation 2 VMs
March 10, 2020: Change in SAP HANA Azure virtual machine storage configurations to clarify real existing
throughput limits of ANF
March 09, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server
for SAP applications, High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with
Azure NetApp Files for SAP applications, High availability for NFS on Azure VMs on SUSE Linux Enterprise
Server, Setting up Pacemaker on SUSE Linux Enterprise Server in Azure, High availability of IBM Db2 LUW on
Azure VMs on SUSE Linux Enterprise Server with Pacemaker, High availability of SAP HANA on Azure VMs on
SUSE Linux Enterprise Server and High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide
to update cluster resources with resource agent azure-lb
March 05, 2020: Structure changes and content changes for Azure Regions and Azure Virtual machines in Azure
Virtual Machines planning and implementation for SAP NetWeaver
03/03/2020: Change in High availability for SAP NW on Azure VMs on SLES with ANF for SAP applications to
change to more efficient ANF volume layout
March 01, 2020: Reworked Backup guide for SAP HANA on Azure Virtual Machines to include Azure Backup
service. Reduced and condensed content in SAP HANA Azure Backup on file level and deleted a third document
dealing with backup through disk snapshot. Content gets handled in Backup guide for SAP HANA on Azure
Virtual Machines
February 27, 2020: Change in High availability for SAP NW on Azure VMs on SLES for SAP applications, High
availability for SAP NW on Azure VMs on SLES with ANF for SAP applications and High availability for SAP
NetWeaver on Azure VMs on SLES multi-SID guide to adjust "on fail" cluster parameter
February 26, 2020: Change in SAP HANA Azure virtual machine storage configurations to clarify file system
choice for HANA on Azure
February 26, 2020: Change in High availability architecture and scenarios for SAP to include the link to the HA
for SAP NetWeaver on Azure VMs on RHEL multi-SID guide
February 26, 2020: Change in High availability for SAP NW on Azure VMs on SLES for SAP applications, High
availability for SAP NW on Azure VMs on SLES with ANF for SAP applications, Azure VMs high availability for
SAP NetWeaver on RHEL and Azure VMs high availability for SAP NetWeaver on RHEL with Azure NetApp Files
to remove the statement that multi-SID ASCS/ERS cluster is not supported
February 26, 2020: Release of High availability for SAP NetWeaver on Azure VMs on RHEL multi-SID guide to
add a link to the SUSE multi-SID cluster guide
02/25/2020: Change in High availability architecture and scenarios for SAP to add links to newer HA articles
February 25, 2020: Change in High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server
with Pacemaker to point to document that describes access to public endpoint with Standard Azure Load
balancer
February 21, 2020: Complete revision of the article SAP ASE Azure Virtual Machines DBMS deployment for SAP
workload
February 21, 2020: Change in SAP HANA Azure virtual machine storage configuration to represent new
recommendation in stripe size for /hana/data and adding setting of I/O scheduler
February 21, 2020: Changes in HANA Large Instance documents to represent newly certified SKUs of S224 and
S224m
February 21, 2020: Change in Azure VMs high availability for SAP NetWeaver on RHEL and Azure VMs high
availability for SAP NetWeaver on RHEL with Azure NetApp Files to adjust the cluster constraints for enqueue
server replication 2 architecture (ENSA2)
February 20, 2020: Change in High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide to
add a link to the SUSE multi-SID cluster guide
February 13, 2020: Changes to Azure Virtual Machines planning and implementation for SAP NetWeaver to
implement links to new documents
February 13, 2020: Added new document SAP workload on Azure virtual machine supported scenario
February 13, 2020: Added new document What SAP software is supported for Azure deployment
February 13, 2020: Change in High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux
Server to point to document that describes access to public endpoint with Standard Azure Load balancer
February 13, 2020: Add the new VM types to SAP certifications and configurations running on Microsoft Azure
February 13, 2020: Add new SAP support notes SAP workloads on Azure: planning and deployment checklist
February 13, 2020: Change in Azure VMs high availability for SAP NetWeaver on RHEL and Azure VMs high
availability for SAP NetWeaver on RHEL with Azure NetApp Files to align the cluster resources timeouts to the
Red Hat timeout recommendations
February 11, 2020: Release of SAP HANA on Azure Large Instance migration to Azure Virtual Machines
February 07, 2020: Change in Public endpoint connectivity for VMs using Azure Standard ILB in SAP HA
scenarios to update sample NSG screenshot
February 03, 2020: Change in High availability for SAP NW on Azure VMs on SLES for SAP applications and
High availability for SAP NW on Azure VMs on SLES with ANF for SAP applications to remove the warning
about using dash in the host names of cluster nodes on SLES
January 28, 2020: Change in High availability of SAP HANA on Azure VMs on RHEL to align the SAP HANA
cluster resources timeouts to the Red Hat timeout recommendations
January 17, 2020: Change in Azure proximity placement groups for optimal network latency with SAP
applications to change the section of moving existing VMs into a proximity placement group
January 17, 2020: Change in SAP workload configurations with Azure Availability Zones to point to procedure
that automates measurements of latency between Availability Zones
January 16, 2020: Change in How to install and configure SAP HANA (Large Instances) on Azure to adapt OS
releases to HANA IaaS hardware directory
January 16, 2020: Changes in High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide to
add instructions for SAP systems, using enqueue server 2 architecture (ENSA2)
January 10, 2020: Changes in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on
SLES and in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on RHEL to add
instructions on how to make nfs4_disable_idmapping changes permanent.
January 10, 2020: Changes in High availability for SAP NetWeaver on Azure VMs on SLES with Azure NetApp
Files for SAP applications and in Azure Virtual Machines high availability for SAP NetWeaver on RHEL with
Azure NetApp Files for SAP applications to add instructions how to mount Azure NetApp Files NFSv4 volumes.
December 23, 2019: Release of High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide
December 18, 2019: Release of SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files
on RHEL
November 21, 2019: Changes in SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files
on SUSE Linux Enterprise Server to simplify the configuration for NFS ID mapping and change the
recommended primary network interface to simplify routing.
November 15, 2019: Minor changes in High availability for SAP NetWeaver on SUSE Linux Enterprise Server
with Azure NetApp Files for SAP applications and High availability for SAP NetWeaver on Red Hat Enterprise
Linux with Azure NetApp Files for SAP applications to clarify capacity pool size restrictions and remove
statement that only NFSv3 version is supported.
November 12, 2019: Release of High availability for SAP NetWeaver on Windows with Azure NetApp Files
(SMB)
November 8, 2019: Changes in High availability of SAP HANA on Azure VMs on SUSE Linux Enterprise Server,
Set up SAP HANA System Replication on Azure virtual machines (VMs), Azure Virtual Machines high availability
for SAP NetWeaver on SUSE Linux Enterprise Server for SAP applications, Azure Virtual Machines high
availability for SAP NetWeaver on SUSE Linux Enterprise Server with Azure NetApp Files, Azure Virtual
Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux, Azure Virtual Machines high
availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files, High availability for NFS on
Azure VMs on SUSE Linux Enterprise Server, GlusterFS on Azure VMs on Red Hat Enterprise Linux for SAP
NetWeaver to recommend Azure standard load balancer
November 8, 2019: Changes in SAP workload planning and deployment checklist to clarify encryption
recommendation
November 4, 2019: Changes in Setting up Pacemaker on SUSE Linux Enterprise Server in Azure to create the
cluster directly with unicast configuration
Create MATLAB Distributed Computing Server
clusters on Azure VMs
11/2/2020 • 3 minutes to read • Edit Online
Use Microsoft Azure virtual machines to create one or more MATLAB Distributed Computing Server clusters to run
your compute-intensive parallel MATLAB workloads. Install your MATLAB Distributed Computing Server software
on a VM to use as a base image and use an Azure quickstart template or Azure PowerShell script (available on
GitHub) to deploy and manage the cluster. After deployment, connect to the cluster to run your workloads.
IMPORTANT
Since this article was written, formal support for using MATLAB applications in Azure has become available. We recommend that
you use these more recent capabilities instead of the template and scripts referenced in this article. Search the Azure
Marketplace for "matlab"; further information about running MATLAB applications on Azure is available from MathWorks.
Prerequisites
Client computer - You'll need a Windows-based client computer to communicate with Azure and the MATLAB
Distributed Computing Server cluster after deployment.
Azure PowerShell - See How to install and configure Azure PowerShell to install it on your client computer.
Azure subscription - If you don't have a subscription, you can create a free account in just a couple of minutes.
For larger clusters, consider a pay-as-you-go subscription or other purchase options.
vCPUs quota - You might need to increase the vCPU quota to deploy a large cluster or more than one MATLAB
Distributed Computing Server cluster. To increase a quota, open an online customer support request at no
charge.
MATLAB, Parallel Computing Toolbox, and MATLAB Distributed Computing Server licenses - The
scripts assume that the MathWorks Hosted License Manager is used for all licenses.
MATLAB Distributed Computing Server software - Will be installed on a VM that will be used as the base
VM image for the cluster VMs.
NOTE
This process can take a couple of hours, but you only have to do it once for each version of MATLAB you use.
Cluster configurations
Currently, the cluster creation script and template enable you to create a single MATLAB Distributed Computing
Server topology. If you want, create one or more additional clusters, with each cluster having a different number of
worker VMs, using different VM sizes, and so on.
MATLAB client and cluster in Azure
The MATLAB client node, MATLAB Job Scheduler node, and MATLAB Distributed Computing Server "worker" nodes
are all configured as Azure VMs in a virtual network, as shown in the following figure.
To use the cluster, connect by Remote Desktop to the client node. The client node runs the MATLAB client.
The client node has a file share that can be accessed by all workers.
MathWorks Hosted License Manager is used for the license checks for all MATLAB software.
By default, one MATLAB Distributed Computing Server worker per vCPU is created on the worker VMs, but you
can specify any number.
Next steps
For detailed instructions to deploy and manage MATLAB Distributed Computing Server clusters in Azure, see
the GitHub repository containing the templates and scripts.
Go to the MathWorks site for detailed documentation for MATLAB and MATLAB Distributed Computing Server.
Visual Studio images on Azure
11/2/2020 • 4 minutes to read • Edit Online
Using Visual Studio in a preconfigured Azure virtual machine (VM) is a quick, easy way to go from nothing to an
up-and-running development environment. System images with different Visual Studio configurations are available
in the Azure Marketplace.
New to Azure? Create a free Azure account.
NOTE
Not all subscriptions are eligible to deploy Windows 10 images. For more information, see Use Windows client in Azure for
dev/test scenarios.
Visual Studio 2019: Latest (Version 16.5) Enterprise, Community Version 16.5.4
Visual Studio 2017: Latest (Version 15.9) Enterprise, Community Version 15.9.22
NOTE
In accordance with Microsoft servicing policy, the originally released (RTW) version of Visual Studio 2015 has expired for
servicing. Visual Studio 2015 Update 3 is the only remaining version offered for the Visual Studio 2015 product line.
If the images don't include a Visual Studio feature that you require, provide feedback through the feedback tool in
the upper-right corner of the page.
IMPORTANT
Don’t forget to use Sysprep to prepare the VM. If you miss that step, Azure can't provision a VM from the image.
NOTE
You still incur some cost for storage of the images, but that incremental cost can be insignificant compared to the overhead
costs to rebuild the VM from scratch for each team member who needs one. For instance, it costs a few dollars to create and
store a 127-GB image for a month that's reusable by your entire team. However, these costs are insignificant compared to
hours each employee invests to build out and validate a properly configured dev box for their individual use.
Additionally, your development tasks or technologies might need more scale, like varieties of development
configurations and multiple machine configurations. You can use Azure DevTest Labs to create recipes that
automate construction of your "golden image." You can also use DevTest Labs to manage policies for your team’s
running VMs. Using Azure DevTest Labs for developers is the best source for more information on DevTest Labs.
Next steps
Now that you know about the preconfigured Visual Studio images, the next step is to create a new VM:
Create a VM through the Azure portal
Windows Virtual Machines overview
Attach a data disk to a Windows VM with PowerShell
11/2/2020 • 2 minutes to read • Edit Online
This article shows you how to attach both new and existing disks to a Windows virtual machine by using
PowerShell.
First, review these tips:
The size of the virtual machine controls how many data disks you can attach. For more information, see Sizes
for virtual machines.
To use premium SSDs, you'll need a premium storage-enabled VM type, like the DS-series or GS-series virtual
machine.
This article uses PowerShell within the Azure Cloud Shell, which is constantly updated to the latest version. To
open the Cloud Shell, select Try it from the top of any code block.
$rgName = 'myResourceGroup'
$vmName = 'myVM'
$location = 'East US'
$storageType = 'Premium_LRS'
$dataDiskName = $vmName + '_datadisk1'
$diskConfig = New-AzDiskConfig -SkuName $storageType -Location $location -CreateOption Empty -DiskSizeGB 128
$dataDisk1 = New-AzDisk -DiskName $dataDiskName -Disk $diskConfig -ResourceGroupName $rgName
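The commands above only create the managed disk; attaching it to the VM is a separate step. A minimal sketch, assuming the VM already exists and LUN 1 is free (cmdlets are from the Az PowerShell module):

```powershell
# Fetch the VM object, attach the new managed disk at LUN 1, then push the change to Azure.
$vm = Get-AzVM -ResourceGroupName $rgName -Name $vmName
$vm = Add-AzVMDataDisk -VM $vm -Name $dataDiskName -CreateOption Attach -ManagedDiskId $dataDisk1.Id -Lun 1
Update-AzVM -ResourceGroupName $rgName -VM $vm
```

After Update-AzVM completes, the new disk appears in the guest OS as a raw disk until you initialize and format it.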
$rgName = 'myResourceGroup'
$vmName = 'myVM'
$location = 'East US 2'
$storageType = 'Premium_LRS'
$dataDiskName = $vmName + '_datadisk1'
$diskConfig = New-AzDiskConfig -SkuName $storageType -Location $location -CreateOption Empty -DiskSizeGB 128 -Zone 1
$dataDisk1 = New-AzDisk -DiskName $dataDiskName -Disk $diskConfig -ResourceGroupName $rgName
$location = "location-name"
$scriptName = "script-name"
$fileName = "script-file-name"
Set-AzVMCustomScriptExtension -ResourceGroupName $rgName -Location $location -VMName $vmName -Name $scriptName -TypeHandlerVersion "1.4" -StorageAccountName "mystore1" -StorageAccountKey "primary-key" -FileName $fileName -ContainerName "scripts"
The script file can contain code to initialize the disks, for example:
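A minimal sketch of such an initialization script, run inside the Windows guest (the GPT partition style, NTFS file system, and volume label here are choices you can adjust):

```powershell
# Initialize every raw disk, create one partition per disk, and format it as NTFS.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "DataDisk" -Confirm:$false
```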
$rgName = "myResourceGroup"
$vmName = "myVM"
$location = "East US"
$dataDiskName = "myDisk"
$disk = Get-AzDisk -ResourceGroupName $rgName -DiskName $dataDiskName
Next steps
You can also deploy managed disks using templates. For more information, see Using Managed Disks in Azure
Resource Manager Templates or the quickstart template for deploying multiple data disks.
Attach a managed data disk to a Windows VM by
using the Azure portal
11/2/2020 • 2 minutes to read • Edit Online
This article shows you how to attach a new managed data disk to a Windows virtual machine (VM) by using the
Azure portal. The size of the VM determines how many data disks you can attach. For more information, see
Sizes for virtual machines.
Next steps
You can also attach a data disk by using PowerShell.
If your application needs to use the D: drive to store data, you can change the drive letter of the Windows
temporary disk.
Performance tiers for managed disks (preview)
11/6/2020 • 2 minutes to read • Edit Online
Azure Disk Storage currently offers built-in bursting capabilities to provide higher performance for handling short-
term unexpected traffic. Premium SSDs have the flexibility to increase disk performance without increasing the
actual disk size. This capability allows you to match your workload performance needs and reduce costs.
NOTE
This feature is currently in preview.
This feature is ideal for events that temporarily require a consistently higher level of performance, like holiday
shopping, performance testing, or running a training environment. To handle these events, you can use a higher
performance tier for as long as you need it. You can then return to the original tier when you no longer need the
additional performance.
How it works
When you first deploy or provision a disk, the baseline performance tier for that disk is set based on the
provisioned disk size. You can use a higher performance tier to meet higher demand. When you no longer need
that performance level, you can return to the initial baseline performance tier.
Your billing changes as your tier changes. For example, if you provision a P10 disk (128 GiB), your baseline
performance tier is set as P10 (500 IOPS and 100 MBps). You'll be billed at the P10 rate. You can upgrade the tier to
match the performance of P50 (7,500 IOPS and 250 MBps) without increasing the disk size. During the time of the
upgrade, you'll be billed at the P50 rate. When you no longer need the higher performance, you can return to the
P10 tier. The disk will once again be billed at the P10 rate.
For example, a disk of 4 GiB has a baseline tier of P1 and can be upgraded to P2, P3, P4, P6, P10, P15, P20, P30, P40,
or P50.
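To see which tier a disk is currently billed at, you can query the disk resource. This is a sketch; the `tier` property name reflects the preview API and may change:

```azurecli
az disk show -n $diskName -g $resourceGroupName --query "tier" -o tsv
```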
Restrictions
This feature is currently supported only for premium SSDs.
You must either deallocate your VM or detach your disk from a running VM before you can change the disk's
tier.
Use of the P60, P70, and P80 performance tiers is restricted to disks of 4,096 GiB or higher.
A disk's performance tier can be downgraded only once every 24 hours.
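Changing the tier of an existing disk can be sketched with the generic property-update syntax of `az disk update` (shown here with `--set tier=...`; the exact flag may differ as the preview evolves, and the disk must first be detached or its VM deallocated):

```azurecli
# Move the detached disk to a higher performance tier for the duration of the event.
az disk update -n $diskName -g $resourceGroupName --set tier=P50

# When the extra performance is no longer needed, return to the baseline tier.
az disk update -n $diskName -g $resourceGroupName --set tier=P10
```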
Create an empty data disk with a tier higher than the baseline tier
subscriptionId=<yourSubscriptionIDHere>
resourceGroupName=<yourResourceGroupNameHere>
diskName=<yourDiskNameHere>
diskSize=<yourDiskSizeHere>
performanceTier=<yourDesiredPerformanceTier>
region=westcentralus
az login
az disk create -n $diskName -g $resourceGroupName -l $region --sku Premium_LRS --size-gb $diskSize --tier $performanceTier
Create an OS disk with a tier higher than the baseline tier from an
Azure Marketplace image
resourceGroupName=<yourResourceGroupNameHere>
diskName=<yourDiskNameHere>
performanceTier=<yourDesiredPerformanceTier>
region=westcentralus
image=Canonical:UbuntuServer:18.04-LTS:18.04.202002180
az disk create -n $diskName -g $resourceGroupName -l $region --image-reference $image --sku Premium_LRS --tier $performanceTier
Next steps
If you need to resize a disk to take advantage of the higher performance tiers, see these articles:
Expand virtual hard disks on a Linux VM with the Azure CLI
Expand a managed disk attached to a Windows virtual machine
How to detach a data disk from a Windows virtual
machine
11/2/2020 • 2 minutes to read • Edit Online
When you no longer need a data disk that's attached to a virtual machine, you can easily detach it. This removes the
disk from the virtual machine, but doesn't remove it from storage.
WARNING
If you detach a disk, it is not automatically deleted. If you have subscribed to premium storage, you will continue to incur
storage charges for the disk. For more information, see Pricing and Billing when using Premium Storage.
If you want to use the existing data on the disk again, you can reattach it to the same virtual machine, or another
one.
$VirtualMachine = Get-AzVM `
-ResourceGroupName "myResourceGroup" `
-Name "myVM"
Remove-AzVMDataDisk `
-VM $VirtualMachine `
-Name "myDisk"
Update-AzVM `
-ResourceGroupName "myResourceGroup" `
-VM $VirtualMachine
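To confirm the detach succeeded, you can list the data disks still attached to the VM; this is a quick sketch using the same VM object:

```powershell
# The detached disk should no longer appear in this list.
(Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM").StorageProfile.DataDisks |
    Select-Object Name, Lun, DiskSizeGB
```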
Next steps
If you want to reuse the data disk, you can just attach it to another VM.
Using disks in Azure Resource Manager Templates
11/2/2020 • 5 minutes to read • Edit Online
This document walks through the differences between managed and unmanaged disks when using Azure Resource
Manager templates to provision virtual machines. The examples help you update existing templates from
unmanaged disks to managed disks. For reference, we use the 101-vm-simple-windows template as a guide. You
can view both the managed-disk version of the template and a prior version that uses unmanaged disks if you'd
like to compare them directly.
{
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2018-07-01",
"name": "[variables('storageAccountName')]",
"location": "[resourceGroup().location]",
"sku": {
"name": "Standard_LRS"
},
"kind": "Storage",
"properties": {}
}
Within the virtual machine object, add a dependency on the storage account to ensure that it's created before the
virtual machine. Within the storageProfile section, specify the full URI of the VHD location, which references the
storage account and is needed for the OS disk and any data disks.
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2018-10-01",
"name": "[variables('vmName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
],
"properties": {
"hardwareProfile": {...},
"osProfile": {...},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "[parameters('windowsOSVersion')]",
"version": "latest"
},
"osDisk": {
"name": "osdisk",
"vhd": {
"uri": "[concat(reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))).primaryEndpoints.blob, 'vhds/osdisk.vhd')]"
},
"caching": "ReadWrite",
"createOption": "FromImage"
},
"dataDisks": [
{
"name": "datadisk1",
"diskSizeGB": 1023,
"lun": 0,
"vhd": {
"uri": "[concat(reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))).primaryEndpoints.blob, 'vhds/datadisk1.vhd')]"
},
"createOption": "Empty"
}
]
},
"networkProfile": {...},
"diagnosticsProfile": {...}
}
}
NOTE
It is recommended to use an API version later than 2016-04-30-preview as there were breaking changes between
2016-04-30-preview and 2017-03-30 .
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2018-10-01",
"name": "[variables('vmName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
],
"properties": {
"hardwareProfile": {...},
"osProfile": {...},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "[parameters('windowsOSVersion')]",
"version": "latest"
},
"osDisk": {
"createOption": "FromImage"
},
"dataDisks": [
{
"diskSizeGB": 1023,
"lun": 0,
"createOption": "Empty"
}
]
},
"networkProfile": {...},
"diagnosticsProfile": {...}
}
}
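The standalone managed disk that the next template attaches is not shown in this excerpt. A minimal sketch of such a disk resource might look like the following; the apiVersion, Premium_LRS SKU, and 1023-GiB size are assumptions chosen to match the surrounding examples:

```json
{
  "type": "Microsoft.Compute/disks",
  "apiVersion": "2018-06-01",
  "name": "[concat(variables('vmName'),'-datadisk1')]",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "Premium_LRS"
  },
  "properties": {
    "creationData": {
      "createOption": "Empty"
    },
    "diskSizeGB": 1023
  }
}
```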
Within the VM object, reference the disk object to be attached. Specifying the resource ID of the managed disk created in the managedDisk property allows the attachment of the disk as the VM is created. The apiVersion for the VM resource is set to 2017-03-30 or later. A dependency on the disk resource is added to ensure it's successfully created before VM creation.
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2018-10-01",
"name": "[variables('vmName')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]",
"[resourceId('Microsoft.Compute/disks/', concat(variables('vmName'),'-datadisk1'))]"
],
"properties": {
"hardwareProfile": {...},
"osProfile": {...},
"storageProfile": {
"imageReference": {
"publisher": "MicrosoftWindowsServer",
"offer": "WindowsServer",
"sku": "[parameters('windowsOSVersion')]",
"version": "latest"
},
"osDisk": {
"createOption": "FromImage"
},
"dataDisks": [
{
"lun": 0,
"name": "[concat(variables('vmName'),'-datadisk1')]",
"createOption": "attach",
"managedDisk": {
"id": "[resourceId('Microsoft.Compute/disks/', concat(variables('vmName'),'-datadisk1'))]"
}
}
]
},
"networkProfile": {...},
"diagnosticsProfile": {...}
}
}
{
"type": "Microsoft.Compute/availabilitySets",
"apiVersion": "2018-10-01",
"location": "[resourceGroup().location]",
"name": "[variables('avSetName')]",
"properties": {
"platformUpdateDomainCount": 3,
"platformFaultDomainCount": 2
},
"sku": {
"name": "Aligned"
}
}
The following example shows the properties.storageProfile.osDisk section for a VM that uses Standard SSD Disks:
"osDisk": {
"osType": "Windows",
"name": "myOsDisk",
"caching": "ReadWrite",
"createOption": "FromImage",
"managedDisk": {
"storageAccountType": "StandardSSD_LRS"
}
}
For a complete template example of how to create a Standard SSD disk with a template, see Create a VM from a
Windows Image with Standard SSD Data Disks.
Additional scenarios and customizations
To find full information on the REST API specifications, please review the create a managed disk REST API
documentation. You will find additional scenarios, as well as default and acceptable values that can be submitted to
the API through template deployments.
Next steps
For full templates that use managed disks visit the following Azure Quickstart Repo links.
Windows VM with managed disk
Linux VM with managed disk
Visit the Azure Managed Disks Overview document to learn more about managed disks.
Review the template reference documentation for virtual machine resources by visiting the
Microsoft.Compute/virtualMachines template reference document.
Review the template reference documentation for disk resources by visiting the Microsoft.Compute/disks
template reference document.
For information on how to use managed disks in Azure virtual machine scale sets, visit the Use data disks with
scale sets document.
Enable shared disk
This article covers how to enable the shared disks feature for Azure managed disks. Azure shared disks is a new feature for Azure managed disks that enables you to attach a managed disk to multiple virtual machines (VMs) simultaneously. Attaching a managed disk to multiple VMs allows you to deploy new clustered applications to Azure or migrate existing ones.
If you are looking for conceptual information on managed disks that have shared disks enabled, refer to:
For Linux: Azure shared disks
For Windows: Azure shared disks
Limitations
Enabling shared disks is only available for a subset of disk types. Currently, only ultra disks and premium SSDs can enable shared disks. Each managed disk that has shared disks enabled is subject to the following limitations, organized by disk type:
Ultra disks
Ultra disks have their own separate list of limitations, unrelated to shared disks. For ultra disk limitations, refer to
Using Azure ultra disks.
Shared ultra disks have the following additional limitations:
Currently limited to Azure Resource Manager or SDK support.
Only basic disks can be used with some versions of Windows Server Failover Cluster, for details see Failover
clustering hardware requirements and storage options.
Shared ultra disks are available in all regions that support ultra disks by default, and do not require you to sign up
for access to use them.
Premium SSDs
Currently limited to Azure Resource Manager or SDK support.
Can only be enabled on data disks, not OS disks.
ReadOnly host caching is not available for premium SSDs with maxShares>1 .
Disk bursting is not available for premium SSDs with maxShares>1 .
When using Availability sets and virtual machine scale sets with Azure shared disks, storage fault domain
alignment with virtual machine fault domain is not enforced for the shared data disk.
When using proximity placement groups (PPG), all virtual machines sharing a disk must be part of the same
PPG.
Only basic disks can be used with some versions of Windows Server Failover Cluster, for details see Failover
clustering hardware requirements and storage options.
Azure Backup and Azure Site Recovery support is not yet available.
Regional availability
Shared premium SSDs are available in all regions where managed disks are available.
Disk sizes
For now, only ultra disks and premium SSDs can enable shared disks. Different disk sizes may have a different maxShares limit, which you cannot exceed when setting the maxShares value. For premium SSDs, the disk sizes that support sharing are P15 and greater.
For each disk, you can define a maxShares value that represents the maximum number of nodes that can simultaneously share the disk. For example, if you plan to set up a 2-node failover cluster, you would set maxShares=2 . The maximum value is an upper bound. Nodes can join or leave the cluster (mount or unmount the disk) as long as the number of nodes does not exceed the specified maxShares value.
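The mount-time rule can be sketched as a simple guard; the node count and maxShares=2 are taken from the 2-node example above, and the check itself is a hypothetical illustration rather than anything Azure runs for you:

```shell
#!/bin/sh
max_shares=2      # maxShares value set on the disk
attached_nodes=1  # nodes currently sharing the disk

# A new node may mount only while the current count is below maxShares,
# so the total after mounting never exceeds the limit.
if [ "$attached_nodes" -lt "$max_shares" ]; then
  echo "mount allowed"
else
  echo "mount denied: maxShares limit reached"
fi
```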
NOTE
The maxShares value can only be set or edited when the disk is detached from all nodes.
P15, P20: maxShares limit of 2
The IOPS and bandwidth limits for a disk are not affected by the maxShares value. For example, the max IOPS of a
P15 disk is 1100 whether maxShares = 1 or maxShares > 1.
Ultra disk ranges
The minimum maxShares value is 1, while the maximum maxShares value is 5. There are no size restrictions on
ultra disks, any size ultra disk can use any value for maxShares , up to and including the maximum value.
IMPORTANT
The value of maxShares can only be set or changed when a disk is unmounted from all VMs. See the Disk sizes for the
allowed values for maxShares .
Azure CLI
PowerShell
Resource Manager Template
az disk create -g myResourceGroup -n mySharedDisk --size-gb 1024 -l westcentralus --sku Premium_LRS --max-shares 2
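For the Resource Manager template path, the disk resource might be sketched as follows; the apiVersion is an assumption, and maxShares is the property that enables sharing:

```json
{
  "type": "Microsoft.Compute/disks",
  "apiVersion": "2019-07-01",
  "name": "mySharedDisk",
  "location": "westcentralus",
  "sku": {
    "name": "Premium_LRS"
  },
  "properties": {
    "creationData": {
      "createOption": "Empty"
    },
    "diskSizeGB": 1024,
    "maxShares": 2
  }
}
```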
Regional disk example
Zonal disk example
This example is almost the same as the previous, except it creates a disk in availability zone 1.
NOTE
If you are deploying an ultra disk, make sure it matches the necessary requirements. See Using Azure ultra disks for details.
$resourceGroup = "myResourceGroup"
$location = "WestCentralUS"
$vm = New-AzVm -ResourceGroupName $resourceGroup -Name "myVM" -Location $location -VirtualNetworkName "myVnet" -SubnetName "mySubnet" -SecurityGroupName "myNetworkSecurityGroup" -PublicIpAddressName "myPublicIpAddress"
$vm = Add-AzVMDataDisk -VM $vm -Name "mySharedDisk" -CreateOption Attach -ManagedDiskId $dataDisk.Id -Lun 0
Shared disks support the following SCSI persistent reservation commands and reservation types:
PR_REGISTER_KEY
PR_REGISTER_AND_IGNORE
PR_GET_CONFIGURATION
PR_RESERVE
PR_PREEMPT_RESERVATION
PR_CLEAR_RESERVATION
PR_RELEASE_RESERVATION
PR_NONE
PR_WRITE_EXCLUSIVE
PR_EXCLUSIVE_ACCESS
PR_WRITE_EXCLUSIVE_REGISTRANTS_ONLY
PR_EXCLUSIVE_ACCESS_REGISTRANTS_ONLY
PR_WRITE_EXCLUSIVE_ALL_REGISTRANTS
PR_EXCLUSIVE_ACCESS_ALL_REGISTRANTS
Next steps
If you prefer to use Azure Resource Manager templates to deploy your disk, the following sample templates are
available:
Premium SSD
Regional ultra disks
Zonal ultra disks
Upload a VHD to Azure or copy a managed disk to another region - Azure PowerShell
This article explains how to either upload a VHD from your local machine to an Azure managed disk or copy a managed disk to another region, using AzCopy. This process, direct upload, enables you to upload a VHD up to 32 TiB in size directly into a managed disk. Currently, direct upload is supported for standard HDD, standard SSD, and premium SSD managed disks. It isn't yet supported for ultra disks.
If you're providing a backup solution for IaaS VMs in Azure, we recommend using direct upload to restore customer backups to managed disks. When uploading a VHD from a source external to Azure, speeds depend on your local bandwidth. When uploading or copying from an Azure VM, your bandwidth is the same as that of standard HDDs.
Prerequisites
Download the latest version of AzCopy v10.
Install Azure PowerShell module.
If you intend to upload a VHD from on-premises: A fixed size VHD that has been prepared for Azure, stored
locally.
Or, a managed disk in Azure, if you intend to perform a copy action.
Getting started
If you'd prefer to upload disks through a GUI, you can do so using Azure Storage Explorer. For details refer to: Use
Azure Storage Explorer to manage Azure managed disks
To upload your VHD to Azure, you'll need to create an empty managed disk that is configured for this upload
process. Before you create one, there's some additional information you should know about these disks.
This kind of managed disk has two unique states:
ReadyToUpload, which means the disk is ready to receive an upload, but no shared access signature (SAS) has been generated.
ActiveUpload, which means that the disk is ready to receive an upload and the SAS has been generated.
NOTE
While in either of these states, the managed disk will be billed at standard HDD pricing, regardless of the actual type of disk.
For example, a P10 will be billed as an S10. This will be true until revoke-access is called on the managed disk, which is
required in order to attach the disk to a VM.
TIP
If you are creating an OS disk, add -HyperVGeneration '' to New-AzDiskConfig .
If you would like to upload either a premium SSD or a standard SSD, replace Standard_LRS with either
Premium_LRS or StandardSSD_LRS . Ultra disks are not yet supported.
Now that you've created an empty managed disk that is configured for the upload process, you can upload a VHD
to it. To upload a VHD to the disk, you'll need a writeable SAS, so that you can reference it as the destination for
your upload.
To generate a writable SAS of your empty managed disk, replace <yourdiskname> and <yourresourcegroupname> ,
then use the following commands:
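As a sketch of this step with the Azure CLI instead (printed as a dry run so it can execute without an Azure login; the 24-hour duration is an assumption):

```shell
#!/bin/sh
diskName="<yourdiskname>"
resourceGroupName="<yourresourcegroupname>"

# Generate a writable SAS on the empty managed disk, valid for 24 hours.
cmd="az disk grant-access -n $diskName -g $resourceGroupName --access-level Write --duration-in-seconds 86400"
echo "$cmd"
```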
Upload a VHD
Now that you have a SAS for your empty managed disk, you can use it to set your managed disk as the destination
for your upload command.
Use AzCopy v10 to upload your local VHD file to a managed disk by specifying the SAS URI you generated.
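The AzCopy step might look like the following sketch (printed as a dry run; the local path is an assumption, and --blob-type PageBlob is needed because a VHD must be uploaded as a page blob):

```shell
#!/bin/sh
vhdPath="./myDisk.vhd"     # hypothetical local VHD path
sasUri="<yourSasUriHere>"  # writable SAS generated for the managed disk

# Upload the local VHD to the managed disk's SAS URI as a page blob.
cmd="azcopy copy $vhdPath $sasUri --blob-type PageBlob"
echo "$cmd"
```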
This upload has the same throughput as the equivalent standard HDD. For example, if you have a size that equates
to S4, you will have a throughput of up to 60 MiB/s. But, if you have a size that equates to S70, you will have a
throughput of up to 500 MiB/s.
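To put those caps in perspective, here is a rough best-case transfer-time calculation (the 100-GiB VHD size is an assumed example; the caps are taken from the paragraph above):

```shell
#!/bin/sh
vhd_size_mib=$((100 * 1024))  # 100 GiB expressed in MiB
s4_mibps=60                   # S4-equivalent throughput cap
s70_mibps=500                 # S70-equivalent throughput cap

# Best-case transfer time in seconds at each cap
echo "S4 cap:  $((vhd_size_mib / s4_mibps)) seconds"
echo "S70 cap: $((vhd_size_mib / s70_mibps)) seconds"
```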
After the upload is complete, and you no longer need to write any more data to the disk, revoke the SAS. Revoking
the SAS will change the state of the managed disk and allow you to attach the disk to a VM.
Replace <yourdiskname> and <yourresourcegroupname> , then run the following command:
IMPORTANT
You need to add an offset of 512 when you're providing the disk size in bytes of a managed disk from Azure. This is because
Azure omits the footer when returning the disk size. The copy will fail if you do not do this. The following script already does
this for you.
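The offset arithmetic from the note can be shown directly; the 32-GiB disk size is illustrative:

```shell
#!/bin/sh
# Azure returns the disk size without the 512-byte VHD footer,
# so the footer must be added back when setting the upload size.
disk_size_bytes=$((32 * 1024 * 1024 * 1024))  # size reported by Azure
upload_size_bytes=$((disk_size_bytes + 512))  # value to pass for the upload
echo "$upload_size_bytes"
```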
TIP
If you are creating an OS disk, add -HyperVGeneration '' to New-AzDiskConfig .
$sourceRG = <sourceResourceGroupHere>
$sourceDiskName = <sourceDiskNameHere>
$targetDiskName = <targetDiskNameHere>
$targetRG = <targetResourceGroupHere>
$targetLocate = <yourTargetLocationHere>
#Expected value for OS is either "Windows" or "Linux"
$targetOS = <yourOSTypeHere>

# Get the source disk so its size in bytes can be read
$sourceDisk = Get-AzDisk -ResourceGroupName $sourceRG -DiskName $sourceDiskName

# Adding the sizeInBytes with the 512 offset, and the -Upload flag
$targetDiskconfig = New-AzDiskConfig -SkuName 'Standard_LRS' -osType $targetOS -UploadSizeInBytes $($sourceDisk.DiskSizeBytes+512) -Location $targetLocate -CreateOption 'Upload'
Next steps
Now that you've successfully uploaded a VHD to a managed disk, you can attach your disk to a VM and begin
using it.
To learn how to attach a data disk to a VM, see our article on the subject: Attach a data disk to a Windows VM with
PowerShell. To use the disk as the OS disk, see Create a Windows VM from a specialized disk.
Use the Azure portal to restrict import/export access for managed disks with Private Links
Private Links support for managed disks allows you to restrict the export and import of managed disks so that they occur only within your Azure virtual network. You can generate a time-bound shared access signature (SAS) URI for unattached managed disks and snapshots to export the data to another region for regional expansion or disaster recovery, or to read the data for forensic analysis. You can also use the SAS URI to directly upload a VHD to an empty disk from your on-premises environment. Network traffic between clients on their virtual network and managed disks only traverses the virtual network and a private link on the Microsoft backbone network, eliminating exposure to the public internet.
You can create a disk access resource and link it to your virtual network in the same subscription by creating a
private endpoint. You must associate a disk or a snapshot with a disk access for exporting and importing the data
via Private Links. Also, you must set the NetworkAccessPolicy property of the disk or the snapshot to AllowPrivate .
You can set the NetworkAccessPolicy property to DenyAll to prevent anybody from generating the SAS URI for a
disk or a snapshot. The default value for the NetworkAccessPolicy property is AllowAll .
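Setting the property with the Azure CLI might look like the following sketch (printed as a dry run; the disk name and disk access resource ID are placeholders):

```shell
#!/bin/sh
diskName="myDisk"
resourceGroupName="myResourceGroup"
diskAccessId="<yourDiskAccessResourceIdHere>"

# Restrict export/import of the disk to the disk access object's private endpoint
cmd="az disk update -n $diskName -g $resourceGroupName --network-access-policy AllowPrivate --disk-access $diskAccessId"
echo "$cmd"
```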
Limitations
Only one virtual network can be linked to a disk access object.
Your virtual network must be in the same subscription as your disk access object to link them.
Up to 10 disks or snapshots can be imported or exported at the same time with the same disk access object.
You cannot request manual approval to link a virtual network to a disk access object.
Incremental snapshots cannot be exported when they are associated with a disk access object.
You cannot use AzCopy to download VHD of a disk/snapshot that are secured via private links to a storage
account. However, you can use AzCopy to download VHD to your VMs.
Regional availability
Private links for managed disk import and export are currently available only in:
East US
East US 2
North Central US
South Central US
West US
West US 2
Central India
US Gov Virginia
US Gov Arizona
NOTE
If you have a network security group (NSG) enabled for the selected subnet, it will be disabled for private endpoints on this subnet only. Other resources on this subnet will still have NSG enforcement.
You've now completed configuring Private Links that you can use when importing/exporting your managed disk.
Next steps
FAQ for Private Links
Export/Copy managed snapshots as VHD to a storage account in different region with PowerShell
How to expand the OS drive of a virtual machine
When you create a new virtual machine (VM) in a resource group by deploying an image from Azure Marketplace, the default OS drive is often 127 GB (some images have smaller OS disk sizes by default). Even though it's possible to add data disks to the VM (the number depends on the SKU you chose), and we recommend installing applications and CPU-intensive workloads on these additional disks, customers often need to expand the OS drive to support specific scenarios:
To support legacy applications that install components on the OS drive.
To migrate a physical PC or VM from on-premises with a larger OS drive.
IMPORTANT
Resizing an OS or Data Disk of an Azure Virtual Machine requires the virtual machine to be deallocated.
Shrinking an existing disk isn’t supported, and can potentially result in data loss.
After expanding the disks, you need to expand the volume within the OS to take advantage of the larger disk.
WARNING
The new size should be greater than the existing disk size. The maximum allowed is 2,048 GB for OS disks. (It's
possible to expand the VHD blob beyond that size, but the OS works only with the first 2,048 GB of space.)
6. Select Save .
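The constraints in the warning can be checked up front with a short sketch (sizes in GB; the 127-GB current size and 1,023-GB request are illustrative):

```shell
#!/bin/sh
current_size_gb=127
new_size_gb=1023
max_os_disk_gb=2048

# Shrinking is unsupported, and OS disks are capped at 2,048 GB.
if [ "$new_size_gb" -le "$current_size_gb" ]; then
  echo "invalid: shrinking is not supported"
elif [ "$new_size_gb" -gt "$max_os_disk_gb" ]; then
  echo "invalid: OS disks are limited to ${max_os_disk_gb} GB"
else
  echo "ok to resize"
fi
```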
Connect-AzAccount
Select-AzSubscription –SubscriptionName 'my-subscription-name'
$rgName = 'my-resource-group-name'
$vmName = 'my-vm-name'
WARNING
The new size should be greater than the existing disk size. The maximum allowed is 2,048 GB for OS disks. (It is
possible to expand the VHD blob beyond that size, but the OS works only with the first 2,048 GB of space.)
6. Updating the VM might take a few seconds. When the command finishes executing, restart the VM:
And that’s it! Now RDP into the VM, open Computer Management (or Disk Management) and expand the drive
using the newly allocated space.
Connect-AzAccount
Select-AzSubscription –SubscriptionName 'my-subscription-name'
$rgName = 'my-resource-group-name'
$vmName = 'my-vm-name'
5. Set the size of the unmanaged OS disk to the desired value and update the VM:
$vm.StorageProfile.OSDisk.DiskSizeGB = 1023
Update-AzVM -ResourceGroupName $rgName -VM $vm
WARNING
The new size should be greater than the existing disk size. The maximum allowed is 2,048 GB for OS disks. (It's
possible to expand the VHD blob beyond that size, but the OS will only be able to work with the first 2,048 GB of
space.)
6. Updating the VM might take a few seconds. When the command finishes executing, restart the VM:
Connect-AzAccount
Select-AzSubscription -SubscriptionName 'my-subscription-name'
$rgName = 'my-resource-group-name'
$vmName = 'my-vm-name'
$vm = Get-AzVM -ResourceGroupName $rgName -Name $vmName
Stop-AzVM -ResourceGroupName $rgName -Name $vmName
$disk= Get-AzDisk -ResourceGroupName $rgName -DiskName $vm.StorageProfile.OsDisk.Name
$disk.DiskSizeGB = 1023
Update-AzDisk -ResourceGroupName $rgName -Disk $disk -DiskName $disk.Name
Start-AzVM -ResourceGroupName $rgName -Name $vmName
Unmanaged disks
Connect-AzAccount
Select-AzSubscription -SubscriptionName 'my-subscription-name'
$rgName = 'my-resource-group-name'
$vmName = 'my-vm-name'
$vm = Get-AzVM -ResourceGroupName $rgName -Name $vmName
Stop-AzVM -ResourceGroupName $rgName -Name $vmName
$vm.StorageProfile.OSDisk.DiskSizeGB = 1023
Update-AzVM -ResourceGroupName $rgName -VM $vm
Start-AzVM -ResourceGroupName $rgName -Name $vmName
Unmanaged disk
$vm.StorageProfile.DataDisks[0].DiskSizeGB = 1023
Similarly, you can reference other data disks attached to the VM, either by using an index as shown above or the
Name property of the disk:
Managed disk
Unmanaged disk
Next steps
You can also attach disks using the Azure portal.
Use Azure Storage Explorer to manage Azure managed disks
Storage Explorer 1.10.0 enables users to upload, download, and copy managed disks, as well as create snapshots.
Because of these additional capabilities, you can use Storage Explorer to migrate data from on-premises to Azure,
and migrate data across Azure regions.
Prerequisites
To complete this article, you'll need the following:
An Azure subscription
One or more Azure managed disks
The latest version of Azure Storage Explorer
2. Select Upload .
3. In Upload VHD , specify your source VHD, the name of the disk, the OS type, the region you want to upload the disk to, and the account type. In some regions, availability zones are supported; for those regions, you can select a zone of your choice.
4. Select Create to begin uploading your disk.
5. The status of the upload will now display in Activities .
6. If the upload has finished and you don't see the disk in the right pane, select Refresh .
4. Select Save and your disk will begin downloading. The status of the download will display in Activities .
3. On the left pane, select the resource group you'd like to paste the disk in.
4. Select Paste on the right pane.
5. In the Paste Disk dialog, fill in the values. You can also specify an Availability zone in supported regions.
6. Select Paste and your disk will begin copying; the status is displayed in Activities .
Create a snapshot
1. From the Disks dropdown on the left, select the resource group that contains the disk you want to snapshot.
2. On the right, select the disk you'd like to snapshot and select Create Snapshot .
3. In Create Snapshot , specify the name of the snapshot as well as the resource group you want to create it
in. Then select Create .
4. Once the snapshot has been created, you can select Open in Portal in Activities to view the snapshot in the Azure portal.
Next steps
Learn how to Create a VM from a VHD by using the Azure portal.
Learn how to Attach a managed data disk to a Windows VM by using the Azure portal.
Create a snapshot using the portal or PowerShell
A snapshot is a full, read-only copy of a virtual hard drive (VHD). You can take a snapshot of an OS or data disk
VHD to use as a backup, or to troubleshoot virtual machine (VM) issues.
If you are going to use the snapshot to create a new VM, we recommend that you cleanly shut down the VM
before taking a snapshot, to clear out any processes that are in progress.
Use PowerShell
The following steps show how to copy the VHD disk and create the snapshot configuration. You can then take a
snapshot of the disk by using the New-AzSnapshot cmdlet.
1. Set some parameters:
$resourceGroupName = 'myResourceGroup'
$location = 'eastus'
$vmName = 'myVM'
$snapshotName = 'mySnapshot'
2. Get the VM:

$vm = Get-AzVM `
-ResourceGroupName $resourceGroupName `
-Name $vmName
3. Create the snapshot configuration. For this example, the snapshot is of the OS disk:
$snapshot = New-AzSnapshotConfig `
-SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id `
-Location $location `
-CreateOption copy
NOTE
If you would like to store your snapshot in zone-resilient storage, create it in a region that supports availability
zones and include the -SkuName Standard_ZRS parameter.
New-AzSnapshot `
-Snapshot $snapshot `
-SnapshotName $snapshotName `
-ResourceGroupName $resourceGroupName
Next steps
Create a virtual machine from a snapshot by creating a managed disk from a snapshot and then attaching the
new managed disk as the OS disk. For more information, see the sample in Create a VM from a snapshot with
PowerShell.
Reduce costs with Azure Disks Reservation
Save on your Azure Disk Storage usage with reserved capacity. Azure Disk Storage reservations combined with
Azure Reserved Virtual Machine Instances let you lower your total virtual machine (VM) costs. The reservation
discount is applied automatically to the matching disks in the selected reservation scope. Because of this automatic
application, you don't need to assign a reservation to a managed disk to get the discounts.
Discounts are applied hourly depending on the disk usage. Unused reserved capacity doesn't carry over. Azure
Disk Storage reservation discounts don't apply to unmanaged disks, ultra disks, or page blob consumption.
PREMIUM SSD SIZE | DISK SIZE IN GIB | PROVISIONED IOPS PER DISK | PROVISIONED THROUGHPUT PER DISK | MAX BURST DURATION
P1 | 4 | 120 | 25 MB/sec | 30 min
P2 | 8 | 120 | 25 MB/sec | 30 min
P3 | 16 | 120 | 25 MB/sec | 30 min
P4 | 32 | 120 | 25 MB/sec | 30 min
P6 | 64 | 240 | 50 MB/sec | 30 min
P10 | 128 | 500 | 100 MB/sec | 30 min
P15 | 256 | 1,100 | 125 MB/sec | 30 min
P20 | 512 | 2,300 | 150 MB/sec | 30 min
P30 | 1,024 | 5,000 | 200 MB/sec | -
P40 | 2,048 | 7,500 | 250 MB/sec | -
P50 | 4,096 | 7,500 | 250 MB/sec | -
P60 | 8,192 | 16,000 | 500 MB/sec | -
P70 | 16,384 | 18,000 | 750 MB/sec | -
P80 | 32,767 | 20,000 | 900 MB/sec | -
Purchase considerations
We recommend the following practices when considering disk reservation purchase:
Analyze your usage information to help determine which reservations you should purchase. Make sure you
track the usage in disk SKUs instead of provisioned or used disk capacity.
Examine your disk reservation along with your VM reservation. We highly recommend making reservations for both VM usage and disk usage for maximum savings. You can start by determining the right VM reservation and then evaluate the disk reservation. Generally, you'll have a standard configuration for each of your workloads. For example, a SQL Server VM might have two P40 data disks and one P30 operating system disk.
This kind of pattern can help you determine the reserved amount you might purchase. This approach can
simplify the evaluation process and ensure that you have an aligned plan for both your VM and disks. The
plan contains considerations like subscriptions or regions.
Purchase restrictions
Reservation discounts are currently unavailable for the following:
Unmanaged disks or page blobs.
Standard SSDs or standard hard-disk drives (HDDs).
Premium SSD SKUs smaller than P30: P1, P2, P3, P4, P6, P10, P15, and P20 SSD SKUs.
Disks in Azure Government, Azure Germany, or Azure China regions.
In rare circumstances, Azure limits the purchase of new reservations to a subset of disk SKUs because of low
capacity in a region.
EL EM EN T DESC RIP T IO N
EL EM EN T DESC RIP T IO N
Subscription The subscription you use to pay for the Azure Storage
reservation. The payment method on the selected
subscription is used in charging the costs. The subscription
must be one of the following types:
Enterprise Agreement (offer numbers MS-AZR-0017P and MS-AZR-0148P). For an Enterprise subscription, the charges are deducted from the enrollment's monetary commitment balance or charged as overage.
Billing frequency How often the account is billed for the reservation.
Options include Monthly and Upfront .
4. After you specify the values for your reservation, the Azure portal displays the cost. The portal also shows the discount percentage over pay-as-you-go billing. Select Next to continue to the Purchase reservations pane.
5. On the Purchase reservations pane, you can name your reservation and select the total quantity of reservations you want to make. The number of reservations maps to the number of disks. For example, if you want to reserve a hundred disks, enter the Quantity value 100 .
6. Review the total cost of the reservation.
After you purchase a reservation, it's automatically applied to any existing Disk Storage resources that match the
reservation terms. If you haven't created any Disk Storage resources yet, the reservation applies whenever you
create a resource that matches the reservation terms. In either case, the reservation term begins immediately after
a successful purchase.
Expiration of a reservation
When a reservation expires, any Azure Disk Storage capacity that you use under that reservation is billed at the
pay-as-you-go rate. Reservations don't renew automatically.
You'll receive an email notification 30 days before the expiration of the reservation and again on the expiration
date. To continue taking advantage of the cost savings that a reservation provides, renew it no later than the
expiration date.
Next steps
What are Azure Reservations?
Understand how your reservation discount is applied to Azure Disk Storage
Create an incremental snapshot for managed disks - PowerShell
Incremental snapshots are point-in-time backups for managed disks that, when taken, consist only of the changes since the last snapshot. When you restore a disk from an incremental snapshot, the system reconstructs the full disk, which represents the point-in-time backup of the disk when the incremental snapshot was taken. This new capability for managed disk snapshots potentially makes them more cost effective, since, unless you choose to, you don't have to store the entire disk with each individual snapshot. Just like full snapshots, incremental snapshots can be used to either create a full managed disk or a full snapshot.
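For reference, creating one with the Azure CLI might look like this sketch (printed as a dry run; the names are placeholders, and --incremental is what distinguishes it from a full snapshot):

```shell
#!/bin/sh
snapshotName="mySnapshot"
resourceGroupName="myResourceGroup"
diskId="<yourManagedDiskIdHere>"

# Create an incremental (changes-only) snapshot of the managed disk.
cmd="az snapshot create -n $snapshotName -g $resourceGroupName --source $diskId --incremental true"
echo "$cmd"
```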
There are a few differences between an incremental snapshot and a full snapshot. Incremental snapshots always use standard HDD storage, irrespective of the storage type of the disk, whereas full snapshots can use premium SSDs. If you're using full snapshots on Premium Storage to scale up VM deployments, we recommend you use custom images on standard storage in the Shared Image Gallery; this helps you achieve greater scale at lower cost. Additionally, incremental snapshots potentially offer better reliability with zone-redundant storage (ZRS). If ZRS is available in the selected region, an incremental snapshot uses ZRS automatically. If ZRS isn't available in the region, the snapshot defaults to locally redundant storage (LRS). You can override this behavior and select one manually, but we don't recommend that.
Incremental snapshots also offer a differential capability, available only to managed disks. It enables you to get
the changes between two incremental snapshots of the same managed disk, down to the block level. You can use
this capability to reduce your data footprint when copying snapshots across regions. For example, you can
download the first incremental snapshot as a base blob in another region. For the subsequent incremental
snapshots, you can copy only the changes since the last snapshot to the base blob. After copying the changes, you
can take snapshots on the base blob that represent your point-in-time backup of the disk in the other region. You
can restore your disk either from the base blob or from a snapshot on the base blob in the other region.
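As a sketch of the first step of that workflow, you can generate read SAS URIs for two incremental snapshots with Grant-AzSnapshotAccess (the resource group and snapshot names below are placeholders, not from this article); the changed ranges between them are then retrieved with the Get Page Ranges REST API's prevsnapshot parameter or the storage SDKs:

```powershell
# Placeholder resource group and snapshot names
$rg = "myResourceGroup"

# Read-only SAS URIs for the base and the later incremental snapshot
$baseSas  = Grant-AzSnapshotAccess -ResourceGroupName $rg -SnapshotName "snapshot-a" -Access Read -DurationInSecond 3600
$deltaSas = Grant-AzSnapshotAccess -ResourceGroupName $rg -SnapshotName "snapshot-b" -Access Read -DurationInSecond 3600

# $baseSas.AccessSAS and $deltaSas.AccessSAS can now be passed to the
# Get Page Ranges REST API (with the prevsnapshot parameter) or to the
# storage SDKs to enumerate only the changed blocks.
```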
Incremental snapshots are billed for the used size only. You can find the used size of your snapshots by looking at
the Azure usage report. For example, if the used data size of a snapshot is 10 GiB, the daily usage report will show
10 GiB/(31 days) = 0.3226 as the consumed quantity.
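The arithmetic behind that example can be reproduced directly; this is a local sketch whose figures match the paragraph above:

```powershell
$usedGiB = 10        # used data size of the snapshot
$daysInMonth = 31
# Daily consumed quantity as it appears in the usage report
[math]::Round($usedGiB / $daysInMonth, 4)   # 0.3226
```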
Supported regions
Incremental snapshots are now generally available in all public regions.
Restrictions
Incremental snapshots currently cannot be moved between subscriptions.
You can currently only generate SAS URIs of up to five snapshots of a particular snapshot family at any given
time.
You cannot create an incremental snapshot for a particular disk outside of that disk's subscription.
Up to seven incremental snapshots per disk can be created every five minutes.
A total of 200 incremental snapshots can be created for a single disk.
You cannot get the changes between snapshots taken before and after you changed the size of the parent disk
across 4 TB boundary. For example, You took an incremental snapshot snapshot-a when the size of a disk was 2
TB. Now you increased the size of the disk to 6 TB and then took another incremental snapshot snapshot-b. You
cannot get the changes between snapshot-a and snapshot-b. You have to again download the full copy of
snapshot-b created after the resize. Subsequently, you can get the changes between snapshot-b and snapshots
created after snapshot-b.
PowerShell
You can use Azure PowerShell to create an incremental snapshot. You'll need the latest version of the Azure
PowerShell Az module; install it, or update an existing installation to the latest version, with Install-Module -Name Az.
$diskName = "yourDiskNameHere"
$resourceGroupName = "yourResourceGroupNameHere"
$snapshotName = "yourDesiredSnapshotNameHere"
# Get the disk that you need to backup by creating an incremental snapshot
$yourDisk = Get-AzDisk -DiskName $diskName -ResourceGroupName $resourceGroupName
# Create an incremental snapshot by setting the SourceUri property with the value of the Id property of the disk
$snapshotConfig = New-AzSnapshotConfig -SourceUri $yourDisk.Id -Location $yourDisk.Location -CreateOption Copy -Incremental
New-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName -Snapshot $snapshotConfig
You can identify incremental snapshots from the same disk with the SourceResourceId and the SourceUniqueId
properties of snapshots. SourceResourceId is the Azure Resource Manager resource ID of the parent disk.
SourceUniqueId is the value inherited from the UniqueId property of the disk. If you were to delete a disk and then
create a new disk with the same name, the value of the UniqueId property changes.
You can use SourceResourceId and SourceUniqueId to create a list of all snapshots associated with a particular disk.
Replace <yourResourceGroupNameHere> with your value and then you can use the following example to list your
existing incremental snapshots:
$snapshots = Get-AzSnapshot -ResourceGroupName $resourceGroupName
$incrementalSnapshots = New-Object System.Collections.ArrayList
foreach ($snapshot in $snapshots)
{
    if ($snapshot.Incremental -and $snapshot.CreationData.SourceResourceId -eq $yourDisk.Id -and $snapshot.CreationData.SourceUniqueId -eq $yourDisk.UniqueId)
    {
        $incrementalSnapshots.Add($snapshot)
    }
}
$incrementalSnapshots
You can also create an incremental snapshot by deploying an Azure Resource Manager template:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "diskName": {
      "type": "string",
      "defaultValue": "contosodisk1"
    },
    "diskResourceId": {
      "defaultValue": "<your_managed_disk_resource_ID>",
      "type": "String"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Compute/snapshots",
      "name": "[concat( parameters('diskName'),'_snapshot1')]",
      "location": "[resourceGroup().location]",
      "apiVersion": "2019-03-01",
      "properties": {
        "creationData": {
          "createOption": "Copy",
          "sourceResourceId": "[parameters('diskResourceId')]"
        },
        "incremental": true
      }
    }
  ]
}
Next steps
If you'd like to see sample code demonstrating the differential capability of incremental snapshots, using .NET, see
Copy Azure Managed Disks backups to another region with differential capability of incremental snapshots.
Creating an incremental snapshot for managed disks
11/2/2020 • 3 minutes to read
Incremental snapshots are point-in-time backups for managed disks that, when taken, consist only of the changes
since the last snapshot. When you restore a disk from an incremental snapshot, the system reconstructs the full
disk as it existed at the point in time when the incremental snapshot was taken. This capability for managed disk
snapshots can make them more cost-effective because, unless you choose to, you don't have to store the entire disk
with each individual snapshot. Just like full snapshots, incremental snapshots can be used to create either a full
managed disk or a full snapshot.
There are a few differences between an incremental snapshot and a full snapshot. Incremental snapshots always
use standard HDD storage, irrespective of the storage type of the disk, whereas full snapshots can use premium
SSDs. If you're using full snapshots on Premium Storage to scale up VM deployments, we recommend you use
custom images on standard storage in the Shared Image Gallery instead; this helps you achieve greater scale at
lower cost. Additionally, incremental snapshots potentially offer better reliability with zone-redundant storage
(ZRS). If ZRS is available in the selected region, an incremental snapshot uses ZRS automatically. If ZRS is not
available in the region, the snapshot defaults to locally redundant storage (LRS). You can override this behavior
and select one manually, but we don't recommend that.
Incremental snapshots also offer a differential capability, available only to managed disks. It enables you to get
the changes between two incremental snapshots of the same managed disk, down to the block level. You can use
this capability to reduce your data footprint when copying snapshots across regions. For example, you can
download the first incremental snapshot as a base blob in another region. For the subsequent incremental
snapshots, you can copy only the changes since the last snapshot to the base blob. After copying the changes, you
can take snapshots on the base blob that represent your point-in-time backup of the disk in the other region. You
can restore your disk either from the base blob or from a snapshot on the base blob in the other region.
Incremental snapshots are billed for the used size only. You can find the used size of your snapshots by looking at
the Azure usage report. For example, if the used data size of a snapshot is 10 GiB, the daily usage report will show
10 GiB/(31 days) = 0.3226 as the consumed quantity.
Regional availability
Incremental snapshots are now generally available in all public regions.
Restrictions
Incremental snapshots currently cannot be moved between subscriptions.
You can currently only generate SAS URIs of up to five snapshots of a particular snapshot family at any given
time.
You cannot create an incremental snapshot for a particular disk outside of that disk's subscription.
Up to seven incremental snapshots per disk can be created every five minutes.
A total of 200 incremental snapshots can be created for a single disk.
You cannot get the changes between snapshots taken before and after you change the size of the parent disk
across the 4-TB boundary. For example, suppose you took an incremental snapshot, snapshot-a, when the size of a
disk was 2 TB, then increased the size of the disk to 6 TB and took another incremental snapshot, snapshot-b. You
cannot get the changes between snapshot-a and snapshot-b; you have to download the full copy of snapshot-b
created after the resize. Subsequently, you can get the changes between snapshot-b and snapshots created after
snapshot-b.
Portal
1. Sign in to the Azure portal and navigate to the disk you'd like to snapshot.
2. On your disk, select Create a Snapshot .
3. Select the resource group you'd like to use and enter a name.
4. Select Incremental and select Review + Create .
5. Select Create .
Next steps
If you'd like to see sample code demonstrating the differential capability of incremental snapshots, using .NET, see
Copy Azure Managed Disks backups to another region with differential capability of incremental snapshots.
Back up Azure unmanaged VM disks with
incremental snapshots
4/22/2020 • 7 minutes to read
Overview
Azure Storage provides the capability to take snapshots of blobs. Snapshots capture the blob state at that point in
time. In this article, we describe a scenario in which you can maintain backups of virtual machine disks by using
snapshots. You can use this methodology when you choose not to use Azure Backup and Recovery Service and
wish to create a custom backup strategy for your virtual machine disks. For virtual machines running business- or
mission-critical workloads, we recommend using Azure Backup as part of the backup strategy.
Azure virtual machine disks are stored as page blobs in Azure Storage. Since we are describing a backup strategy
for virtual machine disks in this article, we refer to snapshots in the context of page blobs. To learn more about
snapshots, refer to Creating a Snapshot of a Blob.
What is a snapshot?
A blob snapshot is a read-only version of a blob that is captured at a point in time. Once a snapshot has been
created, it can be read, copied, or deleted, but not modified. Snapshots provide a way to back up a blob as it
appears at a moment in time. Until REST version 2015-04-05, you had the ability to copy full snapshots. With the
REST version 2015-07-08 and above, you can also copy incremental snapshots.
NOTE
If you copy the base blob to another destination, the snapshots of the blob are not copied along with it. Similarly, if you
overwrite a base blob with a copy, snapshots associated with the base blob are not affected and stay intact under the base
blob name.
NOTE
This feature is available for Premium and Standard Azure Page Blobs.
When you have a custom backup strategy using snapshots, copying the snapshots from one storage account to
another can be slow and can consume much storage space. Instead of copying the entire snapshot to a backup
storage account, you can write the difference between consecutive snapshots to a backup page blob. This way, the
time to copy and the space to store backups is substantially reduced.
Implementing Incremental Snapshot Copy
You can implement incremental snapshot copy by doing the following:
Take a snapshot of the base blob using Snapshot Blob.
Copy the snapshot to the target backup storage account in the same or any other Azure region using Copy Blob.
This is the backup page blob. Take a snapshot of the backup page blob and store it in the backup account.
Take another snapshot of the base blob using Snapshot Blob.
Get the difference between the first and second snapshots of the base blob using GetPageRanges. Use the new
parameter prevsnapshot to specify the DateTime value of the snapshot you want to get the difference with.
When this parameter is present, the REST response includes only the pages that were changed between the
target snapshot and the previous snapshot, including clear pages.
Use PutPage to apply these changes to the backup page blob.
Finally, take a snapshot of the backup page blob and store it in the backup storage account.
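Assuming the classic page-blob client surfaced by the Az.Storage module is in use (all account, container, and blob names below are placeholders, not from this article), the snapshot-and-diff portion of those steps might be sketched as:

```powershell
# Placeholder names; requires the Az.Storage module and access to the account
$ctx  = New-AzStorageContext -StorageAccountName "mypremiumaccount" -UseConnectedAccount
$blob = Get-AzStorageBlob -Container "vhds" -Blob "mypremiumdisk.vhd" -Context $ctx

$snap1 = $blob.ICloudBlob.Snapshot()    # first snapshot of the base blob
# ... copy $snap1 to the backup account with Start-AzStorageBlobCopy ...
$snap2 = $blob.ICloudBlob.Snapshot()    # a later snapshot

# Changed page ranges since $snap1, including cleared pages
$diff = $snap2.GetPageRangesDiff($snap1.SnapshotTime)
```

The returned ranges are then written to the backup page blob with PutPage, as described in the steps above.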
In the next section, we describe in more detail how you can maintain backups of disks using incremental snapshot
copy.
Scenario
In this section, we describe a scenario that involves a custom backup strategy for virtual machine disks using
snapshots.
Consider a DS-series Azure VM with a premium storage P30 disk attached. The P30 disk called mypremiumdisk is
stored in a premium storage account called mypremiumaccount. A standard storage account called
mybackupstdaccount is used for storing the backup of mypremiumdisk. We would like to keep a snapshot of
mypremiumdisk every 12 hours.
To learn about creating a storage account, see Create a storage account.
To learn about backing up Azure VMs, refer to Plan Azure VM backups.
Next Steps
Use the following links to learn more about creating snapshots of a blob and planning your VM backup
infrastructure.
Creating a Snapshot of a Blob
Plan your VM Backup Infrastructure
Use the Azure PowerShell module to enable end-to-end encryption using encryption at host
11/2/2020 • 8 minutes to read
When you enable encryption at host, data stored on the VM host is encrypted at rest and flows encrypted to the
Storage service. For conceptual information on encryption at host, as well as other managed disk encryption types,
see Encryption at host - End-to-end encryption for your VM data.
Restrictions
Does not support ultra disks.
Cannot be enabled if Azure Disk Encryption (guest-VM encryption using bitlocker/VM-Decrypt) is enabled on
your VMs/virtual machine scale sets.
Azure Disk Encryption cannot be enabled on disks that have encryption at host enabled.
Encryption can be enabled on existing virtual machine scale sets. However, only new VMs created after
enabling encryption are automatically encrypted.
Existing VMs must be deallocated and reallocated in order to be encrypted.
Supported regions
Currently available only in the following regions:
West US
West US 2
East US
East US 2
South Central US
US Gov Virginia
US Gov Arizona
Supported VM sizes
All of the latest generation VM sizes support encryption at host:
General purpose: Dv3, Dav4, Dv2, Av2, B, DSv2, Dsv3, DC, DCv2, Dasv4
Prerequisites
In order to use encryption at host for your VMs or virtual machine scale sets, you must get the feature
enabled on your subscription. Send an email to encryptionAtHost@microsoft.com with your subscription IDs to
get the feature enabled for your subscriptions.
Create an Azure Key Vault and DiskEncryptionSet
Once the feature is enabled, you'll need to set up an Azure Key Vault and a DiskEncryptionSet, if you haven't
already.
1. Make sure that you have installed the latest Azure PowerShell version and that you are signed in to an Azure
account with Connect-AzAccount.
2. Create an instance of Azure Key Vault and encryption key.
When creating the Key Vault instance, you must enable soft delete and purge protection. Soft delete ensures
that the Key Vault holds a deleted key for a given retention period (90 day default). Purge protection ensures
that a deleted key cannot be permanently deleted until the retention period lapses. These settings protect
you from losing data due to accidental deletion. These settings are mandatory when using a Key Vault for
encrypting managed disks.
$ResourceGroupName="yourResourceGroupName"
$LocationName="westcentralus"
$keyVaultName="yourKeyVaultName"
$keyName="yourKeyName"
$keyDestination="Software"
$diskEncryptionSetName="yourDiskEncryptionSetName"
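The commands that consume these variables appear to be missing from this copy. A sketch of what they would look like with current Az cmdlets follows; the exact sequence and parameters are assumptions, not the article's original listing:

```powershell
# Create the Key Vault with purge protection (soft delete is on by default in recent Az versions)
$keyVault = New-AzKeyVault -Name $keyVaultName -ResourceGroupName $ResourceGroupName -Location $LocationName -EnablePurgeProtection

# Create the key used to encrypt the disks
$key = Add-AzKeyVaultKey -VaultName $keyVaultName -Name $keyName -Destination $keyDestination

# Create the DiskEncryptionSet pointing at the vault and key
$desConfig = New-AzDiskEncryptionSetConfig -Location $LocationName -SourceVaultId $keyVault.ResourceId -KeyUrl $key.Key.Kid
$des = New-AzDiskEncryptionSet -Name $diskEncryptionSetName -ResourceGroupName $ResourceGroupName -InputObject $desConfig
```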
NOTE
It may take few minutes for Azure to create the identity of your DiskEncryptionSet in your Azure Active Directory. If
you get an error like "Cannot find the Active Directory object" when running the following command, wait a few
minutes and try again.
Set-AzKeyVaultAccessPolicy -VaultName $keyVaultName -ObjectId $des.Identity.PrincipalId -PermissionsToKeys wrapkey,unwrapkey,get
Examples
Create a VM with encryption at host enabled with customer-managed keys.
Create a VM with managed disks using the resource URI of the DiskEncryptionSet created earlier to encrypt cache
of OS and data disks with customer-managed keys. The temp disks are encrypted with platform-managed keys.
$VMLocalAdminUser = "yourVMLocalAdminUserName"
$VMLocalAdminSecurePassword = ConvertTo-SecureString <password> -AsPlainText -Force
$LocationName = "yourRegion"
$ResourceGroupName = "yourResourceGroupName"
$ComputerName = "yourComputerName"
$VMName = "yourVMName"
$VMSize = "yourVMSize"
$diskEncryptionSetName="yourdiskEncryptionSetName"
$NetworkName = "yourNetworkName"
$NICName = "yourNICName"
$SubnetName = "yourSubnetName"
$SubnetAddressPrefix = "10.0.0.0/24"
$VnetAddressPrefix = "10.0.0.0/16"
# Enable encryption with a customer managed key for OS disk by setting DiskEncryptionSetId property
# Add a data disk encrypted with a customer managed key by setting DiskEncryptionSetId property
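The VM-creation commands themselves did not survive in this copy. A minimal sketch, assuming the DiskEncryptionSet and a network interface ($nic) already exist; disk names, sizes, and other parameter choices here are illustrative, not from the original:

```powershell
$diskEncryptionSet = Get-AzDiskEncryptionSet -ResourceGroupName $ResourceGroupName -Name $diskEncryptionSetName
$credential = New-Object System.Management.Automation.PSCredential ($VMLocalAdminUser, $VMLocalAdminSecurePassword)

# -EncryptionAtHost turns on encryption at host for the VM
$vmConfig = New-AzVMConfig -VMName $VMName -VMSize $VMSize -EncryptionAtHost
$vmConfig = Set-AzVMOperatingSystem -VM $vmConfig -Windows -ComputerName $ComputerName -Credential $credential
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id   # assumes $nic was created earlier

# Customer-managed key for the OS disk cache via DiskEncryptionSetId
$vmConfig = Set-AzVMOSDisk -VM $vmConfig -CreateOption FromImage -DiskEncryptionSetId $diskEncryptionSet.Id

# Data disk encrypted with the same customer-managed key
$vmConfig = Add-AzVMDataDisk -VM $vmConfig -Name "$($VMName)_data1" -DiskSizeInGB 128 -CreateOption Empty -Lun 0 -DiskEncryptionSetId $diskEncryptionSet.Id

New-AzVM -ResourceGroupName $ResourceGroupName -Location $LocationName -VM $vmConfig
```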
$NetworkName = "yourNetworkName"
$NICName = "yourNICName"
$SubnetName = "yourSubnetName"
$SubnetAddressPrefix = "10.0.0.0/24"
$VnetAddressPrefix = "10.0.0.0/16"
$ResourceGroupName = "yourResourceGroupName"
$VMName = "yourVMName"
Check the status of encryption at host for a VM
$ResourceGroupName = "yourResourceGroupName"
$VMName = "yourVMName"
$VM = Get-AzVM -ResourceGroupName $ResourceGroupName -Name $VMName
$VM.SecurityProfile.EncryptionAtHost
Create a virtual machine scale set with encryption at host enabled with customer-managed keys.
Create a virtual machine scale set with managed disks using the resource URI of the DiskEncryptionSet created
earlier to encrypt cache of OS and data disks with customer-managed keys. The temp disks are encrypted with
platform-managed keys.
$VMLocalAdminUser = "yourLocalAdminUser"
$VMLocalAdminSecurePassword = ConvertTo-SecureString Password@123 -AsPlainText -Force
$LocationName = "westcentralus"
$ResourceGroupName = "yourResourceGroupName"
$ComputerNamePrefix = "yourComputerNamePrefix"
$VMScaleSetName = "yourVMSSName"
$VMSize = "Standard_DS3_v2"
$diskEncryptionSetName="yourDiskEncryptionSetName"
$NetworkName = "yourVNETName"
$SubnetName = "yourSubnetName"
$SubnetAddressPrefix = "10.0.0.0/24"
$VnetAddressPrefix = "10.0.0.0/16"
# Enable encryption with a customer managed key for the OS disk by setting DiskEncryptionSetId property
# Add a data disk encrypted with a customer managed key by setting DiskEncryptionSetId property
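The scale-set commands are likewise missing from this copy. The key piece, sketched here with assumed parameter values, is the -EncryptionAtHost switch on the scale-set configuration:

```powershell
# -EncryptionAtHost enables encryption at host for every VM in the scale set
$vmssConfig = New-AzVmssConfig -Location $LocationName -SkuCapacity 2 -SkuName $VMSize -UpgradePolicyMode "Automatic" -EncryptionAtHost

# Storage, OS, and network profiles (Set-AzVmssStorageProfile, Set-AzVmssOsProfile,
# Add-AzVmssNetworkInterfaceConfiguration) would then be added before calling:
# New-AzVmss -ResourceGroupName $ResourceGroupName -VMScaleSetName $VMScaleSetName -VirtualMachineScaleSet $vmssConfig
```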
Create a virtual machine scale set with encryption at host enabled with platform-managed keys.
Create a virtual machine scale set with encryption at host enabled to encrypt cache of OS/data disks and temp
disks with platform-managed keys.
$VMLocalAdminUser = "yourLocalAdminUser"
$VMLocalAdminSecurePassword = ConvertTo-SecureString Password@123 -AsPlainText -Force
$LocationName = "westcentralus"
$ResourceGroupName = "yourResourceGroupName"
$ComputerNamePrefix = "yourComputerNamePrefix"
$VMScaleSetName = "yourVMSSName"
$VMSize = "Standard_DS3_v2"
$NetworkName = "yourVNETName"
$SubnetName = "yourSubnetName"
$SubnetAddressPrefix = "10.0.0.0/24"
$VnetAddressPrefix = "10.0.0.0/16"
$ResourceGroupName = "yourResourceGroupName"
$VMScaleSetName = "yourVMSSName"
Check the status of encryption at host for a virtual machine scale set
$ResourceGroupName = "yourResourceGroupName"
$VMScaleSetName = "yourVMSSName"
$VMSS = Get-AzVmss -ResourceGroupName $ResourceGroupName -VMScaleSetName $VMScaleSetName
$VMSS.VirtualMachineProfile.SecurityProfile.EncryptionAtHost
Finding supported VM sizes
Legacy VM sizes are not supported. You can find the list of supported VM sizes either by
calling the Resource Skus API and checking that the EncryptionAtHostSupported capability is set to True , or by
filtering the output of Get-AzComputeResourceSku in PowerShell.
{
  "resourceType": "virtualMachines",
  "name": "Standard_DS1_v2",
  "tier": "Standard",
  "size": "DS1_v2",
  "family": "standardDSv2Family",
  "locations": [
    "CentralUSEUAP"
  ],
  "capabilities": [
    {
      "name": "EncryptionAtHostSupported",
      "value": "True"
    }
  ]
}
$vmSizes = Get-AzComputeResourceSku | Where-Object {$_.ResourceType -eq 'virtualMachines'}
foreach($vmSize in $vmSizes)
{
    foreach($capability in $vmSize.capabilities)
    {
        if($capability.Name -eq 'EncryptionAtHostSupported' -and $capability.Value -eq 'true')
        {
            $vmSize
        }
    }
}
Next steps
Now that you've created and configured these resources, you can use them to secure your managed disks. The
following link contains example scripts, each with a respective scenario, that you can use to secure your managed
disks.
Azure Resource Manager template samples
Use the Azure portal to enable end-to-end
encryption using encryption at host
11/2/2020 • 4 minutes to read
When you enable encryption at host, data stored on the VM host is encrypted at rest and flows encrypted to the
Storage service. For conceptual information on encryption at host, as well as other managed disk encryption types,
see:
Linux: Encryption at host - End-to-end encryption for your VM data.
Windows: Encryption at host - End-to-end encryption for your VM data.
Restrictions
Does not support ultra disks.
Cannot be enabled if Azure Disk Encryption (guest-VM encryption using bitlocker/VM-Decrypt) is enabled on
your VMs/virtual machine scale sets.
Azure Disk Encryption cannot be enabled on disks that have encryption at host enabled.
Encryption can be enabled on existing virtual machine scale sets. However, only new VMs created after
enabling encryption are automatically encrypted.
Existing VMs must be deallocated and reallocated in order to be encrypted.
Supported regions
Currently available only in the following regions:
West US
West US 2
East US
East US 2
South Central US
US Gov Virginia
US Gov Arizona
Supported VM sizes
All of the latest generation VM sizes support encryption at host:
General purpose: Dv3, Dav4, Dv2, Av2, B, DSv2, Dsv3, DC, DCv2, Dasv4
Upgrading the VM size will result in validation to check if the new VM size supports the EncryptionAtHost feature.
Prerequisites
In order to use encryption at host for your VMs or virtual machine scale sets, you must get the feature
enabled on your subscription. Send an email to encryptionAtHost@microsoft.com with your subscription IDs to
get the feature enabled for your subscriptions.
Sign in to the Azure portal using the provided link.
IMPORTANT
You must use the provided link to access the Azure portal. Encryption at host is not currently visible in the public Azure
portal without using the link.
NOTE
When creating the Key Vault instance, you must enable soft delete and purge protection. Soft delete ensures that the
Key Vault holds a deleted key for a given retention period (90 day default). Purge protection ensures that a deleted
key cannot be permanently deleted until the retention period lapses. These settings protect you from losing data due
to accidental deletion. These settings are mandatory when using a Key Vault for encrypting managed disks.
10. Leave both Key Type set to RSA and RSA Key Size set to 2048 .
11. Fill in the remaining selections as you like and then select Create .
NOTE
Once you create a disk encryption set with a particular encryption type, it cannot be changed. If you want to use a
different encryption type, you must create a new disk encryption set.
9. Open the disk encryption set once it finishes creating and select the alert that pops up.
Two notifications should pop up and succeed. This allows you to use the disk encryption set with your key
vault.
Deploy a VM
You must deploy a new VM to enable encryption at host; it cannot be enabled on existing VMs.
1. Search for Virtual Machines and select + Add to create a VM.
2. Create a new virtual machine, select an appropriate region and a supported VM size.
3. Fill in the other values on the Basic blade as you like, then proceed to the Disks blade.
4. On the Disks blade, select Yes for Encryption at host .
5. Make the remaining selections as you like.
6. Finish the VM deployment process, making selections that fit your environment.
You have now deployed a VM with encryption at host enabled; all of its associated disks will be encrypted using
encryption at host.
Next steps
Azure Resource Manager template samples
Azure PowerShell - Enable customer-managed keys
with server-side encryption - managed disks
11/2/2020 • 6 minutes to read
Azure Disk Storage allows you to manage your own keys when using server-side encryption (SSE) for managed
disks, if you choose. For conceptual information on SSE with customer managed keys, as well as other managed
disk encryption types, see the Customer-managed keys section of our disk encryption article.
Restrictions
For now, customer-managed keys have the following restrictions:
If this feature is enabled for your disk, you cannot disable it. If you need to work around this, you must copy all
the data to an entirely different managed disk that isn't using customer-managed keys.
Only software and HSM RSA keys of sizes 2,048-bit, 3,072-bit, and 4,096-bit are supported; no other keys or
sizes.
HSM keys require the premium tier of Azure Key Vault.
Disks created from custom images that are encrypted using server-side encryption and customer-managed
keys must be encrypted using the same customer-managed keys and must be in the same subscription.
Snapshots created from disks that are encrypted with server-side encryption and customer-managed keys
must be encrypted with the same customer-managed keys.
All resources related to your customer-managed keys (Azure Key Vaults, disk encryption sets, VMs, disks, and
snapshots) must be in the same subscription and region.
Disks, snapshots, and images encrypted with customer-managed keys cannot move to another resource group
and subscription.
Managed disks currently or previously encrypted using Azure Disk Encryption cannot be encrypted using
customer-managed keys.
You can create only up to 50 disk encryption sets per region per subscription.
For information about using customer-managed keys with shared image galleries, see Preview: Use customer-
managed keys for encrypting images.
NOTE
It may take few minutes for Azure to create the identity of your DiskEncryptionSet in your Azure Active Directory. If
you get an error like "Cannot find the Active Directory object" when running the following command, wait a few
minutes and try again.
Examples
Now that you've created and configured these resources, you can use them to secure your managed disks. The
following are example scripts, each with a respective scenario, that you can use to secure your managed disks.
Create a VM using a Marketplace image, encrypting the OS and data disks with customer-managed keys
Copy the script, replace all of the example values with your own parameters, and then run it.
$VMLocalAdminUser = "yourVMLocalAdminUserName"
$VMLocalAdminSecurePassword = ConvertTo-SecureString <password> -AsPlainText -Force
$LocationName = "yourRegion"
$ResourceGroupName = "yourResourceGroupName"
$ComputerName = "yourComputerName"
$VMName = "yourVMName"
$VMSize = "yourVMSize"
$diskEncryptionSetName="yourdiskEncryptionSetName"
$NetworkName = "yourNetworkName"
$NICName = "yourNICName"
$SubnetName = "yourSubnetName"
$SubnetAddressPrefix = "10.0.0.0/24"
$VnetAddressPrefix = "10.0.0.0/16"
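The commands that follow these variables did not survive in this copy. A minimal sketch under the assumption that the DiskEncryptionSet and a network interface ($nic) already exist; the image selection, disk names, and sizes here are illustrative, not from the original:

```powershell
$diskEncryptionSet = Get-AzDiskEncryptionSet -ResourceGroupName $ResourceGroupName -Name $diskEncryptionSetName
$credential = New-Object System.Management.Automation.PSCredential ($VMLocalAdminUser, $VMLocalAdminSecurePassword)

$vmConfig = New-AzVMConfig -VMName $VMName -VMSize $VMSize
$vmConfig = Set-AzVMOperatingSystem -VM $vmConfig -Windows -ComputerName $ComputerName -Credential $credential
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id   # assumes $nic was created earlier
# A Marketplace image would be selected here with Set-AzVMSourceImage

# OS and data disks encrypted with the customer-managed key via DiskEncryptionSetId
$vmConfig = Set-AzVMOSDisk -VM $vmConfig -CreateOption FromImage -DiskEncryptionSetId $diskEncryptionSet.Id
$vmConfig = Add-AzVMDataDisk -VM $vmConfig -Name "$($VMName)_data1" -DiskSizeInGB 128 -CreateOption Empty -Lun 0 -DiskEncryptionSetId $diskEncryptionSet.Id

New-AzVM -ResourceGroupName $ResourceGroupName -Location $LocationName -VM $vmConfig
```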
Create an empty disk encrypted using server-side encryption with customer-managed keys and attach it to a
VM
Copy the script, replace all of the example values with your own parameters, and then run it.
$vmName = "yourVMName"
$LocationName = "westcentralus"
$ResourceGroupName = "yourResourceGroupName"
$diskName = "yourDiskName"
$diskSKU = "Premium_LRS"
$diskSizeinGiB = 30
$diskLUN = 1
$diskEncryptionSetName="yourDiskEncryptionSetName"
$vm = Add-AzVMDataDisk -VM $vm -Name $diskName -CreateOption Empty -DiskSizeInGB $diskSizeinGiB -StorageAccountType $diskSKU -Lun $diskLUN -DiskEncryptionSetId $diskEncryptionSet.Id
$rgName = "yourResourceGroupName"
$diskName = "yourDiskName"
$diskEncryptionSetName = "yourDiskEncryptionSetName"
Create a virtual machine scale set using a Marketplace image, encrypting the OS and data disks with customer-
managed keys
Copy the script, replace all of the example values with your own parameters, and then run it.
$VMLocalAdminUser = "yourLocalAdminUser"
$VMLocalAdminSecurePassword = ConvertTo-SecureString Password@123 -AsPlainText -Force
$LocationName = "westcentralus"
$ResourceGroupName = "yourResourceGroupName"
$ComputerNamePrefix = "yourComputerNamePrefix"
$VMScaleSetName = "yourVMSSName"
$VMSize = "Standard_DS3_v2"
$diskEncryptionSetName="yourDiskEncryptionSetName"
$NetworkName = "yourVNETName"
$SubnetName = "yourSubnetName"
$SubnetAddressPrefix = "10.0.0.0/24"
$VnetAddressPrefix = "10.0.0.0/16"
# Enable encryption at rest with customer managed keys for OS disk by setting DiskEncryptionSetId property
# Add a data disk encrypted at rest with customer managed keys by setting DiskEncryptionSetId property
Change the key of a DiskEncryptionSet to rotate the key for all the resources referencing the DiskEncryptionSet
Copy the script, replace all of the example values with your own parameters, and then run it.
$ResourceGroupName="yourResourceGroupName"
$keyVaultName="yourKeyVaultName"
$keyName="yourKeyName"
$diskEncryptionSetName="yourDiskEncryptionSetName"
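The rotation command itself is absent from this copy. With current Az cmdlets the update would plausibly look like the following; the key-retrieval step and parameter names are assumptions:

```powershell
# Look up the vault and the new key version to rotate to
$keyVault = Get-AzKeyVault -VaultName $keyVaultName -ResourceGroupName $ResourceGroupName
$key = Get-AzKeyVaultKey -VaultName $keyVaultName -Name $keyName

# Point the DiskEncryptionSet at the new key; resources referencing the
# DiskEncryptionSet are re-encrypted with the new key automatically
Update-AzDiskEncryptionSet -Name $diskEncryptionSetName -ResourceGroupName $ResourceGroupName -SourceVaultId $keyVault.ResourceId -KeyUrl $key.Key.Kid
```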
IMPORTANT
Customer-managed keys rely on managed identities for Azure resources, a feature of Azure Active Directory (Azure AD).
When you configure customer-managed keys, a managed identity is automatically assigned to your resources under the
covers. If you subsequently move the subscription, resource group, or managed disk from one Azure AD directory to
another, the managed identity associated with the managed disks is not transferred to the new tenant, so customer-
managed keys may no longer work. For more information, see Transferring a subscription between Azure AD directories.
Next steps
Explore the Azure Resource Manager templates for creating encrypted disks with customer-managed keys
Replicate machines with customer-managed keys enabled disks
Set up disaster recovery of VMware VMs to Azure with PowerShell
Set up disaster recovery to Azure for Hyper-V VMs using PowerShell and Azure Resource Manager
Use the Azure portal to enable server-side
encryption with customer-managed keys for
managed disks
11/2/2020 • 5 minutes to read
Azure Disk Storage allows you to manage your own keys when using server-side encryption (SSE) for managed
disks, if you choose. For conceptual information on SSE with customer managed keys, as well as other managed
disk encryption types, see the Customer-managed keys section of our disk encryption article:
For Linux: Customer-managed keys
For Windows: Customer-managed keys
Restrictions
For now, customer-managed keys have the following restrictions:
If this feature is enabled for your disk, you cannot disable it. If you need to work around this, you must copy
all the data to an entirely different managed disk that isn't using customer-managed keys:
For Linux: Copy a managed disk
For Windows: Copy a managed disk
Only software and HSM RSA keys of sizes 2,048-bit, 3,072-bit, and 4,096-bit are supported; no other keys or
sizes.
HSM keys require the premium tier of Azure Key Vault.
Disks created from custom images that are encrypted using server-side encryption and customer-managed
keys must be encrypted using the same customer-managed keys and must be in the same subscription.
Snapshots created from disks that are encrypted with server-side encryption and customer-managed keys
must be encrypted with the same customer-managed keys.
All resources related to your customer-managed keys (Azure Key Vaults, disk encryption sets, VMs, disks, and
snapshots) must be in the same subscription and region.
Disks, snapshots, and images encrypted with customer-managed keys cannot move to another resource group
and subscription.
Managed disks currently or previously encrypted using Azure Disk Encryption cannot be encrypted using
customer-managed keys.
You can create only up to 50 disk encryption sets per region per subscription.
For information about using customer-managed keys with shared image galleries, see Preview: Use customer-
managed keys for encrypting images.
The following sections cover how to enable and use customer-managed keys for managed disks:
If you're setting up customer-managed keys for your disks for the first time, you must create the resources in a
particular order. First, create and set up an Azure Key Vault.
Set up your Azure Key Vault
1. Sign in to the Azure portal.
2. Search for and select Key Vaults.
IMPORTANT
Your Azure key vault, disk encryption set, VM, disks, and snapshots must all be in the same region and subscription
for deployment to succeed.
NOTE
When creating the Key Vault instance, you must enable soft delete and purge protection. Soft delete ensures that the
Key Vault holds a deleted key for a given retention period (90 day default). Purge protection ensures that a deleted
key cannot be permanently deleted until the retention period lapses. These settings protect you from losing data due
to accidental deletion. These settings are mandatory when using a Key Vault for encrypting managed disks.
3. Select your resource group, name your encryption set, and select the same region as your key vault.
4. For Encryption type, select Encryption at-rest with a customer-managed key.
NOTE
Once you create a disk encryption set with a particular encryption type, it cannot be changed. If you want to use a
different encryption type, you must create a new disk encryption set.
Two notifications should pop up and succeed. This allows you to use the disk encryption set with your key
vault.
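The setup above can also be sketched with the Azure CLI. The resource names below are hypothetical placeholders, and the commands assume you are signed in to a subscription with the Azure CLI:

```shell
rg=myResourceGroup; loc=westus2; kv=myKeyVaultName; des=myDiskEncryptionSet  # hypothetical names

# Create the key vault; purge protection must be enabled (soft delete is on by default)
az keyvault create -n $kv -g $rg -l $loc --enable-purge-protection true
az keyvault key create --vault-name $kv -n myKey --protection software

# Create the disk encryption set, pointing at the key
keyUrl=$(az keyvault key show --vault-name $kv -n myKey --query key.kid -o tsv)
az disk-encryption-set create -n $des -g $rg -l $loc --key-url $keyUrl --source-vault $kv

# Grant the encryption set's managed identity access to the key
desId=$(az disk-encryption-set show -n $des -g $rg --query identity.principalId -o tsv)
az keyvault set-policy -n $kv --object-id $desId --key-permissions wrapKey unwrapKey get
```

The final `set-policy` step corresponds to the portal notifications mentioned above: it grants the disk encryption set's system-assigned identity the key permissions it needs on your vault.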
Deploy a VM
Now that you've created and set up your key vault and the disk encryption set, you can deploy a VM that uses the
encryption. The VM deployment process is similar to the standard deployment process; the only differences are
that you need to deploy the VM in the same region as your other resources and opt to use a customer-managed
key.
1. Search for Virtual Machines and select + Add to create a VM.
2. On the Basic blade, select the same region as your disk encryption set and Azure Key Vault.
3. Fill in the other values on the Basic blade as you like.
4. On the Disks blade, select Encryption at rest with a customer-managed key.
5. Select your disk encryption set in the Disk encryption set drop-down.
6. Make the remaining selections as you like.
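As a rough Azure CLI equivalent of the portal steps above (hypothetical names; assumes a disk encryption set named myDiskEncryptionSet already exists in the same region and subscription):

```shell
# Create a VM whose OS disk is encrypted with the customer-managed key
az vm create \
  -g myResourceGroup -n myVM -l westus2 \
  --image UbuntuLTS \
  --os-disk-encryption-set myDiskEncryptionSet \
  --admin-username azureuser --generate-ssh-keys
```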
Enable on an existing disk
Caution
Enabling disk encryption on any disks attached to a VM will require that you stop the VM.
1. Navigate to a VM that is in the same region as one of your disk encryption sets.
2. Open the VM and select Stop .
3. After the VM has finished stopping, select Disks and then select the disk you want to encrypt.
4. Select Encryption, select Encryption at rest with a customer-managed key, and then select your
disk encryption set in the drop-down list.
5. Select Save.
6. Repeat this process for any other disks attached to the VM you'd like to encrypt.
7. When your disks finish switching over to customer-managed keys, if there are no other attached
disks you'd like to encrypt, you may start your VM.
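The same flow can be sketched with the Azure CLI (hypothetical names; the disk update is assumed to be run while the VM is deallocated):

```shell
# Stop the VM, switch the disk to the customer-managed key, then restart
az vm deallocate -g myResourceGroup -n myVM
az disk update -g myResourceGroup -n myOSDisk \
  --encryption-type EncryptionAtRestWithCustomerKey \
  --disk-encryption-set myDiskEncryptionSet
az vm start -g myResourceGroup -n myVM
```

Repeat the `az disk update` command for each attached disk you want to encrypt before starting the VM again.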
IMPORTANT
Customer-managed keys rely on managed identities for Azure resources, a feature of Azure Active Directory (Azure AD).
When you configure customer-managed keys, a managed identity is automatically assigned to your resources under the
covers. If you subsequently move the subscription, resource group, or managed disk from one Azure AD directory to
another, the managed identity associated with the managed disks is not transferred to the new tenant, so customer-
managed keys may no longer work. For more information, see Transferring a subscription between Azure AD directories.
Next steps
Explore the Azure Resource Manager templates for creating encrypted disks with customer-managed keys
What is Azure Key Vault?
Replicate machines with customer-managed keys enabled disks
Set up disaster recovery of VMware VMs to Azure with PowerShell
Set up disaster recovery to Azure for Hyper-V VMs using PowerShell and Azure Resource Manager
Use the Azure PowerShell module to enable double
encryption at rest for managed disks
11/6/2020 • 2 minutes to read
Azure Disk Storage supports double encryption at rest for managed disks. For conceptual information on double
encryption at rest, as well as other managed disk encryption types, see the Double encryption at rest section of our
disk encryption article.
Prerequisites
Install the latest Azure PowerShell version, and sign in to an Azure account using Connect-AzAccount.
Getting started
1. Create an instance of Azure Key Vault and encryption key.
When creating the Key Vault instance, you must enable soft delete and purge protection. Soft delete ensures
that the Key Vault holds a deleted key for a given retention period (90 day default). Purge protection ensures
that a deleted key cannot be permanently deleted until the retention period lapses. These settings protect
you from losing data due to accidental deletion. These settings are mandatory when using a Key Vault for
encrypting managed disks.
$ResourceGroupName="yourResourceGroupName"
$LocationName="westus2"
$keyVaultName="yourKeyVaultName"
$keyName="yourKeyName"
$keyDestination="Software"
$diskEncryptionSetName="yourDiskEncryptionSetName"

# Create the Key Vault instance; purge protection must be enabled (soft delete is on by default)
$keyVault = New-AzKeyVault -Name $keyVaultName -ResourceGroupName $ResourceGroupName -Location $LocationName -EnablePurgeProtection

# Create the encryption key in the vault
$key = Add-AzKeyVaultKey -VaultName $keyVaultName -Name $keyName -Destination $keyDestination
Next steps
Now that you've created and configured these resources, you can use them to secure your managed disks. The
following links contain example scripts, each with a respective scenario, that you can use to secure your managed
disks.
Azure PowerShell - Enable customer-managed keys with server-side encryption - managed disks
Azure Resource Manager template samples
Use the Azure portal to enable double encryption at
rest for managed disks
11/6/2020 • 2 minutes to read
Azure Disk Storage supports double encryption at rest for managed disks. For conceptual information on double
encryption at rest, as well as other managed disk encryption types, see the Double encryption at rest section of our
disk encryption article.
Getting started
1. Sign in to the Azure portal.
IMPORTANT
You must use the provided link to access the Azure portal. Double encryption at rest is not currently visible in the
public Azure portal without using the link.
3. Select + Add.
NOTE
Once you create a disk encryption set with a particular encryption type, it cannot be changed. If you want to use a
different encryption type, you must create a new disk encryption set.
7. Select an Azure Key Vault and key, or create a new one if necessary.
NOTE
If you create a Key Vault instance, you must enable soft delete and purge protection. These settings are mandatory
when using a Key Vault for encrypting managed disks, and protect you from losing data due to accidental deletion.
8. Select Create.
9. Navigate to the disk encryption set you created, and select the error that is displayed. Doing so configures
your disk encryption set so it can be used.
A notification should pop up and succeed. This allows you to use the disk encryption set with your
key vault.
You have now enabled double encryption at rest on your managed disk.
Next steps
Azure PowerShell - Enable customer-managed keys with server-side encryption - managed disks
Azure Resource Manager template samples
Enable customer-managed keys with server-side encryption - Examples
Update the storage type of a managed disk
11/2/2020 • 3 minutes to read
Azure managed disks come in four disk types: ultra disks, premium SSD, standard SSD, and standard HDD. You
can switch between the three generally available (GA) disk types (premium SSD, standard SSD, and standard
HDD) based on your performance needs. You can't yet switch from or to an ultra disk; instead, you must deploy a
new one.
This functionality is not supported for unmanaged disks, but you can easily convert an unmanaged disk to a
managed disk to be able to switch between disk types.
Prerequisites
Because conversion requires a restart of the virtual machine (VM), you should schedule the migration of your
disk storage during a pre-existing maintenance window.
If your disk is unmanaged, first convert it to a managed disk so you can switch between storage options.
# For disks that belong to the selected VM, convert to Premium storage
foreach ($disk in $vmDisks)
{
if ($disk.ManagedBy -eq $vm.Id)
{
$disk.Sku = [Microsoft.Azure.Management.Compute.Models.DiskSku]::new($storageType)
$disk | Update-AzDisk
}
}
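If you prefer the Azure CLI over PowerShell, the conversion of a single VM's OS disk can be sketched as follows (hypothetical names; the VM must be deallocated before the SKU change):

```shell
az vm deallocate -g myResourceGroup -n myVM

# Look up the OS disk name and switch it to Premium SSD
osDisk=$(az vm show -g myResourceGroup -n myVM --query storageProfile.osDisk.name -o tsv)
az disk update -g myResourceGroup -n "$osDisk" --sku Premium_LRS

az vm start -g myResourceGroup -n myVM
```

Run the same `az disk update` command against each data disk to convert the whole VM.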
Next steps
Make a read-only copy of a VM by using a snapshot.
Migrate to Premium Storage by using Azure Site
Recovery
11/2/2020 • 11 minutes to read
Azure premium SSDs deliver high-performance, low-latency disk support for virtual machines (VMs) that are
running I/O-intensive workloads. This guide helps you migrate your VM disks from a standard storage account to a
premium storage account by using Azure Site Recovery.
Site Recovery is an Azure service that contributes to your strategy for business continuity and disaster recovery by
orchestrating the replication of on-premises physical servers and VMs to the cloud (Azure) or to a secondary
datacenter. When outages occur in your primary location, you fail over to the secondary location to keep
applications and workloads available. You fail back to your primary location when it returns to normal operation.
Site Recovery provides test failovers to support disaster recovery drills without affecting production environments.
You can run failovers with minimal data loss (depending on replication frequency) for unexpected disasters. In the
scenario of migrating to Premium Storage, you can use the failover in Site Recovery to migrate target disks to a
premium storage account.
We recommend migrating to Premium Storage by using Site Recovery because this option provides minimal
downtime. This option also avoids the manual execution of copying disks and creating new VMs. Site Recovery will
systematically copy your disks and create new VMs during failover.
Site Recovery supports a number of types of failover with minimal or no downtime. To plan your downtime and
estimate data loss, see the types of failover in Site Recovery. If you prepare to connect to Azure VMs after failover,
you should be able to connect to the Azure VM by using RDP after failover.
NOTE
Site Recovery does not support the migration of Storage Spaces disks.
Azure essentials
These are the Azure requirements for this migration scenario:
An Azure subscription.
An Azure premium storage account to store replicated data.
An Azure virtual network to which VMs will connect when they're created at failover. The Azure virtual network
must be in the same region as the one in which Site Recovery runs.
An Azure standard storage account to store replication logs. This can be the same storage account for the VM
disks that are being migrated.
Prerequisites
Understand the relevant migration scenario components in the preceding section.
Plan your downtime by learning about failover in Site Recovery.
NOTE
Backup and Site Recovery was formerly part of the OMS suite.
3. Specify a region that VMs will be replicated to. For the purpose of migration in the same region, select the
region where your source VMs and source storage accounts are.
Step 2: Choose your protection goals
1. On the VM where you want to install the configuration server, open the Azure portal.
2. Go to Recovery Services vaults > Settings > Site Recovery > Step 1: Prepare Infrastructure >
Protection goal.
3. Under Protection goal, in the first drop-down list, select To Azure. In the second drop-down list, select
Not virtualized / Other, and then select OK.
Step 3: Set up the source environment (configuration server)
1. Download Azure Site Recovery Unified Setup and the vault registration key by going to the Prepare
infrastructure > Prepare source > Add Server panes.
You will need the vault registration key to run the unified setup. The key is valid for five days after you
generate it.
c. In Environment Details, select whether you're going to replicate VMware VMs. For this migration
scenario, choose No.
4. After the installation is complete, do the following in the Microsoft Azure Site Recovery Configuration
Server window:
a. Use the Manage Accounts tab to create the account that Site Recovery can use for automatic
discovery. (In the scenario about protecting physical machines, setting up the account isn't relevant,
but you need at least one account to enable one of the following steps. In this case, you can use any
account name and password.)
b. Use the Vault Registration tab to upload the vault credential file.
NOTE
If you're using a premium storage account for replicated data, you need to set up an additional standard storage account to
store replication logs.
The failed-over VM will have two temporary disks: one from the primary VM and the other created during
the provisioning of the VM in the recovery region. To exclude the temporary disk before replication, install
the mobility service before you enable replication. To learn more about how to exclude the temporary disk,
see Exclude disks from replication.
2. Enable replication as follows:
a. Select Replicate Application > Source . After you've enabled replication for the first time, select
+Replicate in the vault to enable replication for additional machines.
b. In step 1, set up Source as your process server.
c. In step 2, specify the post-failover deployment model, a premium storage account to migrate to, a
standard storage account to save logs, and a virtual network to fail over to.
d. In step 3, add protected VMs by IP address. (You might need an internal IP address to find them.)
e. In step 4, configure the properties by selecting the accounts that you set up previously on the process
server.
f. In step 5, choose the replication policy that you created previously in "Step 5: Set up replication settings."
g. Select OK .
NOTE
When an Azure VM is deallocated and started again, there is no guarantee that it will get the same IP address. If the
IP address of the configuration server/process server or the protected Azure VMs changes, the replication in this
scenario might not work correctly.
When you design your Azure Storage environment, we recommend that you use a separate storage account for
each VM in an availability set. This follows the storage-layer best practice of using multiple storage accounts for
each availability set: distributing VM disks across multiple storage accounts helps to improve storage availability
and distributes the I/O across the Azure Storage infrastructure.
If your VMs are in an availability set, instead of replicating the disks of all VMs into one storage account, we
highly recommend migrating the VMs in multiple batches. That way, the VMs in the same availability set do not
share a single storage account. Use the Enable Replication pane to set up a destination storage account for each
VM, one at a time.
You can choose a post-failover deployment model according to your need. If you choose Azure Resource Manager
as your post-failover deployment model, you can fail over a VM (Resource Manager) to a VM (Resource Manager),
or you can fail over a VM (classic) to a VM (Resource Manager).
Step 8: Run a test failover
To check whether your replication is complete, select your Site Recovery instance and then select Settings >
Replicated Items . You will see the status and percentage of your replication process.
After initial replication is complete, run a test failover to validate your replication strategy. For detailed steps of a
test failover, see Run a test failover in Site Recovery.
NOTE
Before you run any failover, make sure that your VMs and replication strategy meet the requirements. For more information
about running a test failover, see Test failover to Azure in Site Recovery.
You can see the status of your test failover in Settings > Jobs > YOUR_FAILOVER_PLAN_NAME. In the pane, you
can see a breakdown of the steps and success/failure results. If the test failover fails at any step, select the step to
check the error message.
Step 9: Run a failover
After the test failover is completed, run a failover to migrate your disks to Premium Storage and replicate the VM
instances. Follow the detailed steps in Run a failover.
Be sure to select Shut down VMs and synchronize the latest data . This option specifies that Site Recovery
should try to shut down the protected VMs and synchronize the data so that the latest version of the data will be
failed over. If you don't select this option or the attempt doesn't succeed, the failover will be from the latest available
recovery point for the VM.
Site Recovery will create a VM instance whose type is the same as or similar to a Premium Storage-capable VM. You
can check the performance and price of various VM instances by going to Windows Virtual Machines Pricing or
Linux Virtual Machines Pricing.
Post-migration steps
1. Add replicated VMs to the availability set, if applicable. Site Recovery does not support
migrating VMs along with their availability set. Depending on the deployment model of your replicated VM,
do one of the following:
For a VM created through the classic deployment model: Add the VM to the availability set in the Azure
portal. For detailed steps, go to Add an existing virtual machine to an availability set.
For a VM created through the Resource Manager deployment model: Save your configuration of the VM
and then delete and re-create the VMs in the availability set. To do so, use the script at Set Azure Resource
Manager VM Availability Set. Before you run this script, check its limitations and plan your downtime.
2. Delete old VMs and disks. Make sure that the premium disks are consistent with the source disks and that
the new VMs perform the same function as the source VMs. Delete the VM and delete the disks from your
source storage accounts in the Azure portal. If there's a problem in which the disk is not deleted even though
you deleted the VM, see Troubleshoot storage resource deletion errors.
3. Clean up the Azure Site Recovery infrastructure. If Site Recovery is no longer needed, you can clean up its
infrastructure. Delete replicated items, the configuration server, and the recovery policy, and then delete the
Azure Site Recovery vault.
Troubleshooting
Monitor and troubleshoot protection for virtual machines and physical servers
Microsoft Q&A question page for Microsoft Azure Site Recovery
Next steps
For specific scenarios for migrating virtual machines, see the following resources:
Migrate Azure Virtual Machines between Storage Accounts
Create and upload a Windows Server VHD to Azure
Migrating Virtual Machines from Amazon AWS to Microsoft Azure
Also, see the following resources to learn more about Azure Storage and Azure Virtual Machines:
Azure Storage
Azure Virtual Machines
Migrate Azure VMs to Managed Disks in Azure
11/2/2020 • 2 minutes to read
Azure Managed Disks simplifies your storage management by removing the need to separately manage storage
accounts. You can also migrate your existing Azure VMs to Managed Disks to benefit from better reliability for
VMs in an Availability Set. Managed Disks ensures that the disks of different VMs in an Availability Set are
sufficiently isolated from each other to avoid single points of failure: it automatically places the disks of different
VMs in an Availability Set in different storage scale units (stamps), which limits the impact of a single storage
scale unit failure caused by hardware or software faults. Based on your needs, you can choose from four types of
storage options. To learn about the available disk types, see our article Select a disk type.
Migration scenarios
You can migrate to Managed Disks in following scenarios:
SCENARIO | ARTICLE
Convert standalone VMs and VMs in an availability set to managed disks | Convert VMs to use managed disks
Convert a single VM from classic to Resource Manager on managed disks | Create a VM from a classic VHD
Convert all the VMs in a vNet from classic to Resource Manager on managed disks | Migrate IaaS resources from classic to Resource Manager, and then Convert a VM from unmanaged disks to managed disks
Upgrade VMs with standard unmanaged disks to VMs with managed premium disks | First, Convert a Windows virtual machine from unmanaged disks to managed disks. Then, Update the storage type of a managed disk.
IMPORTANT
Classic VMs will be retired on March 1, 2023.
If you use IaaS resources from ASM, please complete your migration by March 1, 2023. We encourage you to make the
switch sooner to take advantage of the many feature enhancements in Azure Resource Manager.
For more information, see Migrate your IaaS resources to Azure Resource Manager by March 1, 2023.
Next steps
Learn more about Managed Disks
Review the pricing for Managed Disks.
Convert a Windows virtual machine from
unmanaged disks to managed disks
11/2/2020 • 4 minutes to read
If you have existing Windows virtual machines (VMs) that use unmanaged disks, you can convert the VMs to use
managed disks through the Azure Managed Disks service. This process converts both the OS disk and any
attached data disks.
$rgName = "myResourceGroup"
$vmName = "myVM"
Stop-AzVM -ResourceGroupName $rgName -Name $vmName -Force
2. Convert the VM to managed disks by using the ConvertTo-AzVMManagedDisk cmdlet. The following
command converts the previous VM, including the OS disk and any data disks, and starts the virtual
machine:
ConvertTo-AzVMManagedDisk -ResourceGroupName $rgName -VMName $vmName
$rgName = 'myResourceGroup'
$avSetName = 'myAvailabilitySet'
If the region where your availability set is located has only 2 managed fault domains but the number of
unmanaged fault domains is 3, this command shows an error similar to "The specified fault domain count
3 must fall in the range 1 to 2." To resolve the error, update the fault domain to 2 and update Sku to
Aligned as follows:
$avSet.PlatformFaultDomainCount = 2
Update-AzAvailabilitySet -AvailabilitySet $avSet -Sku Aligned
2. Deallocate and convert the VMs in the availability set. The following script deallocates each VM by using
the Stop-AzVM cmdlet, converts it by using ConvertTo-AzVMManagedDisk, and restarts it automatically as
a part of the conversion process:
foreach($vmInfo in $avSet.VirtualMachinesReferences)
{
$vm = Get-AzVM -ResourceGroupName $rgName | Where-Object {$_.Id -eq $vmInfo.id}
Stop-AzVM -ResourceGroupName $rgName -Name $vm.Name -Force
ConvertTo-AzVMManagedDisk -ResourceGroupName $rgName -VMName $vm.Name
}
Troubleshooting
If there is an error during conversion, or if a VM is in a failed state because of issues in a previous conversion, run
the ConvertTo-AzVMManagedDisk cmdlet again; a simple retry usually unblocks the situation. Before converting,
make sure all the VM extensions are in the 'Provisioning succeeded' state, or the conversion will fail with error
code 409.
Next steps
Convert standard managed disks to premium
Take a read-only copy of a VM by using snapshots.
Enable Write Accelerator
11/2/2020 • 8 minutes to read
Write Accelerator is a disk capability available exclusively for M-series Virtual Machines (VMs) on Premium
Storage with Azure Managed Disks. As the name states, the purpose of the functionality is to improve the I/O
latency of writes against Azure Premium Storage. Write Accelerator is ideally suited for scenarios where log file
updates must persist to disk with very low latency, as required by modern databases.
Write Accelerator is generally available for M-series VMs in the Public Cloud.
IMPORTANT
Enabling Write Accelerator for the operating system disk of the VM will reboot the VM.
To enable Write Accelerator on an existing Azure disk that is NOT part of a volume built out of multiple disks with Windows
disk or volume managers, Windows Storage Spaces, Windows Scale-out file server (SOFS), Linux LVM, or MDADM, the
workload accessing the Azure disk needs to be shut down. Database applications using the Azure disk MUST be shut down.
If you want to enable or disable Write Accelerator for an existing volume that is built out of multiple Azure Premium Storage
disks and striped using Windows disk or volume managers, Windows Storage Spaces, Windows Scale-out file server (SOFS),
Linux LVM or MDADM, all disks building the volume must be enabled or disabled for Write Accelerator in separate steps.
Before enabling or disabling Write Accelerator in such a configuration, shut down the Azure VM .
Enabling Write Accelerator for OS disks should not be necessary for SAP-related VM configurations.
Restrictions when using Write Accelerator
When using Write Accelerator for an Azure disk/VHD, these restrictions apply:
The Premium disk caching must be set to 'None' or 'Read Only'; no other caching modes are supported.
Snapshots are not currently supported for Write Accelerator-enabled disks. During backup, the Azure Backup
service automatically excludes Write Accelerator-enabled disks attached to the VM.
Only smaller I/O sizes (<=512 KiB) take the accelerated path. In workload situations where data is being bulk
loaded, or where the transaction log buffers of the different DBMSs are filled to a larger degree before being
persisted to storage, chances are that the I/O written to disk is not taking the accelerated path.
There are limits on the number of Azure Premium Storage VHDs per VM that can be supported by Write
Accelerator. The current limits are:
The IOPS limits are per VM and not per disk. All Write Accelerator disks share the same IOPS limit per VM.
Attached disks cannot exceed the Write Accelerator IOPS limit for a VM. For example, even though the attached
disks can do 30,000 IOPS, the system does not allow the disks to go above 20,000 IOPS for M416ms_v2.
To attach a disk with Write Accelerator enabled, use az vm disk attach; you may use the following example if you
substitute in your own values: az vm disk attach -g group1 --vm-name vm1 --disk d1 --enable-write-accelerator
To disable Write Accelerator, use az vm update, setting the properties to false:
az vm update -g group1 -n vm1 --write-accelerator 0=false 1=false
Now you can install armclient by running the following command in either cmd.exe or PowerShell:
choco install armclient
Replace the terms within '<< >>' with your data, including the file name that the JSON file should have.
The output could look like:
{
"properties": {
"vmId": "2444c93e-f8bb-4a20-af2d-1658d9dbbbcb",
"hardwareProfile": {
"vmSize": "Standard_M64s"
},
"storageProfile": {
"imageReference": {
"publisher": "SUSE",
"offer": "SLES-SAP",
"sku": "12-SP3",
"version": "latest"
},
"osDisk": {
"osType": "Linux",
"name": "mylittlesap_OsDisk_1_754a1b8bb390468e9b4c429b81cc5f5a",
"createOption": "FromImage",
"caching": "ReadWrite",
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/di
sks/mylittlesap_OsDisk_1_754a1b8bb390468e9b4c429b81cc5f5a"
},
"diskSizeGB": 30
},
"dataDisks": [
{
"lun": 0,
"name": "data1",
"createOption": "Attach",
"caching": "None",
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/di
sks/data1"
},
"diskSizeGB": 1023
},
{
"lun": 1,
"name": "log1",
"createOption": "Attach",
"caching": "None",
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/di
sks/data2"
},
"diskSizeGB": 1023
}
]
},
"osProfile": {
"computerName": "mylittlesapVM",
"adminUsername": "pl",
"linuxConfiguration": {
"disablePasswordAuthentication": false
},
"secrets": []
},
"networkProfile": {
"networkInterfaces": [
{
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Network/ne
tworkInterfaces/mylittlesap518"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "https://mylittlesapdiag895.blob.core.windows.net/"
}
},
"provisioningState": "Succeeded"
},
"type": "Microsoft.Compute/virtualMachines",
"location": "westeurope",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/vi
rtualMachines/mylittlesapVM",
"name": "mylittlesapVM"
Next, update the JSON file to enable Write Accelerator on the disk called 'log1'. You can do this by adding the
following attribute into the JSON file after the caching entry of the disk.
{
"lun": 1,
"name": "log1",
"createOption": "Attach",
"caching": "None",
"writeAcceleratorEnabled": true,
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/di
sks/data2"
},
"diskSizeGB": 1023
}
The output should look like the example below. You can see that Write Accelerator is enabled for one disk.
{
"properties": {
"vmId": "2444c93e-f8bb-4a20-af2d-1658d9dbbbcb",
"hardwareProfile": {
"vmSize": "Standard_M64s"
},
"storageProfile": {
"imageReference": {
"publisher": "SUSE",
"offer": "SLES-SAP",
"sku": "12-SP3",
"version": "latest"
},
"osDisk": {
"osType": "Linux",
"name": "mylittlesap_OsDisk_1_754a1b8bb390468e9b4c429b81cc5f5a",
"createOption": "FromImage",
"caching": "ReadWrite",
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/di
sks/mylittlesap_OsDisk_1_754a1b8bb390468e9b4c429b81cc5f5a"
},
"diskSizeGB": 30
},
"dataDisks": [
{
"lun": 0,
"name": "data1",
"createOption": "Attach",
"caching": "None",
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/di
sks/data1"
},
"diskSizeGB": 1023
},
{
"lun": 1,
"name": "log1",
"createOption": "Attach",
"caching": "None",
"writeAcceleratorEnabled": true,
"managedDisk": {
"storageAccountType": "Premium_LRS",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/di
sks/data2"
},
"diskSizeGB": 1023
}
]
},
"osProfile": {
"computerName": "mylittlesapVM",
"adminUsername": "pl",
"linuxConfiguration": {
"disablePasswordAuthentication": false
},
"secrets": []
},
"networkProfile": {
"networkInterfaces": [
{
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Network/ne
tworkInterfaces/mylittlesap518"
}
]
},
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "https://mylittlesapdiag895.blob.core.windows.net/"
}
},
"provisioningState": "Succeeded"
},
"type": "Microsoft.Compute/virtualMachines",
"location": "westeurope",
"id":
"/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/mylittlesap/providers/Microsoft.Compute/vi
rtualMachines/mylittlesapVM",
"name": "mylittlesapVM"
Once you've made this change, the drive should be supported by Write Accelerator.
Using Azure ultra disks
11/2/2020 • 12 minutes to read
This article explains how to deploy and use an ultra disk. For conceptual information about ultra disks, refer to
What disk types are available in Azure?.
Azure ultra disks offer high throughput, high IOPS, and consistent low latency disk storage for Azure IaaS virtual
machines (VMs). This new offering provides top of the line performance at the same availability levels as our
existing disks offerings. One major benefit of ultra disks is the ability to dynamically change the performance of
the SSD along with your workloads without the need to restart your VMs. Ultra disks are suited for data-intensive
workloads such as SAP HANA, top tier databases, and transaction-heavy workloads.
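For example, the provisioned performance of an existing ultra disk can be adjusted in place with the Azure CLI (hypothetical names and values; the new targets must stay within the limits for the disk's size):

```shell
# Raise the disk's provisioned IOPS and throughput without detaching it or restarting the VM
az disk update -g myResourceGroup -n myUltraDisk \
  --disk-iops-read-write 80000 \
  --disk-mbps-read-write 1200
```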
NOTE
If a region in the following list has no ultra disk capable availability zones, then VMs in that region must be deployed
without any infrastructure redundancy options in order to attach an ultra disk.
REGIONS | REDUNDANCY OPTIONS
Brazil South | Single VMs only (Availability sets and virtual machine scale sets are not supported)
Central India | Single VMs only (Availability sets and virtual machine scale sets are not supported)
East Asia | Single VMs only (Availability sets and virtual machine scale sets are not supported)
Germany West Central | Single VMs only (Availability sets and virtual machine scale sets are not supported)
Korea Central | Single VMs only (Availability sets and virtual machine scale sets are not supported)
South Central US | Single VMs only (Availability sets and virtual machine scale sets are not supported)
US Gov Arizona | Single VMs only (Availability sets and virtual machine scale sets are not supported)
US Gov Virginia | Single VMs only (Availability sets and virtual machine scale sets are not supported)
West US | Single VMs only (Availability sets and virtual machine scale sets are not supported)
Australia Central | Single VMs only (Availability sets and virtual machine scale sets are not supported)
* Contact Azure Support to get access to Availability Zones for this region.
Ultra disks:
Are only supported on the following VM series:
ESv3
Easv4
Edsv4
Esv4
DSv3
Dasv4
Ddsv4
Dsv4
FSv2
LSv2
M
Mv2
Not every VM size is available in every supported region with ultra disks.
Are only available as data disks.
Support a 4k physical sector size by default. A 512E sector size is available as a generally available offering,
but you must sign up for it. Most applications are compatible with 4k sector sizes, but some require 512-byte
sector sizes. One example is Oracle Database, which requires release 12.2 or later in order to support
4k native disks. For older versions of Oracle DB, a 512-byte sector size is required.
Can only be created as empty disks.
Don't currently support disk snapshots, VM images, availability sets, Azure Dedicated Hosts, or Azure disk
encryption.
Don't currently support integration with Azure Backup or Azure Site Recovery.
Only support uncached reads and uncached writes.
Have a current maximum limit of 80,000 IOPS on GA VMs.
Azure ultra disks offer up to 16 TiB per region per subscription by default, but ultra disks support higher capacity
by request. To request an increase in capacity, contact Azure Support.
subscription="<yourSubID>"
# example value is southeastasia
region="<yourLocation>"
# example value is Standard_E64s_v3
vmSize="<yourVMSize>"
# One possible query for the zone details (an assumption; verify against the current CLI reference):
az vm list-skus --resource-type virtualMachines --location $region --query "[?name=='$vmSize'].locationInfo[0].zoneDetails" --subscription $subscription
PowerShell
$region = "southeastasia"
$vmSize = "Standard_E64s_v3"
$sku = (Get-AzComputeResourceSku | where {$_.Locations.Contains($region) -and ($_.Name -eq $vmSize) -and
$_.LocationInfo[0].ZoneDetails.Count -gt 0})
if($sku){$sku[0].LocationInfo[0].ZoneDetails} Else {Write-host "$vmSize is not supported with Ultra Disk in
$region region"}
The response will be similar to the form below, where X is the zone to use for deploying in your chosen region. X
can be 1, 2, or 3.
Preserve the Zones value; it represents your availability zone, and you will need it in order to deploy an ultra disk.
RESOURCETYPE   NAME   LOCATION   ZONES   RESTRICTION   CAPABILITY   VALUE
NOTE
If there was no response from the command, then the selected VM size is not supported with ultra disks in the selected
region.
Now that you know which zone to deploy to, follow the deployment steps in this article to either deploy a VM with
an ultra disk attached or attach an ultra disk to an existing VM.
VMs with no redundancy options
Ultra disks deployed in select regions must, for now, be deployed without any redundancy options. However, not
every VM size that supports ultra disks is available in these regions. To determine which VM sizes support ultra
disks, you can use either of the following code snippets. Make sure to replace the vmSize and subscription values first:
subscription="<yourSubID>"
region="westus"
# example value is Standard_E64s_v3
vmSize="<yourVMSize>"
# One possible query for the size's capabilities (an assumption; verify against the current CLI reference):
az vm list-skus --resource-type virtualMachines --location $region --query "[?name=='$vmSize'].capabilities" --subscription $subscription
$region = "westus"
$vmSize = "Standard_E64s_v3"
(Get-AzComputeResourceSku | where {$_.Locations.Contains($region) -and ($_.Name -eq $vmSize) })
[0].Capabilities
The response will be similar to the following form; UltraSSDAvailable True indicates whether the VM size
supports ultra disks in this region.
Name Value
---- -----
MaxResourceVolumeMB 884736
OSVhdSizeMB 1047552
vCPUs 64
HyperVGenerations V1,V2
MemoryGB 432
MaxDataDiskCount 32
LowPriorityCapable True
PremiumIO True
VMDeploymentTypes IaaS
vCPUsAvailable 64
ACUs 160
vCPUsPerCore 2
CombinedTempDiskAndCachedIOPS 128000
CombinedTempDiskAndCachedReadBytesPerSecond 1073741824
CombinedTempDiskAndCachedWriteBytesPerSecond 1073741824
CachedDiskBytes 1717986918400
UncachedDiskIOPS 80000
UncachedDiskBytesPerSecond 1258291200
EphemeralOSDiskSupported True
AcceleratedNetworkingEnabled True
RdmaEnabled False
MaxNetworkInterfaces 8
UltraSSDAvailable True
On the Disks blade, select Yes for Enable Ultra Disk compatibility.
Select Create and attach a new disk to attach an ultra disk now.
On the Create a new disk blade, enter a name, then select Change size.
Select Save.
Select Add data disk, then in the dropdown for Name select Create disk.
Fill in a name for your new disk, then select Change size.
Change the Account type to Ultra Disk.
Change the values of Custom disk size (GiB), Disk IOPS, and Disk throughput to ones of your choice.
Next steps
Use Azure ultra disks on Azure Kubernetes Service (preview).
Migrate log disk to an ultra disk.
Benchmarking a disk
11/2/2020 • 8 minutes to read
Benchmarking is the process of simulating different workloads on your application and measuring the application
performance for each workload. Using the steps described in the designing for high performance article, you have
gathered the application performance requirements. By running benchmarking tools on the VMs hosting the
application, you can determine the performance levels that your application can achieve with Premium Storage. In
this article, we provide you examples of benchmarking a Standard DS14 VM provisioned with Azure Premium
Storage disks.
We have used the common benchmarking tools Iometer and FIO, for Windows and Linux respectively. These tools
spawn multiple threads to simulate a production-like workload and measure the system performance. Using the
tools, you can also configure parameters like block size and queue depth, which you normally cannot change for an
application. This gives you more flexibility to drive the maximum performance on a high-scale VM provisioned
with premium disks for different types of application workloads. To learn more about each benchmarking tool, visit
Iometer and FIO.
To follow the examples below, create a Standard DS14 VM and attach 11 Premium Storage disks to the VM. Of the
11 disks, configure 10 disks with host caching as "None" and stripe them into a volume called NoCacheWrites.
Configure host caching as "ReadOnly" on the remaining disk and create a volume called CacheReads with this disk.
Using this setup, you are able to see the maximum Read and Write performance from a Standard DS14 VM. For
detailed steps about creating a DS14 VM with premium SSDs, go to Designing for high performance.
Warming up the Cache
The disk with ReadOnly host caching is able to give higher IOPS than the disk limit. To get this maximum read
performance from the host cache, you must first warm up the cache of this disk. This ensures that the Read IOs
that the benchmarking tool drives on the CacheReads volume actually hit the cache, and not the disk directly. The
cache hits result in additional IOPS from the single cache-enabled disk.
IMPORTANT
You must warm up the cache before running benchmarking, every time the VM is rebooted.
Tools
Iometer
Download the Iometer tool on the VM.
Test file
Iometer uses a test file that is stored on the volume on which you run the benchmarking test. It drives Reads and
Writes on this test file to measure the disk IOPS and Throughput. Iometer creates this test file if you have not
provided one. Create a 200 GB test file called iobw.tst on the CacheReads and NoCacheWrites volumes.
Access specifications
The access specifications (request IO size, % read/write, % random/sequential) are configured using the "Access
Specifications" tab in Iometer. Create an access specification for each of the scenarios described below, and
"Save" each with an appropriate name like RandomWrites_8K or RandomReads_8K. Select the corresponding
specification when running the test scenario.
An example of access specifications for the maximum Write IOPS scenario is shown below.
Maximum IOPS test specifications
To demonstrate maximum IOPS, use a smaller request size. Use an 8K request size and create specifications for
Random Writes and Reads.
RandomWrites_8K 8K 100 0
RandomWrites_64K 64 K 100 0
RandomWrites_1MB 1 MB 100 0
3. Run the Iometer test to warm up the cache disk with the following parameters. Use three worker threads for
the target volume and a queue depth of 128. Set the "Run time" duration of the test to 2 hours on the "Test
Setup" tab.
After the cache disk is warmed up, proceed with the test scenarios listed below. To run the Iometer test, use at
least three worker threads for each target volume. For each worker thread, select the target volume, set the queue
depth, and select one of the saved test specifications, as shown in the table below, to run the corresponding test
scenario. The table also shows expected results for IOPS and Throughput when running these tests. For all
scenarios, a small IO size of 8 KB and a high queue depth of 128 are used.
NoCacheWrites RandomReads_8K
NoCacheWrites RandomReads_64K
Below are screenshots of the Iometer test results for combined IOPS and Throughput scenarios.
Combined reads and writes maximum IOPS
Combined reads and writes maximum throughput
FIO
FIO is a popular tool to benchmark storage on Linux VMs. It has the flexibility to select different IO sizes, and
sequential or random reads and writes. It spawns worker threads or processes to perform the specified I/O
operations. You can specify the type of I/O operations each worker thread must perform using job files. We created
one job file per scenario illustrated in the examples below. You can change the specifications in these job files to
benchmark different workloads running on Premium Storage. In the examples, we are using a Standard DS14 VM
running Ubuntu. Use the same setup described in the beginning of the Benchmarking section and warm up the
cache before running the benchmarking tests.
Before you begin, download FIO and install it on your virtual machine.
Run the following command for Ubuntu:
apt-get install fio
We use four worker threads for driving Write operations and four worker threads for driving Read operations on
the disks. The Write workers are driving traffic on the "nocache" volume, which has 10 disks with cache set to
"None". The Read workers are driving traffic on the "readcache" volume, which has one disk with cache set to
"ReadOnly".
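The "nocache" and "readcache" volumes described above can be prepared on Linux with software RAID. The sketch below is one way to do it, assuming the 10 uncached data disks appear as /dev/sdc through /dev/sdl and the single ReadOnly-cached disk as /dev/sdm; device names vary by VM, so check lsblk first.

```shell
# Stripe the 10 uncached disks into a RAID-0 array backing the "nocache" volume
sudo mdadm --create /dev/md0 --level=0 --raid-devices=10 /dev/sd[c-l]
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/nocache
sudo mount /dev/md0 /mnt/nocache

# The single ReadOnly-cached disk backs the "readcache" volume directly
sudo mkfs.ext4 /dev/sdm
sudo mkdir -p /mnt/readcache
sudo mount /dev/sdm /mnt/readcache
```

RAID-0 is used because the goal is aggregate IOPS and throughput, not redundancy; durability is provided by the underlying Azure disks.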
Maximum write IOPS
Create the job file with the following specifications to get maximum Write IOPS. Name it "fiowrite.ini".
[global]
size=30g
direct=1
iodepth=256
ioengine=libaio
bs=8k
[writer1]
rw=randwrite
directory=/mnt/nocache
[writer2]
rw=randwrite
directory=/mnt/nocache
[writer3]
rw=randwrite
directory=/mnt/nocache
[writer4]
rw=randwrite
directory=/mnt/nocache
Note the following key settings, which are in line with the design guidelines discussed in previous sections and
are essential to drive maximum IOPS:
A high queue depth of 256.
A small block size of 8 KB.
Multiple threads performing random writes.
Run the following command to kick off the FIO test for 30 seconds:
sudo fio --runtime 30 fiowrite.ini
While the test runs, you are able to see the number of write IOPS the VM and Premium disks are delivering. As
shown in the sample below, the DS14 VM is delivering its maximum write IOPS limit of 50,000 IOPS.
Maximum read IOPS
Create the job file with the following specifications to get maximum Read IOPS. Name it "fioread.ini".
[global]
size=30g
direct=1
iodepth=256
ioengine=libaio
bs=8k
[reader1]
rw=randread
directory=/mnt/readcache
[reader2]
rw=randread
directory=/mnt/readcache
[reader3]
rw=randread
directory=/mnt/readcache
[reader4]
rw=randread
directory=/mnt/readcache
Note the following key settings, which are in line with the design guidelines discussed in previous sections and
are essential to drive maximum IOPS:
A high queue depth of 256.
A small block size of 8 KB.
Multiple threads performing random reads.
Run the following command to kick off the FIO test for 30 seconds:
sudo fio --runtime 30 fioread.ini
While the test runs, you are able to see the number of read IOPS the VM and Premium disks are delivering. As
shown in the sample below, the DS14 VM is delivering more than 64,000 Read IOPS. This is a combination of the
disk and the cache performance.
Maximum combined read and write IOPS
Create the job file with the following specifications to get maximum combined Read and Write IOPS. Name it
"fioreadwrite.ini".
[global]
size=30g
direct=1
iodepth=128
ioengine=libaio
bs=4k
[reader1]
rw=randread
directory=/mnt/readcache
[reader2]
rw=randread
directory=/mnt/readcache
[reader3]
rw=randread
directory=/mnt/readcache
[reader4]
rw=randread
directory=/mnt/readcache
[writer1]
rw=randwrite
directory=/mnt/nocache
rate_iops=12500
[writer2]
rw=randwrite
directory=/mnt/nocache
rate_iops=12500
[writer3]
rw=randwrite
directory=/mnt/nocache
rate_iops=12500
[writer4]
rw=randwrite
directory=/mnt/nocache
rate_iops=12500
Note the following key settings, which are in line with the design guidelines discussed in previous sections and
are essential to drive maximum combined IOPS:
A high queue depth of 128.
A small block size of 4 KB.
Multiple threads performing random reads and writes.
Run the following command to kick off the FIO test for 30 seconds:
sudo fio --runtime 30 fioreadwrite.ini
While the test runs, you are able to see the number of combined read and write IOPS the VM and Premium disks
are delivering. As shown in the sample below, the DS14 VM is delivering more than 100,000 combined Read and
Write IOPS. This is a combination of the disk and the cache performance.
Next steps
Proceed to our article on designing for high performance.
In that article, you create a prototype similar to your existing application. Using benchmarking tools, you can
simulate the workloads and measure performance on the prototype application. By doing so, you can determine
which disk offering can match or surpass your application performance requirements. Then you can implement the
same guidelines for your production application.
Find and delete unattached Azure managed and
unmanaged disks
11/2/2020 • 3 minutes to read
When you delete a virtual machine (VM) in Azure, by default, any disks that are attached to the VM aren't deleted.
This feature helps to prevent data loss due to the unintentional deletion of VMs. After a VM is deleted, you will
continue to pay for unattached disks. This article shows you how to find and delete any unattached disks and
reduce unnecessary costs.
IMPORTANT
First, run the script by setting the deleteUnattachedDisks variable to 0. This action lets you find and view all the
unattached managed disks.
After you review all the unattached disks, run the script again and set the deleteUnattachedDisks variable to 1. This action
lets you delete all the unattached managed disks.
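The script itself is not reproduced in this copy. As a rough Azure CLI equivalent of the review step (a sketch, not the script the article refers to), unattached managed disks can be listed by their diskState property before anything is deleted:

```shell
# List managed disks whose diskState is Unattached; review this output first
az disk list \
    --query "[?diskState=='Unattached'].{Name:name, ResourceGroup:resourceGroup, SizeGb:diskSizeGb}" \
    --output table

# Only after reviewing, delete a specific disk explicitly:
# az disk delete --resource-group <resourceGroup> --name <diskName> --yes
```

Keeping the delete step commented out mirrors the two-pass (review, then delete) pattern the article describes.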
Next steps
For more information, see Delete a storage account and Identify Orphaned Disks Using PowerShell
Find and delete unattached Azure managed and
unmanaged disks - Azure portal
11/2/2020 • 2 minutes to read
When you delete a virtual machine (VM) in Azure, by default, any disks that are attached to the VM aren't deleted.
This helps to prevent data loss due to the unintentional deletion of VMs. After a VM is deleted, you will continue to
pay for unattached disks. This article shows you how to find and delete any unattached disks using the Azure
portal, and reduce unnecessary costs.
3. Select the unattached disk you'd like to delete; this opens the disk's blade.
4. On the disk's blade, confirm the disk state is unattached, then select Delete.
3. Select the unattached disk you'd like to delete; this brings up the disk's blade.
4. On the disk's blade, you can confirm it is unattached, since Attached to will still be -.
5. Select Delete.
Next steps
If you'd like an automated way of finding and deleting unattached storage accounts, see our CLI or PowerShell
articles.
For more information, see Delete a storage account and Identify Orphaned Disks Using PowerShell
Frequently asked questions about Azure IaaS VM
disks and managed and unmanaged premium disks
11/2/2020 • 22 minutes to read
This article answers some frequently asked questions about Azure Managed Disks and Azure Premium SSD disks.
Managed Disks
What is Azure Managed Disks?
Managed Disks is a feature that simplifies disk management for Azure IaaS VMs by handling storage account
management for you. For more information, see the Managed Disks overview.
If I create a standard managed disk from an existing VHD that's 80 GB, how much will that cost me?
A standard managed disk created from an 80-GB VHD is treated as the next available standard disk size, which is
an S10 disk. You're charged according to the S10 disk pricing. For more information, see the pricing page.
Are there any transaction costs for standard managed disks?
Yes. You're charged for each transaction. For more information, see the pricing page.
For a standard managed disk, will I be charged for the actual size of the data on the disk or for the
provisioned capacity of the disk?
You're charged based on the provisioned capacity of the disk. For more information, see the pricing page.
How is pricing of premium managed disks different from unmanaged disks?
The pricing of premium managed disks is the same as unmanaged premium disks.
Can I change the storage account type (Standard or Premium) of my managed disks?
Yes. You can change the storage account type of your managed disks by using the Azure portal, PowerShell, or the
Azure CLI.
Can I use a VHD file in an Azure storage account to create a managed disk with a different
subscription?
Yes.
Can I use a VHD file in an Azure storage account to create a managed disk in a different region?
No.
Are there any scale limitations for customers that use managed disks?
Managed Disks eliminates the limits associated with storage accounts. However, the maximum limit is 50,000
managed disks per region and per disk type for a subscription.
Can VMs in an availability set consist of a combination of managed and unmanaged disks?
No. The VMs in an availability set must use either all managed disks or all unmanaged disks. When you create an
availability set, you can choose which type of disks you want to use.
Is Managed Disks the default option in the Azure portal?
Yes.
Can I create an empty managed disk?
Yes. You can create an empty disk. A managed disk can be created independently of a VM, for example, without
attaching it to a VM.
What is the supported fault domain count for an availability set that uses Managed Disks?
Depending on the region where the availability set that uses Managed Disks is located, the supported fault domain
count is 2 or 3.
How is the standard storage account for diagnostics set up?
You set up a private storage account for VM diagnostics.
What kind of Role-Based Access Control support is available for Managed Disks?
Managed Disks supports three key default roles:
Owner: Can manage everything, including access
Contributor: Can manage everything except access
Reader: Can view everything, but can't make changes
Is there a way that I can copy or export a managed disk to a private storage account?
You can generate a read-only shared access signature (SAS) URI for the managed disk and use it to copy the
contents to a private storage account or on-premises storage. You can use the SAS URI with the Azure portal,
Azure PowerShell, the Azure CLI, or AzCopy.
Can I create a copy of my managed disk?
Customers can take a snapshot of their managed disks and then use the snapshot to create another managed disk.
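That snapshot-and-copy flow can be sketched with the Azure CLI; the resource group, snapshot, and disk names below are hypothetical placeholders:

```shell
# Take a snapshot of an existing managed disk
az snapshot create --resource-group myResourceGroup --name mySnapshot \
    --source myManagedDisk

# Create a new managed disk from that snapshot
az disk create --resource-group myResourceGroup --name myDiskCopy \
    --source mySnapshot
```

The --source parameter accepts either a resource name in the same resource group or a full resource ID.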
Are unmanaged disks still supported?
Yes, both unmanaged and managed disks are supported. We recommend that you use managed disks for new
workloads and migrate your current workloads to managed disks.
Can I colocate unmanaged and managed disks on the same VM?
No.
If I create a 128-GB disk and then increase the size to 130 gibibytes (GiB), will I be charged for the
next disk size (256 GiB)?
Yes.
Can I create locally redundant storage, geo-redundant storage, and zone-redundant storage
managed disks?
Azure Managed Disks currently supports only locally redundant storage managed disks.
Can I shrink or downsize my managed disks?
No. This feature is not supported currently.
Can I break a lease on my disk?
No. This is not supported currently as a lease is present to prevent accidental deletion when the disk is being used.
Can I change the computer name property when a specialized (not created by using the System
Preparation tool or generalized) operating system disk is used to provision a VM?
No. You can't update the computer name property. The new VM inherits it from the parent VM, which was used to
create the operating system disk.
Where can I find sample Azure Resource Manager templates to create VMs with managed disks?
List of templates using Managed Disks
https://github.com/chagarw/MDPP
When creating a disk from a blob, is there any continually existing relationship with that source
blob?
No, when the new disk is created it is a full standalone copy of that blob at that time and there is no connection
between the two. If you like, once you've created the disk, the source blob may be deleted without affecting the
newly created disk in any way.
Can I rename a managed or unmanaged disk after it has been created?
For managed disks you cannot rename them. However, you may rename an unmanaged disk as long as it is not
currently attached to a VHD or VM.
Can I use GPT partitioning on an Azure Disk?
Generation 1 images can only use GPT partitioning on data disks, not OS disks. OS disks must use the MBR
partition style.
Generation 2 images can use GPT partitioning on the OS disk as well as the data disks.
What disk types support snapshots?
Premium SSD, standard SSD, and standard HDD support snapshots. For these three disk types, snapshots are
supported for all disk sizes (including disks up to 32 TiB in size). Ultra disks do not support snapshots.
What are Azure disk reservations? Disk reservation is the option to purchase one year of disk storage in
advance, reducing your total cost. For details regarding Azure disk reservations, see our article on the subject:
Understand how your reservation discount is applied to Azure Disk.
What options does Azure disk reservation offer?
Azure disk reservation provides the option to purchase Premium SSDs in the specified SKUs from P30 (1 TiB) up to
P80 (32 TiB) for a one-year term. There is no limitation on the minimum amount of disks necessary to purchase a
disk reservation. Additionally, you can choose to pay with a single, upfront payment or monthly payments. There is
no additional transactional cost applied for Premium SSD Managed Disks.
Reservations are made in the form of disks, not capacity. In other words, when you reserve a P80 (32 TiB) disk, you
get a single P80 disk; you cannot divide that specific reservation into two smaller P70 (16 TiB) disks. You
can, of course, reserve as many or as few disks as you like, including two separate P70 (16 TiB) disks.
How is Azure disk reservation applied?
Disk reservation follows a model similar to reserved virtual machine (VM) instances. The difference is that a
disk reservation cannot be applied to different SKUs, while a VM instance can. See Save costs with Azure Reserved
VM Instances for more information on VM instances.
Can I use my data storage purchased through Azure disk reservation across multiple regions?
Azure disk reservations are purchased for a specific region and SKU (like P30 in East US 2), and therefore cannot
be used outside these constructs. You can always purchase an additional Azure disk reservation for your disk
storage needs in other regions or SKUs.
What happens when my Azure disk reservation expires?
You will receive email notifications 30 days prior to expiration and again on the expiration date. Once the
reservation expires, deployed disks will continue to run and will be billed with the latest pay-as-you-go rates.
Do Standard SSD Disks support "single instance VM SLA"?
Yes, all disk types support single instance VM SLA.
Azure shared disks
Is the shared disks feature supported for unmanaged disks or page blobs?
No, it is only supported for ultra disks and premium SSD managed disks.
What regions support shared disks?
For regional information, see our conceptual article.
Can shared disks be used as an OS disk?
No, shared disks are only supported for data disks.
What disk sizes support shared disks?
For supported sizes, see our conceptual article.
If I have an existing disk, can I enable shared disks on it?
All managed disks created with API version 2019-07-01 or later can enable shared disks. To do this, you need to
unmount the disk from all VMs that it is attached to. Next, edit the maxShares property on the disk.
If I no longer want to use a disk in shared mode, how do I disable it?
Unmount the disk from all VMs that it is attached to. Then edit the maxShares property on the disk to 1.
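As a sketch of that flow with the Azure CLI (the resource group and disk names here are hypothetical, and --max-shares is the CLI parameter corresponding to the maxShares property):

```shell
# Enable sharing: detach the disk from all VMs first, then raise maxShares
az disk update --resource-group myResourceGroup --name mySharedDisk --max-shares 2

# Disable sharing again: detach from all VMs, then set maxShares back to 1
az disk update --resource-group myResourceGroup --name mySharedDisk --max-shares 1
```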
Can you resize a shared disk?
Yes.
Can I enable write accelerator on a disk that also has shared disks enabled?
No.
Can I enable host caching for a disk that has shared disk enabled?
The only supported host caching option is None .
Ultra disks
What should I set my ultra disk throughput to? If you are unsure what to set your disk throughput to, we
recommend you start by assuming an IO size of 16 KiB and adjust the performance from there as you monitor
your application. The formula is: Throughput in MBps = # of IOPS * 16 / 1000.
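As a quick numeric sketch of that formula (the 40,000 IOPS figure is only an illustrative value):

```shell
# Throughput in MBps = # of IOPS * 16 / 1000, assuming a 16 KiB IO size
iops=40000
io_size_kib=16
throughput_mbps=$(( iops * io_size_kib / 1000 ))
echo "$throughput_mbps"   # → 640
```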
I configured my disk to 40,000 IOPS but I'm only seeing 12,800 IOPS. Why am I not seeing the
performance of the disk? In addition to the disk throttle, there is an IO throttle imposed at the VM
level. Ensure that the VM size you are using can support the levels that are configured on your disks. For details
regarding IO limits imposed by your VM, see Sizes for virtual machines in Azure.
Can I use caching levels with an ultra disk? No, ultra disks do not support the different caching methods that
are supported on other disk types. Set the disk caching to None .
Can I attach an ultra disk to my existing VM? Maybe; your VM has to be in a region and availability zone pair
that supports ultra disks. See getting started with ultra disks for details.
Can I use an ultra disk as the OS disk for my VM? No, ultra disks are only supported as data disks and are
only supported as 4K native disks.
Can I convert an existing disk to an ultra disk? No, but you can migrate the data from an existing disk to an
ultra disk. To migrate an existing disk to an ultra disk, attach both disks to the same VM, and copy the disk's data
from one disk to the other, or leverage a third-party solution for data migration.
Can I create snapshots for ultra disks? No, snapshots are not yet available.
Is Azure Backup available for ultra disks? No, Azure Backup support is not yet available.
Can I attach an ultra disk to a VM running in an availability set? No, this is not yet supported.
Can I enable Azure Site Recovery for VMs using ultra disks? No, Azure Site Recovery is not yet supported
for ultra disks.
If you want to create a virtual machine, you need to create a virtual network or know about an existing virtual
network in which the VM can be added. Typically, when you create a VM, you also need to consider creating the
resources described in this article.
See How to install and configure Azure PowerShell for information about installing the latest version of Azure
PowerShell, selecting your subscription, and signing in to your account.
Some variables might be useful for you if you run more than one of the commands in this article:
$location - The location of the network resources. You can use Get-AzLocation to find a geographical region that
works for you.
$myResourceGroup - The name of the resource group where the network resources are located.
Next Steps
Use the network interface that you just created when you create a VM.
Quickstart: Create a virtual network using the Azure
portal
11/2/2020 • 5 minutes to read
In this quickstart, you learn how to create a virtual network using the Azure portal. You deploy two virtual
machines (VMs). Next, you securely communicate between VMs and connect to VMs from the internet. A virtual
network is the fundamental building block for your private network in Azure. It enables Azure resources, like VMs,
to securely communicate with each other and with the internet.
Prerequisites
An Azure account with an active subscription. Create one for free.
Sign in to Azure
Sign in to the Azure portal.
3. Select Next: IP Addresses, and for IPv4 address space, enter 10.1.0.0/16.
4. Select Add subnet, then enter myVirtualSubnet for Subnet name and 10.1.0.0/24 for Subnet address
range.
5. Select Add, then select Review + create. Leave the rest as default and select Create.
6. In Create virtual network, select Create.
10. Select OK, then select Review + create. You're taken to the Review + create page where Azure validates
your configuration.
11. When you see the Validation passed message, select Create.
Create the second VM
Repeat the procedure in the previous section to create another virtual machine.
IMPORTANT
For the Virtual machine name, enter myVm2.
For Diagnostics storage account, make sure you select myvmstorageaccount, instead of creating one.
NOTE
You may need to select More choices > Use a different account , to specify the credentials you entered when
you created the VM.
6. Select OK .
7. You may receive a certificate warning when you sign in. If you receive a certificate warning, select Yes or
Continue .
8. Once the VM desktop appears, minimize it to go back to your local desktop.
Pinging myVm2.0v0zze1s0uiedpvtxz5z0r0cxg.bx.internal.clouda
Request timed out.
Request timed out.
Request timed out.
Request timed out.
The ping fails, because ping uses the Internet Control Message Protocol (ICMP). By default, ICMP isn't
allowed through the Windows firewall.
3. To allow myVm2 to ping myVm1 in a later step, enter this command:
New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
You receive replies from myVm1, because you allowed ICMP through the Windows firewall on the myVm1
VM in step 3.
7. Close the remote desktop connection to myVm2.
Clean up resources
In this quickstart, you created a default virtual network and two VMs. You connected to one VM from the internet
and securely communicated between the two VMs.
When you're done using the virtual network and the VMs, delete the resource group and all of the resources it
contains:
1. Search for and select myResourceGroup.
2. Select Delete resource group .
3. Enter myResourceGroup for TYPE THE RESOURCE GROUP NAME and select Delete .
Next steps
To learn more about virtual network settings, see Create, change, or delete a virtual network.
By default, Azure allows secure communication between VMs. Azure only allows inbound remote desktop
connections to Windows VMs from the internet. To learn more about types of VM network communications, see
Filter network traffic.
NOTE
Azure services cost money. Azure Cost Management helps you set budgets and configure alerts to keep spending under
control. Analyze, manage, and optimize your Azure costs with Cost Management. To learn more, see the quickstart on
analyzing your costs.
How to open ports to a virtual machine with the
Azure portal
11/2/2020 • 2 minutes to read
You open a port, or create an endpoint, to a virtual machine (VM) in Azure by creating a network filter on a subnet
or a VM network interface. You place these filters, which control both inbound and outbound traffic, on a network
security group attached to the resource that receives the traffic.
The example in this article demonstrates how to create a network filter that uses the standard TCP port 80 (it's
assumed you've already started the appropriate services and opened any OS firewall rules on the VM).
After you've created a VM that's configured to serve web requests on the standard TCP port 80, you can:
1. Create a network security group.
2. Create an inbound security rule allowing traffic and assign values to the following settings:
Destination port ranges: 80
Source port ranges: * (allows any source port)
Priority value: Enter a value that is less than 65,500 and higher in priority than the default catch-all
deny inbound rule.
3. Associate the network security group with the VM network interface or subnet.
Although this example uses a simple rule to allow HTTP traffic, you can also use network security groups and rules
to create more complex network configurations.
Sign in to Azure
Sign in to the Azure portal at https://portal.azure.com.
Additional information
You can also perform the steps in this article by using Azure PowerShell.
The commands described in this article allow you to quickly get traffic flowing to your VM. Network security
groups provide many great features and granularity for controlling access to your resources. For more
information, see Filter network traffic with a network security group.
For highly available web applications, consider placing your VMs behind an Azure load balancer. The load balancer
distributes traffic to VMs, with a network security group that provides traffic filtering. For more information, see
Load balance Windows virtual machines in Azure to create a highly available application.
Next steps
In this article, you created a network security group, created an inbound rule that allows HTTP traffic on port 80,
and then associated the network security group containing that rule with a subnet.
You can find information on creating more detailed environments in the following articles:
Azure Resource Manager overview
Security groups
How to open ports and endpoints to a VM using
PowerShell
11/2/2020 • 2 minutes to read • Edit Online
You open a port, or create an endpoint, to a virtual machine (VM) in Azure by creating a network filter on a subnet
or a VM network interface. You place these filters, which control both inbound and outbound traffic, on a network
security group attached to the resource that receives the traffic.
The example in this article demonstrates how to create a network filter that uses the standard TCP port 80 (it's
assumed you've already started the appropriate services and opened any OS firewall rules on the VM).
After you've created a VM that's configured to serve web requests on the standard TCP port 80, you can:
1. Create a network security group.
2. Create an inbound security rule allowing traffic and assign values to the following settings:
Destination port ranges: 80
Source port ranges: * (allows any source port)
Priority value: Enter a value that is less than 65,500 and higher in priority than the default catch-all
deny inbound rule.
3. Associate the network security group with the VM network interface or subnet.
Although this example uses a simple rule to allow HTTP traffic, you can also use network security groups and rules
to create more complex network configurations.
Quick commands
To create a Network Security Group and ACL rules, you need the latest version of Azure PowerShell installed. You
can also perform these steps using the Azure portal.
Log in to your Azure account:
Connect-AzAccount
In the following examples, replace parameter names with your own values. Example parameter names include
myResourceGroup, myNetworkSecurityGroup, and myVnet.
Create a rule with New-AzNetworkSecurityRuleConfig. The following example creates a rule named
myNetworkSecurityGroupRule to allow TCP traffic on port 80:
$httprule = New-AzNetworkSecurityRuleConfig `
-Name "myNetworkSecurityGroupRule" `
-Description "Allow HTTP" `
-Access "Allow" `
-Protocol "Tcp" `
-Direction "Inbound" `
-Priority "100" `
-SourceAddressPrefix "Internet" `
-SourcePortRange * `
-DestinationAddressPrefix * `
-DestinationPortRange 80
Next, create your Network Security group with New-AzNetworkSecurityGroup and assign the HTTP rule you just
created as follows. The following example creates a Network Security Group named myNetworkSecurityGroup:
$nsg = New-AzNetworkSecurityGroup `
-ResourceGroupName "myResourceGroup" `
-Location "EastUS" `
-Name "myNetworkSecurityGroup" `
-SecurityRules $httprule
Now let's assign your Network Security Group to a subnet. The following example assigns an existing virtual
network named myVnet to the variable $vnet with Get-AzVirtualNetwork:
$vnet = Get-AzVirtualNetwork `
-ResourceGroupName "myResourceGroup" `
-Name "myVnet"
Associate your Network Security Group with your subnet with Set-AzVirtualNetworkSubnetConfig. The following
example associates the subnet named mySubnet with your Network Security Group:
Set-AzVirtualNetworkSubnetConfig `
-VirtualNetwork $vnet `
-Name "mySubnet" `
-AddressPrefix $subnetPrefix.AddressPrefix `
-NetworkSecurityGroup $nsg
Finally, update your virtual network with Set-AzVirtualNetwork in order for your changes to take effect:
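The command for this step isn't shown in this copy; based on the cmdlet named above, a minimal sketch (reusing the $vnet variable from the earlier Get-AzVirtualNetwork call) would be:

```powershell
# Write the modified subnet configuration back to Azure so the NSG association takes effect
$vnet | Set-AzVirtualNetwork
```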
Next steps
In this example, you created a simple rule to allow HTTP traffic. You can find information on creating more detailed
environments in the following articles:
Azure Resource Manager overview
What is a network security group?
Azure Load Balancer Overview
Create a virtual machine with a static public IP
address using the Azure portal
11/2/2020 • 3 minutes to read • Edit Online
You can create a virtual machine with a static public IP address. A public IP address enables you to communicate to
a virtual machine from the internet. Assign a static public IP address, rather than a dynamic address, to ensure that
the address never changes. Learn more about static public IP addresses. To change a public IP address assigned to
an existing virtual machine from dynamic to static, or to work with private IP addresses, see Add, change, or
remove IP addresses. Public IP addresses have a nominal charge, and there is a limit to the number of public IP
addresses that you can use per subscription.
Sign in to Azure
Sign in to the Azure portal at https://portal.azure.com.
SETTING VALUE
Name myVM
WARNING
Do not modify the IP address settings within the virtual machine's operating system. The operating system is unaware of
Azure public IP addresses. Though you can add private IP address settings to the operating system, we recommend not
doing so unless necessary, and not until after reading Add a private IP address to an operating system.
Clean up resources
When no longer needed, delete the resource group and all of the resources it contains:
1. Enter myResourceGroup in the Search box at the top of the portal. When you see myResourceGroup in the
search results, select it.
2. Select Delete resource group.
3. Enter myResourceGroup for TYPE THE RESOURCE GROUP NAME: and select Delete.
Next steps
Learn more about public IP addresses in Azure
Learn more about all public IP address settings
Learn more about private IP addresses and assigning a static private IP address to an Azure virtual machine
Learn more about creating Linux and Windows virtual machines
Create and manage a Windows virtual machine that
has multiple NICs
11/2/2020 • 8 minutes to read • Edit Online
Virtual machines (VMs) in Azure can have multiple virtual network interface cards (NICs) attached to them. A
common scenario is to have different subnets for front-end and back-end connectivity. You can associate multiple
NICs on a VM to multiple subnets, but those subnets must all reside in the same virtual network (vNet). This article
details how to create a VM that has multiple NICs attached to it. You also learn how to add or remove NICs from an
existing VM. Different VM sizes support a varying number of NICs, so size your VM accordingly.
Prerequisites
In the following examples, replace example parameter names with your own values. Example parameter names
include myResourceGroup, myVnet, and myVM.
2. Create your virtual network and subnets with New-AzVirtualNetwork. The following example creates a
virtual network named myVnet:
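The command itself isn't shown here. A minimal sketch, with illustrative address prefixes and location, using the front-end/back-end subnet names that appear later in this article:

```powershell
# Address prefixes and the EastUS location are assumptions for illustration
$frontEnd = New-AzVirtualNetworkSubnetConfig -Name "mySubnetFrontEnd" -AddressPrefix "192.168.1.0/24"
$backEnd  = New-AzVirtualNetworkSubnetConfig -Name "mySubnetBackEnd" -AddressPrefix "192.168.2.0/24"
$vnet = New-AzVirtualNetwork -ResourceGroupName "myResourceGroup" -Location "EastUS" `
    -Name "myVnet" -AddressPrefix "192.168.0.0/16" -Subnet $frontEnd,$backEnd
```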
Typically you also create a network security group to filter network traffic to the VM and a load balancer to
distribute traffic across multiple VMs.
Create the virtual machine
Now start to build your VM configuration. Each VM size has a limit for the total number of NICs that you can add to
a VM. For more information, see Windows VM sizes.
1. Set your VM credentials to the $cred variable as follows:
$cred = Get-Credential
2. Define your VM with New-AzVMConfig. The following example defines a VM named myVM and uses a VM
size that supports more than two NICs (Standard_DS3_v2):
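The definition itself is omitted in this copy; a minimal sketch, with $vmConfig as an assumed variable name:

```powershell
# Define a VM sized for more than two NICs
$vmConfig = New-AzVMConfig -VMName "myVM" -VMSize "Standard_DS3_v2"
```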
3. Create the rest of your VM configuration with Set-AzVMOperatingSystem and Set-AzVMSourceImage. The
following example creates a Windows Server 2016 VM:
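A sketch of this step, assuming $vmConfig holds the configuration from the New-AzVMConfig step and $cred the credentials from step 1:

```powershell
# OS settings plus a Windows Server 2016 marketplace image
$vmConfig = Set-AzVMOperatingSystem -VM $vmConfig -Windows -ComputerName "myVM" -Credential $cred
$vmConfig = Set-AzVMSourceImage -VM $vmConfig -PublisherName "MicrosoftWindowsServer" `
    -Offer "WindowsServer" -Skus "2016-Datacenter" -Version "latest"
```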
4. Attach the two NICs that you previously created with Add-AzVMNetworkInterface:
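A sketch of attaching the NICs; the $nicFrontEnd and $nicBackEnd variable names are assumptions, since the NIC-creation step isn't reproduced here, and $vmConfig is assumed to hold the definition from the New-AzVMConfig step:

```powershell
# The first NIC attached is marked as primary
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nicFrontEnd.Id -Primary
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nicBackEnd.Id
```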
6. Add routes for secondary NICs to the OS by completing the steps in Configure the operating system for
multiple NICs.
Add a NIC to an existing VM
To add a virtual NIC to an existing VM, you deallocate the VM, add the virtual NIC, then start the VM. Different VM
sizes support a varying number of NICs, so size your VM accordingly. If needed, you can resize a VM.
1. Deallocate the VM with Stop-AzVM. The following example deallocates the VM named myVM in
myResourceGroup:
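For example:

```powershell
Stop-AzVM -Name "myVM" -ResourceGroupName "myResourceGroup"
```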
2. Get the existing configuration of the VM with Get-AzVm. The following example gets information for the VM
named myVM in myResourceGroup:
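For example:

```powershell
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
```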
3. The following example creates a virtual NIC with New-AzNetworkInterface named myNic3 that is attached
to mySubnetBackEnd. The virtual NIC is then attached to the VM named myVM in myResourceGroup with
Add-AzVMNetworkInterface:
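A sketch of this step; the subnet lookup assumes a $vnet variable for the virtual network, $vm holds the VM configuration retrieved with Get-AzVM, and the location is illustrative:

```powershell
# Create myNic3 in mySubnetBackEnd, attach it to the VM, then apply the change
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mySubnetBackEnd"
$nic = New-AzNetworkInterface -ResourceGroupName "myResourceGroup" -Name "myNic3" `
    -Location "EastUS" -SubnetId $subnet.Id
$vm = Add-AzVMNetworkInterface -VM $vm -Id $nic.Id
Update-AzVM -VM $vm -ResourceGroupName "myResourceGroup"
```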
5. Add routes for secondary NICs to the OS by completing the steps in Configure the operating system for
multiple NICs.
Remove a NIC from an existing VM
To remove a virtual NIC from an existing VM, you deallocate the VM, remove the virtual NIC, then start the VM.
1. Deallocate the VM with Stop-AzVM. The following example deallocates the VM named myVM in
myResourceGroup:
2. Get the existing configuration of the VM with Get-AzVm. The following example gets information for the VM
named myVM in myResourceGroup:
3. Get information about the NIC to remove with Get-AzNetworkInterface. The following example gets
information about myNic3:
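For example:

```powershell
$nicId = (Get-AzNetworkInterface -ResourceGroupName "myResourceGroup" -Name "myNic3").Id
```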
4. Remove the NIC with Remove-AzVMNetworkInterface and then update the VM with Update-AzVm. The
following example removes myNic3 as obtained by $nicId in the preceding step:
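A sketch of this step, assuming $vm holds the configuration retrieved in step 2:

```powershell
# Detach the NIC by ID, then push the updated VM model
Remove-AzVMNetworkInterface -VM $vm -NetworkInterfaceIDs $nicId |
    Update-AzVm -ResourceGroupName "myResourceGroup"
```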
"copy": {
"name": "multiplenics",
"count": "[parameters('count')]"
}
===========================================================================
Interface List
3...00 0d 3a 10 92 ce ......Microsoft Hyper-V Network Adapter #3
7...00 0d 3a 10 9b 2a ......Microsoft Hyper-V Network Adapter #4
===========================================================================
In this example, Microsoft Hyper-V Network Adapter #4 (interface 7) is the secondary network interface
that doesn't have a default gateway assigned to it.
2. From a command prompt, run the ipconfig command to see which IP address is assigned to the secondary
network interface. In this example, 192.168.2.4 is assigned to interface 7. No default gateway address is
returned for the secondary network interface.
3. To route all traffic destined for addresses outside the subnet of the secondary network interface to the
gateway for the subnet, run the following command:
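The command isn't reproduced in this copy; a sketch consistent with the route table output shown later in this procedure (gateway 192.168.2.1, interface 7):

```
route add -p 0.0.0.0 MASK 0.0.0.0 192.168.2.1 METRIC 5015 IF 7
```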
The gateway address for the subnet is the first IP address (ending in .1) in the address range defined for the
subnet. If you don't want to route all traffic outside the subnet, you could add individual routes to specific
destinations, instead. For example, if you only wanted to route traffic from the secondary network interface
to the 192.168.3.0 network, you would enter this command:
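A sketch, assuming a /24 mask for the 192.168.3.0 destination network:

```
route add -p 192.168.3.0 MASK 255.255.255.0 192.168.2.1 METRIC 5015 IF 7
```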
4. To confirm successful communication with a resource on the 192.168.3.0 network, for example, enter the
following command to ping 192.168.3.4 using interface 7 (192.168.2.4):
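For example, using ping's -S option to select the source address:

```
ping 192.168.3.4 -S 192.168.2.4
```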
You may need to open ICMP through the Windows firewall of the device you're pinging with the following
command:
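This is the same rule used earlier in this document:

```powershell
New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4
```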
5. To confirm the added route is in the route table, enter the route print command, which returns output
similar to the following text:
===========================================================================
Active Routes:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.4 15
0.0.0.0 0.0.0.0 192.168.2.1 192.168.2.4 5015
The route listed with 192.168.1.1 under Gateway is the default route for the primary network interface. The
route with 192.168.2.1 under Gateway is the route you added.
Next steps
Review Windows VM sizes when you're trying to create a VM that has multiple NICs. Pay attention to the maximum
number of NICs that each VM size supports.
Create a Windows VM with accelerated networking
using Azure PowerShell
11/2/2020 • 10 minutes to read • Edit Online
In this tutorial, you learn how to create a Windows virtual machine (VM) with accelerated networking.
NOTE
To use accelerated networking with a Linux virtual machine, see Create a Linux VM with accelerated networking.
Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking
performance. This high-performance path bypasses the host from the data path, which reduces latency, jitter, and
CPU utilization for the most demanding network workloads on supported VM types. The following diagram
illustrates how two VMs communicate with and without accelerated networking:
Without accelerated networking, all networking traffic in and out of the VM must traverse the host and the virtual
switch. The virtual switch provides all policy enforcement, such as network security groups, access control lists,
isolation, and other network virtualized services to network traffic.
NOTE
To learn more about virtual switches, see Hyper-V Virtual Switch.
With accelerated networking, network traffic arrives at the VM's network interface (NIC) and is then forwarded to
the VM. All network policies that the virtual switch applies are now offloaded and applied in hardware. Because
policy is applied in hardware, the NIC can forward network traffic directly to the VM. The NIC bypasses the host
and the virtual switch while maintaining all the policy that was applied in the host.
The benefits of accelerated networking only apply to the VM that it's enabled on. For the best results, enable this
feature on at least two VMs connected to the same Azure virtual network. When communicating across virtual
networks or connecting on-premises, this feature has minimal impact on overall latency.
Benefits
Lower Latency / Higher packets per second (pps) : Eliminating the virtual switch from the data path
removes the time packets spend in the host for policy processing. It also increases the number of packets
that can be processed inside the VM.
Reduced jitter : Virtual switch processing depends on the amount of policy that needs to be applied. It also
depends on the workload of the CPU that's doing the processing. Offloading the policy enforcement to the
hardware removes that variability by delivering packets directly to the VM. Offloading also removes the
host-to-VM communication, all software interrupts, and all context switches.
Decreased CPU utilization : Bypassing the virtual switch in the host leads to less CPU utilization for
processing network traffic.
After you create the VM, you can confirm whether accelerated networking is enabled. Follow these instructions:
1. Go to the Azure portal to manage your VMs. Search for and select Virtual machines.
2. In the virtual machine list, choose your new VM.
3. In the VM menu bar, choose Networking .
In the network interface information, next to the Accelerated networking label, the portal displays either
Disabled or Enabled for the accelerated networking status.
$subnet = New-AzVirtualNetworkSubnetConfig `
-Name "mySubnet" `
-AddressPrefix "192.168.1.0/24"
2. Create a network security group with New-AzNetworkSecurityGroup and assign the Allow-RDP-All security
rule to it. Aside from the Allow-RDP-All rule, the network security group contains several default rules. One
default rule disables all inbound access from the internet. Once it's created, the Allow-RDP-All rule is
assigned to the network security group so that you can remotely connect to the VM.
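The definition of the Allow-RDP-All rule that the $rdp variable refers to isn't shown in this copy; a sketch, with the priority value as an assumption:

```powershell
# Allow inbound TCP 3389 (RDP) from any source
$rdp = New-AzNetworkSecurityRuleConfig -Name "Allow-RDP-All" -Description "Allow RDP" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389
```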
$nsg = New-AzNetworkSecurityGroup `
-ResourceGroupName myResourceGroup `
-Location centralus `
-Name "myNsg" `
-SecurityRules $rdp
3. Associate the network security group to the mySubnet subnet with Set-AzVirtualNetworkSubnetConfig. The
rule in the network security group is effective for all resources deployed in the subnet.
Set-AzVirtualNetworkSubnetConfig `
-VirtualNetwork $vnet `
-Name 'mySubnet' `
-AddressPrefix "192.168.1.0/24" `
-NetworkSecurityGroup $nsg
$publicIp = New-AzPublicIpAddress `
-ResourceGroupName myResourceGroup `
-Name 'myPublicIp' `
-location centralus `
-AllocationMethod Dynamic
2. Create a network interface with New-AzNetworkInterface with accelerated networking enabled, and assign
the public IP address to the network interface. The following example creates a network interface named
myNic in the mySubnet subnet of the myVnet virtual network, assigning the myPublicIp public IP address to
it:
$nic = New-AzNetworkInterface `
-ResourceGroupName "myResourceGroup" `
-Name "myNic" `
-Location "centralus" `
-SubnetId $vnet.Subnets[0].Id `
-PublicIpAddressId $publicIp.Id `
-EnableAcceleratedNetworking
$cred = Get-Credential
2. Define your VM with New-AzVMConfig. The following command defines a VM named myVM with a VM size
that supports accelerated networking (Standard_DS4_v2):
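For example, with $vmConfig as an assumed variable name:

```powershell
# Define a VM sized to support accelerated networking
$vmConfig = New-AzVMConfig -VMName "myVM" -VMSize "Standard_DS4_v2"
```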
4. Attach the network interface that you previously created with Add-AzVMNetworkInterface:
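For example, assuming $vmConfig holds the VM definition from the New-AzVMConfig step and $nic the network interface created earlier:

```powershell
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id
```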
NOTE
If the Mellanox adapter fails to start, open an administrator prompt in the remote desktop session and enter the following
command:
netsh int tcp set global rss = enabled
NOTE
When you create a VM individually, without an availability set, you only need to stop or deallocate the individual VM
to enable accelerated networking. If your VM was created with an availability set, you must stop or deallocate all VMs
contained in the availability set before enabling accelerated networking on any of the NICs, so that the VMs end up
on a cluster that supports accelerated networking. The stop or deallocate requirement is unnecessary if you disable
accelerated networking, because clusters that support accelerated networking also work fine with NICs that don't use
accelerated networking.
$nic.EnableAcceleratedNetworking = $true
$nic | Set-AzNetworkInterface
3. Restart your VM or, if in an availability set, all the VMs in the set, and confirm that accelerated networking is
enabled:
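For example, for an individual VM:

```powershell
Restart-AzVM -Name "myVM" -ResourceGroupName "myResourceGroup"
```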
$vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].EnableAcceleratedNetworking = $true
3. Set the applied updates to automatic so that the changes are immediately picked up:
$vmss.UpgradePolicy.Mode = "Automatic"
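A sketch of applying the scale set changes; myVmss is an assumed scale set name:

```powershell
# Push the updated profile and upgrade policy back to the scale set
Update-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myVmss" -VirtualMachineScaleSet $vmss
```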
Once you restart, wait for the upgrades to finish. After the upgrades are done, the virtual function (VF) appears
inside the VM. Make sure you're using a supported OS and VM size.
Resizing existing VMs with accelerated networking
If a VM has accelerated networking enabled, you're only able to resize it to a VM that supports accelerated
networking.
A VM with accelerated networking enabled can't be resized to a VM instance that doesn't support accelerated
networking using the resize operation. Instead, to resize one of these VMs:
1. Stop or deallocate the VM. For an availability set or scale set, stop or deallocate all the VMs in the availability
set or scale set.
2. Disable accelerated networking on the NIC of the VM. For an availability set or scale set, disable accelerated
networking on the NICs of all VMs in the availability set or scale set.
3. After you disable accelerated networking, move the VM, availability set, or scale set to a new size that
doesn't support accelerated networking, and then restart them.
Create a fully qualified domain name in the Azure
portal for a Linux VM
11/6/2020 • 2 minutes to read • Edit Online
When you create a virtual machine (VM) in the Azure portal, a public IP resource for the virtual machine is
automatically created. You use this IP address to remotely access the VM. Although the portal does not create a fully
qualified domain name, or FQDN, you can add one once the VM is created. This article demonstrates the steps to
create a DNS name or FQDN.
Create a FQDN
This article assumes that you have already created a VM. If needed, you can create a Linux or Windows VM in the
portal. Follow these steps once your VM is up and running:
1. Select your VM in the portal. Under DNS name , select Configure .
2. Enter the DNS name and then select Save at the top of the page.
3. To return to the VM overview blade, close the Configuration blade by selecting the X in the upper right corner.
4. Verify that the DNS name is now shown correctly.
Next steps
Now that your VM has a public IP and DNS name, you can deploy common application frameworks or services
such as nginx, MongoDB, and Docker.
You can also read more about using Resource Manager for tips on building your Azure deployments.
How Azure DNS works with other Azure services
2/1/2020 • 2 minutes to read • Edit Online
Azure DNS is a hosted DNS management and name resolution service. You can use it to create public DNS names
for other applications and services that you deploy in Azure. Creating a name for an Azure service in your custom
domain is simple. You just add a record of the correct type for your service.
For dynamically allocated IP addresses, you can create a DNS CNAME record that maps to the DNS name that
Azure created for your service. DNS standards prevent you from using a CNAME record for the zone apex. You
can use an alias record instead. For more information, see Tutorial: Configure an alias record to refer to an Azure
Public IP address.
For statically allocated IP addresses, you can create a DNS A record by using any name, which includes a naked
domain name at the zone apex.
The following table outlines the supported record types you can use for various Azure services. As the table shows,
Azure DNS supports only DNS records for Internet-facing network resources. Azure DNS can't be used for name
resolution of internal, private addresses.
Azure Application Gateway (front-end public IP): You can create a DNS A or CNAME record.
Azure Load Balancer (front-end public IP): You can create a DNS A or CNAME record. Load Balancer can have an
IPv6 public IP address that's dynamically assigned. Create a CNAME record for an IPv6 address.
Azure Traffic Manager (public name): You can create an alias record that maps to the trafficmanager.net name
assigned to your Traffic Manager profile. For more information, see Tutorial: Configure an alias record to support
apex domain names with Traffic Manager.
Azure Resource Manager VMs (public IP): Resource Manager VMs can have public IP addresses. A VM with a public
IP address also can be behind a load balancer. You can create a DNS A, CNAME, or alias record for the public
address. You can use this custom name to bypass the VIP on the load balancer.
Managed identities for Azure resources is a feature of Azure Active Directory. Each of the Azure services that
support managed identities for Azure resources is subject to its own timeline. Make sure you review the
availability status of managed identities for your resource and known issues before you begin.
Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure
Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication,
without having credentials in your code.
In this article, you learn how to enable and disable system and user-assigned managed identities for an Azure
Virtual Machine (VM), using the Azure portal.
Prerequisites
If you're unfamiliar with managed identities for Azure resources, check out the overview section.
If you don't already have an Azure account, sign up for a free account before continuing.
Next steps
Using the Azure portal, give an Azure VM's managed identity access to another Azure resource.
Configure managed identities for Azure resources on
an Azure VM using Azure CLI
11/2/2020 • 7 minutes to read • Edit Online
Managed identities for Azure resources is a feature of Azure Active Directory. Each of the Azure services that
support managed identities for Azure resources is subject to its own timeline. Make sure you review the
availability status of managed identities for your resource and known issues before you begin.
Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure
Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication,
without having credentials in your code.
In this article, using the Azure CLI, you learn how to perform the following managed identities for Azure resources
operations on an Azure VM:
Enable and disable the system-assigned managed identity on an Azure VM
Add and remove a user-assigned managed identity on an Azure VM
If you don't already have an Azure account, sign up for a free account before continuing.
Prerequisites
If you're unfamiliar with managed identities for Azure resources, see What are managed identities for Azure
resources?. To learn about system-assigned and user-assigned managed identity types, see Managed identity
types.
Use Azure Cloud Shell using the bash environment.
If you prefer, install the Azure CLI to run CLI reference commands.
If you're using a local install, sign in with Azure CLI by using the az login command. To finish the
authentication process, follow the steps displayed in your terminal. See Sign in with Azure CLI for
additional sign-in options.
When you're prompted, install Azure CLI extensions on first use. For more information about extensions,
see Use extensions with Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the latest
version, run az upgrade.
2. Create a VM using az vm create. The following example creates a VM named myVM with a system-assigned
managed identity, as requested by the --assign-identity parameter. The --admin-username and
--admin-password parameters specify the administrative user name and password account for virtual
machine sign-in. Update these values as appropriate for your environment:
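A sketch of the command; the user name and password values are placeholders to replace:

```shell
az vm create --resource-group myResourceGroup --name myVM --image UbuntuLTS \
    --admin-username azureuser --admin-password '<PASSWORD>' --assign-identity
```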
az login
2. Use az vm identity assign to enable the system-assigned identity on an existing VM:
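For example:

```shell
az vm identity assign -g myResourceGroup -n myVM
```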
If you have a virtual machine that no longer needs system-assigned identity and it has no user-assigned identities,
use the following command:
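A sketch, consistent with the note about the lowercase none value:

```shell
az vm update -n myVM -g myResourceGroup --set identity.type="none"
```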
NOTE
The value none is case sensitive. It must be lowercase.
2. Create a user-assigned managed identity using az identity create. The -g parameter specifies the resource
group where the user-assigned managed identity is created, and the -n parameter specifies its name.
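For example:

```shell
az identity create -g myResourceGroup -n myUserAssignedIdentity
```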
IMPORTANT
When creating user-assigned identities, only alphanumeric characters (0-9, a-z, A-Z), the underscore (_), and the
hyphen (-) are supported. Additionally, the name should be at least 3 characters and up to 128 characters in
length for the assignment to the VM/VMSS to work properly. Check back for updates. For more information, see
FAQs and known issues.
The response contains details for the user-assigned managed identity created, similar to the following. The
resource ID value assigned to the user-assigned managed identity is used in the following step.
{
"clientId": "73444643-8088-4d70-9532-c3a0fdc190fz",
"clientSecretUrl": "https://control-westcentralus.identity.azure.net/subscriptions/<SUBSCRIPTON
ID>/resourcegroups/<RESOURCE
GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<myUserAssignedIdentity>/credentials?
tid=5678&oid=9012&aid=73444643-8088-4d70-9532-c3a0fdc190fz",
"id": "/subscriptions/<SUBSCRIPTON ID>/resourcegroups/<RESOURCE
GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER ASSIGNED IDENTITY NAME>",
"location": "westcentralus",
"name": "<USER ASSIGNED IDENTITY NAME>",
"principalId": "e5fdfdc1-ed84-4d48-8551-fe9fb9dedfll",
"resourceGroup": "<RESOURCE GROUP>",
"tags": {},
"tenantId": "733a8f0e-ec41-4e69-8ad8-971fc4b533bl",
"type": "Microsoft.ManagedIdentity/userAssignedIdentities"
}
3. Create a VM using az vm create. The following example creates a VM associated with the new user-assigned
identity, as specified by the --assign-identity parameter. Be sure to replace the <RESOURCE GROUP> ,
<VM NAME> , <USER NAME> , <PASSWORD> , and <USER ASSIGNED IDENTITY NAME> parameter values with your own
values.
az vm create --resource-group <RESOURCE GROUP> --name <VM NAME> --image UbuntuLTS --admin-username <USER NAME> --admin-password <PASSWORD> --assign-identity <USER ASSIGNED IDENTITY NAME>
IMPORTANT
Creating user-assigned managed identities with special characters (for example, an underscore) in the name is not
currently supported. Please use alphanumeric characters. Check back for updates. For more information, see FAQs
and known issues.
The response contains details for the user-assigned managed identity created, similar to the following.
{
"clientId": "73444643-8088-4d70-9532-c3a0fdc190fz",
"clientSecretUrl": "https://control-westcentralus.identity.azure.net/subscriptions/<SUBSCRIPTON
ID>/resourcegroups/<RESOURCE GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER
ASSIGNED IDENTITY NAME>/credentials?tid=5678&oid=9012&aid=73444643-8088-4d70-9532-c3a0fdc190fz",
"id": "/subscriptions/<SUBSCRIPTON ID>/resourcegroups/<RESOURCE
GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER ASSIGNED IDENTITY NAME>",
"location": "westcentralus",
"name": "<USER ASSIGNED IDENTITY NAME>",
"principalId": "e5fdfdc1-ed84-4d48-8551-fe9fb9dedfll",
"resourceGroup": "<RESOURCE GROUP>",
"tags": {},
"tenantId": "733a8f0e-ec41-4e69-8ad8-971fc4b533bl",
"type": "Microsoft.ManagedIdentity/userAssignedIdentities"
}
2. Assign the user-assigned identity to your VM using az vm identity assign. Be sure to replace the
<RESOURCE GROUP> and <VM NAME> parameter values with your own values. The
<USER ASSIGNED IDENTITY NAME> is the user-assigned managed identity's resource name property, as created
in the previous step. If you created your user-assigned managed identity in a different resource group than your
VM, you'll have to use the resource ID of your managed identity.
az vm identity assign -g <RESOURCE GROUP> -n <VM NAME> --identities <USER ASSIGNED IDENTITY>
az vm identity remove -g <RESOURCE GROUP> -n <VM NAME> --identities <USER ASSIGNED IDENTITY>
If your VM does not have a system-assigned managed identity and you want to remove all user-assigned identities
from it, use the following command:
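A sketch, consistent with the note about the lowercase none value:

```shell
az vm update -n myVM -g myResourceGroup --set identity.type="none"
```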
NOTE
The value none is case sensitive. It must be lowercase.
If your VM has both system-assigned and user-assigned identities, you can remove all the user-assigned identities
by switching to use only system-assigned. Use the following command:
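A sketch; the exact flags may need adjusting for your CLI version, but the idea is to switch the identity type back to system-assigned only:

```shell
az vm update -n myVM -g myResourceGroup --set identity.type='SystemAssigned'
```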
Next steps
Managed identities for Azure resources overview
For the full Azure VM creation Quickstarts, see:
Create a Windows virtual machine with CLI
Create a Linux virtual machine with CLI
Configure managed identities for Azure resources on
an Azure VM using PowerShell
11/2/2020 • 6 minutes to read • Edit Online
Managed identities for Azure resources is a feature of Azure Active Directory. Each of the Azure services that
support managed identities for Azure resources is subject to its own timeline. Make sure you review the
availability status of managed identities for your resource and known issues before you begin.
Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure
Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication,
without having credentials in your code.
In this article, using PowerShell, you learn how to perform the following managed identities for Azure resources
operations on an Azure VM.
NOTE
This article has been updated to use the new Azure PowerShell Az module. You can still use the AzureRM module, which will
continue to receive bug fixes until at least December 2020. To learn more about the new Az module and AzureRM
compatibility, see Introducing the new Azure PowerShell Az module. For Az module installation instructions, see Install Azure
PowerShell.
Prerequisites
If you're unfamiliar with managed identities for Azure resources, check out the overview section. Be sure to
review the difference between a system-assigned and user-assigned managed identity.
If you don't already have an Azure account, sign up for a free account before continuing.
To run the example scripts, you have two options:
Use the Azure Cloud Shell, which you can open using the Try It button on the top right corner of code
blocks.
Run scripts locally by installing the latest version of Azure PowerShell, then sign in to Azure using
Connect-AzAccount .
2. Retrieve and note the ObjectID (as specified in the Id field of the returned values) of the group:
If you have a virtual machine that no longer needs system-assigned managed identity and it has no user-assigned
managed identities, use the following commands:
$vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myVM
Update-AzVm -ResourceGroupName myResourceGroup -VM $vm -IdentityType None
IMPORTANT
User-assigned managed identity names support only alphanumeric, underscore, and hyphen characters (0-9, a-z,
A-Z, _, or -). Additionally, the name must be 3 to 128 characters long for the assignment to the VM/VMSS to
work properly. For more information, see FAQs and known issues.
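The naming rule in the note above can be checked client-side before creating an identity. This is an illustrative sketch (the regular expression and helper name are not part of any Azure SDK):

```python
import re

# Allowed characters: 0-9, a-z, A-Z, underscore, and hyphen;
# length must be 3 to 128 characters (per the note above).
_IDENTITY_NAME = re.compile(r"^[A-Za-z0-9_-]{3,128}$")

def is_valid_identity_name(name: str) -> bool:
    """Return True if name satisfies the documented constraints."""
    return bool(_IDENTITY_NAME.match(name))

print(is_valid_identity_name("my-identity_01"))  # True
print(is_valid_identity_name("ab"))              # False: too short
print(is_valid_identity_name("bad.name"))        # False: '.' not allowed
```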
2. Retrieve the VM properties using the Get-AzVM cmdlet. Then to assign a user-assigned managed identity to
the Azure VM, use the -IdentityType and -IdentityId switches on the Update-AzVM cmdlet. The value for
the -IdentityId parameter is the Id you noted in the previous step. Replace <VM NAME>, <SUBSCRIPTION ID>,
<RESOURCE GROUP>, and <USER ASSIGNED IDENTITY NAME> with your own values.
WARNING
To retain any previously user-assigned managed identities assigned to the VM, query the Identity property of the
VM object (for example, $vm.Identity ). If any user assigned managed identities are returned, include them in the
following command along with the new user assigned managed identity you would like to assign to the VM.
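The merge the warning describes (keep every user-assigned identity already on the VM and add the new one) is a plain dictionary operation on the identity payload. A minimal sketch; the helper name and sample resource IDs are illustrative:

```python
def add_user_assigned_identity(existing: dict, new_id: str) -> dict:
    """Return a userAssignedIdentities dictionary that keeps every
    identity already on the VM and adds the new one."""
    merged = dict(existing)   # copy, so nothing already assigned is dropped
    merged[new_id] = {}       # ARM expects an empty object per identity ID
    return merged

existing = {
    "/subscriptions/SUB/resourcegroups/RG/providers"
    "/Microsoft.ManagedIdentity/userAssignedIdentities/ID1": {},
}
merged = add_user_assigned_identity(
    existing,
    "/subscriptions/SUB/resourcegroups/RG/providers"
    "/Microsoft.ManagedIdentity/userAssignedIdentities/ID2",
)
print(len(merged))  # 2: both ID1 and ID2 are kept
```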
If your VM does not have a system-assigned managed identity and you want to remove all user-assigned managed
identities from it, use the following command:
If your VM has both system-assigned and user-assigned managed identities, you can remove all the user-assigned
managed identities by switching to use only system-assigned managed identities.
Next steps
Managed identities for Azure resources overview
For the full Azure VM creation Quickstarts, see:
Create a Windows virtual machine with PowerShell
Create a Linux virtual machine with PowerShell
Configure managed identities for Azure resources on
an Azure VM using templates
11/2/2020 • 7 minutes to read
Managed identities for Azure resources is a feature of Azure Active Directory. Each of the Azure services that
support managed identities for Azure resources is subject to its own timeline. Make sure you review the
availability status of managed identities for your resource and known issues before you begin.
Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure
Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication,
without having credentials in your code.
In this article, using the Azure Resource Manager deployment template, you learn how to perform the following
managed identities for Azure resources operations on an Azure VM:
Prerequisites
If you're unfamiliar with Azure Resource Manager deployment templates, check out the overview section.
Be sure to review the difference between a system-assigned and user-assigned managed identity.
If you don't already have an Azure account, sign up for a free account before continuing.
"identity": {
"type": "SystemAssigned"
},
3. When you're done, the following sections should be added to the resource section of your template and it
should resemble the following:
"resources": [
{
//other resource provider properties...
"apiVersion": "2018-06-01",
"type": "Microsoft.Compute/virtualMachines",
"name": "[variables('vmName')]",
"location": "[resourceGroup().location]",
"identity": {
"type": "SystemAssigned",
},
},
//The following appears only if you provisioned the optional VM extension (to be deprecated)
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(variables('vmName'),'/ManagedIdentityExtensionForWindows')]",
"apiVersion": "2018-06-01",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.ManagedIdentity",
"type": "ManagedIdentityExtensionForWindows",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"port": 50342
}
}
}
]
{
"apiVersion": "2017-09-01",
"type": "Microsoft.Authorization/roleAssignments",
"name": "[parameters('rbacGuid')]",
"properties": {
"roleDefinitionId": "[variables(parameters('builtInRoleType'))]",
"principalId": "[reference(variables('vmResourceId'), '2017-12-01',
'Full').identity.principalId]",
"scope": "[resourceGroup().id]"
},
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
]
}
NOTE
To create a user-assigned managed identity using an Azure Resource Manager Template, see Create a user-assigned managed
identity.
{
"apiVersion": "2018-06-01",
"type": "Microsoft.Compute/virtualMachines",
"name": "[variables('vmName')]",
"location": "[resourceGroup().location]",
"identity": {
"type": "userAssigned",
"userAssignedIdentities": {
"
[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',variables('<USERASSIGNEDIDENTITYNAME>'))
]": {}
}
}
}
2. When you're done, the following sections should be added to the resource section of your template and it
should resemble the following:
Microsoft.Compute/virtualMachines API version 2018-06-01
"resources": [
{
//other resource provider properties...
"apiVersion": "2018-06-01",
"type": "Microsoft.Compute/virtualMachines",
"name": "[variables('vmName')]",
"location": "[resourceGroup().location]",
"identity": {
"type": "userAssigned",
"userAssignedIdentities": {
"
[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',variables('<USERASSIGNEDIDENTITYNAME>'))
]": {}
}
}
},
//The following appears only if you provisioned the optional VM extension (to be deprecated)
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(variables('vmName'),'/ManagedIdentityExtensionForWindows')]",
"apiVersion": "2018-06-01-preview",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.ManagedIdentity",
"type": "ManagedIdentityExtensionForWindows",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"port": 50342
}
}
}
]
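When templates are generated programmatically, the identity section shown above is straightforward to assemble as a plain data structure before serializing it into the template. A minimal sketch; the function name and sample resource ID are illustrative:

```python
import json

def user_assigned_identity_block(identity_resource_id: str) -> dict:
    """Build the 'identity' section of a VM resource for one
    user-assigned managed identity, matching the shape above."""
    return {
        "type": "userAssigned",
        "userAssignedIdentities": {identity_resource_id: {}},
    }

block = user_assigned_identity_block(
    "/subscriptions/SUB/resourcegroups/RG/providers"
    "/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity"
)
print(json.dumps(block, indent=2))
```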
//The following appears only if you provisioned the optional VM extension (to be deprecated)
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(variables('vmName'),'/ManagedIdentityExtensionForWindows')]",
"apiVersion": "2015-05-01-preview",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.ManagedIdentity",
"type": "ManagedIdentityExtensionForWindows",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"port": 50342
}
}
}
]
{
"apiVersion": "2018-06-01",
"type": "Microsoft.Compute/virtualMachines",
"name": "[parameters('vmName')]",
"location": "[resourceGroup().location]",
"identity": {
"type": "None"
}
}
Microsoft.Compute/virtualMachines API version 2018-06-01
To remove a single user-assigned managed identity from a VM, remove it from the userAssignedIdentities
dictionary.
If you have a system-assigned managed identity, keep it in the type value under the identity value.
Microsoft.Compute/virtualMachines API version 2017-12-01
To remove a single user-assigned managed identity from a VM, remove it from the identityIds array.
If you have a system-assigned managed identity, keep it in the type value under the identity value.
Next steps
Managed identities for Azure resources overview.
Configure Managed identities for Azure resources on
an Azure VM using REST API calls
11/2/2020 • 14 minutes to read
Managed identities for Azure resources is a feature of Azure Active Directory. Each of the Azure services that
support managed identities for Azure resources is subject to its own timeline. Make sure you review the
availability status of managed identities for your resource and known issues before you begin.
Managed identities for Azure resources provide Azure services with an automatically managed identity in
Azure Active Directory. You can use this identity to authenticate to any service that supports Azure AD
authentication, without having credentials in your code.
In this article, using CURL to make calls to the Azure Resource Manager REST endpoint, you learn how to perform
the following managed identities for Azure resources operations on an Azure VM:
Enable and disable the system-assigned managed identity on an Azure VM
Add and remove a user-assigned managed identity on an Azure VM
If you don't already have an Azure account, sign up for a free account before continuing.
Prerequisites
If you're unfamiliar with managed identities for Azure resources, see What are managed identities for Azure
resources?. To learn about system-assigned and user-assigned managed identity types, see Managed identity
types.
Use the Bash environment in Azure Cloud Shell.
If you prefer, install the Azure CLI to run CLI reference commands.
If you're using a local install, sign in with Azure CLI by using the az login command. To finish the
authentication process, follow the steps displayed in your terminal. See Sign in with Azure CLI for
additional sign-in options.
When you're prompted, install Azure CLI extensions on first use. For more information about extensions,
see Use extensions with Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the latest
version, run az upgrade.
3. Retrieve a Bearer access token, which you will use in the next step in the Authorization header to create your
VM with a system-assigned managed identity.
az account get-access-token
4. Using Azure Cloud Shell, create a VM using CURL to call the Azure Resource Manager REST endpoint. The
following example creates a VM named myVM with a system-assigned managed identity, as identified in the
request body by the value "identity":{"type":"SystemAssigned"} . Replace <ACCESS TOKEN> with the value
you received in the previous step when you requested a Bearer access token and the <SUBSCRIPTION ID>
value as appropriate for your environment.
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01' -X PUT -d '{"location":"westus","name":"myVM","identity":{"type":"SystemAssigned"},"properties":
{"hardwareProfile":{"vmSize":"Standard_D2_v2"},"storageProfile":{"imageReference":{"sku":"2016-
Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":
{"caching":"ReadWrite","managedDisk":
{"storageAccountType":"Standard_LRS"},"name":"myVM3osdisk","createOption":"FromImage"},"dataDisks":
[{"diskSizeGB":1023,"createOption":"Empty","lun":0},
{"diskSizeGB":1023,"createOption":"Empty","lun":1}]},"osProfile":
{"adminUsername":"azureuser","computerName":"myVM","adminPassword":"<SECURE PASSWORD
STRING>"},"networkProfile":{"networkInterfaces":[{"id":"/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic","properties":
{"primary":true}}]}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
PUT https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01 HTTP/1.1
Request headers
Request body
{
"location":"westus",
"name":"myVM",
"identity":{
"type":"SystemAssigned"
},
"properties":{
"hardwareProfile":{
"vmSize":"Standard_D2_v2"
},
"storageProfile":{
"imageReference":{
"sku":"2016-Datacenter",
"publisher":"MicrosoftWindowsServer",
"version":"latest",
"offer":"WindowsServer"
},
"osDisk":{
"caching":"ReadWrite",
"managedDisk":{
"storageAccountType":"Standard_LRS"
},
"name":"myVM3osdisk",
"createOption":"FromImage"
},
"dataDisks":[
{
"diskSizeGB":1023,
"createOption":"Empty",
"lun":0
},
{
"diskSizeGB":1023,
"createOption":"Empty",
"lun":1
}
]
},
"osProfile":{
"adminUsername":"azureuser",
"computerName":"myVM",
"adminPassword":"myPassword12"
},
"networkProfile":{
"networkInterfaces":[
{
"id":"/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic",
"properties":{
"primary":true
}
}
]
}
}
}
2. Use the following CURL command to call the Azure Resource Manager REST endpoint to enable system-
assigned managed identity on your VM, as identified in the request body by the value
{"identity":{"type":"SystemAssigned"}}, for a VM named myVM. Replace <ACCESS TOKEN> with the value you
received in the previous step when you requested a Bearer access token and the <SUBSCRIPTION ID> value as
appropriate for your environment.
IMPORTANT
To ensure you don't delete any existing user-assigned managed identities that are assigned to the VM, you need to
list the user-assigned managed identities by using this CURL command:
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE
GROUP>/providers/Microsoft.Compute/virtualMachines/<VM NAME>?api-version=2018-06-01' -H
"Authorization: Bearer <ACCESS TOKEN>"
If you have any user-assigned managed identities assigned to the VM, as identified in the identity value in the
response, skip to step 3, which shows you how to retain user-assigned managed identities while enabling system-
assigned managed identity on your VM.
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01' -X PATCH -d '{"identity":{"type":"SystemAssigned"}}' -H "Content-Type: application/json" -H
"Authorization:Bearer <ACCESS TOKEN>"
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01 HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"SystemAssigned"
}
}
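The article uses curl throughout, but the same PATCH can be prepared with nothing beyond the Python standard library. A minimal sketch under stated assumptions: all parameter values are placeholders, and actually sending the request requires a real bearer token and an existing VM.

```python
import json
import urllib.request

def enable_system_assigned(subscription_id, resource_group, vm_name, token):
    """Prepare (but do not send) the PATCH request shown above."""
    url = ("https://management.azure.com/subscriptions/{}/resourceGroups/{}"
           "/providers/Microsoft.Compute/virtualMachines/{}"
           "?api-version=2018-06-01").format(subscription_id,
                                             resource_group, vm_name)
    body = json.dumps({"identity": {"type": "SystemAssigned"}}).encode()
    return urllib.request.Request(
        url, data=body, method="PATCH",
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token})

req = enable_system_assigned("SUB", "myResourceGroup", "myVM", "TOKEN")
print(req.get_method())  # PATCH
# urllib.request.urlopen(req) would send the request
```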
3. To enable system-assigned managed identity on a VM with existing user-assigned managed identities, you
need to add SystemAssigned to the type value.
For example, if your VM has the user-assigned managed identities ID1 and ID2 assigned to it, and you
would like to add system-assigned managed identity to the VM, use the following CURL call. Replace
<ACCESS TOKEN> and <SUBSCRIPTION ID> with values appropriate to your environment.
API version 2018-06-01 stores user-assigned managed identities in the userAssignedIdentities value in a
dictionary format as opposed to the identityIds value in an array format used in API version 2017-12-01 .
API VERSION 2018-06-01
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01' -X PATCH -d '{"identity":{"type":"SystemAssigned, UserAssigned", "userAssignedIdentities":
{"/subscriptions/<<SUBSCRIPTION
ID>>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1":
{},"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID2":
{}}}}' -H "Content-Type: application/json" -H "Authorization:Bearer <ACCESS TOKEN>"
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01 HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"SystemAssigned, UserAssigned",
"userAssignedIdentities":{
"/subscriptions/<<SUBSCRIPTION
ID>>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1":{
},
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID2":{
}
}
}
}
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2017-12-
01' -X PATCH -d '{"identity":{"type":"SystemAssigned, UserAssigned", "identityIds":
["/subscriptions/<<SUBSCRIPTION
ID>>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1","/su
bscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID2"]}}' -
H "Content-Type: application/json" -H "Authorization:Bearer <ACCESS TOKEN>"
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2017-12-
01 HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"SystemAssigned, UserAssigned",
"identityIds":[
"/subscriptions/<<SUBSCRIPTION
ID>>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1",
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID2"
]
}
}
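As the two request bodies above show, the API versions carry the same information in different shapes: 2017-12-01 uses an identityIds array, while 2018-06-01 uses a userAssignedIdentities dictionary with an empty object per identity. Converting between them is mechanical, as this sketch with placeholder IDs illustrates:

```python
def identity_ids_to_dict(identity_ids: list) -> dict:
    """Convert the 2017-12-01 identityIds array format into the
    2018-06-01 userAssignedIdentities dictionary format."""
    return {identity_id: {} for identity_id in identity_ids}

old_style = [
    "/subscriptions/SUB/resourcegroups/RG/providers"
    "/Microsoft.ManagedIdentity/userAssignedIdentities/ID1",
    "/subscriptions/SUB/resourcegroups/RG/providers"
    "/Microsoft.ManagedIdentity/userAssignedIdentities/ID2",
]
new_style = identity_ids_to_dict(old_style)
print(len(new_style))  # 2: one empty-object entry per identity
```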
az account get-access-token
2. Update the VM using CURL to call the Azure Resource Manager REST endpoint to disable system-assigned
managed identity. The following example disables system-assigned managed identity as identified in the
request body by the value {"identity":{"type":"None"}} from a VM named myVM. Replace <ACCESS TOKEN>
with the value you received in the previous step when you requested a Bearer access token and the
<SUBSCRIPTION ID> value as appropriate for your environment.
IMPORTANT
To ensure you don't delete any existing user-assigned managed identities that are assigned to the VM, you need to
list the user-assigned managed identities by using this CURL command:
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE
GROUP>/providers/Microsoft.Compute/virtualMachines/<VM NAME>?api-version=2018-06-01' -H
"Authorization: Bearer <ACCESS TOKEN>"
If you have any user-assigned managed identities assigned to the VM, as identified in the identity value in the
response, skip to step 3, which shows you how to retain user-assigned managed identities while disabling system-
assigned managed identity on your VM.
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01' -X PATCH -d '{"identity":{"type":"None"}}' -H "Content-Type: application/json" -H
"Authorization:Bearer <ACCESS TOKEN>"
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01 HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"None"
}
}
To remove system-assigned managed identity from a virtual machine that has user-assigned managed
identities, remove SystemAssigned from the identity type value while keeping the UserAssigned value and the
userAssignedIdentities dictionary values if you are using API version 2018-06-01. If you are using API version
2017-12-01 or earlier, keep the identityIds array.
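That edit, dropping SystemAssigned from the type string while leaving the user-assigned entries alone, can be sketched as a small helper. This is illustrative only; the function name and sample resource ID are not part of any Azure API:

```python
def drop_system_assigned(identity: dict) -> dict:
    """Remove SystemAssigned from an identity payload's type value
    while keeping any user-assigned identity entries intact."""
    kinds = [k.strip() for k in identity["type"].split(",")]
    remaining = [k for k in kinds if k != "SystemAssigned"]
    result = dict(identity)
    result["type"] = ", ".join(remaining) if remaining else "None"
    return result

payload = {
    "type": "SystemAssigned, UserAssigned",
    "userAssignedIdentities": {
        "/subscriptions/SUB/resourcegroups/RG/providers"
        "/Microsoft.ManagedIdentity/userAssignedIdentities/ID1": {},
    },
}
print(drop_system_assigned(payload)["type"])  # UserAssigned
```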
az account get-access-token
3. Retrieve a Bearer access token, which you will use in the next step in the Authorization header to create your
VM with a system-assigned managed identity.
az account get-access-token
4. Create a user-assigned managed identity using the instructions found here: Create a user-assigned managed
identity.
5. Create a VM using CURL to call the Azure Resource Manager REST endpoint. The following example creates
a VM named myVM in the resource group myResourceGroup with a user-assigned managed identity ID1 ,
as identified in the request body by the value "identity":{"type":"UserAssigned"} . Replace <ACCESS TOKEN>
with the value you received in the previous step when you requested a Bearer access token and the
<SUBSCRIPTION ID> value as appropriate for your environment.
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01' -X PUT -d '{"location":"westus","name":"myVM","identity":{"type":"UserAssigned","identityIds":
["/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"]},"pr
operties":{"hardwareProfile":{"vmSize":"Standard_D2_v2"},"storageProfile":{"imageReference":
{"sku":"2016-
Datacenter","publisher":"MicrosoftWindowsServer","version":"latest","offer":"WindowsServer"},"osDisk":
{"caching":"ReadWrite","managedDisk":
{"storageAccountType":"Standard_LRS"},"name":"myVM3osdisk","createOption":"FromImage"},"dataDisks":
[{"diskSizeGB":1023,"createOption":"Empty","lun":0},
{"diskSizeGB":1023,"createOption":"Empty","lun":1}]},"osProfile":
{"adminUsername":"azureuser","computerName":"myVM","adminPassword":"myPassword12"},"networkProfile":
{"networkInterfaces":[{"id":"/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic","properties":
{"primary":true}}]}}}' -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
PUT https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01 HTTP/1.1
Request headers
Request body
{
"location":"westus",
"name":"myVM",
"identity":{
"type":"UserAssigned",
"identityIds":[
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"
]
},
"properties":{
"hardwareProfile":{
"vmSize":"Standard_D2_v2"
},
"storageProfile":{
"imageReference":{
"sku":"2016-Datacenter",
"publisher":"MicrosoftWindowsServer",
"version":"latest",
"offer":"WindowsServer"
},
"osDisk":{
"caching":"ReadWrite",
"managedDisk":{
"storageAccountType":"Standard_LRS"
},
"name":"myVM3osdisk",
"createOption":"FromImage"
},
"dataDisks":[
{
"diskSizeGB":1023,
"createOption":"Empty",
"lun":0
},
{
"diskSizeGB":1023,
"createOption":"Empty",
"lun":1
}
]
},
"osProfile":{
"adminUsername":"azureuser",
"computerName":"myVM",
"adminPassword":"myPassword12"
},
"networkProfile":{
"networkInterfaces":[
{
"id":"/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic",
"properties":{
"primary":true
}
}
]
}
}
}
PUT https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2017-12-
01 HTTP/1.1
Request headers
Request body
{
"location":"westus",
"name":"myVM",
"identity":{
"type":"UserAssigned",
"identityIds":[
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"
]
},
"properties":{
"hardwareProfile":{
"vmSize":"Standard_D2_v2"
},
"storageProfile":{
"imageReference":{
"sku":"2016-Datacenter",
"publisher":"MicrosoftWindowsServer",
"version":"latest",
"offer":"WindowsServer"
},
"osDisk":{
"caching":"ReadWrite",
"managedDisk":{
"storageAccountType":"Standard_LRS"
},
"name":"myVM3osdisk",
"createOption":"FromImage"
},
"dataDisks":[
{
"diskSizeGB":1023,
"createOption":"Empty",
"lun":0
},
{
"diskSizeGB":1023,
"createOption":"Empty",
"lun":1
}
]
},
"osProfile":{
"adminUsername":"azureuser",
"computerName":"myVM",
"adminPassword":"myPassword12"
},
"networkProfile":{
"networkInterfaces":[
{
"id":"/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkInterfaces/myNic",
"properties":{
"primary":true
}
}
]
}
}
}
az account get-access-token
2. Create a user-assigned managed identity using the instructions found here: Create a user-assigned managed
identity.
3. To ensure you don't delete existing user or system-assigned managed identities that are assigned to the VM,
you need to list the identity types assigned to the VM by using the following CURL command. If you have
managed identities assigned to the VM, they are listed in the identity value.
Request headers
If you have any user or system-assigned managed identities assigned to the VM as identified in the
identity value in the response, skip to step 5 that shows you how to retain the system-assigned managed
identity while adding a user-assigned managed identity on your VM.
4. If you don't have any user-assigned managed identities assigned to your VM, use the following CURL
command to call the Azure Resource Manager REST endpoint to assign the first user-assigned managed
identity to the VM.
The following example assigns a user-assigned managed identity, ID1 to a VM named myVM in the
resource group myResourceGroup. Replace <ACCESS TOKEN> with the value you received in the previous step
when you requested a Bearer access token and the <SUBSCRIPTION ID> value as appropriate for your
environment.
API VERSION 2018-06-01
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01' -X PATCH -d '{"identity":{"type":"UserAssigned", "userAssignedIdentities":
{"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1":
{}}}}' -H "Content-Type: application/json" -H "Authorization:Bearer <ACCESS TOKEN>"
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01 HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"UserAssigned",
"userAssignedIdentities":{
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1":{
}
}
}
}
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2017-12-
01' -X PATCH -d '{"identity":{"type":"userAssigned", "identityIds":["/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"]}}' -
H "Content-Type: application/json" -H "Authorization:Bearer <ACCESS TOKEN>"
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2017-12-
01 HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"userAssigned",
"identityIds":[
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"
]
}
}
5. If you have an existing user-assigned or system-assigned managed identity assigned to your VM:
API VERSION 2018-06-01
Add the user-assigned managed identity to the userAssignedIdentities dictionary value.
For example, if you have system-assigned managed identity and the user-assigned managed identity ID1
currently assigned to your VM and would like to add the user-assigned managed identity ID2 to it:
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01' -X PATCH -d '{"identity":{"type":"SystemAssigned, UserAssigned", "userAssignedIdentities":
{"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1":
{},"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID2":
{}}}}' -H "Content-Type: application/json" -H "Authorization:Bearer <ACCESS TOKEN>"
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01 HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"SystemAssigned, UserAssigned",
"userAssignedIdentities":{
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1":{
},
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID2":{
}
}
}
}
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2017-12-
01 HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"SystemAssigned,UserAssigned",
"identityIds":[
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1",
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID2"
]
}
}
az account get-access-token
2. To ensure you don't delete any existing user-assigned managed identities that you would like to keep
assigned to the VM, and that you don't remove the system-assigned managed identity, list the managed
identities by using the following CURL command:
If you have managed identities assigned to the VM, they are listed in the response in the identity value.
For example, if you have user-assigned managed identities ID1 and ID2 assigned to your VM, and you
only want to keep ID1 assigned and retain the system-assigned identity:
API VERSION 2018-06-01
Set the value of the user-assigned managed identity you would like to remove to null:
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01' -X PATCH -d '{"identity":{"type":"SystemAssigned, UserAssigned", "userAssignedIdentities":
{"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID2":null}
}}' -H "Content-Type: application/json" -H "Authorization:Bearer <ACCESS TOKEN>"
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-
01 HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"SystemAssigned, UserAssigned",
"userAssignedIdentities":{
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID2":null
}
}
}
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2017-12-
01 HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"SystemAssigned, UserAssigned",
"identityIds":[
"/subscriptions/<SUBSCRIPTION
ID>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ID1"
]
}
}
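In the 2018-06-01 body above, null marks the identity to remove; when building the request programmatically, that maps naturally from Python's None. A sketch with placeholder IDs (the helper name is illustrative):

```python
import json

def removal_patch_body(identity_id_to_remove: str) -> str:
    """Build the 2018-06-01 PATCH body that removes one user-assigned
    identity: its key is kept and its value is set to JSON null."""
    body = {"identity": {
        "type": "SystemAssigned, UserAssigned",
        "userAssignedIdentities": {identity_id_to_remove: None},
    }}
    return json.dumps(body)

encoded = removal_patch_body(
    "/subscriptions/SUB/resourcegroups/RG/providers"
    "/Microsoft.ManagedIdentity/userAssignedIdentities/ID2")
print("null" in encoded)  # True: Python None serializes to JSON null
```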
If your VM has both system-assigned and user-assigned managed identities, you can remove all the user-assigned
managed identities by switching to use only system-assigned managed identity using the following command:
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-01' -X
PATCH -d '{"identity":{"type":"SystemAssigned"}}' -H "Content-Type: application/json" -H "Authorization:Bearer
<ACCESS TOKEN>"
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-01
HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"SystemAssigned"
}
}
If your VM has only user-assigned managed identities and you would like to remove them all, use the following
command:
curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-01' -X
PATCH -d '{"identity":{"type":"None"}}' -H "Content-Type: application/json" -H "Authorization:Bearer <ACCESS
TOKEN>"
PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION
ID>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM?api-version=2018-06-01
HTTP/1.1
Request headers
Request body
{
"identity":{
"type":"None"
}
}
Next steps
For information on how to create, list, or delete user-assigned managed identities using REST, see:
Create, list, or delete user-assigned managed identities using REST API calls
Configure a VM with managed identities for Azure
resources using an Azure SDK
11/6/2020 • 2 minutes to read
Managed identities for Azure resources is a feature of Azure Active Directory. Each of the Azure services that
support managed identities for Azure resources is subject to its own timeline. Make sure you review the
availability status of managed identities for your resource and known issues before you begin.
Managed identities for Azure resources provides Azure services with an automatically managed identity in Azure
Active Directory (AD). You can use this identity to authenticate to any service that supports Azure AD
authentication, without having credentials in your code.
In this article, you learn how to enable and remove managed identities for Azure resources for an Azure VM, using
an Azure SDK.
Prerequisites
If you're not familiar with the managed identities for Azure resources feature, see this overview. If you don't
have an Azure account, sign up for a free account before you continue.
SDK SAMPLE
Next steps
See related articles under Configure Identity for an Azure VM, to learn how you can also use the Azure
portal, PowerShell, CLI, and resource templates.
Use Azure Policy to restrict extensions installation on
Windows VMs
11/2/2020 • 2 minutes to read
If you want to prevent the use or installation of certain extensions on your Windows VMs, you can create an Azure
Policy definition using PowerShell to restrict extensions for VMs within a resource group.
This tutorial uses Azure PowerShell within the Cloud Shell, which is constantly updated to the latest version.
nano $home/clouddrive/rules.json
{
"if": {
"allOf": [
{
"field": "type",
"equals": "Microsoft.Compute/virtualMachines/extensions"
},
{
"field": "Microsoft.Compute/virtualMachines/extensions/publisher",
"equals": "Microsoft.Compute"
},
{
"field": "Microsoft.Compute/virtualMachines/extensions/type",
"in": "[parameters('notAllowedExtensions')]"
}
]
},
"then": {
"effect": "deny"
}
}
When you are done, press Ctrl + O and then Enter to save the file. Press Ctrl + X to close the file and exit.
nano $home/clouddrive/parameters.json
{
"notAllowedExtensions": {
"type": "Array",
"metadata": {
"description": "The list of extensions that will be denied.",
"displayName": "Denied extension"
}
}
}
When you are done, press Ctrl + O and then Enter to save the file. Press Ctrl + X to close the file and exit.
$definition = New-AzPolicyDefinition `
-Name "not-allowed-vmextension-windows" `
-DisplayName "Not allowed VM Extensions" `
-description "This policy governs which VM extensions are explicitly denied." `
-Policy 'C:\Users\ContainerAdministrator\clouddrive\rules.json' `
-Parameter 'C:\Users\ContainerAdministrator\clouddrive\parameters.json'
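The step that assigns the definition to a scope is not shown in this excerpt. With the Azure CLI, an equivalent assignment would look roughly like the following sketch; the denied extension type (VMAccessAgent) and the scope are illustrative assumptions:

```shell
# Illustrative parameter value: deny the VMAccessAgent extension type.
PARAMS='{"notAllowedExtensions": {"value": ["VMAccessAgent"]}}'

# Sanity-check the parameter JSON locally.
echo "$PARAMS" | python3 -m json.tool >/dev/null && echo "params OK"

# Assigning the definition requires the Azure CLI and an authenticated
# session, so it is shown here as a comment only:
#   az policy assignment create \
#     --name 'not-allowed-vmextension-windows' \
#     --scope "/subscriptions/<subscriptionId>/resourceGroups/myResourceGroup" \
#     --policy "not-allowed-vmextension-windows" \
#     --params "$PARAMS"
```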
Set-AzVMAccessExtension `
-ResourceGroupName "myResourceGroup" `
-VMName "myVM" `
-Name "myVMAccess" `
-Location EastUS
In the portal, the password change should fail with the "The template deployment failed because of policy
violation." message.
Next steps
For more information, see Azure Policy.
Exporting Resource Groups that contain VM
extensions
11/2/2020 • 3 minutes to read
Azure Resource Groups can be exported into a new Resource Manager template that can then be redeployed. The
export process interprets existing resources, and creates a Resource Manager template that when deployed results
in a similar Resource Group. When using the Resource Group export option against a Resource Group containing
Virtual Machine extensions, several items need to be considered such as extension compatibility and protected
settings.
This document details how the Resource Group export process works regarding virtual machine extensions,
including a list of supported extensions, and details on handling secured data.
The following extensions are supported by the export process:
Acronis Backup, Acronis Backup Linux, Bg Info, BMC CTM Agent Linux, BMC CTM Agent Windows, Chef Client,
Custom Script, Custom Script Extension, Custom Script for Linux, Datadog Linux Agent, Datadog Windows
Agent, Docker Extension, DSC Extension, Dynatrace Linux, Dynatrace Windows, HPE Security Application
Defender, IaaS Antimalware, IaaS Diagnostics, Linux Chef Client, Linux Diagnostic, OS Patching For Linux, Puppet
Agent, Site 24x7 Apm Insight, Site 24x7 Linux Server, Site 24x7 Windows Server, Trend Micro DSA, Trend Micro
DSA Linux, VM Access For Linux, VM Snapshot, VM Snapshot Linux
"extensions_extensionname_protectedSettings": {
"defaultValue": null,
"type": "SecureObject"
}
"protectedSettings": {
"storageAccountName": "[parameters('storageAccountName')]",
"storageAccountKey": "[parameters('storageAccountKey')]",
"storageAccountEndPoint": "https://core.windows.net"
}
The final extension resource looks similar to the following JSON example:
{
"name": "Microsoft.Insights.VMDiagnosticsSettings",
"type": "extensions",
"location": "[resourceGroup().location]",
"apiVersion": "[variables('apiVersion')]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"tags": {
"displayName": "AzureDiagnostics"
},
"properties": {
"publisher": "Microsoft.Azure.Diagnostics",
"type": "IaaSDiagnostics",
"typeHandlerVersion": "1.5",
"autoUpgradeMinorVersion": true,
"settings": {
"xmlCfg": "[base64(concat(variables('wadcfgxstart'), variables('wadmetricsresourceid'), variables('vmName'),
variables('wadcfgxend')))]",
"storageAccount": "[parameters('existingdiagnosticsStorageAccountName')]"
},
"protectedSettings": {
"storageAccountName": "[parameters('storageAccountName')]",
"storageAccountKey": "[parameters('storageAccountKey')]",
"storageAccountEndPoint": "https://core.windows.net"
}
}
}
If using template parameters to provide property values, these need to be created. When creating template
parameters for protected setting values, make sure to use the SecureString parameter type so that sensitive values
are secured. For more information on using parameters, see Authoring Azure Resource Manager templates.
In the example of the IaasDiagnostic extension, the following parameters would be created in the parameters
section of the Resource Manager template.
"storageAccountName": {
"defaultValue": null,
"type": "SecureString"
},
"storageAccountKey": {
"defaultValue": null,
"type": "SecureString"
}
At this point, the template can be deployed using any template deployment method.
Troubleshooting Azure Windows VM extension
failures
11/2/2020 • 2 minutes to read
Extensions: {
"ExtensionType": "Microsoft.Compute.CustomScriptExtension",
"Name": "myCustomScriptExtension",
"SubStatuses": [
{
"Code": "ComponentStatus/StdOut/succeeded",
"DisplayStatus": "Provisioning succeeded",
"Level": "Info",
"Message": " Directory: C:\\temp\\n\\n\\nMode LastWriteTime Length Name
\\n---- ------------- ------ ---- \\n-a---
9/1/2015 2:03 AM 11
test.txt \\n\\n",
"Time": null
},
{
"Code": "ComponentStatus/StdErr/succeeded",
"DisplayStatus": "Provisioning succeeded",
"Level": "Info",
"Message": "",
"Time": null
}
]
}
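The substatus list above can be post-processed to pull out just the script's standard output. A small sketch (the field names are taken from the example output; the status JSON is assumed to be saved locally as status.json):

```shell
# Write a trimmed-down copy of the status structure shown above.
cat > status.json <<'EOF'
{
  "SubStatuses": [
    {"Code": "ComponentStatus/StdOut/succeeded", "Message": "hello from script"},
    {"Code": "ComponentStatus/StdErr/succeeded", "Message": ""}
  ]
}
EOF

# Print only the StdOut message from the substatus list.
python3 - <<'EOF'
import json

with open("status.json") as f:
    status = json.load(f)

for sub in status["SubStatuses"]:
    if "StdOut" in sub["Code"]:
        print(sub["Message"])
EOF
```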
Once the extension has been removed, the template can be re-executed to run the scripts on the VM.
Trigger a new GoalState to the VM
You might notice that an extension hasn't been executed, or is failing to execute because of a missing "Windows
Azure CRP Certificate Generator" (that certificate is used to secure the transport of the extension's protected
settings). That certificate will be automatically re-generated by restarting the Windows Guest Agent from inside the
Virtual Machine:
Open the Task Manager
Go to the Details tab
Locate the WindowsAzureGuestAgent.exe process
Right-click, and select "End Task". The process will be automatically restarted
You can also trigger a new GoalState to the VM, by executing an "empty update":
Azure PowerShell:
Azure CLI:
If an "empty update" didn't work, you can add a new empty data disk to the VM from the Azure portal, and then
remove it once the certificate has been added back.
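For reference, the "empty update" mentioned above amounts to updating the VM without changing any property. The original code samples are not included in this excerpt; an illustrative Azure CLI form follows (resource names are placeholders, and the command is shown as a comment rather than executed):

```shell
RG="myResourceGroup"
VM="myVM"

# An update with no property changes pushes a fresh GoalState to the
# guest agent. Requires the Azure CLI and an authenticated session:
#   az vm update --resource-group "$RG" --name "$VM"
echo "empty update target: $RG/$VM"
```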
Azure Disk Encryption for Windows
(Microsoft.Azure.Security.AzureDiskEncryption)
3/20/2020 • 2 minutes to read
Overview
Azure Disk Encryption leverages BitLocker to provide full disk encryption on Azure virtual machines running
Windows. This solution is integrated with Azure Key Vault to manage disk encryption keys and secrets in your key
vault subscription.
Prerequisites
For a full list of prerequisites, see Azure Disk Encryption for Windows VMs, specifically the following sections:
Supported VMs and operating systems
Networking requirements
Group Policy requirements
Extension Schema
There are two versions of extension schema for Azure Disk Encryption (ADE):
v2.2 - A newer recommended schema that does not use Azure Active Directory (AAD) properties.
v1.1 - An older schema that requires Azure Active Directory (AAD) properties.
To select a target schema, the typeHandlerVersion property must be set equal to the version of the schema you
want to use.
Schema v2.2: No AAD (recommended)
The v2.2 schema is recommended for all new VMs and does not require Azure Active Directory properties.
{
"type": "extensions",
"name": "[name]",
"apiVersion": "2019-07-01",
"location": "[location]",
"properties": {
"publisher": "Microsoft.Azure.Security",
"type": "AzureDiskEncryption",
"typeHandlerVersion": "2.2",
"autoUpgradeMinorVersion": true,
"settings": {
"EncryptionOperation": "[encryptionOperation]",
"KeyEncryptionAlgorithm": "[keyEncryptionAlgorithm]",
"KeyVaultURL": "[keyVaultURL]",
"KekVaultResourceId": "[keyVaultResourceID]",
"KeyEncryptionKeyURL": "[keyEncryptionKeyURL]",
"KeyVaultResourceId": "[keyVaultResourceID]",
"SequenceVersion": "[sequenceVersion]",
"VolumeType": "[volumeType]"
}
}
}
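Because the two schemas differ mainly in the AAD properties, a deployment script can assert that a v2.2 extension block doesn't accidentally carry them. A sketch (the example settings are illustrative, not a complete configuration):

```shell
python3 - <<'EOF'
import json

# Illustrative v2.2 extension properties (not a complete configuration).
ext = {
    "typeHandlerVersion": "2.2",
    "settings": {"EncryptionOperation": "EnableEncryption", "VolumeType": "All"},
}

# AAD-era properties belong to schema v1.1 only.
aad_props = {"AADClientID", "AADClientSecret", "AADClientCertificate"}
assert not aad_props & set(ext["settings"]), "AAD properties require schema v1.1"
print("v2.2 settings OK:", json.dumps(ext["settings"]))
EOF
```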
Schema v1.1: with AAD
The 1.1 schema requires aadClientID and either aadClientSecret or AADClientCertificate and is not
recommended for new VMs.
Using aadClientSecret :
{
"type": "extensions",
"name": "[name]",
"apiVersion": "2019-07-01",
"location": "[location]",
"properties": {
"protectedSettings": {
"AADClientSecret": "[aadClientSecret]"
},
"publisher": "Microsoft.Azure.Security",
"type": "AzureDiskEncryption",
"typeHandlerVersion": "1.1",
"settings": {
"AADClientID": "[aadClientID]",
"EncryptionOperation": "[encryptionOperation]",
"KeyEncryptionAlgorithm": "[keyEncryptionAlgorithm]",
"KeyVaultURL": "[keyVaultURL]",
"KeyVaultResourceId": "[keyVaultResourceID]",
"KekVaultResourceId": "[keyVaultResourceID]",
"KeyEncryptionKeyURL": "[keyEncryptionKeyURL]",
"SequenceVersion": "[sequenceVersion]",
"VolumeType": "[volumeType]"
}
}
}
Using AADClientCertificate :
{
"type": "extensions",
"name": "[name]",
"apiVersion": "2019-07-01",
"location": "[location]",
"properties": {
"protectedSettings": {
"AADClientCertificate": "[aadClientCertificate]"
},
"publisher": "Microsoft.Azure.Security",
"type": "AzureDiskEncryption",
"typeHandlerVersion": "1.1",
"settings": {
"AADClientID": "[aadClientID]",
"EncryptionOperation": "[encryptionOperation]",
"KeyEncryptionAlgorithm": "[keyEncryptionAlgorithm]",
"KeyVaultURL": "[keyVaultURL]",
"KeyVaultResourceId": "[keyVaultResourceID]",
"KekVaultResourceId": "[keyVaultResourceID]",
"KeyEncryptionKeyURL": "[keyEncryptionKeyURL]",
"SequenceVersion": "[sequenceVersion]",
"VolumeType": "[volumeType]"
}
}
}
Property values
NAME VALUE / EXAMPLE DATA TYPE
Template deployment
For an example of template deployment based on schema v2.2, see Azure QuickStart Template
201-encrypt-running-windows-vm-without-aad.
For an example of template deployment based on schema v1.1, see Azure QuickStart Template
201-encrypt-running-windows-vm.
NOTE
If the VolumeType parameter is set to All, data disks will be encrypted only if they are properly formatted.
Next steps
For more information about extensions, see Virtual machine extensions and features for Windows.
For more information about Azure Disk Encryption for Windows, see Windows virtual machines.
Key Vault virtual machine extension for Windows
11/2/2020 • 5 minutes to read
The Key Vault VM extension provides automatic refresh of certificates stored in an Azure key vault. Specifically, the
extension monitors a list of observed certificates stored in key vaults and, upon detecting a change, retrieves and
installs the corresponding certificates. This document details the supported platforms, configurations, and
deployment options for the Key Vault VM extension for Windows.
Operating system
The Key Vault VM extension supports the following versions of Windows:
Windows Server 2019
Windows Server 2016
Windows Server 2012
The Key Vault VM extension is also supported on a custom local VM that is uploaded and converted into a
specialized image for use in Azure, using a Windows Server 2019 core install.
Supported certificate content types
PKCS #12
PEM
Prerequisites
Key Vault instance with a certificate. See Create a Key Vault
The VM/VMSS must have an assigned managed identity
The Key Vault access policy must be set with secrets get and list permissions for the VM/VMSS managed
identity to retrieve the secret portion of the certificate. See How to Authenticate to Key Vault and Assign a Key
Vault access policy.
Extension schema
The following JSON shows the schema for the Key Vault VM extension. The extension does not require protected
settings - all its settings are considered public information. The extension requires a list of monitored certificates,
polling frequency, and the destination certificate store. Specifically:
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "KVVMExtensionForWindows",
"apiVersion": "2019-07-01",
"location": "<location>",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', <vmName>)]"
],
"properties": {
"publisher": "Microsoft.Azure.KeyVault",
"type": "KeyVaultForWindows",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"secretsManagementSettings": {
"pollingIntervalInS": <polling interval in seconds, e.g: "3600">,
"certificateStoreName": <certificate store name, e.g.: "MY">,
"linkOnRenewal": <Only Windows. This feature ensures s-channel binding when certificate renews,
without necessitating a re-deployment. e.g.: false>,
"certificateStoreLocation": <certificate store location, currently it works locally only e.g.:
"LocalMachine">,
"requireInitialSync": <initial synchronization of certificates e..g: true>,
"observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.:
"https://myvault.vault.azure.net/secrets/mycertificate"
},
"authenticationSettings": {
"msiEndpoint": <Optional MSI endpoint e.g.: "http://169.254.169.254/metadata/identity">,
"msiClientId": <Optional MSI identity e.g.: "c7373ae5-91c2-4165-8ab6-7381d6e75619">
}
}
}
}
NOTE
Your observed certificate URLs should be of the form https://myVaultName.vault.azure.net/secrets/myCertName .
This is because the /secrets path returns the full certificate, including the private key, while the /certificates path does
not. More information about certificates can be found here: Key Vault Certificates
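The /secrets versus /certificates distinction above is easy to check mechanically. A minimal sketch that validates an observed certificate URL (the vault and certificate names are placeholders):

```shell
# Returns "ok" for /secrets/ URLs, which yield the full certificate
# (including the private key); warns on /certificates/ URLs.
check_observed_url() {
  case "$1" in
    https://*/secrets/*)      echo "ok" ;;
    https://*/certificates/*) echo "warn: use the /secrets/ path instead" ;;
    *)                        echo "error: not a Key Vault secret URL" ;;
  esac
}

check_observed_url "https://myvault.vault.azure.net/secrets/mycertificate"
check_observed_url "https://myvault.vault.azure.net/certificates/mycertificate"
```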
IMPORTANT
The 'authenticationSettings' property is required only for VMs with user-assigned identities. It specifies the identity to
use for authentication to Key Vault.
Property values
NAME VALUE / EXAMPLE DATA TYPE
certificateStoreName MY string
Template deployment
Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when
deploying one or more virtual machines that require post deployment refresh of certificates. The extension can be
deployed to individual VMs or virtual machine scale sets. The schema and configuration are common to both
template types.
The JSON configuration for a virtual machine extension must be nested inside the virtual machine resource
fragment of the template: in the "resources": [] object for a virtual machine template, and, in the case of a virtual
machine scale set, under the "virtualMachineProfile":"extensionProfile":{"extensions":[]} object.
NOTE
The VM extension requires a system-assigned or user-assigned managed identity to authenticate to Key Vault. See How to
authenticate to Key Vault and assign a Key Vault access policy.
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "KeyVaultForWindows",
"apiVersion": "2019-07-01",
"location": "<location>",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', <vmName>)]"
],
"properties": {
"publisher": "Microsoft.Azure.KeyVault",
"type": "KeyVaultForWindows",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"secretsManagementSettings": {
"pollingIntervalInS": <polling interval in seconds, e.g: "3600">,
"certificateStoreName": <certificate store name, e.g.: "MY">,
"certificateStoreLocation": <certificate store location, currently it works locally only e.g.:
"LocalMachine">,
"observedCertificates": <list of KeyVault URIs representing monitored certificates, e.g.:
["https://myvault.vault.azure.net/secrets/mycertificate",
"https://myvault.vault.azure.net/secrets/mycertificate2"]>
}
}
}
}
Azure PowerShell can be used to deploy the Key Vault VM extension to an existing virtual machine or virtual
machine scale set.
To deploy the extension on a VM:
# Build settings
$settings = '{"secretsManagementSettings":
{ "pollingIntervalInS": "' + <pollingInterval> +
'", "certificateStoreName": "' + <certStoreName> +
'", "certificateStoreLocation": "' + <certStoreLoc> +
'", "observedCertificates": ["' + <observedCert1> + '","' + <observedCert2> + '"] } }'
$extName = "KeyVaultForWindows"
$extPublisher = "Microsoft.Azure.KeyVault"
$extType = "KeyVaultForWindows"

# Deploy the extension with the settings built above
Set-AzVMExtension -TypeHandlerVersion "1.0" -ResourceGroupName <ResourceGroupName> -Location <Location> -VMName <VMName> -Name $extName -Publisher $extPublisher -Type $extType -SettingString $settings
Azure CLI
The extension writes its logs to:
%windrive%\WindowsAzure\Logs\Plugins\Microsoft.Azure.KeyVault.KeyVaultForWindows\<version>\akvvm_service_<date>.log
Support
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack
Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site and select Get
support. For information about using Azure Support, read the Microsoft Azure support FAQ.
Azure Backup for SQL Server running in Azure VM
11/2/2020 • 2 minutes to read
Azure Backup, amongst other offerings, provides support for backing up workloads such as SQL Server running in
Azure VMs. Since the SQL application is running within an Azure VM, the backup service needs permission to
access the application and fetch the necessary details. To do that, Azure Backup installs the
AzureBackupWindowsWorkload extension on the VM, in which the SQL Server is running, during the
registration process triggered by the user.
Prerequisites
For the list of supported scenarios, refer to the support matrix for Azure Backup.
Network connectivity
Azure Backup supports NSG tags, deploying a proxy server, or listed IP ranges; for details on each of these methods,
refer to this article.
Extension schema
The extension schema and property values are the configuration values (runtime settings) that the service passes to
the CRP API. These config values are used during registration and upgrade. The AzureBackupWindowsWorkload
extension also uses this schema. The schema is pre-set; a new parameter can be added in the objectStr field.
"runtimeSettings": [{
"handlerSettings": {
"protectedSettingsCertThumbprint": "",
"protectedSettings": {
"objectStr": "",
"logsBlobUri": "",
"statusBlobUri": ""
}
},
"publicSettings": {
"locale": "en-us",
"taskId": "1c0ae461-9d3b-418c-a505-bb31dfe2095d",
"objectStr": "",
"commandStartTimeUTCTicks": "636295005824665976",
"vmType": "vmType"
}
}]
}
The following JSON shows the schema for the WorkloadBackup extension.
{
"type": "extensions",
"name": "WorkloadBackup",
"location":"<myLocation>",
"properties": {
"publisher": "Microsoft.RecoveryServices",
"type": "AzureBackupWindowsWorkload",
"typeHandlerVersion": "1.1",
"autoUpgradeMinorVersion": true,
"settings": {
"locale":"<location>",
"taskId":"<TaskId used by Azure Backup service to communicate with extension>",
...
}
}
}
Property values
NAME VALUE / EXAMPLE DATA TYPE
Template deployment
The recommended way to add the AzureBackupWindowsWorkload extension to a virtual machine is by enabling
SQL Server backup on the virtual machine. This can be achieved through the Resource Manager template designed
for automating backup on a SQL Server VM.
PowerShell deployment
You need to register the Azure VM that contains the SQL application with a Recovery Services vault. During
registration, the AzureBackupWindowsWorkload extension gets installed on the VM. Use the
Register-AzRecoveryServicesBackupContainer PowerShell cmdlet to register the VM.
Next steps
Learn More about Azure SQL Server VM backup troubleshooting guidelines
Common questions about backing up SQL Server databases that run on Azure virtual machines (VMs) and that
use the Azure Backup service.
Custom Script Extension for Windows
11/2/2020 • 11 minutes to read
The Custom Script Extension downloads and executes scripts on Azure virtual machines. This extension is useful for post deployment configuration, software
installation, or any other configuration or management tasks. Scripts can be downloaded from Azure storage or GitHub, or provided to the Azure portal at extension
run time. The Custom Script Extension integrates with Azure Resource Manager templates, and can be run using the Azure CLI, PowerShell, Azure portal, or the Azure
Virtual Machine REST API.
This document details how to use the Custom Script Extension using the Azure PowerShell module, Azure Resource Manager templates, and details troubleshooting
steps on Windows systems.
Prerequisites
NOTE
Do not use Custom Script Extension to run Update-AzVM with the same VM as its parameter, since it will wait on itself.
Operating System
The Custom Script Extension for Windows runs on the following supported operating systems:
Windows
Windows Server 2008 R2
Windows Server 2012
Windows Server 2012 R2
Windows 10
Windows Server 2016
Windows Server 2016 Core
Windows Server 2019
Windows Server 2019 Core
Script Location
You can configure the extension to use your Azure Blob storage credentials to access Azure Blob storage. The script location can be anywhere, as long as the VM can
route to that endpoint, such as GitHub or an internal file server.
Internet Connectivity
If you need to download a script externally such as from GitHub or Azure Storage, then additional firewall and Network Security Group ports need to be opened. For
example, if your script is located in Azure Storage, you can allow access using Azure NSG Service Tags for Storage.
If your script is on a local server, then you may still need to open additional firewall and Network Security Group ports.
Tips and Tricks
The highest failure rate for this extension is due to syntax errors in the script. Test that the script runs without error, and put additional logging into the
script to make it easier to find where it failed.
Write scripts that are idempotent, so that running them again accidentally does not cause system changes.
Ensure the scripts don't require user input when they run.
The script is allowed 90 minutes to run; anything longer will result in a failed provision of the extension.
Don't put reboots inside the script; this action will cause issues with other extensions that are being installed, and the extension won't continue after the
restart.
If you have a script that will cause a reboot, then install applications and run scripts, you can schedule the reboot using a Windows Scheduled Task, or use tools
such as DSC, Chef, or Puppet extensions.
It is not recommended to run a script that will cause a stop or update of the VM Agent. This can leave the extension in a Transitioning state, leading to a timeout.
The extension will only run a script once. If you want to run a script on every boot, you need to use the extension to create a Windows Scheduled Task.
If you want to schedule when a script will run, you should use the extension to create a Windows Scheduled Task.
When the script is running, you will only see a 'transitioning' extension status from the Azure portal or CLI. If you want more frequent status updates of a running
script, you'll need to create your own solution.
Custom Script extension does not natively support proxy servers, however you can use a file transfer tool that supports proxy servers within your script, such as
Curl
Be aware of non-default directory locations that your scripts or commands may rely on, have logic to handle this situation.
Custom Script Extension will run under the LocalSystem Account
If you plan to use the storageAccountName and storageAccountKey properties, these properties must be collocated in protectedSettings.
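A practical consequence of the run-once behavior noted in the tips above is that the extension re-executes only when its configuration changes. The public timestamp setting exists solely to make the configuration differ; a sketch of bumping it (the settings JSON is illustrative):

```shell
old='{"fileUris":["script location"],"timestamp":123456789}'

# Bump the timestamp; its value is meaningless to the script itself --
# changing it simply alters the extension configuration, forcing a re-run.
new=$(python3 - "$old" <<'EOF'
import json
import sys

settings = json.loads(sys.argv[1])
settings["timestamp"] = settings["timestamp"] + 1
print(json.dumps(settings))
EOF
)
echo "$new"
```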
Extension schema
The Custom Script Extension configuration specifies things like script location and the command to be run. You can store this configuration in configuration files,
specify it on the command line, or specify it in an Azure Resource Manager template.
You can store sensitive data in a protected configuration, which is encrypted and only decrypted inside the virtual machine. The protected configuration is useful
when the execution command includes secrets such as a password.
These items should be treated as sensitive data and specified in the extensions protected setting configuration. Azure VM extension protected setting data is
encrypted, and only decrypted on the target virtual machine.
{
"apiVersion": "2018-06-01",
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "virtualMachineName/config-app",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'),copyindex())]",
"[variables('musicstoresqlName')]"
],
"tags": {
"displayName": "config-app"
},
"properties": {
"publisher": "Microsoft.Compute",
"type": "CustomScriptExtension",
"typeHandlerVersion": "1.10",
"autoUpgradeMinorVersion": true,
"settings": {
"fileUris": [
"script location"
],
"timestamp":123456789
},
"protectedSettings": {
"commandToExecute": "myExecutionCommand",
"storageAccountName": "myStorageAccountName",
"storageAccountKey": "myStorageAccountKey",
"managedIdentity" : {}
}
}
}
NOTE
managedIdentity property must not be used in conjunction with storageAccountName or storageAccountKey properties
NOTE
Only one version of an extension can be installed on a VM at a point in time, specifying custom script twice in the same Resource Manager template for the same VM will fail.
NOTE
We can use this schema inside the VirtualMachine resource or as a standalone resource. The name of the resource has to be in this format "virtualMachineName/extensionName", if
this extension is used as a standalone resource in the ARM template.
Property values
NAME VALUE / EXAMPLE DATA TYPE
NOTE
These property names are case-sensitive. To avoid deployment problems, use the names as shown here.
The following values can be set in either public or protected settings; the extension will reject any configuration where these values are set in both public and
protected settings.
commandToExecute
Using public settings may be useful for debugging, but it's recommended that you use protected settings.
Public settings are sent in clear text to the VM where the script will be executed. Protected settings are encrypted using a key known only to Azure and the VM.
The settings are saved to the VM as they were sent; that is, if the settings were encrypted, they're saved encrypted on the VM. The certificate used to decrypt the
encrypted values is stored on the VM, and used to decrypt settings (if necessary) at runtime.
Property: managedIdentity
NOTE
This property must be specified in protected settings only.
CustomScript (version 1.10 onwards) supports managed identity for downloading file(s) from URLs provided in the "fileUris" setting. It allows CustomScript to
access Azure Storage private blobs or containers without the user having to pass secrets like SAS tokens or storage account keys.
To use this feature, the user must add a system-assigned or user-assigned identity to the VM or VMSS where CustomScript is expected to run, and grant the
managed identity access to the Azure Storage container or blob.
To use the system-assigned identity on the target VM/VMSS, set the "managedIdentity" field to an empty JSON object.
Example:
{
"fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.ps1"],
"commandToExecute": "powershell.exe script1.ps1",
"managedIdentity" : {}
}
To use the user-assigned identity on the target VM/VMSS, configure the "managedIdentity" field with the client ID or the object ID of the managed identity.
Examples:
{
"fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.ps1"],
"commandToExecute": "powershell.exe script1.ps1",
"managedIdentity" : { "clientId": "31b403aa-c364-4240-a7ff-d85fb6cd7232" }
}
{
"fileUris": ["https://mystorage.blob.core.windows.net/privatecontainer/script1.ps1"],
"commandToExecute": "powershell.exe script1.ps1",
"managedIdentity" : { "objectId": "12dd289c-0583-46e5-b9b4-115d5c19ef4b" }
}
NOTE
managedIdentity property must not be used in conjunction with storageAccountName or storageAccountKey properties
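The constraint in the note above can be enforced before deployment. A sketch that checks a protected-settings object locally (the sample settings are illustrative):

```shell
# Rejects settings that combine managedIdentity with storage account
# credentials, which the extension does not allow.
validate_protected_settings() {
  python3 - "$1" <<'EOF'
import json
import sys

s = json.loads(sys.argv[1])
if "managedIdentity" in s and ("storageAccountName" in s or "storageAccountKey" in s):
    print("invalid: managedIdentity cannot be combined with storage account keys")
else:
    print("valid")
EOF
}

validate_protected_settings '{"commandToExecute":"powershell.exe script1.ps1","managedIdentity":{}}'
validate_protected_settings '{"managedIdentity":{},"storageAccountName":"mystorage"}'
```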
Template deployment
Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema, which is detailed in the previous section, can be used in an Azure
Resource Manager template to run the Custom Script Extension during deployment. The following samples show how to use the Custom Script extension:
Tutorial: Deploy virtual machine extensions with Azure Resource Manager templates
Deploy Two Tier Application on Windows and Azure SQL DB
PowerShell deployment
The Set-AzVMCustomScriptExtension command can be used to add the Custom Script extension to an existing virtual machine. For more information, see Set-
AzVMCustomScriptExtension.
Set-AzVMCustomScriptExtension -ResourceGroupName <resourceGroupName> `
-VMName <vmName> `
-Location myLocation `
-FileUri <fileUrl> `
-Run 'myScript.ps1' `
-Name DemoScriptExtension
Additional examples
Using multiple scripts
In this example, three scripts are used to build the server. The commandToExecute property calls the first script; you then have options for how the others are
called. For example, a master script can control the execution, with the right error handling, logging, and state management. The scripts are
downloaded to the local machine for running. For example, in 1_Add_Tools.ps1 you would call 2_Add_Features.ps1 by adding .\2_Add_Features.ps1 to the script, and
repeat this process for the other scripts you define in $settings .
$fileUri = @("https://xxxxxxx.blob.core.windows.net/buildServer1/1_Add_Tools.ps1",
"https://xxxxxxx.blob.core.windows.net/buildServer1/2_Add_Features.ps1",
"https://xxxxxxx.blob.core.windows.net/buildServer1/3_CompleteInstall.ps1")
$settings = @{"fileUris" = $fileUri};
$storageAcctName = "xxxxxxx"
$storageKey = "1234ABCD"
$protectedSettings = @{"storageAccountName" = $storageAcctName; "storageAccountKey" = $storageKey; "commandToExecute" = "powershell -ExecutionPolicy Unrestricted -File 1_Add_Tools.ps1"};
# run the command
Set-AzVMExtension -ResourceGroupName <resourceGroupName> `
-Location <locationName> `
-VMName <vmName> `
-Name "buildserver1" `
-Publisher "Microsoft.Compute" `
-ExtensionType "CustomScriptExtension" `
-TypeHandlerVersion "1.10" `
-Settings $settings `
-ProtectedSettings $protectedSettings
If a deployed script calls Invoke-WebRequest, it may fail with the following error: "The response content cannot be parsed because the Internet Explorer engine is not available, or Internet Explorer's first-launch configuration is not complete. Specify the UseBasicParsing parameter and try again." In that case, add the -UseBasicParsing parameter to the Invoke-WebRequest call.
Classic VMs
IMPORTANT
Classic VMs will be retired on March 1, 2023.
If you use IaaS resources from Azure Service Manager (ASM), please complete your migration by March 1, 2023. We encourage you to make the switch sooner to take advantage of the many feature
enhancements in Azure Resource Manager.
For more information, see Migrate your IaaS resources to Azure Resource Manager by March 1, 2023.
To deploy the Custom Script Extension on classic VMs, you can use the Azure portal or the Classic Azure PowerShell cmdlets.
Azure portal
Navigate to your Classic VM resource. Select Extensions under Settings .
Click + Add and in the list of resources choose Custom Script Extension .
On the Install extension page, select the local PowerShell file, and fill out any arguments and click Ok .
PowerShell
The Set-AzureVMCustomScriptExtension cmdlet can be used to add the Custom Script extension to an existing virtual machine.
# create vm object
$vm = Get-AzureVM -Name <vmName> -ServiceName <cloudServiceName>
# URI of the script to download (for example, a blob URL)
$fileUri = '<fileUrl>'
# set extension
Set-AzureVMCustomScriptExtension -VM $vm -FileUri $fileUri -Run 'Create-File.ps1'
# update vm
$vm | Update-AzureVM
Extension output is logged to files found under the following folder on the target virtual machine.
C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension
The specified files are downloaded into the following folder on the target virtual machine.
C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.*\Downloads\<n>
where <n> is a decimal integer, which may change between executions of the extension. The 1.* value matches the actual, current typeHandlerVersion value of the
extension. For example, the actual directory could be C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downloads\2 .
When executing the commandToExecute command, the extension sets this directory (for example, ...\Downloads\2 ) as the current working directory. This process
enables the use of relative paths to locate the files downloaded via the fileURIs property. See the table below for examples.
Since the absolute download path may vary over time, it's better to opt for relative script/file paths in the commandToExecute string whenever possible (for example, ./scripts/myscript.ps1 rather than the full absolute path).
Path information after the first URI segment is kept for files downloaded via the fileUris property list. As shown in the examples below, downloaded files are mapped
into download subdirectories to reflect the structure of the fileUris values.
Examples of downloaded files:
URI in fileUris: https://someAcct.blob.core.windows.net/aContainer/scripts/myscript.ps1
Relative downloaded location: ./scripts/myscript.ps1
Absolute downloaded location (1): C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downlo
URI in fileUris: https://someAcct.blob.core.windows.net/aContainer/topLevel.ps1
Relative downloaded location: ./topLevel.ps1
Absolute downloaded location (1): C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\Downlo
(1) The absolute directory paths change over the lifetime of the VM, but not within a single execution of the CustomScript extension.
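Building on the download-layout examples above, a settings fragment that uses a relative path could look like the following sketch. It reuses the example URI from the table; the account and container names are illustrative only.

```json
{
  "fileUris": ["https://someAcct.blob.core.windows.net/aContainer/scripts/myscript.ps1"],
  "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File ./scripts/myscript.ps1"
}
```

Because the extension sets the Downloads\<n> directory as the working directory, the relative path ./scripts/myscript.ps1 resolves correctly regardless of which numbered download directory is in use.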
Support
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack Overflow forums. You can also file an Azure
support incident. Go to the Azure support site and select Get support. For information about using Azure Support, read the Microsoft Azure support FAQ.
Microsoft Antimalware Extension for Windows
11/2/2020 • 4 minutes to read • Edit Online
Overview
The modern threat landscape for cloud environments is extremely dynamic, increasing the pressure on business IT
cloud subscribers to maintain effective protection in order to meet compliance and security requirements.
Microsoft Antimalware for Azure is a free, real-time protection capability that helps identify and remove viruses,
spyware, and other malicious software, with configurable alerts when known malicious or unwanted software
attempts to install itself or run on your Azure systems. The solution is built on the same antimalware platform as
Microsoft Security Essentials (MSE), Microsoft Forefront Endpoint Protection, Microsoft System Center Endpoint
Protection, Windows Intune, and Windows Defender for Windows 8.0 and higher. Microsoft Antimalware for Azure
is a single-agent solution for applications and tenant environments, designed to run in the background without
human intervention. You can deploy protection based on the needs of your application workloads, with either basic
secure-by-default or advanced custom configuration, including antimalware monitoring.
Prerequisites
Operating system
The Microsoft Antimalware for Azure solution includes the Microsoft Antimalware Client and Service, the
Antimalware classic deployment model, Antimalware PowerShell cmdlets, and the Azure Diagnostics extension. The
Microsoft Antimalware solution is supported on the Windows Server 2008 R2, Windows Server 2012, and Windows
Server 2012 R2 operating system families. It is not supported on the Windows Server 2008 operating system, nor is
it supported on Linux.
Windows Defender is the built-in antimalware enabled in Windows Server 2016. The Windows Defender interface
is also enabled by default on some Windows Server 2016 SKUs. The Azure VM Antimalware extension can still be
added to a Windows Server 2016 Azure VM with Windows Defender, but in this scenario the extension only applies
any optional configuration policies to be used by Windows Defender; it does not deploy any additional
antimalware service. You can read more about this update here.
Internet connectivity
Microsoft Antimalware for Windows requires that the target virtual machine is connected to the internet to
receive regular engine and signature updates.
Template deployment
Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when
deploying one or more virtual machines that require post deployment configuration such as onboarding to Azure
Antimalware.
The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource, or
placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration
affects the value of the resource name and type. For more information, see Set name and type for child resources.
The following example assumes the VM extension is nested inside the virtual machine resource. When nesting the
extension resource, the JSON is placed in the "resources": [] object of the virtual machine.
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(parameters('vmName'),'/', parameters('vmExtensionName'))]",
"apiVersion": "2019-07-01",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
],
"properties": {
"publisher": "Microsoft.Azure.Security",
"type": "IaaSAntimalware",
"typeHandlerVersion": "1.3",
"autoUpgradeMinorVersion": true,
"settings": {
"AntimalwareEnabled": "true",
"Exclusions": {
"Extensions": ".log;.ldf",
"Paths": "D:\\IISlogs;D:\\DatabaseLogs",
"Processes": "mssence.svc"
},
"RealtimeProtectionEnabled": "true",
"ScheduledScanSettings": {
"isEnabled": "true",
"scanType": "Quick",
"day": "7",
"time": "120"
}
},
"protectedSettings": null
}
}
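The ScheduledScanSettings values in the template above are easy to misread: day is a day-of-week code, and time is expressed as the number of minutes after midnight, so "120" schedules the quick scan for 2:00 AM. The following Python helper (illustrative only, not part of the extension) converts such a value to a clock time:

```python
def scan_time_to_clock(minutes: int) -> str:
    """Convert a ScheduledScanSettings 'time' value (minutes after
    midnight, 0-1440) to a 24-hour HH:MM string."""
    if not 0 <= minutes <= 1440:
        raise ValueError("time must be between 0 and 1440 minutes")
    return f"{minutes // 60:02d}:{minutes % 60:02d}"

print(scan_time_to_clock(120))  # 02:00
```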
PowerShell deployment
Depending on your deployment type, use the corresponding commands to deploy the Azure Antimalware virtual
machine extension to an existing virtual machine.
Azure Resource Manager based Virtual Machine
Azure Service Fabric Clusters
Classic Cloud Service
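For an Azure Resource Manager based virtual machine, a deployment sketch with Set-AzVMExtension might look like the following. The publisher, extension type, and version values are taken from the template example above; the resource group, VM name, and location are placeholders you must replace.

```powershell
Set-AzVMExtension -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Location "<location>" `
    -Publisher "Microsoft.Azure.Security" `
    -ExtensionType "IaaSAntimalware" `
    -Name "IaaSAntimalware" `
    -TypeHandlerVersion "1.3" `
    -SettingString '{"AntimalwareEnabled": "true"}'
```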
Error code -2147156224: MSI is busy with another installation. Possible action: try running the installation later.
Error code -2147156221: MSE setup already running. Possible action: run only one instance at a time.
Error code -2147156208: Low disk space (< 200 MB). Possible action: delete unused files and retry the installation.
Error code -2147156121: Setup tried to remove a competitor product, but the competitor product uninstall failed. Possible action: try to remove the competitor product manually, reboot, and retry the installation.
Error code -2147156116: Policy file validation failed. Possible action: make sure you pass a valid policy XML file to setup.
Error code -2147156095: Setup couldn't start the Antimalware service. Possible action: verify that all binaries are correctly signed and the right licensing file is installed.
Error code -2147023293: A fatal error occurred during installation. In most cases, Epp.msi can't register, start, or stop the AM service or mini-filter driver. Possible action: MSI logs from EPP.msi are required for further investigation.
Error code -2147023277: Installation package could not be opened. Possible action: verify that the package exists and is accessible, or contact the application vendor to verify that this is a valid Windows Installer package.
Error code 1: Incorrect function.
Support
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack
Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site, and select Get
support. For information about using Azure Support, read the Microsoft Azure support FAQ.
VM Snapshot Windows extension for Azure Backup
11/2/2020 • 3 minutes to read • Edit Online
Azure Backup provides support for backing up workloads from on-premises to the cloud and backing up cloud
resources to a Recovery Services vault. Azure Backup uses the VM Snapshot extension to take an application-consistent
backup of the Azure virtual machine without the need to shut down the VM. The VM Snapshot extension is published
and supported by Microsoft as part of the Azure Backup service. Azure Backup installs the extension as part of the first
scheduled backup triggered after enabling backup. This document details the supported platforms, configurations,
and deployment options for the VM Snapshot extension.
The VMSnapshot extension appears in the Azure portal only for non-managed VMs.
Prerequisites
Operating system
For a list of supported operating systems, refer to Operating Systems supported by Azure Backup
Extension schema
The following JSON shows the schema for the VM Snapshot extension. The extension requires the following settings:
the task ID, which identifies the backup job that triggered the snapshot on the VM; the status blob URI, where the
status of the snapshot operation is written; the scheduled start time of the snapshot; the logs blob URI, where logs
corresponding to the snapshot task are written; and objstr, a representation of the VM disks and metadata. Because
these settings should be treated as sensitive data, they should be stored in a protected setting configuration. Azure
VM extension protected setting data is encrypted, and only decrypted on the target virtual machine. Note that these
settings are recommended to be passed from the Azure Backup service only, as part of a backup job.
{
"type": "extensions",
"name": "VMSnapshot",
"location":"<myLocation>",
"properties": {
"publisher": "Microsoft.Azure.RecoveryServices",
"type": "VMSnapshot",
"typeHandlerVersion": "1.9",
"autoUpgradeMinorVersion": true,
"settings": {
"locale":"<location>",
"taskId":"<taskId used by Azure Backup service to communicate with extension>",
"commandToExecute": "snapshot",
"commandStartTimeUTCTicks": "<scheduled start time of the snapshot task>",
"vmType": "microsoft.compute/virtualmachines"
},
"protectedSettings": {
"objectStr": "<blob SAS uri representation of VM sent by Azure Backup service to extension>",
"logsBlobUri": "<blob uri where logs of command execution by extension are written to>",
"statusBlobUri": "<blob uri where status of the command executed by extension is written>"
}
}
}
Property values
NAME VALUE / EXAMPLE DATA TYPE
Template deployment
Azure VM extensions can be deployed with Azure Resource Manager templates. However, the recommended way
of adding a VM snapshot extension to a virtual machine is by enabling backup on the virtual machine. This can be
achieved through a Resource Manager template. A sample Resource Manager template that enables backup on a
virtual machine can be found on the Azure Quick Start Gallery.
Overview
Azure Network Watcher is a network performance monitoring, diagnostic, and analytics service that allows
monitoring of Azure networks. The Network Watcher Agent virtual machine extension is a requirement for
capturing network traffic on demand, and other advanced functionality on Azure virtual machines.
This document details the supported platforms and deployment options for the Network Watcher Agent virtual
machine extension for Windows. Installation of the agent does not disrupt the virtual machine or require a reboot.
You can deploy the extension into virtual machines that you deploy. If the virtual machine is deployed by
an Azure service, check the documentation for that service to determine whether it permits installing
extensions in the virtual machine.
Prerequisites
Operating system
The Network Watcher Agent extension for Windows can be run against the Windows Server 2008 R2, 2012, 2012 R2,
2016, and 2019 releases. Nano Server is not supported at this time.
Internet connectivity
Some of the Network Watcher Agent functionality requires that the target virtual machine be connected to the
Internet. Without the ability to establish outgoing connections, the Network Watcher Agent will not be able to
upload packet captures to your storage account. For more details, please see the Network Watcher documentation.
Extension schema
The following JSON shows the schema for the Network Watcher Agent extension. The extension neither requires,
nor supports, any user-supplied settings, and relies on its default configuration.
{
"type": "extensions",
"name": "Microsoft.Azure.NetworkWatcher",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.Azure.NetworkWatcher",
"type": "NetworkWatcherAgentWindows",
"typeHandlerVersion": "1.4",
"autoUpgradeMinorVersion": true
}
}
Property values
NAME VALUE / EXAMPLE
apiVersion 2015-06-15
publisher Microsoft.Azure.NetworkWatcher
type NetworkWatcherAgentWindows
typeHandlerVersion 1.4
Template deployment
You can deploy Azure VM extensions with Azure Resource Manager templates. You can use the JSON schema
detailed in the previous section in an Azure Resource Manager template to run the Network Watcher Agent
extension during an Azure Resource Manager template deployment.
PowerShell deployment
Use the Set-AzVMExtension command to deploy the Network Watcher Agent virtual machine extension to an
existing virtual machine:
Set-AzVMExtension `
-ResourceGroupName "myResourceGroup1" `
-Location "WestUS" `
-VMName "myVM1" `
-Name "networkWatcherAgent" `
-Publisher "Microsoft.Azure.NetworkWatcher" `
-Type "NetworkWatcherAgentWindows" `
-TypeHandlerVersion "1.4"
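Equivalently, the extension can be deployed with the Azure CLI. This is a sketch: the extension name, publisher, and version come from the property table above, while the resource group and VM name are the same placeholders used in the PowerShell example.

```azurecli
az vm extension set \
  --resource-group myResourceGroup1 \
  --vm-name myVM1 \
  --name NetworkWatcherAgentWindows \
  --publisher Microsoft.Azure.NetworkWatcher \
  --version 1.4
```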
Extension execution output is logged to files found in the following directory on the target virtual machine:
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.NetworkWatcher.NetworkWatcherAgentWindows\
Support
If you need more help at any point in this article, you can refer to the Network Watcher User Guide documentation
or contact the Azure experts on the MSDN Azure and Stack Overflow forums. Alternatively, you can file an Azure
support incident. Go to the Azure support site and select Get support. For information about using Azure Support,
read the Microsoft Azure support FAQ.
NVIDIA GPU Driver Extension for Windows
11/2/2020 • 3 minutes to read • Edit Online
Overview
This extension installs NVIDIA GPU drivers on Windows N-series VMs. Depending on the VM family, the
extension installs CUDA or GRID drivers. When you install NVIDIA drivers using this extension, you are
accepting and agreeing to the terms of the NVIDIA End-User License Agreement. During the installation
process, the VM may reboot to complete the driver setup.
Instructions on manual installation of the drivers and the current supported versions are available here. An
extension is also available to install NVIDIA GPU drivers on Linux N-series VMs.
Prerequisites
Operating system
This extension supports the following OSs:
Windows 10 Core
Internet connectivity
The Microsoft Azure Extension for NVIDIA GPU Drivers requires that the target VM is connected to the internet
and has access.
Extension schema
The following JSON shows the schema for the extension.
{
"name": "<myExtensionName>",
"type": "extensions",
"apiVersion": "2015-06-15",
"location": "<location>",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', <myVM>)]"
],
"properties": {
"publisher": "Microsoft.HpcCompute",
"type": "NvidiaGpuDriverWindows",
"typeHandlerVersion": "1.3",
"autoUpgradeMinorVersion": true,
"settings": {
}
}
}
Properties
NAME VALUE / EXAMPLE DATA TYPE
Deployment
Azure Resource Manager Template
Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when
deploying one or more virtual machines that require post deployment configuration.
The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource, or
placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration
affects the value of the resource name and type. For more information, see Set name and type for child
resources.
The following example assumes the extension is nested inside the virtual machine resource. When nesting the
extension resource, the JSON is placed in the "resources": [] object of the virtual machine.
{
"name": "myExtensionName",
"type": "extensions",
"location": "[resourceGroup().location]",
"apiVersion": "2015-06-15",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', myVM)]"
],
"properties": {
"publisher": "Microsoft.HpcCompute",
"type": "NvidiaGpuDriverWindows",
"typeHandlerVersion": "1.3",
"autoUpgradeMinorVersion": true,
"settings": {
}
}
}
PowerShell
Set-AzVMExtension `
    -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Location "southcentralus" `
    -Publisher "Microsoft.HpcCompute" `
    -ExtensionName "NvidiaGpuDriverWindows" `
    -ExtensionType "NvidiaGpuDriverWindows" `
    -TypeHandlerVersion "1.3" `
    -SettingString '{}'
Azure CLI
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name NvidiaGpuDriverWindows \
  --publisher Microsoft.HpcCompute \
  --version 1.3 \
  --settings '{}'
Extension execution output is logged to files found in the following directory on the target virtual machine:
C:\WindowsAzure\Logs\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\
Error codes
Error code 0: Operation successful.
Error code 100: Operation not supported or could not be completed. Possible causes: PowerShell version not supported, VM size is not an N-series VM, or failure downloading data. Check the log files to determine the cause of the error.
Error code -5x: Operation interrupted due to a pending reboot. Possible action: reboot the VM; installation will continue after the reboot. Uninstall should be invoked manually.
Support
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and
Stack Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site and
select Get support. For information about using Azure Support, read the Microsoft Azure support FAQ.
Next steps
For more information about extensions, see Virtual machine extensions and features for Windows.
For more information about N-series VMs, see GPU optimized virtual machine sizes.
AMD GPU driver extension for Windows
11/2/2020 • 3 minutes to read • Edit Online
This article provides an overview of the VM extension to deploy AMD GPU drivers on Windows NVv4-series VMs.
When you install AMD drivers using this extension, you are accepting and agreeing to the terms of the AMD End-
User License Agreement. During the installation process, the VM may reboot to complete the driver setup.
Instructions on manual installation of the drivers and the current supported versions are available here.
Prerequisites
Operating system
This extension supports the following OSs:
Internet connectivity
The Microsoft Azure Extension for AMD GPU Drivers requires that the target VM is connected to the internet and
has access.
Extension schema
The following JSON shows the schema for the extension.
{
"name": "<myExtensionName>",
"type": "extensions",
"apiVersion": "2015-06-15",
"location": "<location>",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', <myVM>)]"
],
"properties": {
"publisher": "Microsoft.HpcCompute",
"type": "AmdGpuDriverWindows",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
}
}
}
Properties
NAME VALUE / EXAMPLE DATA TYPE
Deployment
Azure Resource Manager Template
Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when
deploying one or more virtual machines that require post deployment configuration.
The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource, or
placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration
affects the value of the resource name and type. For more information, see Set name and type for child resources.
The following example assumes the extension is nested inside the virtual machine resource. When nesting the
extension resource, the JSON is placed in the "resources": [] object of the virtual machine.
{
"name": "myExtensionName",
"type": "extensions",
"location": "[resourceGroup().location]",
"apiVersion": "2015-06-15",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', myVM)]"
],
"properties": {
"publisher": "Microsoft.HpcCompute",
"type": "AmdGpuDriverWindows",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
}
}
}
PowerShell
Set-AzVMExtension `
    -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Location "southcentralus" `
    -Publisher "Microsoft.HpcCompute" `
    -ExtensionName "AmdGpuDriverWindows" `
    -ExtensionType "AmdGpuDriverWindows" `
    -TypeHandlerVersion "1.0" `
    -SettingString '{}'
Azure CLI
az vm extension set `
  --resource-group myResourceGroup `
  --vm-name myVM `
  --name AmdGpuDriverWindows `
  --publisher Microsoft.HpcCompute `
  --version 1.0 `
  --settings '{}'
Extension execution output is logged to files found in the following directory on the target virtual machine:
C:\WindowsAzure\Logs\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverMicrosoft\
Error codes
Error code 0: Operation successful.
Error code 100: Operation not supported or could not be completed. Possible causes: PowerShell version not supported, VM size is not an N-series VM, or failure downloading data. Check the log files to determine the cause of the error.
Error code -5x: Operation interrupted due to a pending reboot. Possible action: reboot the VM; installation will continue after the reboot. Uninstall should be invoked manually.
Support
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack
Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site and select Get
support. For information about using Azure Support, read the Microsoft Azure support FAQ.
Next steps
For more information about extensions, see Virtual machine extensions and features for Windows.
For more information about N-series VMs, see GPU optimized virtual machine sizes.
Azure Diagnostics extension overview
11/2/2020 • 4 minutes to read • Edit Online
Azure Diagnostics extension is an agent in Azure Monitor that collects monitoring data from the guest
operating system of Azure compute resources including virtual machines. This article provides an overview of
Azure Diagnostics extension including specific functionality that it supports and options for installation and
configuration.
NOTE
Azure Diagnostics extension is one of the agents available to collect monitoring data from the guest operating system of
compute resources. See Overview of the Azure Monitor agents for a description of the different agents and guidance on
selecting the appropriate agents for your requirements.
Primary scenarios
The primary scenarios addressed by the diagnostics extension are:
Collect guest metrics into Azure Monitor Metrics.
Send guest logs and metrics to Azure storage for archiving.
Send guest logs and metrics to Azure event hubs to send outside of Azure.
Costs
There is no cost for the Azure Diagnostics extension, but you may incur charges for the data ingested. Check Azure
Monitor pricing for the destination where you're collecting data.
Data collected
The following tables list the data that can be collected by the Windows and Linux diagnostics extension.
Windows diagnostics extension (WAD)
DATA SOURCE DESCRIPTION
IIS Logs Usage information for IIS web sites running on the guest
operating system.
.NET EventSource logs Code writing events using the .NET EventSource class
Manifest based ETW logs Event Tracing for Windows events generated by any process.
Crash dumps (logs) Information about the state of the process if an application
crashes.
Data destinations
The Azure Diagnostics extension for both Windows and Linux always collects data into an Azure Storage account.
See Install and configure Windows Azure diagnostics extension (WAD) and Use Linux Diagnostic Extension to
monitor metrics and logs for a list of specific tables and blobs where this data is collected.
Configure one or more data sinks to send data to other additional destinations. The following sections list the
sinks available for the Windows and Linux diagnostics extension.
Windows diagnostics extension (WAD)
DESTINATION DESCRIPTION
Azure Monitor Metrics Collect performance data to Azure Monitor Metrics. See
Send Guest OS metrics to the Azure Monitor metric
database.
Event hubs Use Azure Event Hubs to send data outside of Azure. See
Streaming Azure Diagnostics data to Event Hubs
Azure Storage blobs Write data to blobs in Azure Storage in addition to tables.
You can also collect WAD data from storage into a Log Analytics workspace to analyze it with Azure Monitor
Logs, although the Log Analytics agent is typically used for this functionality. The agent can send data directly to a Log
Analytics workspace and supports solutions and insights that provide additional functionality. See Collect Azure
diagnostic logs from Azure Storage.
Linux diagnostics extension (LAD)
LAD writes data to tables in Azure Storage. It supports the sinks in the following table.
Event hubs Use Azure Event Hubs to send data outside of Azure.
Azure Storage blobs Write data to blobs in Azure Storage in addition to tables.
Azure Monitor Metrics Install the Telegraf agent in addition to LAD. See Collect
custom metrics for a Linux VM with the InfluxData Telegraf
agent.
Other documentation
Azure Cloud Service (classic) Web and Worker Roles
Introduction to Cloud Service Monitoring
Enabling Azure Diagnostics in Azure Cloud Services
Application Insights for Azure cloud services
Trace the flow of a Cloud Services application with Azure Diagnostics
Azure Service Fabric
Monitor and diagnose services in a local machine development setup
Next steps
Learn to use Performance Counters in Azure Diagnostics.
If you have trouble with diagnostics starting or finding your data in Azure storage tables, see
TroubleShooting Azure Diagnostics
Log Analytics virtual machine extension for Windows
11/2/2020 • 5 minutes to read • Edit Online
Azure Monitor Logs provides monitoring capabilities across cloud and on-premises assets. The Log Analytics
agent virtual machine extension for Windows is published and supported by Microsoft. The extension installs the
Log Analytics agent on Azure virtual machines, and enrolls virtual machines into an existing Log Analytics
workspace. This document details the supported platforms, configurations, and deployment options for the Log
Analytics virtual machine extension for Windows.
Prerequisites
Operating system
For details about the supported Windows operating systems, refer to the Overview of Azure Monitor agents
article.
Agent and VM Extension version
The following table provides a mapping of the version of the Windows Log Analytics VM extension and Log
Analytics agent bundle for each release.
LOG ANALYTICS WINDOWS AGENT BUNDLE VERSION | LOG ANALYTICS WINDOWS VM EXTENSION VERSION | RELEASE DATE | RELEASE NOTES
Extension schema
The following JSON shows the schema for the Log Analytics agent extension. The extension requires the
workspace ID and workspace key from the target Log Analytics workspace. These can be found in the settings for
the workspace in the Azure portal. Because the workspace key should be treated as sensitive data, it should be
stored in a protected setting configuration. Azure VM extension protected setting data is encrypted, and only
decrypted on the target virtual machine. Note that workspaceId and workspaceKey are case-sensitive.
{
"type": "extensions",
"name": "OMSExtension",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.EnterpriseCloud.Monitoring",
"type": "MicrosoftMonitoringAgent",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"workspaceId": "myWorkSpaceId"
},
"protectedSettings": {
"workspaceKey": "myWorkspaceKey"
}
}
}
Property values
NAME VALUE / EXAMPLE
apiVersion 2015-06-15
publisher Microsoft.EnterpriseCloud.Monitoring
type MicrosoftMonitoringAgent
typeHandlerVersion 1.0
NOTE
For additional properties see Azure Connect Windows Computers to Azure Monitor.
Template deployment
Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema detailed in the
previous section can be used in an Azure Resource Manager template to run the Log Analytics agent extension
during an Azure Resource Manager template deployment. A sample template that includes the Log Analytics
agent VM extension can be found on the Azure Quickstart Gallery.
NOTE
The template does not support specifying more than one workspace ID and workspace key when you want to configure
the agent to report to multiple workspaces. To configure the agent to report to multiple workspaces, see Adding or
removing a workspace.
The JSON for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root
or top level of a Resource Manager JSON template. The placement of the JSON affects the value of the resource
name and type. For more information, see Set name and type for child resources.
The following example assumes the Log Analytics extension is nested inside the virtual machine resource. When
nesting the extension resource, the JSON is placed in the "resources": [] object of the virtual machine.
{
"type": "extensions",
"name": "OMSExtension",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.EnterpriseCloud.Monitoring",
"type": "MicrosoftMonitoringAgent",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"workspaceId": "myWorkSpaceId"
},
"protectedSettings": {
"workspaceKey": "myWorkspaceKey"
}
}
}
When placing the extension JSON at the root of the template, the resource name includes a reference to the
parent virtual machine, and the type reflects the nested configuration.
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "<parentVmResource>/OMSExtension",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.EnterpriseCloud.Monitoring",
"type": "MicrosoftMonitoringAgent",
"typeHandlerVersion": "1.0",
"autoUpgradeMinorVersion": true,
"settings": {
"workspaceId": "myWorkSpaceId"
},
"protectedSettings": {
"workspaceKey": "myWorkspaceKey"
}
}
}
PowerShell deployment
The Set-AzVMExtension command can be used to deploy the Log Analytics agent virtual machine extension to an
existing virtual machine. Before running the command, the public and private configurations need to be stored in
a PowerShell hash table.
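The following sketch shows that pattern; the resource group, VM name, location, and workspace values are placeholders you would substitute with your own:

```powershell
# Public settings (workspace ID) and protected settings (workspace key)
# are stored in hash tables before being passed to Set-AzVMExtension.
$PublicSettings = @{ "workspaceId" = "myWorkspaceId" }
$ProtectedSettings = @{ "workspaceKey" = "myWorkspaceKey" }

Set-AzVMExtension -ExtensionName "MicrosoftMonitoringAgent" `
    -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
    -ExtensionType "MicrosoftMonitoringAgent" `
    -TypeHandlerVersion "1.0" `
    -Settings $PublicSettings `
    -ProtectedSettings $ProtectedSettings `
    -Location "WestUS"
```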
Extension execution output is logged to files found in the following directory:
C:\WindowsAzure\Logs\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent\
Support
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and
Stack Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site and
select Get support. For information about using Azure Support, read the Microsoft Azure support FAQ.
Azure Monitor Dependency virtual machine
extension for Windows
1/14/2020
The Azure Monitor for VMs Map feature gets its data from the Microsoft Dependency agent. The Azure VM
Dependency agent virtual machine extension for Windows is published and supported by Microsoft. The extension
installs the Dependency agent on Azure virtual machines. This document details the supported platforms,
configurations, and deployment options for the Azure VM Dependency agent virtual machine extension for
Windows.
Operating system
The Azure VM Dependency agent extension for Windows can be run against the supported operating systems
listed in the Supported operating systems section of the Azure Monitor for VMs deployment article.
Extension schema
The following JSON shows the schema for the Azure VM Dependency agent extension on an Azure Windows VM.
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"vmName": {
"type": "string",
"metadata": {
"description": "The name of existing Azure VM. Supported Windows Server versions: 2008 R2 and above (x64)."
}
}
},
"variables": {
"vmExtensionsApiVersion": "2017-03-30"
},
"resources": [
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(parameters('vmName'),'/DAExtension')]",
"apiVersion": "[variables('vmExtensionsApiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
],
"properties": {
"publisher": "Microsoft.Azure.Monitoring.DependencyAgent",
"type": "DependencyAgentWindows",
"typeHandlerVersion": "9.5",
"autoUpgradeMinorVersion": true
}
}
],
"outputs": {
}
}
Property values
NAME                 VALUE/EXAMPLE
apiVersion 2015-01-01
publisher Microsoft.Azure.Monitoring.DependencyAgent
type DependencyAgentWindows
typeHandlerVersion 9.5
Template deployment
You can deploy the Azure VM extensions with Azure Resource Manager templates. You can use the JSON schema
detailed in the previous section in an Azure Resource Manager template to run the Azure VM Dependency agent
extension during an Azure Resource Manager template deployment.
The JSON for a virtual machine extension can be nested inside the virtual machine resource. Or, you can place it at
the root or top level of a Resource Manager JSON template. The placement of the JSON affects the value of the
resource name and type. For more information, see Set name and type for child resources.
The following example assumes the Dependency agent extension is nested inside the virtual machine resource.
When you nest the extension resource, the JSON is placed in the "resources": [] object of the virtual machine.
{
"type": "extensions",
"name": "DAExtension",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.Azure.Monitoring.DependencyAgent",
"type": "DependencyAgentWindows",
"typeHandlerVersion": "9.5",
"autoUpgradeMinorVersion": true
}
}
When you place the extension JSON at the root of the template, the resource name includes a reference to the
parent virtual machine. The type reflects the nested configuration.
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "<parentVmResource>/DAExtension",
"apiVersion": "[variables('apiVersion')]",
"location": "[resourceGroup().location]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Microsoft.Azure.Monitoring.DependencyAgent",
"type": "DependencyAgentWindows",
"typeHandlerVersion": "9.5",
"autoUpgradeMinorVersion": true
}
}
PowerShell deployment
You can use the Set-AzVMExtension command to deploy the Dependency agent virtual machine extension to an
existing virtual machine. Before you run the command, the public and private configurations need to be stored in a
PowerShell hash table.
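The Dependency agent extension takes no public or protected settings, so a minimal deployment only needs the publisher, type, and version. In this sketch, the resource group, VM name, and location are placeholders:

```powershell
# Deploy the Dependency agent extension to an existing Windows VM.
# No -Settings or -ProtectedSettings hash tables are required.
Set-AzVMExtension -ExtensionName "DependencyAgentWindows" `
    -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Publisher "Microsoft.Azure.Monitoring.DependencyAgent" `
    -ExtensionType "DependencyAgentWindows" `
    -TypeHandlerVersion "9.5" `
    -Location "WestUS"
```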
Extension execution output is logged to files found in the following directory:
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Monitoring.DependencyAgent\
Support
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack
Overflow forums. Or, you can file an Azure support incident. Go to the Azure support site and select Get support.
For information about how to use Azure Support, read the Microsoft Azure support FAQ.
Introduction to the Azure Desired State
Configuration extension handler
11/2/2020
The Azure VM Agent and associated extensions are part of Microsoft Azure infrastructure services. VM extensions
are software components that extend VM functionality and simplify various VM management operations.
The primary use case for the Azure Desired State Configuration (DSC) extension is to bootstrap a VM to the Azure
Automation State Configuration (DSC) service. The service provides benefits that include ongoing management of
the VM configuration and integration with other operational tools, such as Azure Monitor. Using the extension
to register VMs with the service provides a flexible solution that even works across Azure subscriptions.
You can use the DSC extension independently of the Automation DSC service. However, this will only push a
configuration to the VM. No ongoing reporting is available, other than locally in the VM.
This article provides information about both scenarios: using the DSC extension for Automation onboarding, and
using the DSC extension as a tool for assigning configurations to VMs by using the Azure SDK.
Prerequisites
Local machine: To interact with the Azure VM extension, you must use either the Azure portal or the Azure
PowerShell SDK.
Guest agent: The Azure VM that's configured by the DSC configuration must run an OS that supports
Windows Management Framework (WMF) 4.0 or later. For the full list of supported OS versions, see the DSC
extension version history.
Architecture
The Azure DSC extension uses the Azure VM Agent framework to deliver, enact, and report on DSC configurations
running on Azure VMs. The DSC extension accepts a configuration document and a set of parameters. If no file is
provided, a default configuration script is embedded with the extension. The default configuration script is used
only to set metadata in Local Configuration Manager.
When the extension is called for the first time, it installs a version of WMF by using the following logic:
If the Azure VM OS is Windows Server 2016, no action is taken. Windows Server 2016 already has the latest
version of PowerShell installed.
If the wmfVersion property is specified, that version of WMF is installed, unless that version is incompatible
with the VM's OS.
If no wmfVersion property is specified, the latest applicable version of WMF is installed.
Installing WMF requires a restart. After restarting, the extension downloads the .zip file that's specified in the
modulesUrl property, if provided. If this location is in Azure Blob storage, you can specify an SAS token in the
sasToken property to access the file. After the .zip is downloaded and unpacked, the configuration function
defined in configurationFunction runs to generate a .mof (Managed Object Format) file. The extension then
runs Start-DscConfiguration -Force by using the generated .mof file. The extension captures output and writes it
to the Azure status channel.
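The enactment flow above can be sketched conceptually in PowerShell. This is an illustration of the sequence, not the handler's actual implementation; the paths, URL variable, script name, and configuration name are all assumptions:

```powershell
# Conceptual sketch of what the DSC extension handler does after WMF is in place.
# $modulesUrl is assumed to hold the value of the modulesUrl property.
$zip = "$env:TEMP\configuration.ps1.zip"
Invoke-WebRequest -Uri $modulesUrl -OutFile $zip          # download the .zip package
Expand-Archive -Path $zip -DestinationPath "C:\DscWork" -Force

. "C:\DscWork\ConfigurationScript.ps1"                    # dot-source the configuration script
IISInstall -OutputPath "C:\DscWork\Mof"                   # run the configuration function; emits localhost.mof

# Enact the generated .mof, as the extension does with Start-DscConfiguration -Force
Start-DscConfiguration -Path "C:\DscWork\Mof" -Force -Wait -Verbose
```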
Default configuration script
The Azure DSC extension includes a default configuration script that's intended to be used when you onboard a
VM to the Azure Automation DSC service. The script parameters are aligned with the configurable properties of
Local Configuration Manager. For script parameters, see Default configuration script in Desired State
Configuration extension with Azure Resource Manager templates. For the full script, see the Azure quickstart
template in GitHub.
For the node configuration name, make sure the node configuration exists in Azure State Configuration. If it does
not, the extension deployment returns a failure. Also make sure you are using the name of the node
configuration and not the configuration. A configuration is defined in a script that is used to compile the node
configuration (the .mof file). The name is always the configuration name followed by a period (.) and either
localhost or a specific computer name, for example IISInstall.localhost.
configuration IISInstall
{
node "localhost"
{
WindowsFeature IIS
{
Ensure = "Present"
Name = "Web-Server"
}
}
}
The following commands place the iisInstall.ps1 script on the specified VM. The commands also execute the
configuration, and then report back on status.
$resourceGroup = 'dscVmDemo'
$vmName = 'myVM'
$storageName = 'demostorage'
#Publish the configuration script to user storage
Publish-AzVMDscConfiguration -ConfigurationPath .\iisInstall.ps1 -ResourceGroupName $resourceGroup -StorageAccountName $storageName -Force
#Set the VM to run the DSC configuration
Set-AzVMDscExtension -Version '2.76' -ResourceGroupName $resourceGroup -VMName $vmName -ArchiveStorageAccountName $storageName -ArchiveBlobName 'iisInstall.ps1.zip' -AutoUpdate -ConfigurationName 'IISInstall'
az vm extension set \
--resource-group myResourceGroup \
--vm-name myVM \
--name Microsoft.Powershell.DSC \
--publisher Microsoft.Powershell \
--version 2.77 --protected-settings '{}' \
--settings '{}'
az vm extension set \
--resource-group myResourceGroup \
--vm-name myVM \
--name DSCForLinux \
--publisher Microsoft.OSTCExtensions \
--version 2.7 --protected-settings '{}' \
--settings '{}'
Logs
Logs for the extension are stored in the following location:
C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\<version number>
Next steps
For more information about PowerShell DSC, go to the PowerShell documentation center.
Examine the Resource Manager template for the DSC extension.
For more functionality that you can manage by using PowerShell DSC, and for more DSC resources, browse
the PowerShell gallery.
For details about passing sensitive parameters into configurations, see Manage credentials securely with the
DSC extension handler.
PowerShell DSC Extension
11/13/2019
Overview
The PowerShell DSC Extension for Windows is published and supported by Microsoft. The extension uploads and
applies a PowerShell DSC Configuration on an Azure VM. The DSC Extension calls into PowerShell DSC to enact the
received DSC configuration on the VM. This document details the supported platforms, configurations, and
deployment options for the DSC virtual machine extension for Windows.
Prerequisites
Operating system
The DSC extension supports the following operating systems:
Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server
2008 R2 SP1, Windows Client 7/8.1/10
Internet connectivity
The DSC extension for Windows requires that the target virtual machine is able to communicate with Azure, and
with the location of the configuration package (.zip file) if that package is stored outside of Azure.
Extension schema
The following JSON shows the schema for the settings portion of the DSC Extension in an Azure Resource Manager
template.
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "Microsoft.Powershell.DSC",
"apiVersion": "2018-10-01",
"location": "<location>",
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.77",
"autoUpgradeMinorVersion": true,
"settings": {
"wmfVersion": "latest",
"configuration": {
"url": "http://validURLToConfigLocation",
"script": "ConfigurationScript.ps1",
"function": "ConfigurationFunction"
},
"configurationArguments": {
"argument1": "Value1",
"argument2": "Value2"
},
"configurationData": {
"url": "https://foo.psd1"
},
"privacy": {
"dataCollection": "enable"
},
"advancedOptions": {
"forcePullAndApply": false,
"downloadMappings": {
"specificDependencyKey": "https://myCustomDependencyLocation"
}
}
},
"protectedSettings": {
"configurationArguments": {
"parameterOfTypePSCredential1": {
"userName": "UsernameValue1",
"password": "PasswordValue1"
},
"parameterOfTypePSCredential2": {
"userName": "UsernameValue2",
"password": "PasswordValue2"
}
},
"configurationUrlSasToken": "?g!bber1sht0k3n",
"configurationDataUrlSasToken": "?dataAcC355T0k3N"
}
}
}
Property values
NAME                 VALUE/EXAMPLE                 DATA TYPE
Template deployment
Azure VM extensions can be deployed with Azure Resource Manager templates. Templates are ideal when
deploying one or more virtual machines that require post-deployment configuration. A sample Resource Manager
template that includes the DSC extension for Windows can be found on the Azure Quickstart Gallery.
C:\Packages\Plugins\{Extension_Name}\{Extension_Version}
The extension status file contains the substatus and status success/error codes, along with the detailed error and
description, for each extension run.
C:\WindowsAzure\Logs\Plugins\{Extension_Name}\{Extension_Version}
1004  Invalid Zip Package: invalid zip; error unpacking the zip
Support
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack
Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site and select Get
support. For information about using Azure Support, read the Microsoft Azure support FAQ.
Pass credentials to the Azure DSC extension handler
11/7/2019
This article covers the Desired State Configuration (DSC) extension for Azure. For an overview of the DSC extension
handler, see Introduction to the Azure Desired State Configuration extension handler.
Pass in credentials
As part of the configuration process, you might need to set up user accounts, access services, or install a program
in a user context. To do these things, you need to provide credentials.
You can use DSC to set up parameterized configurations. In a parameterized configuration, credentials are passed
into the configuration and securely stored in .mof files. The Azure extension handler simplifies credential
management by providing automatic management of certificates.
The following DSC configuration script creates a local user account with the specified password:
configuration Main
{
param(
[Parameter(Mandatory=$true)]
[ValidateNotNullorEmpty()]
[PSCredential]
$Credential
)
Node localhost {
User LocalUserAccount
{
Username = $Credential.UserName
Password = $Credential
Disabled = $false
Ensure = "Present"
FullName = "Local User Account"
Description = "Local User Account"
PasswordNeverExpires = $true
}
}
}
It's important to include node localhost as part of the configuration. The extension handler specifically looks for
the node localhost statement. If this statement is missing, the following steps don't work. It's also important to
include the typecast [PsCredential] . This specific type triggers the extension to encrypt the credential.
To publish this script to Azure Blob storage:
Publish-AzVMDscConfiguration -ConfigurationPath .\user_configuration.ps1
$vm | Update-AzVM
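The step that applies the published configuration with the credential is elided above. As a hedged sketch (the resource group, VM name, archive name, and version are assumptions), the full sequence passes the credential through ConfigurationArgument, which the extension encrypts before it reaches the .mof file:

```powershell
# Assumed resource names for illustration only.
$resourceGroup = 'dscCredDemo'
$vmName = 'myVM'

# Prompt for the credential that the Main configuration will consume.
$cred = Get-Credential

$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName
$vm = Set-AzVMDscExtension -VM $vm `
    -ConfigurationArchive 'user_configuration.ps1.zip' `
    -ConfigurationName 'Main' `
    -ConfigurationArgument @{ Credential = $cred } `
    -Version '2.77'
$vm | Update-AzVM
```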
Next steps
Get an introduction to Azure DSC extension handler.
Examine the Azure Resource Manager template for the DSC extension.
For more information about PowerShell DSC, go to the PowerShell documentation center.
For more functionality that you can manage by using PowerShell DSC, and for more DSC resources, browse the
PowerShell gallery.
Desired State Configuration extension with Azure
Resource Manager templates
11/2/2020
This article describes the Azure Resource Manager template for the Desired State Configuration (DSC) extension
handler. Many of the examples use RegistrationUrl (provided as a String) and RegistrationKey (provided as a
PSCredential) to onboard with Azure Automation. For details about obtaining those values, see Onboarding
machines for management by Azure Automation State Configuration - Secure registration.
NOTE
You might encounter slightly different schema examples. The change in schema occurred in the October 2016 release. For
details, see Update from a previous format.
Details
PROPERTY NAME                 TYPE                 DESCRIPTION
"protectedSettings": {
"configurationArguments": {
"parameterOfTypePSCredential1": {
"userName": "UsernameValue1",
"password": "PasswordValue1"
}
}
}
"settings": {
"configuration": {
"url": "https://demo.blob.core.windows.net/iisinstall.zip",
"script": "IisInstall.ps1",
"function": "IISInstall"
}
},
"protectedSettings": {
"configurationUrlSasToken": "odLPL/U1p9lvcnp..."
}
"settings": {
"RegistrationUrl" : "[reference(concat('Microsoft.Automation/automationAccounts/',
parameters('automationAccountName'))).registrationUrl]",
"NodeConfigurationName" : "[parameters('NodeConfigName')]"
},
"protectedSettings": {
"configurationArguments": {
"RegistrationKey": {
"userName": "NOT_USED",
"Password": "[listKeys(resourceId('Microsoft.Automation/automationAccounts/',
parameters('automationAccountName')), '2018-01-15').Keys[0].value]"
}
}
}
Update from a previous format
Any settings in a previous format of the extension (which have the public properties ModulesUrl,
ModuleSource, ModuleVersion, ConfigurationFunction, SasToken, or Properties) automatically adapt to
the current format of the extension. They run just as they did before.
The following schema shows what the previous settings schema looked like:
"settings": {
"WMFVersion": "latest",
"ModulesUrl": "https://UrlToZipContainingConfigurationScript.ps1.zip",
"SasToken": "SAS Token if ModulesUrl points to private Azure Blob Storage",
"ConfigurationFunction": "ConfigurationScript.ps1\\ConfigurationFunction",
"Properties": {
"ParameterToConfigurationFunction1": "Value1",
"ParameterToConfigurationFunction2": "Value2",
"ParameterOfTypePSCredential1": {
"UserName": "UsernameValue1",
"Password": "PrivateSettingsRef:Key1"
},
"ParameterOfTypePSCredential2": {
"UserName": "UsernameValue2",
"Password": "PrivateSettingsRef:Key2"
}
}
},
"protectedSettings": {
"Items": {
"Key1": "PasswordValue1",
"Key2": "PasswordValue2"
},
"DataBlobUri": "https://UrlToConfigurationDataWithOptionalSasToken.psd1"
}
CURRENT PROPERTY NAME                          PREVIOUS SCHEMA EQUIVALENT
settings.wmfVersion                            settings.WMFVersion
settings.configuration.url                     settings.ModulesUrl
settings.configuration.module.name             settings.ModuleSource
settings.configuration.module.version          settings.ModuleVersion
settings.configurationArguments                settings.Properties
settings.privacy.dataCollection                settings.Privacy.dataCollection
settings.advancedOptions.downloadMappings      settings.AdvancedOptions.DownloadMappings
protectedSettings.configurationArguments       protectedSettings.Properties
protectedSettings.configurationUrlSasToken     settings.SasToken
Troubleshooting
Here are some of the errors you might run into and how you can fix them.
Invalid values
"Privacy.dataCollection is '{0}'. The only possible values are '', 'Enable', and 'Disable'."
"WmfVersion is '{0}'. Only possible values are … and 'latest'."
Problem: A provided value is not allowed.
Solution: Change the invalid value to a valid value. For more information, see the table in Details.
Invalid URL
"ConfigurationData.url is '{0}'. This is not a valid URL."
"DataBlobUri is '{0}'. This is not a valid URL."
"Configuration.url is '{0}'. This is not a valid URL."
Problem: A provided URL is not valid.
Solution: Check all your provided URLs. Ensure that all URLs resolve to valid locations that the extension can
access on the remote machine.
Invalid RegistrationKey type
"Invalid type for parameter RegistrationKey of type PSCredential."
Problem: The RegistrationKey value in protectedSettings.configurationArguments cannot be provided as any type
other than a PSCredential.
Solution: Change your protectedSettings.configurationArguments entry for RegistrationKey to a PSCredential
type by using the following format:
"configurationArguments": {
"RegistrationKey": {
"userName": "NOT_USED",
"Password": "RegistrationKey"
}
}
Next steps
Learn about using virtual machine scale sets with the Azure DSC extension.
Find more details about DSC's secure credential management.
Get an introduction to the Azure DSC extension handler.
For more information about PowerShell DSC, go to the PowerShell documentation center.
Manage administrative users, SSH, and check or
repair disks on Linux VMs using the VMAccess
Extension with the Azure CLI
11/2/2020
Overview
The disk on your Linux VM is showing errors. You somehow reset the root password for your Linux VM or
accidentally deleted your SSH private key. If that happened back in the days of the datacenter, you would need to
drive there and then open the KVM to get at the server console. Think of the Azure VMAccess extension as that
KVM switch that allows you to access the console to reset access to Linux or perform disk level maintenance.
This article shows you how to use the Azure VMAccess extension to check or repair a disk, reset user access,
manage administrative user accounts, or update the SSH configuration on Linux VMs that are running as Azure
Resource Manager virtual machines. If you need to manage classic virtual machines, follow the instructions
found in the classic VM documentation.
NOTE
If you use the VMAccess extension to reset the password of your VM after you install the AAD Login extension, you
will need to rerun the AAD Login extension to re-enable AAD login for your machine.
Prerequisites
Operating system
The VMAccess extension can be run against these Linux distributions:
Suse 11 and 12
CoreOS 494.4.0+
az vm user update \
--resource-group myResourceGroup \
--name myVM \
--username azureuser \
--ssh-key-value ~/.ssh/id_rsa.pub
NOTE: The az vm user update command appends the new public key text to the ~/.ssh/authorized_keys file
for the admin user on the VM. It does not replace or remove any existing SSH keys, including keys set at
deployment time or added by subsequent VMAccess extension updates.
Reset password
The following example resets the password for the user azureuser on the VM named myVM :
az vm user update \
--resource-group myResourceGroup \
--name myVM \
--username azureuser \
--password myNewPassword
Restart SSH
The following example restarts the SSH daemon and resets the SSH configuration to default values on a VM named
myVM :
az vm user reset-ssh \
--resource-group myResourceGroup \
--name myVM
az vm user update \
--resource-group myResourceGroup \
--name myVM \
--username myNewUser \
--ssh-key-value ~/.ssh/id_rsa.pub
Delete a user
The following example deletes a user named myNewUser on the VM named myVM :
az vm user delete \
--resource-group myResourceGroup \
--name myVM \
--username myNewUser
{
"username":"azureuser",
"ssh_key":"ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAACAQCZ3S7gGp3rcbKmG2Y4vGZFMuMZCwoUzZNGxxxxxx2XV2x9FfAhy8iGD+lF8UdjFX3t5ebMm6BnnMh8fHw
kTRdOt3LDQq8o8ElTBrZaKPxZN2thMZnODs5Hlemb2UX0oRIGRcvWqsd4oJmxsXa/Si98Wa6RHWbc9QZhw80KAcOVhmndZAZAGR+Wq6yslNo5TM
Or1/ZyQAook5C4FtcSGn3Y+WczaoGWIxG4ZaWk128g79VIeJcIQqOjPodHvQAhll7qDlItVvBfMOben3GyhYTm7k4YwlEdkONm4yV/UIW0la1rm
yztSBQIm9sZmSq44XXgjVmDHNF8UfCZ1ToE4r2SdwTmZv00T2i5faeYnHzxiLPA3Enub7xxxxxxwFArnqad7MO1SY1kLemhX9eFjLWN4mJe56Fu
4NiWJkR9APSZQrYeKaqru4KUC68QpVasNJHbuxPSf/PcjF3cjO1+X+4x6L1H5HTPuqUkyZGgDO4ynUHbko4dhlanALcriF7tIfQR9i2r2xOyv5g
xJEW/zztGqWma/d4rBoPjnf6tO7rLFHXMt/DVTkAfn5wxxtLDwkn5FMyvThRmex3BDf0gujoI1y6cOWLe9Y5geNX0oj+MXg/W0cXAtzSFocstV1
PoVqy883hNoeQZ3mIGB3Q0rIUm5d9MA2bMMt31m1g3Sin6EQ== azureuser@myVM"
}
az vm extension set \
--resource-group myResourceGroup \
--vm-name myVM \
--name VMAccessForLinux \
--publisher Microsoft.OSTCExtensions \
--version 1.4 \
--protected-settings update_ssh_key.json
To reset a user password, create a file named reset_user_password.json and add settings in the following format.
Substitute your own values for the username and password parameters:
{
"username":"azureuser",
"password":"myNewPassword"
}
Restart SSH
To restart the SSH daemon and reset the SSH configuration to default values, create a file named reset_sshd.json .
Add the following content:
{
"reset_ssh": true
}
az vm extension set \
--resource-group myResourceGroup \
--vm-name myVM \
--name VMAccessForLinux \
--publisher Microsoft.OSTCExtensions \
--version 1.4 \
--protected-settings reset_sshd.json
{
"username":"myNewUser",
"ssh_key":"ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAACAQCZ3S7gGp3rcbKmG2Y4vGZFMuMZCwoUzZNG1vHY7P2XV2x9FfAhy8iGD+lF8UdjFX3t5ebMm6BnnMh8fHw
kTRdOt3LDQq8o8ElTBrZaKPxZN2thMZnODs5Hlemb2UX0oRIGRcvWqsd4oJmxsXa/Si98Wa6RHWbc9QZhw80KAcOVhmndZAZAGR+Wq6yslNo5TM
Or1/ZyQAook5C4FtcSGn3Y+WczaoGWIxG4ZaWk128g79VIeJcIQqOjPodHvQAhll7qDlItVvBfMOben3GyhYTm7k4YwlEdkONm4yV/UIW0la1rm
yztSBQIm9sZmSq44XXgjVmDHNF8UfCZ1ToE4r2SdwTmZv00T2i5faeYnHzxiLPA3Enub7iUo5IdwFArnqad7MO1SY1kLemhX9eFjLWN4mJe56Fu
4NiWJkR9APSZQrYeKaqru4KUC68QpVasNJHbuxPSf/PcjF3cjO1+X+4x6L1H5HTPuqUkyZGgDO4ynUHbko4dhlanALcriF7tIfQR9i2r2xOyv5g
xJEW/zztGqWma/d4rBoPjnf6tO7rLFHXMt/DVTkAfn5woYtLDwkn5FMyvThRmex3BDf0gujoI1y6cOWLe9Y5geNX0oj+MXg/W0cXAtzSFocstV1
PoVqy883hNoeQZ3mIGB3Q0rIUm5d9MA2bMMt31m1g3Sin6EQ== myNewUser@myVM",
"password":"myNewUserPassword"
}
az vm extension set \
--resource-group myResourceGroup \
--vm-name myVM \
--name VMAccessForLinux \
--publisher Microsoft.OSTCExtensions \
--version 1.4 \
--protected-settings create_new_user.json
To delete a user, create a file named delete_user.json and add the following content. Substitute your own value for
the remove_user parameter:
{
"remove_user":"myNewUser"
}
az vm extension set \
--resource-group myResourceGroup \
--vm-name myVM \
--name VMAccessForLinux \
--publisher Microsoft.OSTCExtensions \
--version 1.4 \
--protected-settings delete_user.json
{
"check_disk": "true",
"repair_disk": "true, mydiskname"
}
az vm extension set \
--resource-group myResourceGroup \
--vm-name myVM \
--name VMAccessForLinux \
--publisher Microsoft.OSTCExtensions \
--version 1.4 \
--protected-settings disk_check_repair.json
Support
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack
Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site and select Get
support. For information about using Azure Support, read the Microsoft Azure support FAQ.
Chef VM Extension for Linux and Windows
11/2/2020
Chef Software provides a DevOps automation platform for Linux and Windows that enables the management of
both physical and virtual server configurations. The Chef VM extension enables Chef on Azure virtual machines.
Prerequisites
Operating system
The Chef VM Extension is supported on all the Extension Supported OS's in Azure.
Internet connectivity
The Chef VM Extension requires that the target virtual machine is connected to the internet in order to retrieve the
Chef Client payload from the content delivery network (CDN).
Extension schema
The following JSON shows the schema for the Chef VM extension. The extension requires, at a minimum, the Chef
server URL, the validation client name, and the validation key for the Chef server; these values can be found in the
knife.rb file in the starter-kit.zip that is downloaded when you install Chef Automate or a standalone Chef server.
Because the validation key should be treated as sensitive data, it should be configured under the
protectedSettings element, meaning that it will be decrypted only on the target virtual machine.
{
"type": "Microsoft.Compute/virtualMachines/extensions",
"name": "[concat(variables('vmName'),'/', parameters('chef_vm_extension_type'))]",
"apiVersion": "2017-12-01",
"location": "[parameters('location')]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Chef.Bootstrap.WindowsAzure",
"type": "[parameters('chef_vm_extension_type')]",
"typeHandlerVersion": "1210.13",
"settings": {
"bootstrap_options": {
"chef_server_url": "[parameters('chef_server_url')]",
"validation_client_name": "[parameters('chef_validation_client_name')]"
},
"runlist": "[parameters('chef_runlist')]"
},
"protectedSettings": {
"validation_key": "[parameters('chef_validation_key')]"
}
}
}
Settings
NAME                 VALUE/EXAMPLE                 DATA TYPE                 REQUIRED?
Protected settings
NAME                 EXAMPLE                 DATA TYPE                 REQUIRED?
Template deployment
Azure VM extensions can be deployed with Azure Resource Manager templates. Templates can be used to deploy
one or more virtual machines, install the Chef client, connect to the Chef server, and then perform the initial
configuration of the server as defined by the run-list.
A sample Resource Manager template that includes the Chef VM Extension can be found in the Azure quickstart
gallery.
The JSON configuration for a virtual machine extension can be nested inside the virtual machine resource, or
placed at the root or top level of a Resource Manager JSON template. The placement of the JSON configuration
affects the value of the resource name and type. For more information, see Set name and type for child resources.
Linux
/var/lib/waagent/Chef.Bootstrap.WindowsAzure.LinuxChefClient
Windows
C:\Packages\Plugins\Chef.Bootstrap.WindowsAzure.ChefClient\
NOTE
For anything else directly related to Chef, contact Chef Support.
Next steps
If you need more help at any point in this article, you can contact the Azure experts on the MSDN Azure and Stack
Overflow forums. Alternatively, you can file an Azure support incident. Go to the Azure support site and select Get
support. For information about using Azure Support, read the Microsoft Azure support FAQ.
Stackify Retrace Linux Agent Extension
11/13/2019
Overview
Stackify provides products that track details about your application to help you find and fix problems quickly. For
development teams, Retrace is a fully integrated, multi-environment application performance tool that combines
several tools every development team needs.
Retrace delivers all of the following capabilities across all environments in a single platform:
Application performance management (APM)
Application and server logging
Error tracking and monitoring
Server, application, and custom metrics
About Stackify Linux Agent Extension
This extension provides an install path for the Linux Agent for Retrace.
Prerequisites
Operating system
The Retrace agent can be run against these Linux distributions
Internet connectivity
The Stackify Agent extension for Linux requires that the target virtual machine is connected to the internet.
You may need to adjust your network configuration to allow connections to Stackify, see
https://support.stackify.com/hc/en-us/articles/207891903-Adding-Exceptions-to-a-Firewall.
Extension schema
The following JSON shows the schema for the Stackify Retrace Agent extension. The extension requires the
environment and activationKey .
{
    "type": "extensions",
    "name": "StackifyExtension",
    "apiVersion": "[variables('apiVersion')]",
    "location": "[resourceGroup().location]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines',variables('vmName'))]"
    ],
    "properties": {
        "publisher": "Stackify.LinuxAgent.Extension",
        "type": "StackifyLinuxAgentExtension",
        "typeHandlerVersion": "1.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "environment": "myEnvironment"
        },
        "protectedSettings": {
            "activationKey": "myActivationKey"
        }
    }
}
Template deployment
Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema detailed in the
previous section can be used in an Azure Resource Manager template to run the Stackify Retrace Linux Agent
extension during an Azure Resource Manager template deployment.
The JSON for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root
or top level of a Resource Manager JSON template. The placement of the JSON affects the value of the resource
name and type. For more information, see Set name and type for child resources.
The following example assumes the Stackify Retrace Linux extension is nested inside the virtual machine resource.
When nesting the extension resource, the JSON is placed in the "resources": [] object of the virtual machine.
The extension requires the environment and activationKey values.
{
    "type": "extensions",
    "name": "StackifyExtension",
    "apiVersion": "[variables('apiVersion')]",
    "location": "[resourceGroup().location]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines',variables('vmName'))]"
    ],
    "properties": {
        "publisher": "Stackify.LinuxAgent.Extension",
        "type": "StackifyLinuxAgentExtension",
        "typeHandlerVersion": "1.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "environment": "myEnvironment"
        },
        "protectedSettings": {
            "activationKey": "myActivationKey"
        }
    }
}
When placing the extension JSON at the root of the template, the resource name includes a reference to the parent
virtual machine, and the type reflects the nested configuration.
{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "<parentVmResource>/StackifyExtension",
    "apiVersion": "[variables('apiVersion')]",
    "location": "[resourceGroup().location]",
    "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
    ],
    "properties": {
        "publisher": "Stackify.LinuxAgent.Extension",
        "type": "StackifyLinuxAgentExtension",
        "typeHandlerVersion": "1.0",
        "autoUpgradeMinorVersion": true,
        "settings": {
            "environment": "myEnvironment"
        },
        "protectedSettings": {
            "activationKey": "myActivationKey"
        }
    }
}
PowerShell deployment
The Set-AzVMExtension command can be used to deploy the Stackify Retrace Linux Agent virtual machine extension
to an existing virtual machine. Before running the command, store the public and private configurations in
PowerShell hash tables.
The extension requires the environment and activationKey values.
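Putting that together, a minimal sketch might look like the following. The resource group, VM name, location, and activation key are placeholders; the publisher and type values come from the schema shown earlier in this article.

```powershell
# Sketch only: resource names, location, and the activation key are placeholders.
# Public settings go in -Settings; secrets go in -ProtectedSettings.
$publicSettings = @{ "environment" = "myEnvironment" }
$protectedSettings = @{ "activationKey" = "myActivationKey" }

Set-AzVMExtension -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Location "westus" `
    -Publisher "Stackify.LinuxAgent.Extension" `
    -ExtensionType "StackifyLinuxAgentExtension" `
    -Name "StackifyExtension" `
    -TypeHandlerVersion "1.0" `
    -Settings $publicSettings `
    -ProtectedSettings $protectedSettings
```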
If you need more help, you can contact Stackify support at https://support.stackify.com.
How to install and configure Symantec Endpoint
Protection on a Windows VM
11/2/2020 • 2 minutes to read • Edit Online
IMPORTANT
Classic VMs will be retired on March 1, 2023.
If you use IaaS resources from ASM, please complete your migration by March 1, 2023. We encourage you to make the
switch sooner to take advantage of the many feature enhancements in Azure Resource Manager.
For more information, see Migrate your IaaS resources to Azure Resource Manager by March 1, 2023.
Azure has two different deployment models for creating and working with resources: Resource Manager and
Classic. This article covers using the Classic deployment model. Microsoft recommends that most new deployments
use the Resource Manager model.
This article shows you how to install and configure the Symantec Endpoint Protection client on an existing virtual
machine (VM) running Windows Server. This full client includes services such as virus and spyware protection,
firewall, and intrusion prevention. The client is installed as a security extension by using the VM Agent.
If you have an existing subscription from Symantec for an on-premises solution, you can use it to protect your
Azure virtual machines. If you're not a customer yet, you can sign up for a trial subscription. For more information
about this solution, see Symantec Endpoint Protection on Microsoft's Azure platform. This page also has links to
licensing information and instructions for installing the client if you're already a Symantec customer.
TIP
If you don't know the cloud service and virtual machine names, run Get-AzureVM to list the names for all virtual machines
in your current subscription.
To verify that the Symantec security extension has been installed and is up-to-date:
1. Log on to the virtual machine. For instructions, see How to Log on to a Virtual Machine Running Windows
Server.
2. For Windows Server 2008 R2, click Start > Symantec Endpoint Protection . For Windows Server 2012 or
Windows Server 2012 R2, from the Start screen, type Symantec , and then click Symantec Endpoint
Protection .
3. From the Status tab of the Status-Symantec Endpoint Protection window, apply updates or restart if
needed.
Additional resources
How to Log on to a Virtual Machine Running Windows Server
Azure VM Extensions and Features
Move a Windows VM to another Azure subscription
or resource group
11/2/2020 • 2 minutes to read • Edit Online
This article walks you through how to move a Windows virtual machine (VM) between resource groups or
subscriptions. Moving between subscriptions can be handy if you originally created a VM in a personal subscription
and now want to move it to your company's subscription to continue your work. You do not need to stop the VM
to move it; it should continue to run during the move.
IMPORTANT
New resource IDs are created as part of the move. After the VM has been moved, you will need to update your tools and
scripts to use the new resource IDs.
You can use the output of the previous command to create a comma-separated list of resource IDs. Pass that list to
Move-AzResource to move each resource to the destination.
When you are asked to confirm that you want to move the specified resources, enter Y to confirm.
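The steps above can be sketched as follows. The resource group names and subscription ID are placeholders; omit -DestinationSubscriptionId if you're only moving between resource groups.

```powershell
# Sketch only: names and the subscription ID are placeholders.
# Collect the resources to move from the source resource group.
$resources = Get-AzResource -ResourceGroupName "mySourceResourceGroup"

# Move them to the destination. Add -DestinationSubscriptionId to move
# across subscriptions. You're prompted to confirm; enter Y.
Move-AzResource -DestinationResourceGroupName "myDestinationResourceGroup" `
    -DestinationSubscriptionId "00000000-0000-0000-0000-000000000000" `
    -ResourceId $resources.ResourceId
```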
Next steps
You can move many different types of resources between resource groups and subscriptions. For more
information, see Move resources to a new resource group or subscription.
Move VMs to another Azure region
11/6/2020 • 7 minutes to read • Edit Online
There are scenarios in which you'd want to move your existing Azure IaaS virtual machines (VMs) from one region
to another. For example, you might want to improve the reliability and availability of your existing VMs, improve
manageability, or move for governance reasons. For more information, see the Azure VM move overview.
You can use Azure Site Recovery service to move Azure VMs to a secondary region.
In this tutorial, you learn how to:
Verify prerequisites for the move
Prepare the source VMs and the target region
Copy the data and enable replication
Test the configuration and perform the move
Delete the resources in the source region
IMPORTANT
To move Azure VMs to another region, we now recommend using Azure Resource Mover. Resource Mover is in public
preview and provides:
A single hub for moving resources across regions.
Reduced move time and complexity. Everything you need is in a single location.
A simple and consistent experience for moving different types of Azure resources.
An easy way to identify dependencies across resources you want to move. This helps you to move related resources
together, so that everything works as expected in the target region, after the move.
Automatic cleanup of resources in the source region, if you want to delete them after the move.
Testing. You can try out a move, and then discard it if you don't want to do a full move.
NOTE
This tutorial shows you how to move Azure VMs from one region to another as is. If you need to improve availability by
moving VMs in an availability set to zone pinned VMs in a different region, see the Move Azure VMs into Availability Zones
tutorial.
Prerequisites
Make sure that the Azure VMs are in the Azure region from which you want to move.
Verify that your choice of source region - target region combination is supported, and make an informed
decision about the target region.
Make sure that you understand the scenario architecture and components.
Review the support limitations and requirements.
Verify account permissions. If you created your free Azure account, you're the administrator of your
subscription. If you're not the subscription administrator, work with the administrator to assign the
permissions that you need. To enable replication for a VM and essentially copy data by using Azure Site
Recovery, you must have:
Permissions to create a VM in Azure resources. The Virtual Machine Contributor built-in role has
these permissions, which include:
Permission to create a VM in the selected resource group
Permission to create a VM in the selected virtual network
Permission to write to the selected storage account
Permissions to manage Azure Site Recovery operations. The Site Recovery Contributor role has all the
permissions that are required to manage Site Recovery operations in a Recovery Services vault.
Make sure that all the latest root certificates are on the Azure VMs that you want to move. If the latest root
certificates aren't on the VM, security constraints will prevent the data copy to the target region.
For Windows VMs, install all the latest Windows updates on the VM, so that all the trusted root certificates
are on the machine. In a disconnected environment, follow the standard Windows Update and certificate
update processes for your organization.
For Linux VMs, follow the guidance provided by your Linux distributor to get the latest trusted root
certificates and certificate revocation list on the VM.
Make sure that you're not using an authentication proxy to control network connectivity for VMs that you
want to move.
If the VM that you're trying to move doesn't have access to the internet, or it's using a firewall proxy to
control outbound access, check the requirements.
Identify the source networking layout and all the resources that you're currently using. This includes but isn't
limited to load balancers, network security groups (NSGs), and public IPs.
Verify that your Azure subscription allows you to create VMs in the target region that's used for disaster
recovery. Contact support to enable the required quota.
Make sure that your subscription has enough resources to support VMs with sizes that match your source
VMs. If you're using Site Recovery to copy data to the target, Site Recovery chooses the same size or the
closest possible size for the target VM.
Make sure that you create a target resource for every component that's identified in the source networking
layout. This step is important to ensure that your VMs have all the functionality and features in the target
region that you had in the source region.
NOTE
Azure Site Recovery automatically discovers and creates a virtual network when you enable replication for the source
VM. You can also pre-create a network and assign it to the VM in the enable replication user flow. As mentioned
later, you need to manually create any other resources in the target region.
To create the most commonly used network resources that are relevant for you based on the source VM
configuration, see the following documentation:
Network security groups
Load balancers
Public IP
For any other networking components, see the networking documentation.
Prepare
The following steps show how to prepare the virtual machine for the move, using Azure Site Recovery as the
solution.
Create the vault in any region, except the source region
1. Sign in to the Azure portal.
2. In the search box, type Recovery Services, then click Recovery Services vaults.
3. In the Recovery Services vaults menu, click +Add.
4. In Name, specify the friendly name ContosoVMVault. If you have more than one subscription, select the
appropriate one.
5. Create the resource group ContosoRG.
6. Specify an Azure region. To check supported regions, see geographic availability in Azure Site Recovery pricing
details.
7. In Recovery Services vaults, select ContosoVMVault > Replicated items > +Replicate.
8. In the dropdown, select Azure Virtual Machines.
9. In Source location, select the source Azure region where your VMs are currently running.
10. Select the Resource Manager deployment model. Then select the Source subscription and Source resource
group.
11. Select OK to save the settings.
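Steps 1-6 above (creating the resource group and vault) can also be scripted. A hedged sketch, using the names from this tutorial and a placeholder region:

```powershell
# Sketch only: the region is a placeholder; pick any region except the source.
New-AzResourceGroup -Name "ContosoRG" -Location "WestUS2"

New-AzRecoveryServicesVault -Name "ContosoVMVault" `
    -ResourceGroupName "ContosoRG" `
    -Location "WestUS2"
```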
Enable replication for Azure VMs and start copying the data
Site Recovery retrieves a list of the VMs that are associated with the subscription and resource group.
1. In the next step, select the VM that you want to move, then select OK.
2. In Settings, select Disaster recovery.
3. In Configure disaster recovery > Target region, select the target region to which you'll replicate.
4. For this tutorial, accept the other default settings.
5. Select Enable replication. This step starts a job to enable replication for the VM.
Move
The following steps show how to perform the move to the target region.
1. Go to the vault. In Settings > Replicated items, select the VM, and then select Failover.
2. In Failover, select Latest.
3. Select Shut down machine before beginning failover. Site Recovery attempts to shut down the source VM
before triggering the failover. Failover continues even if shutdown fails. You can follow the failover progress on
the Jobs page.
4. After the job is finished, check that the VM appears in the target Azure region as expected.
Discard
If you checked the moved VM and need to change the point of failover, or want to go back to a previous point, in
Replicated items, right-click the VM > Change recovery point. This step gives you the option to specify a different
recovery point and fail over to it.
Commit
Once you have checked the moved VM and are ready to commit the change, in Replicated items, right-click
the VM > Commit. This step finishes the move process to the target region. Wait until the commit job finishes.
Clean up
The following steps guide you through cleaning up the source region and the related resources that were
used for the move.
For all resources that were used for the move:
Go to the VM. Select Disable Replication. This step stops the process from copying the data for the VM.
IMPORTANT
It's important to perform this step to avoid being charged for Azure Site Recovery replication.
If you have no plans to reuse any of the source resources, complete these additional steps:
1. Delete all the relevant network resources in the source region that you identified in prerequisites.
2. Delete the corresponding storage account in the source region.
Next steps
In this tutorial, you moved an Azure VM to a different Azure region. Now you can configure disaster recovery for
the VM that you moved.
Set up disaster recovery after migration
Move Azure VMs into Availability Zones
11/6/2020 • 8 minutes to read • Edit Online
This article describes how to move Azure VMs to an availability zone in a different region. If you want to move to a
different zone in the same region, review this article.
Availability Zones in Azure help protect your applications and data from datacenter failures. Each Availability Zone
is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure
resiliency, there’s a minimum of three separate zones in all enabled regions. The physical separation of Availability
Zones within a region helps protect applications and data from datacenter failures. With Availability Zones, Azure
offers a service-level agreement (SLA) of 99.99% for uptime of virtual machines (VMs). Availability Zones are
supported in select regions, as mentioned in Regions that support Availability Zones.
In a scenario where your VMs are deployed as single instance into a specific region, and you want to improve your
availability by moving these VMs into an Availability Zone, you can do so by using Azure Site Recovery. This action
can further be categorized into:
Move single-instance VMs into Availability Zones in a target region
Move VMs in an availability set into Availability Zones in a target region
IMPORTANT
To move Azure VMs to an availability zone in a different region, we now recommend using Azure Resource Mover. Resource
Mover is in public preview and provides:
A single hub for moving resources across regions.
Reduced move time and complexity. Everything you need is in a single location.
A simple and consistent experience for moving different types of Azure resources.
An easy way to identify dependencies across resources you want to move. This helps you to move related resources
together, so that everything works as expected in the target region, after the move.
Automatic cleanup of resources in the source region, if you want to delete them after the move.
Testing. You can try out a move, and then discard it if you don't want to do a full move.
Check prerequisites
Check whether the target region has support for Availability Zones. Check that your choice of source
region/target region combination is supported. Make an informed decision on the target region.
Make sure that you understand the scenario architecture and components.
Review the support limitations and requirements.
Check account permissions. If you just created your free Azure account, you're the admin of your
subscription. If you aren't the subscription admin, work with the admin to assign the permissions you need.
To enable replication for a VM and eventually copy data to the target by using Azure Site Recovery, you must
have:
1. Permission to create a VM in Azure resources. The Virtual Machine Contributor built-in role has these
permissions, which include:
Permission to create a VM in the selected resource group
Permission to create a VM in the selected virtual network
Permission to write to the selected storage account
2. Permission to manage Azure Site Recovery tasks. The Site Recovery Contributor role has all
permissions required to manage Site Recovery actions in a Recovery Services vault.
NOTE
Azure Site Recovery automatically discovers and creates a virtual network and storage account when you enable
replication for the source VM. You can also pre-create these resources and assign them to the VM as part of the enable
replication step. But for any other resources, as mentioned later, you need to manually create them in the target
region.
The following documents describe how to create the most commonly used network resources that are relevant to
you, based on the source VM configuration.
Network security groups
Load balancers
Public IP
For any other networking components, refer to the networking documentation.
IMPORTANT
Ensure that you use a zone-redundant load balancer in the target. You can read more at Standard Load Balancer and
Availability Zones.
4. Manually create a non-production network in the target region if you want to test the configuration before
you cut over to the target region. We recommend this approach because it causes minimal interference with
the production environment.
Enable replication
The following steps guide you through using Azure Site Recovery to enable replication of data to the target
region, before you eventually move the VMs into Availability Zones.
NOTE
These steps are for a single VM. You can extend the same to multiple VMs. Go to the Recovery Services vault, select
+Replicate, and select the relevant VMs together.
1. In the Azure portal, select Virtual machines, and select the VM you want to move into Availability Zones.
2. In Operations, select Disaster recovery.
3. In Configure disaster recovery > Target region, select the target region to which you'll replicate. Ensure
this region supports Availability Zones.
4. Select Next: Advanced settings.
5. Choose the appropriate values for the target subscription, target VM resource group, and virtual network.
6. In the Availability section, choose the Availability Zone into which you want to move the VM.
NOTE
If you don’t see the option for availability set or Availability Zone, ensure that the prerequisites are met and the
preparation of source VMs is complete.
7. Select Enable Replication. This action starts a job to enable replication for the VM.
Check settings
After the replication job has finished, you can check the replication status, modify replication settings, and test the
deployment.
1. In the VM menu, select Disaster recovery.
2. You can check replication health, the recovery points that have been created, and the source and target regions
on the map.
IMPORTANT
We recommend that you use a separate Azure VM network for the test failure, and not the production network in
the target region into which you want to move your VMs.
5. To start testing the move, select OK. To track progress, select the VM to open its properties. Or, you can
select the Test Failover job in the vault name > Settings > Jobs > Site Recovery jobs.
6. After the failover finishes, the replica Azure VM appears in the Azure portal > Virtual Machines. Make sure
that the VM is running, sized appropriately, and connected to the appropriate network.
7. If you want to delete the VM created as part of testing the move, select Cleanup test failover on the
replicated item. In Notes, record and save any observations associated with the test.
IMPORTANT
Do the preceding step to avoid getting charged for Site Recovery replication after the move. The source replication settings
are cleaned up automatically. The Site Recovery extension that's installed as part of replication isn't removed
automatically; remove it manually.
Next steps
In this tutorial, you increased the availability of an Azure VM by moving it into an availability set or Availability Zone.
Now you can set disaster recovery for the moved VM.
Set up disaster recovery after migration
Move a Maintenance Control configuration to
another region
11/2/2020 • 2 minutes to read • Edit Online
Follow this article to move a Maintenance Control configuration to a different Azure region. You might want to
move a configuration for a number of reasons. For example, to take advantage of a new region, to deploy features
or services available in a specific region, to meet internal policy and governance requirements, or in response to
capacity planning.
Maintenance control, with customized maintenance configurations, allows you to control how platform updates are
applied to Windows and Linux VMs, and to Azure Dedicated Hosts. There are a couple of scenarios for moving
maintenance control across regions:
To move your maintenance control configuration, but not the resources associated with the configuration,
follow the instructions in this article.
To move the resources associated with a maintenance configuration, but not the configuration itself, follow
these instructions.
To move both the maintenance configuration and the resources associated with it, first follow the instructions in
this article. Then, follow these instructions.
Prerequisites
Before you begin moving a maintenance control configuration:
Maintenance configurations are associated with Azure VMs or Azure Dedicated Hosts. Make sure that VM/host
resources exist in the new region before you begin.
Identify:
Existing maintenance control configurations.
The resource groups in which existing configurations currently reside.
The resource groups to which the configurations will be added after moving to the new region.
The resources associated with the maintenance configuration you want to move.
Check that the resources in the new region are the same as those associated with the current
maintenance configurations. The configurations can have the same names in the new region as they did
in the old, but this isn't required.
2. Review the returned table list of configuration records within the subscription. Here's an example. Your list
will contain values for your specific environment.
| NAME | LOCATION | RESOURCE GROUP |
|------|----------|----------------|
3. Save your list for reference. As you move the configurations, it helps you to verify that everything's been
moved.
4. As a reference, map each configuration/resource group to the new resource group in the new region.
5. Create new maintenance configurations in the new region using PowerShell, or CLI.
6. Associate the configurations with the resources in the new region, using PowerShell, or CLI.
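As an illustration of the PowerShell route for steps 5 and 6, a hedged sketch follows. All names, the region, and the scope are placeholders; verify the parameters against the Az.Maintenance module for your Az version.

```powershell
# Sketch only: names, region, and scope are placeholders.
# Step 5: create the maintenance configuration in the new region.
$config = New-AzMaintenanceConfiguration -ResourceGroupName "myNewResourceGroup" `
    -Name "myMaintenanceConfig" `
    -Location "eastus2" `
    -MaintenanceScope "Host"

# Step 6: associate the configuration with a VM in the new region.
New-AzConfigurationAssignment -ResourceGroupName "myNewResourceGroup" `
    -ResourceName "myVM" `
    -ResourceType "VirtualMachines" `
    -ProviderName "Microsoft.Compute" `
    -ConfigurationAssignmentName "myAssignment" `
    -MaintenanceConfigurationId $config.Id `
    -Location "eastus2"
```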
Next steps
Follow these instructions if you need to move resources associated with maintenance configurations.
Move resources in a Maintenance Control
configuration to another region
11/2/2020 • 2 minutes to read • Edit Online
Follow this article to move resources associated with a Maintenance Control configuration to a different Azure
region. You might want to move a configuration for a number of reasons. For example, to take advantage of a new
region, to deploy features or services available in a specific region, to meet internal policy and governance
requirements, or in response to capacity planning.
Maintenance control, with customized maintenance configurations, allows you to control how platform updates
are applied to Windows and Linux VMs, and to Azure Dedicated Hosts. There are a couple of scenarios for moving
maintenance control across regions:
To move the resources associated with a maintenance configuration, but not the configuration itself, follow this
article.
To move your maintenance control configuration, but not the resources associated with the configuration,
follow these instructions.
To move both the maintenance configuration and the resources associated with it, first follow these
instructions. Then, follow the instructions in this article.
Prerequisites
Before you begin moving the resources associated with a Maintenance Control configuration:
Make sure that the resources you're moving exist in the new region before you begin.
Verify the Maintenance Control configurations associated with the Azure VMs and Azure Dedicated Hosts that
you want to move. Check each resource individually. There's currently no way to retrieve configurations for
multiple resources.
When retrieving configurations for a resource:
Make sure you use the subscription ID for the account, not an Azure Dedicated Host ID.
CLI: The --output table parameter is used for readability only, and can be deleted or changed.
PowerShell: The Format-Table Name parameter is used for readability only, and can be deleted or
changed.
If you use PowerShell, you get an error if you try to list configurations for a resource that doesn't have
any associated configurations. The error will be similar to: "Operation failed with status: 'Not Found'.
Details: 404 Client Error: Not Found for url".
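As a sketch of the per-resource retrieval described above, the PowerShell route for a single VM might look like the following. The resource names are placeholders, and Format-Table Name is the readability aid mentioned in the note.

```powershell
# Sketch only: resource names are placeholders.
# List the maintenance configurations assigned to one VM.
Get-AzConfigurationAssignment -ResourceGroupName "myResourceGroup" `
    -ResourceName "myVM" `
    -ResourceType "VirtualMachines" `
    -ProviderName "Microsoft.Compute" | Format-Table Name
```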
Prepare to move
1. Before you start, define these variables. We've provided an example for each.
3. To retrieve the maintenance configurations, use the CLI az maintenance assignment command:
For Azure Dedicated Hosts:
Move
1. Follow these instructions to move the Azure VMs to the new region.
2. After the resources are moved, reapply maintenance configurations to the resources in the new region as
appropriate, depending on whether you moved the maintenance configurations. You can apply a maintenance
configuration to a resource using PowerShell or CLI.
You can upload VHD files from AWS or on-premises virtualization solutions to Azure to create VMs that take
advantage of Managed Disks. Azure Managed Disks removes the need to manage storage accounts for Azure IaaS
VMs. You only have to specify the disk type (Premium or Standard) and size, and Azure creates and
manages the disk for you.
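That "type and size only" model can be sketched as follows. The names, region, SKU, and size are placeholders; verify the cmdlets against the Az.Compute module.

```powershell
# Sketch only: names, region, SKU, and size are placeholders.
# You specify only the disk type (SKU) and size; Azure manages the storage.
$diskConfig = New-AzDiskConfig -SkuName "Premium_LRS" `
    -Location "eastus" `
    -CreateOption "Empty" `
    -DiskSizeGB 128

New-AzDisk -ResourceGroupName "myResourceGroup" `
    -DiskName "myManagedDisk" `
    -Disk $diskConfig
```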
You can upload either generalized or specialized VHDs.
Generalized VHD - has had all of your personal account information removed using Sysprep.
Specialized VHD - maintains the user accounts, applications, and other state data from your original VM.
IMPORTANT
Before uploading any VHD to Azure, you should follow Prepare a Windows VHD or VHDX to upload to Azure
| SCENARIO | DOCUMENTATION |
|----------|---------------|
| You have existing AWS EC2 instances that you would like to migrate to Azure VMs using managed disks | Move a VM from Amazon Web Services (AWS) to Azure |
| You have a VM from another virtualization platform that you would like to use as an image to create multiple Azure VMs | Upload a generalized VHD and use it to create a new VM in Azure |
| You have a uniquely customized VM that you would like to recreate in Azure | Upload a specialized VHD to Azure and create a new VM |
| PREMIUM DISKS TYPE | P4 | P6 | P10 | P15 | P20 | P30 | P40 | P50 |
|---|---|---|---|---|---|---|---|---|
| IOPS per disk | 120 | 240 | 500 | 1100 | 2300 | 5000 | 7500 | 7500 |

| STANDARD DISKS TYPE | S4 | S6 | S10 | S15 | S20 | S30 | S40 | S50 |
|---|---|---|---|---|---|---|---|---|
| IOPS per disk | 500 | 500 | 500 | 500 | 500 | 500 | 500 | 500 |
| Throughput per disk | 60 MB per second | 60 MB per second | 60 MB per second | 60 MB per second | 60 MB per second | 60 MB per second | 60 MB per second | 60 MB per second |
Next Steps
Before uploading any VHD to Azure, you should follow Prepare a Windows VHD or VHDX to upload to Azure
Upload a generalized VHD and use it to create new
VMs in Azure
11/2/2020 • 2 minutes to read • Edit Online
This article walks you through using PowerShell to upload a VHD of a generalized VM to Azure, create an image
from the VHD, and create a new VM from that image. You can upload a VHD exported from an on-premises
virtualization tool or from another cloud. Using Managed Disks for the new VM simplifies the VM management
and provides better availability when the VM is placed in an availability set.
For a sample script, see Sample script to upload a VHD to Azure and create a new VM.
IMPORTANT
If you plan to run Sysprep before uploading your VHD to Azure for the first time, make sure you have prepared your VM.
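Before creating the image, the VHD must be in Azure. A minimal upload sketch, under the assumption that the VHD is uploaded to a storage account blob (the destination URI and local path are placeholders):

```powershell
# Sketch only: the destination URI and local path are placeholders.
# Upload the local VHD to a page blob in an Azure storage account.
# The uploaded blob can then be imported into the managed disk ($disk)
# referenced in the image configuration below.
Add-AzVhd -ResourceGroupName $rgName `
    -Destination "https://mystorageaccount.blob.core.windows.net/vhds/myVHD.vhd" `
    -LocalFilePath "C:\Uploads\myVHD.vhd"
```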
$imageConfig = New-AzImageConfig `
    -Location $location
$imageConfig = Set-AzImageOsDisk `
    -Image $imageConfig `
    -OsState Generalized `
    -OsType Windows `
    -ManagedDiskId $disk.Id
$image = New-AzImage `
    -ImageName $imageName `
    -ResourceGroupName $rgName `
    -Image $imageConfig
Create the VM
Now that you have an image, you can create one or more new VMs from the image. This example creates a VM
named myVM from myImage, in myResourceGroup.
New-AzVm `
    -ResourceGroupName $rgName `
    -Name "myVM" `
    -Image $image.Id `
    -Location $location `
    -VirtualNetworkName "myVnet" `
    -SubnetName "mySubnet" `
    -SecurityGroupName "myNSG" `
    -PublicIpAddressName "myPIP" `
    -OpenPorts 3389
Next steps
Sign in to your new virtual machine. For more information, see How to connect and log on to an Azure virtual
machine running Windows.
Move a Windows VM from Amazon Web Services
(AWS) to an Azure virtual machine
11/2/2020 • 2 minutes to read • Edit Online
If you are evaluating Azure virtual machines for hosting your workloads, you can export an existing Amazon Web
Services (AWS) EC2 Windows VM instance and then upload the virtual hard disk (VHD) to Azure. Once the VHD is
uploaded, you can create a new VM in Azure from the VHD.
This article covers moving a single VM from AWS to Azure. If you want to move VMs from AWS to Azure at scale,
see Migrate virtual machines in Amazon Web Services (AWS) to Azure with Azure Site Recovery.
Prepare the VM
You can upload both generalized and specialized VHDs to Azure. Each type requires that you prepare the VM before
exporting from AWS.
Generalized VHD - a generalized VHD has had all of your personal account information removed using
Sysprep. If you intend to use the VHD as an image to create new VMs from, you should:
Prepare a Windows VM.
Generalize the virtual machine using Sysprep.
Specialized VHD - a specialized VHD maintains the user accounts, applications and other state data from
your original VM. If you intend to use the VHD as-is to create a new VM, ensure the following steps are
completed.
Prepare a Windows VHD to upload to Azure. Do not generalize the VM using Sysprep.
Remove any guest virtualization tools and agents that are installed on the VM (for example, VMware Tools).
Ensure the VM is configured to pull its IP address and DNS settings via DHCP. This ensures that the server
obtains an IP address within the VNet when it starts up.
Once the VHD has been exported, follow the instructions in How Do I Download an Object from an S3 Bucket? to
download the VHD file from the S3 bucket.
IMPORTANT
AWS charges data transfer fees for downloading the VHD. See Amazon S3 Pricing for more information.
Next steps
Now you can upload the VHD to Azure and create a new VM.
If you ran Sysprep on your source VM to generalize it before exporting, see Upload a generalized VHD and use it
to create a new VM in Azure.
If you did not run Sysprep before exporting, the VHD is considered specialized; see Upload a specialized VHD
to Azure and create a new VM.
Migrating to Azure
11/2/2020 • 2 minutes to read
For migration, we recommend that you use the Azure Migrate service to migrate VMs and servers to Azure, rather
than the Azure Site Recovery service. Learn more about Azure Migrate.
Next steps
Review common questions about Azure Migrate.
Azure Policy built-in definitions for Azure Virtual
Machines
11/2/2020 • 36 minutes to read
This page is an index of Azure Policy built-in policy definitions for Azure Virtual Machines. For additional Azure
Policy built-ins for other services, see Azure Policy built-in definitions.
The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the
Version column to view the source on the Azure Policy GitHub repo.
Microsoft.Compute
Name (Azure portal): All network ports should be restricted on network security groups associated to your virtual machine
Description: Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 2.0.1

Name (Azure portal): Allowed virtual machine size SKUs
Description: This policy enables you to specify a set of virtual machine size SKUs that your organization can deploy.
Effect(s): Deny
Version (GitHub): 1.0.1

Name (Azure portal): Audit Linux machines that allow remote connections from accounts without passwords
Description: Requires that prerequisites are deployed to the policy assignment scope. For details, visit https://aka.ms/gcpol. Machines are non-compliant if Linux machines allow remote connections from accounts without passwords.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Audit Linux machines that are not using SSH key for authentication
Description: Requires that prerequisites are deployed to the policy assignment scope. For details, visit https://aka.ms/gcpol. Machines are non-compliant if the machine allows passwords for authenticating through SSH.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 2.0.0

Name (Azure portal): Audit Linux machines that do not have the passwd file permissions set to 0644
Description: Requires that prerequisites are deployed to the policy assignment scope. For details, visit https://aka.ms/gcpol. Machines are non-compliant if Linux machines do not have the passwd file permissions set to 0644.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Audit Linux machines that have accounts without passwords
Description: Requires that prerequisites are deployed to the policy assignment scope. For details, visit https://aka.ms/gcpol. Machines are non-compliant if Linux machines have accounts without passwords.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Audit Linux virtual machines on which the Linux Guest Configuration extension is not enabled
Description: This policy audits Linux virtual machines hosted in Azure that are supported by Guest Configuration but do not have the Guest Configuration extension enabled. For more information on Guest Configuration, visit https://aka.ms/gcpol.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Audit VMs that do not use managed disks
Description: This policy audits VMs that do not use managed disks.
Effect(s): audit
Version (GitHub): 1.0.0

Name (Azure portal): Audit Windows virtual machines on which the Windows Guest Configuration extension is not enabled
Description: This policy audits Windows virtual machines hosted in Azure that are supported by Guest Configuration but do not have the Guest Configuration extension enabled. For more information on Guest Configuration, visit https://aka.ms/gcpol.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 2.0.0

Name (Azure portal): Deploy Log Analytics agent for Linux virtual machine scale sets
Description: Deploy Log Analytics agent for Linux virtual machine scale sets if the VM Image (OS) is in the list defined and the agent is not installed. Note: if your scale set upgradePolicy is set to Manual, you need to apply the extension to all VMs in the set by calling upgrade on them. In CLI this would be az vmss update-instances.
Effect(s): deployIfNotExists
Version (GitHub): 1.2.0

Name (Azure portal): Deploy Log Analytics agent for Linux VMs
Description: Deploy Log Analytics agent for Linux VMs if the VM Image (OS) is in the list defined and the agent is not installed.
Effect(s): deployIfNotExists
Version (GitHub): 1.2.0

Name (Azure portal): Deploy Log Analytics agent for Windows virtual machine scale sets
Description: Deploy Log Analytics agent for Windows virtual machine scale sets if the VM Image (OS) is in the list defined and the agent is not installed. The list of OS images will be updated over time as support is updated. Note: if your scale set upgradePolicy is set to Manual, you need to apply the extension to all VMs in the set by calling upgrade on them. In CLI this would be az vmss update-instances.
Effect(s): deployIfNotExists
Version (GitHub): 1.1.0

Name (Azure portal): Deploy Log Analytics agent for Windows VMs
Description: Deploy Log Analytics agent for Windows VMs if the VM Image (OS) is in the list defined and the agent is not installed. The list of OS images will be updated over time as support is updated.
Effect(s): deployIfNotExists
Version (GitHub): 1.1.0

Name (Azure portal): Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs
Description: This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit https://aka.ms/gcpol.
Effect(s): deployIfNotExists
Version (GitHub): 1.0.0

Name (Azure portal): Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs
Description: This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit https://aka.ms/gcpol.
Effect(s): deployIfNotExists
Version (GitHub): 1.0.0

Name (Azure portal): Endpoint protection solution should be installed on virtual machine scale sets
Description: Audit the existence and health of an endpoint protection solution on your virtual machine scale sets, to protect them from threats and vulnerabilities.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 2.0.0

Name (Azure portal): Linux machines should meet requirements for the Azure security baseline
Description: Requires that prerequisites are deployed to the policy assignment scope. For details, visit https://aka.ms/gcpol. Machines are non-compliant if they do not meet the requirements for the Azure security baseline.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0-preview

Name (Azure portal): Log Analytics agent health issues should be resolved on your machines
Description: Security Center uses the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA). To make sure your virtual machines are successfully monitored, you need to make sure the agent is installed on the virtual machines and properly collects security events to the configured workspace.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring
Description: This policy audits any Windows/Linux virtual machines (VMs) if the Log Analytics agent, which Security Center uses to monitor for security vulnerabilities and threats, is not installed.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Log Analytics agent should be installed on your virtual machine scale sets for Azure Security Center monitoring
Description: Security Center collects data from your Azure virtual machines (VMs) to monitor for security vulnerabilities and threats.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Microsoft Antimalware for Azure should be configured to automatically update protection signatures
Description: This policy audits any Windows virtual machine not configured with automatic update of Microsoft Antimalware protection signatures.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Network traffic data collection agent should be installed on Linux virtual machines
Description: Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and specific network threats.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.1-preview

Name (Azure portal): Network traffic data collection agent should be installed on Windows virtual machines
Description: Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and specific network threats.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.1-preview

Name (Azure portal): Only approved VM extensions should be installed
Description: This policy governs the virtual machine extensions that are not approved.
Effect(s): Audit, Deny, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): System updates on virtual machine scale sets should be installed
Description: Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 2.0.0

Name (Azure portal): The Log Analytics agent should be installed on Virtual Machine Scale Sets
Description: This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics agent is not installed.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): The Log Analytics agent should be installed on virtual machines
Description: This policy audits any Windows/Linux virtual machines if the Log Analytics agent is not installed.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Unattached disks should be encrypted
Description: This policy audits any unattached disk without encryption enabled.
Effect(s): Audit, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Virtual machines should be migrated to new Azure Resource Manager resources
Description: Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication, and support for tags and resource groups for easier security management.
Effect(s): Audit, Deny, Disabled
Version (GitHub): 1.0.0
Microsoft.VirtualMachineImages
NAME (AZURE PORTAL) | DESCRIPTION | EFFECT(S) | VERSION (GITHUB)
Microsoft.ClassicCompute
Name (Azure portal): All network ports should be restricted on network security groups associated to your virtual machine
Description: Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 2.0.1

Name (Azure portal): Log Analytics agent health issues should be resolved on your machines
Description: Security Center uses the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA). To make sure your virtual machines are successfully monitored, you need to make sure the agent is installed on the virtual machines and properly collects security events to the configured workspace.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring
Description: This policy audits any Windows/Linux virtual machines (VMs) if the Log Analytics agent, which Security Center uses to monitor for security vulnerabilities and threats, is not installed.
Effect(s): AuditIfNotExists, Disabled
Version (GitHub): 1.0.0

Name (Azure portal): Virtual machines should be migrated to new Azure Resource Manager resources
Description: Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication, and support for tags and resource groups for easier security management.
Effect(s): Audit, Deny, Disabled
Version (GitHub): 1.0.0
Next steps
See the built-ins on the Azure Policy GitHub repo.
Review the Azure Policy definition structure.
Review Understanding policy effects.
Understand the structure and syntax of ARM templates
11/2/2020 • 13 minutes to read
This article describes the structure of an Azure Resource Manager (ARM) template. It presents the different sections of a
template and the properties that are available in those sections.
This article is intended for users who have some familiarity with ARM templates. It provides detailed information about
the structure of the template. For a step-by-step tutorial that guides you through the process of creating a template, see
Tutorial: Create and deploy your first Azure Resource Manager template.
Template format
In its simplest structure, a template has the following elements:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "",
"apiProfile": "",
"parameters": { },
"variables": { },
"functions": [ ],
"resources": [ ],
"outputs": { }
}
Each element has properties you can set. This article describes the sections of the template in greater detail.
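The presence of the required top-level sections can be checked mechanically. The following Python sketch is illustrative only (not an official validator); it assumes the key names shown in the skeleton above, of which $schema, contentVersion, and resources are required:

```python
import json

# Minimal ARM template skeleton, as shown above.
template = json.loads("""
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "functions": [],
  "resources": [],
  "outputs": {}
}
""")

# $schema, contentVersion, and resources are required; the other sections are optional.
REQUIRED = ("$schema", "contentVersion", "resources")

missing = [key for key in REQUIRED if key not in template]
print("missing sections:", missing)  # -> missing sections: []
```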
Parameters
In the parameters section of the template, you specify which values you can input when deploying the resources. You're
limited to 256 parameters in a template. You can reduce the number of parameters by using objects that contain multiple
properties.
The available properties for a parameter are:
"parameters": {
"<parameter-name>" : {
"type" : "<type-of-parameter-value>",
"defaultValue": "<default-value-of-parameter>",
"allowedValues": [ "<array-of-allowed-values>" ],
"minValue": <minimum-value-for-int>,
"maxValue": <maximum-value-for-int>,
"minLength": <minimum-length-for-string-or-array>,
"maxLength": <maximum-length-for-string-or-array-parameters>,
"metadata": {
"description": "<description-of-the parameter>"
}
}
}
For examples of how to use parameters, see Parameters in Azure Resource Manager templates.
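The constraints in a parameter definition (allowedValues, minValue/maxValue, minLength/maxLength) can be applied to an input value along these lines. This is a hedged Python sketch of the idea, not the deployment engine's actual validation, and the vmSize definition is hypothetical:

```python
def validate_parameter(definition, value):
    """Check a value against an ARM-style parameter definition.
    Returns a list of violation messages (empty means valid)."""
    errors = []
    if "allowedValues" in definition and value not in definition["allowedValues"]:
        errors.append("value not in allowedValues")
    if "minValue" in definition and value < definition["minValue"]:
        errors.append("below minValue")
    if "maxValue" in definition and value > definition["maxValue"]:
        errors.append("above maxValue")
    if "minLength" in definition and len(value) < definition["minLength"]:
        errors.append("shorter than minLength")
    if "maxLength" in definition and len(value) > definition["maxLength"]:
        errors.append("longer than maxLength")
    return errors

# Hypothetical parameter definition, for illustration only.
vm_size = {
    "type": "string",
    "allowedValues": ["Standard_D2s_v3", "Standard_D4s_v3"],
    "defaultValue": "Standard_D2s_v3",
}

print(validate_parameter(vm_size, "Standard_D4s_v3"))   # -> []
print(validate_parameter(vm_size, "Standard_F72s_v2"))  # -> ['value not in allowedValues']
```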
Data types
For integers passed as inline parameters, the range of values may be limited by the SDK or command-line tool you use for
deployment. For example, when using PowerShell to deploy a template, integer types can range from -2147483648 to
2147483647. To avoid this limitation, specify large integer values in a parameter file. Resource types apply their own
limits for integer properties.
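The PowerShell limit quoted above is the signed 32-bit integer range. A quick Python sketch of the check (illustrative only):

```python
# The PowerShell deployment limit quoted above is the signed 32-bit integer range.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1  # -2147483648 .. 2147483647

def fits_inline_int(value):
    """True if the integer can be passed inline when deploying with PowerShell."""
    return INT32_MIN <= value <= INT32_MAX

print(fits_inline_int(2147483647))  # -> True
print(fits_inline_int(4000000000))  # -> False: specify it in a parameter file instead
```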
When specifying boolean and integer values in your template, don't surround the value with quotation marks. Start and
end string values with double quotation marks.
Objects start with a left brace and end with a right brace. Arrays start with a left bracket and end with a right bracket.
When you set a parameter to a secure string or secure object, the value of the parameter isn't saved to the deployment
history and isn't logged. However, if you set that secure value to a property that isn't expecting a secure value, the value
isn't protected. For example, if you set a secure string to a tag, that value is stored as plain text. Use secure strings for
passwords and secrets.
For samples of formatting data types, see Parameter type formats.
Variables
In the variables section, you construct values that can be used throughout your template. You don't need to define
variables, but they often simplify your template by reducing complex expressions.
The following example shows the available options for defining a variable:
"variables": {
"<variable-name>": "<variable-value>",
"<variable-name>": {
<variable-complex-type-value>
},
"<variable-object-name>": {
"copy": [
{
"name": "<name-of-array-property>",
"count": <number-of-iterations>,
"input": <object-or-value-to-repeat>
}
]
},
"copy": [
{
"name": "<variable-array-name>",
"count": <number-of-iterations>,
"input": <object-or-value-to-repeat>
}
]
}
For information about using copy to create several values for a variable, see Variable iteration.
For examples of how to use variables, see Variables in Azure Resource Manager template.
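Conceptually, a copy element inside a variable produces an array with count items. The following Python sketch is a rough emulation of that behavior under a simplifying assumption: the {i} placeholder stands in for the template language's copyIndex() function, and the variable name diskNames is hypothetical:

```python
def expand_copy(name, count, input_template):
    """Rough emulation of a 'copy' loop in the variables section:
    produce `count` copies of the input, substituting the iteration
    index where the template uses it (a stand-in for copyIndex())."""
    return {name: [input_template.format(i=i) for i in range(count)]}

# Hypothetical variable: three disk names generated by one copy loop.
print(expand_copy("diskNames", 3, "datadisk-{i}"))
# -> {'diskNames': ['datadisk-0', 'datadisk-1', 'datadisk-2']}
```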
Functions
Within your template, you can create your own functions, which are then available for use throughout the template. Typically, you use them to define complicated expressions that you don't want to repeat throughout your template. You create user-defined functions from expressions and functions that are supported in templates.
When defining a user function, there are some restrictions:
The function can't access variables.
The function can only use parameters that are defined in the function. When you use the parameters function within a
user-defined function, you're restricted to the parameters for that function.
The function can't call other user-defined functions.
The function can't use the reference function.
Parameters for the function can't have default values.
"functions": [
{
"namespace": "<namespace-for-functions>",
"members": {
"<function-name>": {
"parameters": [
{
"name": "<parameter-name>",
"type": "<type-of-parameter-value>"
}
],
"output": {
"type": "<type-of-output-value>",
"value": "<function-return-value>"
}
}
}
}
],
For examples of how to use custom functions, see User-defined functions in Azure Resource Manager template.
Resources
In the resources section, you define the resources that are deployed or updated.
You define resources with the following structure:
"resources": [
{
"condition": "<true-to-deploy-this-resource>",
"type": "<resource-provider-namespace/resource-type-name>",
"apiVersion": "<api-version-of-resource>",
"name": "<name-of-the-resource>",
"comments": "<your-reference-notes>",
"location": "<location-of-resource>",
"dependsOn": [
"<array-of-related-resource-names>"
],
"tags": {
"<tag-name1>": "<tag-value1>",
"<tag-name2>": "<tag-value2>"
},
"sku": {
"name": "<sku-name>",
"tier": "<sku-tier>",
"size": "<sku-size>",
"family": "<sku-family>",
"capacity": <sku-capacity>
},
"kind": "<type-of-resource>",
"copy": {
"name": "<name-of-copy-loop>",
"count": <number-of-iterations>,
"mode": "<serial-or-parallel>",
"batchSize": <number-to-deploy-serially>
},
"plan": {
"name": "<plan-name>",
"promotionCode": "<plan-promotion-code>",
"publisher": "<plan-publisher>",
"product": "<plan-product>",
"version": "<plan-version>"
},
"properties": {
"<settings-for-the-resource>",
"copy": [
{
"name": ,
"count": ,
"input": {}
}
]
},
"resources": [
"<array-of-child-resources>"
]
}
]
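The dependsOn array determines deployment order: Resource Manager deploys a resource only after the resources it depends on. That ordering is essentially a topological sort, sketched below in Python under the simplifying assumption that plain names identify resources (the resource names are hypothetical):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical resources: the VM depends on a storage account and a NIC,
# and the NIC depends on a virtual network.
depends_on = {
    "myVM": ["myStorage", "myNic"],
    "myNic": ["myVnet"],
    "myStorage": [],
    "myVnet": [],
}

# static_order yields each resource only after all of its dependencies.
order = list(TopologicalSorter(depends_on).static_order())
print(order)  # dependencies always appear before their dependents
```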
Outputs
In the Outputs section, you specify values that are returned from deployment. Typically, you return values from resources
that were deployed.
The following example shows the structure of an output definition:
"outputs": {
"<output-name>": {
"condition": "<boolean-value-whether-to-output-value>",
"type": "<type-of-output-value>",
"value": "<output-value-expression>",
"copy": {
"count": <number-of-iterations>,
"input": <values-for-the-variable>
}
}
}
For examples of how to use outputs, see Outputs in Azure Resource Manager template.
NOTE
To deploy templates with comments by using Azure CLI with version 2.3.0 or older, you must use the
--handle-extended-json-format switch.
You can add inline comments using // or /* */ syntax, as the following example shows:
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2018-10-01",
"name": "[variables('vmName')]", // to customize name, change it in variables
"location": "[parameters('location')]", //defaults to resource group location
"dependsOn": [ /* storage account and network interface must be deployed first */
"[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
],
In Visual Studio Code, the Azure Resource Manager Tools extension can automatically detect a Resource Manager template
and change the language mode accordingly. If you see Azure Resource Manager Template at the bottom-right corner
of VS Code, you can use inline comments; they are no longer marked as invalid.
Metadata
You can add a metadata object almost anywhere in your template. Resource Manager ignores the object, but your JSON
editor may warn you that the property isn't valid. In the object, define the properties you need.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"metadata": {
"comments": "This template was developed for demonstration purposes.",
"author": "Example Name"
},
"parameters": {
"adminUsername": {
"type": "string",
"metadata": {
"description": "User name for the Virtual Machine."
}
},
When deploying the template through the portal, the text you provide in the description is automatically used as a tip for
that parameter.
For resources, add a comments element or a metadata object. The following example shows both a comments element
and a metadata object.
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"apiVersion": "2018-07-01",
"name": "[concat('storage', uniqueString(resourceGroup().id))]",
"comments": "Storage account used to store VM disks",
"location": "[parameters('location')]",
"metadata": {
"comments": "These tags are needed for policy compliance."
},
"tags": {
"Dept": "[parameters('deptName')]",
"Environment": "[parameters('environment')]"
},
"sku": {
"name": "Standard_LRS"
},
"kind": "Storage",
"properties": {}
}
]
"outputs": {
"hostname": {
"type": "string",
"value": "[reference(variables('publicIPAddressName')).dnsSettings.fqdn]",
"metadata": {
"comments": "Return the fully qualified domain name"
}
},
Multi-line strings
You can break a string into multiple lines. For example, see the location property and one of the comments in the
following JSON example.
{
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2018-10-01",
"name": "[variables('vmName')]", // to customize name, change it in variables
"location": "[
parameters('location')
]", //defaults to resource group location
/*
storage account and network interface
must be deployed first
*/
"dependsOn": [
"[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]",
"[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
],
To deploy templates with multi-line strings by using Azure CLI with version 2.3.0 or older, you must use the
--handle-extended-json-format switch.
Next steps
To view complete templates for many different types of solutions, see the Azure Quickstart Templates.
For details about the functions you can use from within a template, see Azure Resource Manager Template Functions.
To combine several templates during deployment, see Using linked templates with Azure Resource Manager.
For recommendations about creating templates, see Azure Resource Manager template best practices.
For answers to common questions, see Frequently asked questions about ARM templates.
Frequently asked questions about Windows Virtual
Machines
11/2/2020 • 4 minutes to read
This article addresses some common questions about Windows virtual machines created in Azure using the
Resource Manager deployment model. For the Linux version of this topic, see Frequently asked questions about
Linux Virtual Machines.
Can I use the temporary disk (the D: drive by default) to store data?
Don't use the temporary disk to store data. It is only temporary storage, so you would risk losing data that can't be
recovered. Data loss can occur when the virtual machine moves to a different host. Resizing a virtual machine,
updating the host, or a hardware failure on the host are some of the reasons a virtual machine might move.
If you have an application that needs to use the D: drive letter, you can reassign drive letters so that the temporary
disk uses something other than D:. For instructions, see Change the drive letter of the Windows temporary disk.
How can I change the drive letter of the temporary disk?
You can change the drive letter by moving the page file and reassigning drive letters, but you need to make sure
you do the steps in a specific order. For instructions, see Change the drive letter of the Windows temporary disk.
Why am I not seeing Canada Central and Canada East regions through
Azure Resource Manager?
The two new regions of Canada Central and Canada East are not automatically registered for virtual machine
creation for existing Azure subscriptions. This registration is done automatically when a virtual machine is
deployed through the Azure portal to any other region using Azure Resource Manager. After a virtual machine is
deployed to any other Azure region, the new regions should be available for subsequent virtual machines.